Customizing ApplicationDbContext in ASP.NET MVC 5 and ASP.NET Identity 2.0

As of Visual Studio 2013 Update 1, the ASP.NET MVC 5 templates with Authentication enabled will create a project that contains a class named ApplicationDbContext. This is the Entity Framework DbContext that is used by the ASP.NET Identity libraries to manage user records.

By default, here is the generated class:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext()
        : base("DefaultConnection")
    {
    }
}

You'll notice that it inherits from IdentityDbContext with a generic type parameter of ApplicationUser. The base DbContext handles whatever the ASP.NET Identity libraries need, and ApplicationUser is the model that describes the authenticated user. If you're like me and don't want to create a pile of separate DbContext classes for different repositories, you can combine everything into ApplicationDbContext like so:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext()
        : base("DefaultConnection")
    {
    }

    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);

        // do fluent API stuff below
    }
}

At a glance, it seems weird that a DbContext inherited from a very specific library contains other, unrelated DbSet objects, but it gets the job done. If you're more into separation of concerns and don't want a huge explosion of fluent API in a single class, look into creating separate DbContext classes as appropriate. Just remember that you will need to maintain a connection string per DbContext in your web.config.

Upgrading an Existing Project from ASP.NET Identity 1.0 to 2.0

I have recently been playing around with ASP.NET MVC 5 via Visual Studio 2013 and the new ASP.NET Identity libraries. While the project templates mostly get you where you need to go when starting a brand new application, upgrading from the first Identity package (1.0) to the latest stable (2.0) wasn't quite as smooth as I expected. For reference, the Visual Studio 2013 Update 1 project templates use the 1.0 version of the ASP.NET Identity libraries; Visual Studio 2013 Update 2 RC (at the time of this writing) uses the 2.0 version.

By default, the project templates will set you up with a database schema that is accessible via EntityFramework and Code First. Since the templates utilize Code First, you will need to manage database migrations. And that’s where the trouble came in when upgrading from 1.0 to 2.0. The database schema changed (and will require code migrations) as a result of upgrading since the library supports new features like email confirmation, phone numbers, new primary keys, new indexes, and more. Attempting to run the application after simply upgrading all the libraries through NuGet gave me this essay of an error:

The model backing the 'ApplicationDbContext' context has changed since the database was created. This could have happened because the model used by ASP.NET Identity Framework has changed or the model being used in your application has changed. To resolve this issue, you need to update your database. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=301867). Before you update your database using Code First Migrations, please disable the schema consistency check for ASP.NET Identity by setting throwIfV1Schema = false in the constructor of your ApplicationDbContext in your application.
public ApplicationDbContext() : base("ApplicationServices", throwIfV1Schema:false)

Fortunately, the error message itself and a blog post get you a good portion of the way through the upgrade process. By changing the constructor of ApplicationDbContext to pass false to its base class for throwIfV1Schema, you can avoid this exception and force your way through the migration steps. So, in brief:

  1. Open the Package Manager Console
  2. If you haven’t for this project, type Enable-Migrations to allow code migrations to be used in your project
  3. Type Add-Migration IdentityUpdate to create a migration script so the database schema can be brought up to speed to match the new model that was updated as a result of moving from 1.0 to 2.0. Note that the name of the migration can be anything.

Step 3 will fail if you have the same scenario as I did. You'll probably see this error (which isn't mentioned in the blog post linked above):

System.Data.Entity.Core.MetadataException: Schema specified is not valid. Errors:
(0,0) : error 0004: Could not load file or assembly 'Microsoft.AspNet.Identity.EntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

I actually got lucky and saw this error message in an issue on the ASP.NET Identity CodePlex page. Specifically, there is a workaround mentioned on that issue: it involves editing your web.config to add this dependentAssembly under the assemblyBinding section. Let's hope that Microsoft updates the NuGet package to include the correct references to the newer assemblies.

<dependentAssembly>
  <assemblyIdentity name="Microsoft.AspNet.Identity.EntityFramework" publicKeyToken="31bf3856ad364e35" culture="neutral" />
  <bindingRedirect oldVersion="0.0.0.0-2.0.0.0" newVersion="2.0.0.0" />
</dependentAssembly>

And finally, we can now create and execute our migrations.

  1. Type Add-Migration IdentityUpdate again. It should succeed this time, creating a migration script that brings the database schema up to date with the model changes from 1.0 to 2.0. Note that the name of the migration can be anything.
  2. You should see a code migration file get generated with a long name containing a date and the name of the migration that you gave in the Add-Migration step. Confirm that it looks like what you expect.
  3. Type Update-Database to push the migration changes to the local database as part of your project.

Beta Testing Steam In-Home Streaming

Invite via Email

Valve has launched the initial phase of beta testing for their new Steam In-Home Streaming service. I have no clue how many people got into this first phase, but I was lucky enough to be invited. I was at work today and casually checked my email from my phone. To my excitement, this popped up in my inbox.

If you got invited, then you will likely receive a similar email. It comes from the address “noreply@steampowered.com” if you’re curious. It doesn’t ask for any usernames, passwords, or any other Steam information, so don’t fall for any scams that people try to send you.

The email will have links to a Steam Support article with answers to some common questions, how to get set up, and how to get additional help. I highly suggest reading the support article and visiting the main streaming page.

The Gear

All my tests are going to be with mouse, keyboard, and touchpad (on the laptop). I don’t have a proper controller to test controller input.

Router: Linksys WRT54GL (10/100M ethernet / 54Mbps wireless)
Cables: CAT5e

               Desktop (Host)              Laptop (Client)
Model Number   Custom                      Toshiba P755-S5215M
CPU            Intel i5-2500K @ 3.3 GHz    Intel i3-2310M @ 2.1 GHz
GPU            NVIDIA GTX 560 Ti           Integrated Intel HD
Motherboard    ASRock Z68 Extreme3 Gen3    Unknown
RAM            8GB                         6GB
OS             Windows 8.1 64-bit          Xubuntu 13.04 64-bit
Resolution     1920×1080                   1366×768

Getting Set Up

Honestly, reading the support article linked above would probably suffice, but that’s so boring. Don’t you want to follow along with someone who is in the beta? Sure you do! Here’s what I did to get set up.

  1. Get two computers capable of launching the Steam client. In my case, I have a desktop which will host all of the games and a laptop which will connect to the host as a streaming client. The host and client operating system doesn’t seem to matter. My desktop is running Windows 8.1, but my laptop is running Xubuntu 13.04. The connections worked perfectly fine.
  2. Make sure both computers are on the same local network so they can see each other. I’m using a relatively dated router (see The Gear section above), so my latency results are going to be on the low end if I’m going up against people with gigabit networks. Both computers are connected to the router via CAT5e cable.
  3. Log in to Steam on each computer, go to Steam –> Settings, and opt in to the Steam beta client. Restart Steam, and you should see a pop-up in the lower right (once both of your computers are connected to Steam). The pop-up actually shows up on both computers, indicating which machine it is connected to. In fact, there is a separate pop-up for disconnection as well.
  4. Confirm the connection by going to Steam –> Settings –> In-Home Streaming. You’ll see the devices that you can connect to along with a bunch of streaming settings (see below).

Limit bandwidth: Auto/5/10/15/20/Unlimited Mbit/s

Limit framerate: Auto/30/60 FPS

Limit resolution: Desktop/1080p/720p

Disable hardware encoding: I’ve heard that the beta only uses software encoding, so I’m not sure if this option does anything.

The Magic

OK, all the boring setup is done. I just left all the options on automatic, because I felt that Steam knew better than I did about when to adjust accordingly, but we’ll see how that actually plays out. When I used the client computer (laptop), I noticed that all the games installed on the host (desktop) were available (highlighted) to play in the “Installed Games” list even though they weren’t installed locally! There was also a new option titled “STREAM” on the games that weren’t installed. The drop-down on the button gave me the option to install the game or stream it from the computer hosting it on my network.

Well, right off the bat, I noticed that even though I’m on Linux, there is the option to play non-Linux games like Deus Ex: Human Revolution and Mass Effect 2. I have no idea if they’ll work, but let’s find out. I’m going to try a Valve game, a non-Valve game, and a non-Steam game. By non-Steam, I mean a game that isn’t even registered on Steam but is launched through Steam.

Dota 2

First, let’s check out Dota 2. Clicking the “STREAM” button launched the game on my desktop first and was followed up by viewing it on my laptop. Why is music from my desktop playing through the streaming client? Oh, I guess it captures all of the operating system audio and sends it along to the client. I had a media player going when I launched the game, and the audio was made available on the streaming client. Other than that, the first thing I noticed is that there’s a text overlay along the bottom with instructions.

STREAMING BETA
Press F6 or GUIDE+Y to toggle stats display

Pressing F6 expanded some diagnostics and a graph that look eerily similar to the “net_graph” display in the Source engine. I’m going to assume that net_graph is built into this streaming client. The diagnostics output was something like:

Capture 1366x768 @ 58.88
Latency: 57.94ms (0.55ms input, 29.51ms game, 30.80ms display)
Ping time: 0.76ms
Incoming bitrate: 6871 kbit/s video: 6731 kbit/s
Outgoing bitrate: 88 kbit/s
Link utilization: 9% of estimated 71Mbps
Packet loss: 0.00% (0.00% frame loss)
Press F8 or GUIDE+X to save snapshot on remote computer

I assume that the number after the capture resolution is the frame rate. Dota 2 seems to be OK with streaming at 60 FPS at this resolution.

For some reason, the mouse cursor wasn’t showing up on the client, but I could see my movements being made on the host. In fact, the movements worked on the client (I could see buttons hovering, and I could click on buttons). However, the cursor was just invisible. Eventually it showed up, but I’m not sure what I did to make it appear. I launched a game with bots and played the game exactly as I would on my desktop; only this time it was on my laptop! I can barely play Osmos on this laptop, and here I am running Dota 2 at max settings with no lag.

I should note that when I got into the game, the diagnostics showed that my incoming bitrate and link utilization both increased dramatically (to 15000 kbit/s and 20% of 100Mbps, respectively). OK, next game.

Mass Effect 2

I know for a fact that this game does not work on its own on Linux. Not to mention it wasn’t developed by Valve, so I was skeptical about streaming working very well. I thought there was an error launching the game at first, but it was just a message indicating that the remote desktop was doing some setup. It’s impressive that the streaming takes into account the first-time setup of all the annoying C++ redistributables required to launch the game. My laptop patiently waited while everything installed, and off I went. It’s worth noting that any Windows UAC prompts will need to be resolved on the host, as those are not streamed to the client!

Once I was in the game, I can say confidently that I was blown away by the seamless input, game play, and display on my aging laptop. Honestly, there was no performance loss in the stream, and I’m not even using a gigabit network setup.

The diagnostics showed that I peaked around 15000 kbit/s, which seems to be my maximum (this was the max in Dota 2 as well). However, unlike Dota 2, the diagnostics showed that my “Capture” was set to 1280×720. Dissatisfied that this was lower than Dota 2, I went to the Mass Effect 2 settings to change the resolution. Sure enough, the resolution in-game was set to 1280×720. I bumped it up to 1920×1080 (the size on my desktop). After some brief flickering, the “Capture” then indicated 1366×768.

At first, I thought that I was limited by my laptop’s max resolution, but then I remembered the Steam setting, “Limit resolution to”. I bumped that value up to 1920×1080, relaunched Mass Effect 2, and saw that the “Capture” was now at 1920×1080. The frame rate was pegged at 30 FPS (which I’m assuming Steam has decided is appropriate since that setting was set to “Automatic”). I imagine that some die-hard folks will be upset that Steam has decided to degrade them by subjecting them to a 30 FPS gaming experience. You can always force the option to stream at 60 FPS, though I have no idea if the performance will suffer as a consequence.

I was curious to see what would happen if I alt-tabbed out of the game on the host, so that’s what I did. Checking back to my client, I saw that I was viewing my host’s desktop. I had full control of the desktop as long as the game was running (even though the game was minimized). I clicked around for a bit, moved a few icons, eventually got bored, and then closed the game on the host. It’s interesting that the host responds to all input and streams all displays as long as the streaming service is active and connected.

Starcraft II

I was excited to try a non-Steam game to see if Steam could be used to stream any game from my desktop. I was unfortunately met with sadness. This experiment was a complete and utter failure. It seems that because Starcraft II has an intermediate step of using its own Battle.net launcher, Steam can’t properly host the process in the streaming service. Any time I tried to stream, the client would get an error saying “Could not connect to host.” and the host simply never launched anything. I was upset, so I decided to try another one.

Fallout

The original Fallout can be fun while simultaneously frustrating. I guess that’s its main appeal. Anyway, I added the game to my Steam shortcut list. It immediately appeared as playable on my client’s Steam games library. So, I launched it. The game miraculously started up; the video came through; the audio came through. I delicately touched the mouse to select a new game. The sound of my excitement coming to a complete halt was quite loud at that moment.

I guess older games used a different API for input (an older DirectInput library, perhaps). The streaming service was failing to detect the input on the client. Curiously, any movements I made on the host were displayed on the client (even input). It simply seemed that client-to-host input was a no-go for Fallout.

Initial Thoughts

For a completely new beta, the service is phenomenally good. The setup process is largely automated once you get the correct version of the Steam client installed. As long as the games you want to play are sold through Steam, you probably won’t have a problem playing them. I realize that my sample size is small and my anecdotal evidence doesn’t count for much, but the tests that I’ve performed so far have been successful (disregarding the non-Steam game failures).

Pros

  • Play any Steam game on any Steam-capable device with little to no lag, even if the device has awful specs
  • Stream to your living room, bedroom, or whatever room you want to play in instead of being forced to play at your desk
  • Play every Steam game from any operating system even if the host and client don’t match operating systems (Windows -> Linux works fine)

Weird Bonuses

  • Share your desktop inadvertently by alt-tabbing to the desktop while the streaming service is running in a game
  • Stream audio that isn’t in the game to other devices (music, videos)

Cons

  • It’s still beta, and not everyone can try it yet
  • Doesn’t work with non-Steam games (to be fair, this might change in a future update, but I doubt Valve is even advertising that it can stream older games). Their FAQ also states that older games may not work well.
  • Doesn’t work as well over wireless (especially if you have a spotty connection or are across the house from the access point). This might require you to run cables where you don’t want to.

Here are two pictures from the menus of Dota 2 and Mass Effect 2. Note the diagnostics printouts that I mentioned before. (click for larger)

MassEffect2 Dota2

Always Deliver to the Software Specification

As programmers and system designers, we want our time to be spent well and our products to be well received. Nobody likes to spend weeks of their life coding a system that ends up being hated and unused. But hey, you were likely paid for that work, so what does it matter? Well, unless you reflect on what exactly caused that scenario, the situation will probably repeat itself in the next project or another one down the road.

Of course, maybe it wasn’t directly your fault. Maybe you were constantly distracted by that guy who screams into his phone a few cubicles over. Maybe your boss constantly forgets his passwords and thought the best person to bother was you. Maybe the other developers decided that the coding standards were beneath them and made peer reviews a living nightmare. Or maybe the software specification just sucked straight from the beginning. Sometimes these things happen and become unavoidable or at least difficult to avoid. But never, ever, let a product suffer as a result of not delivering to the specification; no matter how bad it sucks.

“But it was just that awful! There was no punctuation, and the writer decided to write the entire thing in Ye Olde English,” you shout. OK, wow, that’s pretty bad. But why weren’t these things brought up during the specification review and sign-off? I guess someone up top thought it was well written and chock-full of good ideas. If it somehow managed to pass through the stakeholders and into your hands, then there’s not much you can do except leave the company or do your best to implement the funny-jokes generator that slipped in from someone in management at the last minute.

Now, that doesn’t mean the specification can’t be improved. You can certainly bring things to the stakeholders’ attention for post-sign-off review. Perhaps you don’t know the subtle distinction between thou and thee which seems to be sprinkled everywhere in Ye Olde Specification. This approach to specification review is quite common, because things will inevitably surface during development that you didn’t expect. This path leads to happiness and sanity.

But sometimes, we programmers get that creative spark, that hint of artistic genius, our Van Gogh moment. Ignore it. Don’t let your imagination get the better of what you should actually be doing which is delivering to the specification. I’ve had the displeasure of working with someone (we’ll call him Rembrandt) who constantly wasted time implementing features that no one asked for. And you know what, some of the features were really awesome. Rembrandt had some genuinely good ideas. Rembrandt also caused the ship dates to constantly slip because he was never finished with the agreed upon tasks.

Therein lies the lesson. Don’t create features that aren’t explicitly stated in the specification. I know it’s hard to ignore that nagging voice in your head that says, “It will only take a little more time to add this other feature!” In reality, it probably wouldn’t take much time to add each useful little feature. But after sidetracking for the tenth time, you lift your head up and realize the ship date has arrived, and you’re nowhere near completion. Your team, the stakeholders, and your boss are standing over your shoulder wondering what you’ve been doing, and the only thing you have to fall back on is, “But I’m Rembrandt!”

Resolution Independent 2D Rendering in SDL2

A little over a year ago, I wrote a post about how to render to a fixed, virtual resolution so that we can render independently of the actual window size. That approach used the XNA framework to accomplish what we needed. Since Microsoft effectively killed XNA by pushing forward with DirectX / WinRT, others and I have moved on to other libraries. In this post, I will show you how to do the same thing with SDL2. Honestly, this approach is even easier (as long as you are using SDL2, that is!).

The concept of rendering to a virtually sized target is called “Logical Size” in SDL2. Rendering a game to a logical size makes scaling the game to match different window sizes much easier. Imagine that we created our game under the assumption of 800×600 (an old-school 4:3 aspect ratio). On a user’s machine with the system resolution set to 1920×1080, we have two choices: 1) show the game in a tiny window or 2) stretch the picture to fit the full screen. Both of these options are pretty terrible. In the first, the window will be too small to see anything useful (depending on the textures and fonts used in the game). In the second, the stretched picture will look awful because the aspect ratios do not even match. This is where SDL2’s logical rendering comes into play.

After establishing your renderer, all you really need to do is call the SDL_RenderSetLogicalSize function with the appropriate parameters. For example, the below code will set the logical rendering size to 800×600.

SDL_RenderSetLogicalSize(renderer, 800, 600);

Now whenever we use our renderer to render textures, they will be appropriately scaled to fit the window size and letterboxed to avoid the ugly stretching caused by mismatched aspect ratios. You can see this in action in the picture below. In this example, I am rendering to a logical size of 800×600 in a window of size 1400×900. Note the letterbox bars that SDL2 added to the left and right to avoid stretching.

LogicalRendering
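If you are curious what that scaling works out to, the aspect-ratio arithmetic is easy to reproduce. The sketch below is not SDL2's actual implementation, just the math it implies: pick the largest uniform scale that fits, then center the image with letterbox bars.

```python
# A sketch of the aspect-ratio math behind SDL2's logical size scaling.
# Not SDL2's source code; just the arithmetic it implies.

def letterbox(logical_w, logical_h, window_w, window_h):
    """Return (scale, bar_x, bar_y) for a logical size inside a window."""
    # Pick the largest uniform scale that still fits both dimensions.
    scale = min(window_w / logical_w, window_h / logical_h)
    render_w = logical_w * scale
    render_h = logical_h * scale
    # Whatever space is left over becomes the black bars, split evenly.
    bar_x = (window_w - render_w) / 2  # left/right bars
    bar_y = (window_h - render_h) / 2  # top/bottom bars
    return scale, bar_x, bar_y

# The example from this post: an 800x600 logical size in a 1400x900 window.
scale, bar_x, bar_y = letterbox(800, 600, 1400, 900)
# scale is 1.5 (the image is drawn at 1200x900), leaving 100-pixel
# bars on the left and right and no bars on the top or bottom.
```

That matches the screenshot: bars appear only on the left and right because the window is wider, proportionally, than 4:3.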

Using SDL2-C# to Capture Text Input

A common feature of applications and video games is to allow the player to input text for various reasons. Maybe we want to allow the player to input their character’s name in an RPG, name a city in SimCity, or type a chat message to a friend online. Using SDL2, we can take advantage of its built-in text input system, which abstracts away much of the operating system’s event handling and character encoding mechanisms.

On consoles such as Xbox and PlayStation, text input is rather simplistic and limited to visual keypads that you navigate with the controller. On a PC, we have the full range of widely varying keyboards, from English and Spanish to Russian and Japanese. If we want our game or application to attract users on an international scale, it’s probably in our best interest to learn here and now how to use SDL2 to accomplish this goal.

At first glance, it probably seems simple to process text input. If the user presses the ‘A’ key on the keyboard, the OS will send an event that the keyboard was just pressed, the key was ‘A’, and no modifier keys were pressed (CAPS, SHIFT, CTRL, ALT, etc…). That’s it, right? Unfortunately, there are a ton of languages on this planet, and some of them have thousands of characters in them. People who type in those languages most certainly do not have thousand-letter keyboards or entire walls of their houses dedicated as a giant keyboard. This basically means that some characters will require multiple key presses just to process. Fortunately, SDL2 handles all of this for us and simply sends us a byte array with the results.

Among SDL2’s events, the structure we are interested in is SDL_TextInputEvent. This event is sent through the SDL2 event processing chain whenever text is input. I have personally seen this triggered by both physical keyboards and the Windows virtual on-screen keyboard. I am sure that there are other ways to trigger this event as well. Using this event, we can get the character information that was input by the user. Here are the fields of the C-style structure that we can use:

UInt32     type          The type of the event
UInt32     timestamp     The time that the event occurred
UInt32     windowID      The ID of the window that has focus, if any
char       text          The null-terminated, UTF-8 encoded input text

After determining that this is an SDL_TextInputEvent by checking the type field, we are most interested in the text field. That field is a pointer to a character array which is encoded using the UTF-8 scheme. In my most recent cases, I was using a C# wrapper around the SDL2 library named SDL2-CS. Because C# runs in a managed, garbage collected runtime, it’s a bit tricky to get the text input from C into C# through the .NET marshaller, but here’s how to do it.

// the character array from the C-struct is of length 32
// char types are 8-bit in C, but 16-bit in C#, so we use a byte (8-bit) array here
byte[] rawBytes = new byte[SDL2.SDL.SDL_TEXTINPUTEVENT_TEXT_SIZE];

// we have a pointer to an unmanaged character array from the SDL2 lib (sdlEvent.text.text),
// so we need to explicitly marshal it into our byte array
// (note: "event" is a reserved word in C#, so the SDL_Event variable is named sdlEvent here)
unsafe { Marshal.Copy((IntPtr)sdlEvent.text.text, rawBytes, 0, SDL2.SDL.SDL_TEXTINPUTEVENT_TEXT_SIZE); }

// the character array is null terminated, so we need to find that terminator
int indexOfNullTerminator = Array.IndexOf(rawBytes, (byte)0);

// finally, since the character array is UTF-8 encoded, decode only the bytes before the terminator
string text = System.Text.Encoding.UTF8.GetString(rawBytes, 0, indexOfNullTerminator);

The above code is only necessary because we are in a C# / .NET environment where we need to handle unmanaged to managed allocations with care. If you are sticking to C, then your job will be much easier in that you only need to receive the text input event and retrieve the associated encoded text.
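For the curious, here is the same decode expressed in plain Python, since the logic itself is language-neutral: pad to the fixed 32-byte buffer size, find the null terminator, and decode only the bytes before it as UTF-8. The helper name and sample inputs are my own, for illustration.

```python
# Language-neutral sketch of decoding SDL's fixed-size, null-terminated,
# UTF-8 encoded text buffer.

TEXT_SIZE = 32  # mirrors SDL_TEXTINPUTEVENT_TEXT_SIZE

def decode_text_input(raw: bytes) -> str:
    # Pad to the fixed buffer size, as SDL's C struct would hand it to us.
    buf = raw.ljust(TEXT_SIZE, b"\x00")[:TEXT_SIZE]
    end = buf.index(b"\x00")  # position of the null terminator
    # Decode only the meaningful bytes; the rest of the buffer is padding.
    return buf[:end].decode("utf-8")

# A multi-byte UTF-8 character survives the round trip intact:
print(decode_text_input("é".encode("utf-8")))  # prints: é
```

Note that stopping at the null terminator matters: decoding the entire 32-byte buffer would append a run of embedded null characters to the string.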

A few closing notes:

  • The event SDL_KeyboardEvent is not useful for capturing text input, but it is useful for capturing text-editing input such as backspace and enter. This event will not encode the entered characters for you; it will instead simply say, “such and such key was pressed or released.”
  • Further investigation is needed on how to interact with advanced text input such as from an Input Method Editor. A separate SDL_TextEditingEvent can be used to assist with this, but I have not had time to experiment yet.
  • The SDL2-CS library offers a comfortable path for Java/C# developers who want to be introduced slowly to what SDL2 offers. Over time, you can dip into the source and pick up the C syntax of various things.
  • Check out my SharpDL library, which lets you create an XNA-like project (with SDL2 underneath).

2D Tile Maps – Tile Picking

Several months ago, I talked about the distinction between world space and screen space. As a recap, these are fundamental concepts that separate our game or simulation state from our drawn or rendered state. What gets drawn to the screen is not necessarily how things are laid out in the game’s actual (world) state. Check out the previous articles for more information.

The concept of tile picking involves the user hovering their mouse (or some other input device) over a tile in the game map, usually in order to interact with that tile: moving a unit to a location, placing an item on the tile, or inspecting the tile’s metadata. Fortunately for the developer, the process of picking is independent of the projection, aside from some simple math to convert coordinates.

Imagine that you are playing SimCity 2000, and you want to create a stretch of road from one location to another. The process involves the user hovering the starting tile, clicking the mouse, dragging the road to the end tile, and releasing the mouse. Tiles were picked out of the game map based on the mouse’s coordinates during game updates. Which space do we pick the tile from? Do we have to calculate if the mouse is contained within the projected tile or within the game’s world space coordinates?

Orthogonal

If the projection used in the game is orthogonal (think NES/SNES Zelda games), then the tile picking really is just a matter of answering the question, “Which tile contains the current mouse coordinate?” Answering the question is simple:

mousePosition = GetCurrentMousePosition();

worldX = floor(mousePosition.X / tileWidth);
worldY = floor(mousePosition.Y / tileHeight);

pickedTile = tiles[worldX][worldY];

To explain briefly: we get the position of the mouse (I use SDL, which has a function to get the mouse position), convert the coordinates from screen space into world space, and then fetch the tile stored in our tile collection at those world space coordinates. We floor the converted coordinates because landing in the middle of a tile would otherwise produce a fractional index.

Reference the grid to the left. For the sake of discussion, let us assume that each tile is 16 × 16 pixels. The top-left-most pixel is located in world space [0,0] and at screen space (0,0). The top-right-most pixel is located in world space [5,0] and at screen space (80,0).

If the user moves the mouse to the screen space coordinate (25, 50), then the math to calculate the picked tile is as follows (using the aforementioned technique).

mousePosition = (25, 50);

worldX = floor(25 / 16) = 1;
worldY = floor(50 / 16) = 3;

pickedTile = tiles[1][3];

Performing that calculation, we get the tile highlighted in blue. It is then up to you to do whatever you need with that tile: highlight it by changing its color, pass it on to some actor to recalculate its path, or place an object into the map at that tile’s position. Note that some bounds checking may be needed if your map has boundaries. If you try to get a tile from your tile collection at an index that does not exist, you will surely run into problems.
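To make the idea concrete, here is a small runnable sketch of the picking math above, including the bounds checking just mentioned. The 16-pixel tile size matches the worked example; the 6×6 map size is an assumption for illustration.

```python
# Orthogonal tile picking: screen space -> world space tile index,
# with bounds checking. Tile and map sizes are assumed for the example.

TILE_W, TILE_H = 16, 16
MAP_W, MAP_H = 6, 6  # map dimensions in tiles (assumed)

def pick_tile(mouse_x, mouse_y):
    """Convert a screen-space mouse position to world-space tile indices,
    or return None if the position lies outside the map."""
    world_x = mouse_x // TILE_W  # floor division == floor(x / tileWidth)
    world_y = mouse_y // TILE_H
    if 0 <= world_x < MAP_W and 0 <= world_y < MAP_H:
        return (world_x, world_y)
    return None  # out of bounds: no tile to pick

print(pick_tile(25, 50))   # prints: (1, 3) -- the worked example above
print(pick_tile(200, 50))  # prints: None -- off the right edge of the map
```

Returning None (or however your language signals "no tile") keeps the caller from ever indexing the tile collection out of range.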

Isometric

The good news is that the strategy does not have to change with your projection differences. Only some minor math tweaks need to be made. Recall that the reason for this is because your world space representation does not change as your projection changes. They are independent! See below for the changes to the tile picking math.

mousePosition = GetCurrentMousePosition();

// convert our isometric screen coordinate into an orthogonal world coordinate
worldPositionX = (2 * mousePosition.Y + mousePosition.X) / 2;
worldPositionY = (2 * mousePosition.Y - mousePosition.X) / 2;

worldX = floor(worldPositionX / tileWidth);
worldY = floor(worldPositionY / tileHeight);

pickedTile = tiles[worldX][worldY];

Notice that the only difference here is the translation from isometric coordinates to orthogonal coordinates. We do this because the tiles are not positioned on a nice orthogonal grid like in our previous example; they are offset in an isometric manner. Could you imagine if we tried to store our game state the same way as our rendered state? Trying to pick a tile by calculating whether an isometrically projected tile contains our mouse coordinates would be much harder than translating into orthogonal coordinates and doing some simple division.
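As a runnable sketch of the isometric version (using the same conversion formulas as the pseudocode above; the 16×16 tile size is again an assumption for illustration):

```python
# Isometric tile picking: undo the isometric projection first, then
# divide by tile size exactly as in the orthogonal case.

TILE_W, TILE_H = 16, 16  # assumed tile size for the example

def pick_tile_isometric(mouse_x, mouse_y):
    # Convert the isometric screen coordinate to an orthogonal world coordinate.
    world_pos_x = (2 * mouse_y + mouse_x) / 2
    world_pos_y = (2 * mouse_y - mouse_x) / 2
    # Then floor-divide by tile size, as before.
    return (int(world_pos_x // TILE_W), int(world_pos_y // TILE_H))

# A point 32 pixels straight down the screen from the map origin lands
# on the diagonal tile [2, 2]:
print(pick_tile_isometric(0, 32))  # prints: (2, 2)
```

The same bounds checking discussed in the orthogonal section applies here too, since the conversion can easily produce indices outside the map near its edges.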