EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Final Project: Matching Game!

My FINAL game: simple one-click download below, unzip, play, enjoy!

NancyNewren_ExampleGame FinalProject!

What’s new
It’s a game!

Controls
Main object: Use arrow keys to move the 3D object to its matching card.

About the Game

Okay, so it’s not pretty, but sometimes you just need to get to proof of concept; beautifying the game can come later.

When I was making educational games for Waterford Research Institute I made a lot of mini games for preK curriculum: letter/word association, number recognition, letter matching, and even an ethics game. I really enjoyed making these simple-to-play games for 4-5 year olds. They were fun enough that even my older play testers (20-year-olds) liked them. However, I never got to make a shape matching game. But now I have!

This was fun to make, especially since it was in my engine, but slightly more complicated than I anticipated.

The first thing was paring down the game to its minimum essentials: having animations, for instance, wouldn’t matter if I didn’t have a game first. So what was essential? Having four objects with four matching cards; being able to detect a right and a wrong guess; and having a response to each (moving on to the next object for a right guess, feedback to let you know it was wrong otherwise). Once that was determined I simply needed to set up the graphics objects and then create the basic right/wrong logic with simple collision detection.

Everything we needed to draw to the screen was already done; the game logic, though, I had to write myself (admittedly, the player controls and the essential game loop code, like the update functions in Unity, had already been supplied in previous assignments).

I don’t think I had noticed before how heavily a game relies on its graphics components, though. Without those, regardless of any clever logic you may have written, you don’t have much of a game. Finding, creating, editing, and importing all the assets into my engine took up a majority of my time.

Honestly, I think the most difficult thing was just tracking all the different assets. In game engines such as Unreal and Unity this is easier because prefabs and the editor help you create your arrays of assets. Creating the c3DObject class to manage everything for my 3D objects (mesh, effect, texture, transform, speed, and even id) made it much simpler to program the game: when I wanted to detect whether an object matched a card, all I had to do was give them both the same id in their c3DObject instances.

Which brings up something: the cards, though flat, were still meshes. Since I didn’t want them drawn like UI elements, but to interact with the objects in world space, I made them 3D objects. The things I’d add to the engine next are a UIElement class (akin to the c3DObject class, but fancier, since I’d need to track scaling vs. anchoring, etc.) and the idea of prefabs. Those would really make things easier to track. Well, I’d also add an animation project to make animations simpler too.
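In sketch form, the class holds something like this (abbreviated, and the exact member types in my engine differ a bit):

// A sketch of the c3DObject class (abbreviated; handle/pointer types approximate)
class c3DObject
{
public:
    Graphics::cMesh::Handle m_mesh;
    Graphics::cEffect* m_effect;                // swapped out for the right/wrong effects on a guess
    Graphics::cTexture::Handle m_texture;
    Physics::sRigidBodyState m_rigidBodyState;  // position, velocity, orientation
    float m_speed = 1.0f;
    unsigned int m_id = 0;                      // a match object and its card share an id
};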

The only time I wished I’d done something differently in a previous assignment was the way I stored graphics elements in my game logic. Before I even started coding the game I did some refactoring in my game logic to make it easier to track my graphics elements.

I created two effects, for right and wrong guesses, that tint the cards and make them transparent. Then it was a simple matter of swapping out the regular effect (which everything 3D was drawn with) for the right/wrong effect. That decision also made it easy to animate the ending flash, as I just swapped between the right and regular effects. If I’d instead passed a color to the shared effect to indicate right/wrong, it would have affected all my objects, not just the one I wanted.

The other difficulty was randomizing the order of everything while keeping track of it all: picking four (of seven) random objects to be the match objects, making sure the four cards were the four matching ones, and then drawing the four matching cards in a random order on screen. Here’s how I chose my random objects:
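The idea, in sketch form (names simplified; I shuffle the candidate ids and take the first four):

#include <algorithm>
#include <random>
#include <vector>

// Sketch: pick four of the seven possible objects at random
std::vector<unsigned int> candidateIds = { 0, 1, 2, 3, 4, 5, 6 };
std::random_device randomDevice;
std::mt19937 rng(randomDevice());
std::shuffle(candidateIds.begin(), candidateIds.end(), rng);
// The first four ids are this round's match objects (and their matching cards)
std::vector<unsigned int> matchIds(candidateIds.begin(), candidateIds.begin() + 4);

Shuffling a copy of those four ids again gives the random on-screen order for drawing the cards.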

The collision was interesting. I didn’t do what fancy game engines do (detect whether bounding boxes have collided); what I did was much, much simpler! Knowing that my meshes’ local origins were all at the origin, their positions were effectively in world space. And since I didn’t care when their boundaries collided, but rather when their local origins were within a certain range of each other, all I needed to do was check whether a matching object was within a certain distance of the card. To further optimize: since square roots are inefficient I only compared distances squared, I only checked the player object against cards that hadn’t been matched yet, and I didn’t check for collisions at all while player movement input was deactivated. I added the DistanceSquared method to the Math::sVector class.

// Compare the predicted positions against the squared threshold:
const float distSquared = eae6320::Math::DistanceSquared(
    i_pObj1->m_rigidBodyState.PredictFuturePosition(i_elapsedSecondCount_sinceLastUpdate),
    i_pObj2->m_rigidBodyState.PredictFuturePosition(i_elapsedSecondCount_sinceLastUpdate));

return distSquared <= m_distAmountSquared;
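The method itself is nothing fancy; a minimal sketch (as a free function in the Math namespace, matching the call above):

float eae6320::Math::DistanceSquared(const sVector& i_lhs, const sVector& i_rhs)
{
    // No sqrt needed when comparing against a squared threshold
    const float dx = i_lhs.x - i_rhs.x;
    const float dy = i_lhs.y - i_rhs.y;
    const float dz = i_lhs.z - i_rhs.z;
    return (dx * dx) + (dy * dy) + (dz * dz);
}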

The last thing I did was add in the sprite images upon a correct solution. This had multiple purposes: 1) it hid the no-longer-drawn match object, 2) it hid the sudden movement of the next match object to the starting player position, and 3) it gave some fun, and funny, feedback to the player that they were on the right track.

If I had had more time (I did this all in about 15-20 hours), I would have added:

  1. Movement animations
    1. For instance: locking the “match” object to the center of the card before checking for a match, then moving it back to the beginning
  2. Setting boundaries the player can’t move outside of
  3. Beautifying the game: better match object models and cards, and a better background
  4. Adding fun audio
  5. Lighting (I really wanted to do lighting)

I enjoyed how it turned out though, and so did my play testers. 🙂

What I learned this semester

What I think the goal of the assignments was

I think the most important goal of the assignments was to teach us what good interfaces can do for us. For instance, once we set up our one interface, which behind the scenes dealt with the different platforms, we never wrote platform-dependent code again. That is very cool! We didn’t have to think about what each platform might need; we only connected to our interfaces and let them do their job.

We learned about building good asset pipelines. At the beginning we only had shaders, but by the end we had textures, meshes, and shaders. We separated our algorithms from the data. I really liked this (especially creating the binary files!). It makes it much faster and easier to change our assets, since data is easier to update than code and doesn’t require a recompile: we only need the built asset, and it plugs into our code.

Another big point was to write code that is human readable, understandable, and easy to debug. Ultimately, whether it is someone else or yourself, someone is going to have to read your code to understand how to interface with it. If programmers can’t understand and debug your code (which is really sad when it’s your own code and you can’t!), they’re either going to spend a long time deciphering it or going to rewrite it, which for a large amount of code takes a lot of time. But if you make it human understandable, they should quickly grasp what the code is doing and be able to code against it quickly, which is very important!

What I got out of this class

I learned so much this semester. I always used to be scared whenever I had to set up build dependencies and references; I never quite understood the differences or why they mattered, and I never felt like I could do it on my own. But I don’t feel afraid anymore! In fact, I learned it well enough that I was able to explain the differences to someone else and help him troubleshoot his code, and to recognize when another of my classmates had done it incorrectly in his own code. I say that not to point out that he did something wrong, as we all make mistakes, but because I now know enough to recognize when it is done incorrectly. And that is not to say it’s always easy, but I feel confident that I can figure it out, and that I should NOT PANIC when I get linker errors! 🙂

I learned…

A lot about graphics for a non-graphics class (I actually understand, at a very high level, what the graphics card does now).

A different way to essentially have shared pointers without the overhead of shared pointers: reference counting.

Oooh, and after class one day I asked JP how shaders compute absolute value so quickly, and so I became aware of the intrinsic functions of the graphics card.

Also, D3D has different debug output in VS than OpenGL.

The difference between handles and pointers.

A side effect of all that we did in the code was my first use of the C++11 standard, as I’d never worked on projects that used it before (we were in the dark ages, I suppose 😉 ).

The (unfortunately short) discussion we had about lighting was very interesting. I hadn’t thought about digital lighting as taking away light, but it made sense.

I also think I would enjoy doing graphics more than I thought I would. Every time we got into talking about graphics I got really excited. The math is pretty basic, but can also be really interesting. I really thought I would hate the discussion on lighting, but I found it fascinating.

My thoughts on good engineering architecture and design

I usually like to walk the line between very structured and “design as you go.” However, I knew there would be a lot of refactoring from assignment to assignment, and there’s nothing I hate more than having to undo complicated architecture; since I also didn’t know what the end goal was, I purposely chose to err on the side of “design as I go.” Otherwise I would have drawn out a more architected structure. There was only one assignment where I wished I had architected more in a previous assignment, but even in that case, a friend who chose more architecture in the previous assignment only finished about an hour before me, and I know he spent more time building the architecture in the first place. So it was basically a wash.

I’ll say this about good architecture: while programming my game I decided early on to have GetMatchObject(const unsigned int i_index) and GetCard(const unsigned int i_index) functions. This meant that later, when I changed how I retrieved cards, I only had to change it in one place: my “interface” function.
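In sketch form (the storage behind the function is illustrative):

// Sketch: all knowledge of how cards are stored lives behind this function,
// so changing the storage only ever touches this one place
c3DObject* GetCard(const unsigned int i_index)
{
    return m_cards[m_cardDrawOrder[i_index]];
}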

A good software architecture allows for flexibility in adding new items to the code, but isn’t so abstract and complicated that it’s unclear what the interfaces are for. I.e., it’s easy to read, understand, and debug, and it allows for quick and easy iteration.

A good design will be platform independent, so that you only have to write code once! A bad design will have multiple files with only small changes in each for every platform (whether the platforms are OSes, websites, or databases). A good design will allow for quick creation of new types that fit the same interface; a bad design will require a new object or interface for each new type. A good design is faster to refactor; a bad design requires gutting the interior for even small changes. On the other hand, a good design also isn’t over-architected. For instance, there is no need to create a reusable interface (no matter how neat) if only one type will ever use it. Good design is about being aware of which parts of the code need to be reusable, what needs to be interfaceable, and what can be done top-down, as it were, because it’s not going to be used more than once.

Acknowledgements

Thanks to Zeno for letting me bounce ideas off of him, and my Dad for proofing my write-up.

Many thanks also to my play testers who really just had fun playing: Zeno Saviour, Monika and Erik Tolman, my sister Deborah, and my brother Zeke (who tested it on Linux with Wine, and it worked!).

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 14: Transparency in Meshes

This week’s game: simple one-click download below, unzip, play, enjoy!

NancyNewren_ExampleGame 14

What’s new
I added two transparent objects!

Controls
Main object: Use arrow keys to move main object left/right/up/down
Camera:  
Left/Right (A/D), Up/down (W/S), Zoom in/out (Q/E)

(When moving the camera things in the world will appear to move opposite. Thus when you move camera left, the world will appear to move right.)

About the Project

This time around I gave my meshes the ability to be transparent! It was fun.

The first thing was to create a shader and an effect that allow for transparency. Next I created two objects drawn with them, but drew the meshes in whatever order they were submitted to the render thread. This works fine for opaque objects; however, when a transparent object is drawn before an object behind it, the object behind it isn’t rendered properly (as you’d expect it to appear) to the screen. I.e., you can’t see an object through a transparent object unless the object behind is drawn first.

This means that all transparent objects have to be drawn back to front. The order of the opaque objects doesn’t matter, though (with depth buffering on, at least), as long as they are all drawn before any transparent objects.

Here’s what I did to order the draw calls: after submission to the render thread is over, and before the swap, I sort them. First I added the index of each opaque object (I just asked the mesh’s effect whether it was opaque, nicely of course) to a vector in the order they were submitted, and put the transparent meshes in a separate vector with their index and their camera-space z value. Camera space is the key to sorting these meshes quickly: since in camera space the camera sits at the origin facing forward (down the negative z axis), I immediately know which meshes are in back and which are in front from their z values alone; a more negative z is farther from the camera. A simple std::sort with a lambda comparing those z values then orders the transparent meshes back to front. Once sorted, I appended their indices to the end of the previously mentioned vector (the one with all the opaque meshes in front, to be drawn first). Then, when rendering the meshes, I draw them in the order of the sorted draw calls.
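A sketch of that sort (the struct and field names are illustrative):

#include <algorithm>
#include <vector>

struct sTransparentDrawCall
{
    size_t index;        // index into the submitted draw calls
    float cameraSpaceZ;  // z position in camera space
};

std::vector<sTransparentDrawCall> transparentCalls;
// ... filled during submission, alongside the vector of opaque indices ...

// Forward is -z, so a more negative z is farther away and must be drawn first
std::sort(transparentCalls.begin(), transparentCalls.end(),
    [](const sTransparentDrawCall& i_lhs, const sTransparentDrawCall& i_rhs)
    {
        return i_lhs.cameraSpaceZ < i_rhs.cameraSpaceZ;
    });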

I did the sorting on the render thread, though with a game this small it doesn’t really matter which thread does it. In a real game I would profile both threads and choose the one with the most time to spare.

Acknowledgements

I asked my brother how to use std::sort with my vector<struct>, as I hadn’t done a templated sort before. He set me on the path with the Google search I needed, and I whipped out a lambda compare for my sort in no time. Thanks bro!

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 12: Binary Mesh Assets

Download this week’s game with the simple one-click download below! Once downloaded, unzip, play the executable, enjoy!

NancyNewren_ExampleGame 12

What’s new
Different main mesh: a Christmas cactus for my friend Felicia! Bring on the happy!

Controls
Main object: Use arrow keys to move main object left/right/up/down
Camera:  
Left/Right (A/D), Up/down (W/S), Zoom in/out (Q/E)
(When moving the camera things in the world will appear to move opposite. Thus when you move camera left, the world will appear to move right.)

About the Project

The advantage of a human-readable file is that you can understand and debug it just by reading it, but such files take longer to load and are bigger. So the idea is to load/store the assets in binary for two reasons: 1) it is more efficient (faster) to load from binary, and 2) it is more economical (smaller) to store the files. Thus our regular build and source files stay human-readable, since a human needs to be able to interpret and debug them, but our run-time files are binary, because by then we’ve done all our testing and debugging and only the computer needs to read them.

There are four pieces of data that each mesh needs: the number of indices, the indices, the number of vertices, and the vertices. I chose to put the number of indices first (since the binary read-in needs to know how many indices there are so it can jump to the next right place in the data), then the indices, since they take up less space and are easier to read in a binary format as they are integers. Next I put the number of vertices, again because the read-in needs to know how many vertices there are to know the size of the memory, and then the vertices themselves. I could have done number of indices, number of vertices, indices, and then vertices, or a number of other configurations, but regardless of the order I chose, the counts would always need to be stored before the index and vertex data they describe.

Here is my floor’s binary file with the above listed order:

By default the human-readable files are in OpenGL format. Previously I was swapping the UVs and winding order after loading from the human-readable files for D3D, but since I want a quick load at run time, this swapping now happens during the build of the binary. So while the human-readable files remain the same, the built meshes differ and cannot be swapped across platforms. This does, however, make loading a binary file platform independent. This is how I load the binary data:
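In sketch form (error handling omitted; the 16-bit counts and the sVertex name are assumptions):

// Walk the buffer in the order described above: index count, indices,
// vertex count, vertices. fileData points at the loaded file's first byte.
const uint8_t* currentOffset = fileData;

const uint16_t indexCount = *reinterpret_cast<const uint16_t*>(currentOffset);
currentOffset += sizeof(indexCount);

const uint16_t* const indices = reinterpret_cast<const uint16_t*>(currentOffset);
currentOffset += indexCount * sizeof(uint16_t);

const uint16_t vertexCount = *reinterpret_cast<const uint16_t*>(currentOffset);
currentOffset += sizeof(vertexCount);

const sVertex* const vertices = reinterpret_cast<const sVertex*>(currentOffset);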

Lastly, enough talk! Let’s show the two advantages of binary files. I made an object with an index count of 50,436 and a vertex count of 33,616. Here’s how the files compare:

                          Human-Readable File    Binary
Size of file              4.61 MB                886 KB
Time to load (seconds)    0.256                  0.002234

Acknowledgements

Thanks to Zeno, who once again helped me with some of the C++ basics I’ve forgotten. It’s been a long time since I did file I/O with C++, and he was very helpful.

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 11: Made Mesh Assets with Maya, Added Textures to 3D Objects

Download this week’s game with the simple one-click download below! Once downloaded, unzip, play the executable, enjoy!

NancyNewren_ExampleGame 11

What’s new
Gameplay movement is the same but faster, and I updated the 3D objects’ colors and shape and added textures! Time for winter!

Controls
Main object: Use arrow keys to move main object left/right/up/down
Camera:  
Left/Right (A/D), Up/down (W/S), Zoom in/out (Q/E)
(When moving the camera things in the world will appear to move opposite. Thus when you move camera left, the world will appear to move right.)

About the Project

This week I created a Maya plugin, based on code provided to us by our professor, to export objects created in Maya to my Lua-based human-readable file format (which I made last week), from which the meshes are then imported into the game.

The Maya plugin code was a project we added to our solution called MayaMeshExporter (MME). After importing it into my solution and assigning the proper property sheets, as well as setting the path environment variables for Maya and for the Maya plugin, no other settings were required (i.e. no project build dependencies or references). This is because, while the game relies on the output from Maya via the plugin for the mesh data, the output of the MayaMeshExporter is the plugin for Maya, not the mesh data itself. Once the exporter is built we can manually export our objects, and then build the solution to have the MeshBuilder project “build” the mesh files. Essentially, as long as we have some files for our meshes at build time, either from the MME or manually made, we have what we need for our game. And since the MME doesn’t rely on any of the projects in our solution, no references were needed.

I think it’s neat that you can use Visual Studio to debug running programs. I did this with my Maya plugin: I put a breakpoint inside “initializePlugin”, the function Maya calls to load the plugin.

I’ve used this technique (attaching the debugger to a process) frequently when debugging code for Unity- and Unreal-based projects. It’s incredibly helpful to be able to step through your code this way.

With the MME working, I had a plugin to export data. So I created my plane and a new object for the player game object in Maya, set some vertex colors, and exported. This is what I had:

Maya creates more data than I was using: tangents, bitangents, texcoords, and normals. I chose to export all of them, as it was simple to write them into my human-readable file and they are just ignored by the mesh import. I did this so that if I chose to use these things in the future, it would be one less step to code. This proved helpful when I added textures to my 3D objects…

The last thing I did was add textures to the meshes. This was a fairly straightforward process, since I had done it previously for the UI elements, and though there were several steps involved it didn’t take that long. The change touched code from graphics (the renderer thread, meshes, and the shaders), to the game graphics objects (where the 3D objects are stored), to the game itself, but I’m really liking the result!

After applying textures to the meshes the game looks like this:

And… it’s ready for snowfall!

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 10: Mesh Asset Now Loads from File Using Lua, Still in 3D!

This week’s game. One-click download, unzip, play, enjoy! 😀

NancyNewren_ExampleGame 10

What’s New
The only gameplay changes from last week are the colors. Enjoy! 😀

Controls
Main object: Use arrow keys to move
Camera:  
Left/Right (A/D), Up/down (W/S), Zoom in/out (Q/E)
(When moving the camera things in the world will appear to move opposite. Thus when you move camera left, the world will appear to move right.)

About the Project

This week I moved the creation of mesh assets from hard-coded to Lua files… and then played with updating the colors just by editing the Lua files. It was fun! (At least the last part. Ha ha!) Having Lua files for the assets is really neat because it opens the possibility of having other tools and programs create the asset files. More than making asset files data driven, though, the idea of the raw asset file is to be human readable, to make it easy to read, understand, and most importantly debug. In the future I’ll build the human-readable file into a binary one, but the important thing is that when you run into a problem with an asset, you want a human-readable file you can quickly and easily read and understand, so you can quickly and easily get to debugging.

For instance, this is my floor mesh file:


The order of the indices for the index buffer matters, so it was an easy decision to make the indices an array-type table. The vertices can conceptually be listed in any order, but because the indices refer to them by position they must be listed in a predictable order, so the vertices table also needed to be an array (a Lua dictionary-style table has no predictable order). I decided to group each vertex’s values together (i.e. a vertex is defined as a position and a color), as I believed this would make debugging simpler when there are over 100 vertices.

My final decision was to use an array-type table for both position and color values: position in math is always ordered (x, y, z), and anyone who knows graphics, on seeing three (or four) color values, expects (r, g, b, (a)). So I didn’t feel the need to make the color and position tables dictionaries, though it could have been done. I felt it was more readable to see the values without extra “x =” and “a =” cluttering the tables.
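Put together, the format looks something like this (a sketch with illustrative values, not my actual floor data):

-- Sketch of the mesh file format (values illustrative)
return
{
    indices =
    {
        0, 1, 2,
        0, 2, 3,
    },
    vertices =
    {
        { position = { -1.0, 0.0, -1.0 }, color = { 1.0, 1.0, 1.0, 1.0 } },
        { position = { -1.0, 0.0,  1.0 }, color = { 1.0, 1.0, 1.0, 1.0 } },
        { position = {  1.0, 0.0,  1.0 }, color = { 1.0, 1.0, 1.0, 1.0 } },
        { position = {  1.0, 0.0, -1.0 }, color = { 1.0, 1.0, 1.0, 1.0 } },
    },
}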

Additionally

The Lua asset loading took longer to implement than I expected. I thought the hardest part would be switching meshes from pointers to handles (since they now load from file like the textures), but it turned out to be reading in the Lua files! It’s hard to keep track of the stack! I got stuck worrying I’d lose track of it, so I finally drew myself a picture of all the push and pop points and coded them right on the first try! Booyah!
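For a taste of the stack discipline involved, here’s a sketch of reading the indices array with the Lua C API (assuming the mesh table is on top of the stack):

// Sketch: every push must be matched by a pop so the stack ends balanced
lua_pushstring(luaState, "indices");
lua_gettable(luaState, -2);                 // pushes the indices table

const auto indexCount = luaL_len(luaState, -1);
for (lua_Integer i = 1; i <= indexCount; ++i)
{
    lua_rawgeti(luaState, -1, i);           // pushes indices[i]
    indices.push_back(static_cast<uint16_t>(lua_tointeger(luaState, -1)));
    lua_pop(luaState, 1);                   // pops indices[i]
}

lua_pop(luaState, 1);                       // pops the indices table: balanced again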

When I started implementing the Lua file loading, I confused myself quite completely. Zeno helped straighten me out, though, so I didn’t waste any time. I have also struggled a bit throughout the semester remembering all the C++ syntax, as it has been about three years since I last coded in it regularly. While I have been improving all semester, I still got stuck on a couple of things, but Zeno was right there with the pieces of C++ syntax I needed, right when I needed them.

One More Thing!

Since the Lua files store color as values in [0.0f, 1.0f] and the vertex struct expects uint8_t’s, there is a simple conversion: float * 255. But since float-to-integer conversion truncates instead of rounding, you can end up with a number like 234.99 being stored as 234, not 235. An incredibly simple way around this: after doing your multiplication, just add 0.5f. Then 234.99 becomes 235.49, which truncates to 235. Done and done.
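As a helper it’s one line (a sketch):

#include <cstdint>

// Convert a [0.0f, 1.0f] color channel to a uint8_t, rounding to nearest
inline uint8_t FloatColorToUint8(const float i_channel)
{
    // e.g. (0.9215f * 255.0f) = 234.98; + 0.5f = 235.48, which truncates to 235
    return static_cast<uint8_t>((i_channel * 255.0f) + 0.5f);
}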

Cool Stuff!, EAE 6320-001 GameEngII Game Engine Project

Assignment 9: Now in 3D!

NancyNewren_ExampleGame 09

(Simple one-click download. Once downloaded, unzip, play the executable, enjoy!)

Movement

Main object: Use arrow keys to move left/right and up/down
Camera:  
Left/Right (A/D), Up/down (W/S), Zoom in/out (Q/E)
(When moving the camera things in the world will appear to move opposite. Thus when you move camera left, the world will appear to move right.)

About the Project

This week I completed the transformation of the game to 3D. To accomplish this I added a third dimension: depth. Then I added three matrix transformations to take 3D objects from their local space (at their origin and facing forward), to world space (where the object is in relation to the origin of the world), to camera space (where the object in the world is in relation to the observer, or camera).

Local space is essentially where a 3D mesh’s vertices are in relation to its origin; by convention we model the object facing forward, so we automatically know how to rotate it and how it should appear without even seeing it first.

World space is where an object is in the world of the game: if we have a city, then we have streets and buildings, and we may use the same mesh (and thus the same local space) to represent all those buildings, but in the world they can be all over the map. World space gives an object a place in the map of a game.

Finally we have an observer: world space really doesn’t mean anything until we have one. In 2D, what we draw to the screen is the world we’ve made, but in 3D, what we draw depends entirely on the perspective we are looking from, since we are taking something with three dimensions and flattening it to two. Without an observer there’s no way to know how to flatten the world; there must be an observer, and in this case we call the observer the camera. Since the camera is looking at the world, when the camera moves, the world appears to move in the opposite direction (up:down, left:right, forward:back, clockwise:counterclockwise, etc.). To keep the math simple, and since moving the camera is just moving the world inversely to the camera, we bring the world to the camera.

The last transform we need then is the one to flatten the world to render on screen.

Here’s how the platform independent transform from local space to world space (localToWorld) in the shader looks:

float4 verPosWorld = MultiplyMV(g_transform_localToWorld, float4(i_position, 1.0));
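The camera transforms chain on after that. Roughly (the matrix names here are my shorthand, not necessarily the engine’s exact constants):

// Sketch of the rest of the chain: world -> camera -> projected
float4 vertexPositionCamera = MultiplyMV(g_transform_worldToCamera, verPosWorld);
float4 vertexPositionProjected = MultiplyMV(g_transform_cameraToProjected, vertexPositionCamera);

The projected position is what the vertex shader ultimately outputs.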

The last thing I did was implement depth buffering, so that instead of the last-drawn mesh simply covering up the previous one, meshes can cover up parts of each other. This allows us to render the plane and the cube intersecting (without depth buffering, the cube or the plane would simply cover the other up):

Additionally…

I chose to make the plane double sided: I used the same four vertices but gave the index buffer 12 indices (instead of six), so that you can see the plane from underneath as well as from on top, while the plane stays flat (no sides).
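In index-buffer terms it looks something like this (the winding shown is illustrative; the actual order depends on the platform):

// Four vertices, twelve indices: the second six reverse the winding
// so the plane is visible from below as well as above
const uint16_t indices[12] =
{
    0, 1, 2,   0, 2, 3,   // top
    2, 1, 0,   3, 2, 0,   // bottom (reversed winding)
};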

I didn’t have time for any optional challenges this week. I had wanted to do camera rotation, but I ended up spending my extra hours on a Visual Studio bug. I was relieved to discover that the bug wasn’t in my implementation, but still the time was gone. Maybe next time.

Cool Stuff!, EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 8: Moving Towards 3D

NANCYNEWREN_EXAMPLEGAME 08
(Simple one-click download. Once downloaded, unzip, play the executable, enjoy!)
Controls: Use the typical WASD to move the trapezoid and see what shapes you make!

ABOUT THE PROJECT

For this week’s project I created a basic 2D mesh that I will extend to 3D next week. I started by copying the 2D sprite and shader code directly and then adding/changing things to make it a mesh. Because this required a lot of steps, I did my best to work in chunks as small as possible and to always test that my code worked on every platform and every configuration after each update. I believe this saved me big headaches: there were two times when one platform wasn’t working, but because the changes were fresh and I knew exactly what I’d just changed, I was able to resolve them fairly quickly.

To move the object I used a very simple physics equation: in the UpdateSimulationBasedOnInput() function I set the velocity to m_speed if a key was pressed for that direction (i.e. if ‘A’ was pressed then velocity.x = -m_speed), zero when not pressed, and zero if opposing direction keys were pressed at the same time. Then in UpdateSimulationBasedOnTime(), position += velocity * time (because this is the function that’s given the time variable, while the other function is where input updates are received). As with most games, the simulation (or game) update happens more slowly than rendering (in this case the simulation updates every ~0.066 seconds), which is why the movement appears jumpy. To get around this you could obviously speed up the simulation time, but in most games the update rates still differ, so the ideal solution is to calculate a predicted position and send that to the renderer. This can be done in the submit-to-renderer step, as that function is given the time since the last simulation update, so a more accurate prediction can be made. (I have done this in the next assignment.)
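A sketch of the two updates (the key query and the member names are simplified):

// Input update: turn key state into velocity
void UpdateSimulationBasedOnInput()
{
    m_velocity.x = 0.0f;
    if (IsKeyPressed('A')) m_velocity.x -= m_speed;
    if (IsKeyPressed('D')) m_velocity.x += m_speed;  // both pressed cancels to zero
    // ... same pattern for 'W'/'S' on m_velocity.y ...
}

// Time update: integrate position over the elapsed simulation time
void UpdateSimulationBasedOnTime(const float i_elapsedSecondCount_sinceLastUpdate)
{
    m_position += m_velocity * i_elapsedSecondCount_sinceLastUpdate;
}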

I also created, though it was not required, a 3DObject class to manage the mesh, effect, and other variables needed for rendering a 3D object.

One last note: when rendering, because we always want our UI elements (or 2D sprites) to be visible, we always draw our 3D objects/meshes first. This ensures that our UI (player HUD, menus, etc.) is always visible and accessible to the player.