EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Final Project: Matching Game!

My FINAL game: simple one-click download below, unzip, play, enjoy!

NancyNewren_ExampleGame FinalProject!

What’s new
It’s a game!

Controls
Main object: Use arrow keys to move the 3D object to its matching card.

About the Game

Okay, so it’s not pretty, but sometimes you just need to get to proof of concept; beautifying the game can come later.

When I was making educational games for Waterford Research Institute I made a lot of mini games for preK curriculum: letter/word association, number recognition, letter matching, and even an ethics game. I really enjoyed making these simple-to-play games for 4-5 year olds. They were fun enough that even my older play testers (20-year-olds) liked them. However, I never got to make a shape matching game. But now I have!

This was fun to make, especially since it was in my engine, but it was slightly more complicated than I anticipated.

The first thing was paring down the game to its minimum essentials: for instance, having animations wouldn’t matter if I didn’t have a game first. So what was essential? Having four objects with four matching cards; being able to detect a right and a wrong guess; and responding to each guess (moving on to the next object for a right guess, feedback to let you know it was wrong otherwise). Once that was determined I simply needed to set up the graphics objects and then create the basic right/wrong logic with simple collision detection.

Everything we needed to draw to the screen was already done, and the player controls and the essential game loop code (like the update functions in Unity) had been supplied in previous assignments; any game logic beyond that, though, I had to write myself.

I don’t think I had noticed previously how heavily a game relies on its graphics components, though. Without those, regardless of any clever logic you may have written, you don’t have much of a game. Finding, creating, editing, and importing all the assets into my engine took up the majority of my time.

Honestly, I think the most difficult thing was just tracking all the different assets. In game engines such as Unreal and Unity it is easier to track them, I think, because there are prefabs and an editor to help you create your arrays of assets. However, creating the c3DObject class to manage everything for my 3D objects (mesh, effect, texture, transform, speed, and even id) made it much simpler to program the game. When I was trying to detect whether or not an object matched a card, all I had to do was give them both the same id in their c3DObject instances. Which brings up something: the cards, though flat, were still meshes. As I didn’t want them drawn like UIElements, but to interact with the objects in world space, I made them c3DObjects too. The only things I’d add to the engine would be a UIElement class (akin to the c3DObject class, but fancier, since I’d need to track scaling vs. anchoring, etc.) and the idea of prefabs; with those in place things would be much easier to track. Well, I’d also add an animation project to make animations simpler too.
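
A simplified sketch of the idea (the real class has more to it, and the exact member names differ):

// One object that owns everything needed to place and draw a 3D thing
class c3DObject
{
public:
    eae6320::Graphics::cMesh::Handle m_mesh;            // geometry
    eae6320::Graphics::cEffect* m_effect = nullptr;     // shaders + render state
    eae6320::Graphics::cTexture::Handle m_texture;      // image drawn on the mesh
    eae6320::Physics::sRigidBodyState m_rigidBodyState; // transform / motion
    float m_speed = 0.0f;                               // movement speed
    unsigned int m_id = 0;                              // matching object & card share an id
};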

The only time I wished I’d done something differently in a previous assignment was the way I stored graphics elements in my game logic. Before I even started coding the game, I did some refactoring in my game logic to make my graphics elements easier to track.

I created two effects, for right and wrong guesses, that tint the cards and make them transparent. Then it was a simple matter of swapping out the regular effect (which everything 3D was drawn with) for the right/wrong effect. That decision also made it easy to animate the ending flash, as I just swapped the right and regular effects back and forth. If I had instead passed a color to a single effect to draw right/wrong, it would have affected all my objects, and not just the one I wanted.

The other difficulty was randomizing the order of everything while keeping track of it all: picking four (of seven) random objects to be the match objects, making sure the four cards were the four matching ones, and then drawing the four matching cards in a random order on screen. Here’s how I chose my random objects:
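
Roughly, it amounted to shuffling the seven object indices and taking the first four (a sketch, not the exact code):

#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Pick four of the seven meshes, in random order, to be this round's match
// objects; a second shuffle like this one orders the four cards on screen.
std::vector<unsigned int> PickRandomMatchObjects()
{
    std::vector<unsigned int> allIndices( 7 );
    std::iota( allIndices.begin(), allIndices.end(), 0u ); // 0, 1, ..., 6
    std::random_device rd;
    std::mt19937 rng( rd() );
    std::shuffle( allIndices.begin(), allIndices.end(), rng );
    return std::vector<unsigned int>( allIndices.begin(), allIndices.begin() + 4 );
}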

The collision was interesting. I didn’t do what fancy game engines do (detect whether bounding boxes have collided); what I did was much, much simpler! My local meshes’ origins were all at the origin, which meant that their positions were all in world space. And since I didn’t care when their boundaries collided, but rather when their local origins were within a certain range of each other, all I needed to do was check whether a matching object was within a certain distance of the card. To optimize further: since square roots are inefficient I only compared the distance squared, I only checked the player object against cards that hadn’t been matched yet, and I didn’t check for collisions at all while player movement input was deactivated. I added the DistanceSquared method to the Math::sVector class.

float distSquared = eae6320::Math::DistanceSquared(
    i_pObj1->m_rigidBodyState.PredictFuturePosition( i_elapsedSecondCount_sinceLastUpdate ),
    i_pObj2->m_rigidBodyState.PredictFuturePosition( i_elapsedSecondCount_sinceLastUpdate ) );

return distSquared <= m_distAmountSquared;
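
The method itself is tiny: the squared distance is just the difference vector dotted with itself. A sketch, assuming sVector exposes x, y, and z:

float eae6320::Math::DistanceSquared( const sVector& i_lhs, const sVector& i_rhs )
{
    // (lhs - rhs) . (lhs - rhs), with no sqrt
    const float dx = i_lhs.x - i_rhs.x;
    const float dy = i_lhs.y - i_rhs.y;
    const float dz = i_lhs.z - i_rhs.z;
    return ( dx * dx ) + ( dy * dy ) + ( dz * dz );
}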

The last thing I did was add in the sprite images upon a correct solution. This had multiple purposes: 1) it hid the no-longer-drawn match object, 2) it hid the sudden jump of the next match object to the starting player position, and 3) it gave some fun, and funny, feedback to the player that they’re on the right track.

If I had had more time (I did this all in about 15-20 hours), I would have added:

  1. Movement animations
     - For instance: locking the “match” object to the center of the card before checking for a match, then moving it back to the beginning
  2. Setting boundaries the player can’t move outside of
  3. Beautifying the game: better match-object models and cards, and a better background
  4. Fun audio
  5. Lighting (I really wanted to do lighting)

I enjoyed how it turned out though, and so did my play testers. 🙂

What I learned this semester

What I think the goal of the assignments was

I think the most important goal of the assignments was to teach us what good interfaces can do for us. For instance, once we set up our one interface that, behind the scenes, dealt with the different platforms, we didn’t write any platform-dependent code. That is very cool! We didn’t have to think about what each platform might need; instead we only talked to our interfaces and let them do their job.

We learned about building good asset pipelines. At the beginning we only had shaders, but by the end we had textures, meshes, and shaders. We separated our algorithms from our data. I really liked this (especially creating the binary files!). Doing this makes it much faster and easier to change our assets, since data is easier to update than code and doesn’t require a recompile: we only need the built asset, and it plugs into our code.

Another big point was to write code that is human-readable, understandable, and easy to debug. Ultimately, whether it is someone else or your future self, someone is going to have to read your code to understand how to interface with it. If that programmer can’t understand and debug your code (which is really sad when it’s your own code and you can’t!), they’re either going to have to spend a long time understanding it or rewrite it, which for a large amount of code takes a lot of time. But if you make it human-understandable, they should quickly grasp what the code is doing and be able to code against it, which is very important!

What I got out of this class

I learned so much this semester. I used to always be scared whenever I had to set up build dependencies and references, unsure what the differences were and why they mattered. I never felt like I could do it on my own. But I don’t feel afraid anymore! In fact, I learned it well enough that I was able to explain the differences to someone else and help him troubleshoot his code, and to recognize when another of my classmates had done it incorrectly in his own code. I say that not to point out that he did something wrong, as we all make mistakes, but to show that I now know enough to know when it is done incorrectly. And that is also not to say that it’s always easy, but I feel confident that I can figure it out, and NOT PANIC when I get linker errors! 🙂

I learned…

A lot about graphics for a non-graphics class (I actually understand, at a very high level, what the graphics card does now).

A different way to essentially get shared pointers without the overhead of std::shared_ptr: doing the reference counting ourselves.

Oooh, and after class one day I asked JP how shaders compute absolute value so quickly, and so I became aware of the intrinsic functions of the graphics card.

Also, D3D has different debug output in VS than OpenGL.

The difference between handles and pointers.

A side effect of all that we did in the code, for me, was using the C++11 standard, as I’d never worked on projects that used it before (we were in the dark ages, I suppose 😉 ).

The (unfortunately short) discussion we had about lighting was very interesting. I hadn’t thought about digital lighting as taking away light, but it made sense.

I also think I would enjoy doing graphics more than I thought I would. Every time we got into talking about graphics I got really excited. The math is pretty basic, but can also be really interesting. I really thought I would hate the discussion on lighting, but I found it fascinating.

My thoughts on good engineering architecture and design

I usually like to walk the line between very structured design and “design as you go.” However, I knew there would be a lot of refactoring from assignment to assignment, and there’s nothing I hate more than having to undo complicated architecture; since I also didn’t know what the end goal was, I purposely chose to err on the side of “design as I go.” Otherwise I would have drawn out a more deliberate architecture up front. There was only one assignment where I wished I had architected more in a previous assignment, but even then, my friend who chose more architecture in the previous assignment only finished about an hour before me, and I know he spent more time building the architecture in the first place. So it was basically a wash.

I’ll say this about good architecture: while programming my game I decided early on to have GetMatchObject(const unsigned int i_index) and GetCard(const unsigned int i_index) functions. This meant that later, when I changed how I retrieved cards, I only had to change it in one place: my “interface” function.
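
In sketch form (the container names are illustrative):

// Callers never touch the underlying storage, so changing how cards are
// stored or retrieved only ever touches these two functions.
c3DObject* GetMatchObject( const unsigned int i_index ) { return m_matchObjects[i_index]; }
c3DObject* GetCard( const unsigned int i_index ) { return m_cards[i_index]; }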

A good software architecture allows for flexibility in adding new items to the code, but isn’t so abstract and complicated that it is unclear what the interfaces are for; i.e., it’s easy to read, understand, and debug, and allows for quick and easy iteration.

A good design will be platform-independent, so that you only have to write code once! A bad design will have multiple files with only small changes in each for every platform (whether the platforms are OSes, websites, or databases). A good design will allow for quick creation of new types that fit into the same interface; a bad design will require a new object or interface for each new type. A good design is faster to refactor; a bad design requires gutting the interior for even small changes. On the other hand, a good design also isn’t over-architected. For instance, there is no need to create a reusable interface (no matter how neat) if you are only ever going to have one type that uses it. Good design is about being aware of which parts of the code need to be reusable, what needs to be interfaceable, and what can be done top-down, as it were, because it’s not going to be used more than once.

Acknowledgements

Thanks to Zeno for letting me bounce ideas off of him, and my Dad for proofing my write-up.

Many thanks also to my play testers who really just had fun playing: Zeno Saviour, Monika and Erik Tolman, my sister Deborah, and my brother Zeke (who tested it on Linux with Wine, and it worked!).

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 14: Transparency in Meshes

This week’s game: simple one-click download below, unzip, play, enjoy!

NancyNewren_ExampleGame 14

What’s new
I added two transparent objects!

Controls
Main object: Use arrow keys to move main object left/right/up/down
Camera:  
Left/Right (A/D), Up/down (W/S), Zoom in/out (Q/E)

(When moving the camera things in the world will appear to move opposite. Thus when you move camera left, the world will appear to move right.)

 About the Project

This time around I gave my meshes the ability to be transparent! It was fun.

The first thing was to create a shader and an effect that allow for transparency. Next I created two objects drawn with them, but drew all the meshes in whatever order they were submitted to the render thread. That works fine for opaque objects; for transparent objects, however, if a transparent object is drawn before an object behind it, the object behind doesn’t show through properly, i.e. you can’t see an object behind a transparent one unless the object behind is drawn first.

This means that all transparent objects have to be drawn back to front. The order of the opaque objects doesn’t matter, though (with depth buffering on, at least), as long as they are all drawn before any transparent objects.

So, to order the draw calls: after submission to the render thread is over and before the swap, I sort them. How? First I add the index of each opaque object (I just ask the mesh’s effect whether it is opaque, nicely of course) to a vector in the order they were submitted, and put the transparent meshes in a separate vector along with their index and their camera-space z value. That z value is the key to sorting these meshes quickly: camera space puts the camera at the origin facing forward, so I immediately know which meshes are in back and which are in front based solely on their z values. Then a simple std::sort with a lambda, remembering that forward is negative z (i.e. the most negative z values are the farthest from the camera), quickly sorts them back to front. Once sorted, I append the index values of the sorted transparent meshes to the end of the previously mentioned vector (the one with all the opaque meshes in front, to be drawn first). Then, when rendering the meshes, I draw them in the order of the sorted draw calls.
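
In sketch form (the struct and function are stand-ins for my actual bookkeeping):

#include <algorithm>
#include <cstddef>
#include <vector>

struct sTransparentDraw
{
    size_t index;       // index into the submitted draw-call list
    float cameraSpaceZ; // z after the world-to-camera transform
};

// Opaque indices were already appended to io_drawOrder in submission order;
// append the transparent ones sorted back to front (forward is negative z).
void AppendSortedTransparentDraws( std::vector<sTransparentDraw>& io_transparent,
    std::vector<size_t>& io_drawOrder )
{
    std::sort( io_transparent.begin(), io_transparent.end(),
        []( const sTransparentDraw& i_lhs, const sTransparentDraw& i_rhs )
        {
            // Most negative z (farthest from the camera) first
            return i_lhs.cameraSpaceZ < i_rhs.cameraSpaceZ;
        } );
    for ( const auto& draw : io_transparent )
        io_drawOrder.push_back( draw.index );
}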

I did the sorting on the render thread; with a game this small it doesn’t really matter which thread does it. In a real game, however, I would profile both threads and give the sort to whichever one has the most time to spare.

 Acknowledgements

I asked my brother how to use std::sort with my vector<struct>, as I hadn’t done a templated sort before. He set me on the path with the Google search I needed, and I whipped out a lambda compare for my sort in no time. Thanks, bro!

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 13: Build System!

This week’s game: simple one-click download below, unzip, play, enjoy!

NancyNewren_ExampleGame 13

What’s new
Nothing from the player’s perspective. Just built using my new build system!

Controls
Main object: Use arrow keys to move main object left/right/up/down
Camera:  
Left/Right (A/D), Up/down (W/S), Zoom in/out (Q/E)
(When moving the camera things in the world will appear to move opposite. Thus when you move camera left, the world will appear to move right.)

About the Project

I freaking love my new build system! It saves so much time! I have a file where I list all the graphics assets that need to be built for the game. The build system checks each asset on the list to see whether it already exists and, if so, whether the source or the builder itself is newer, and it only builds when necessary. That means that once I build everything (which can take several minutes), building again takes less than a second, because my system is being smart. It’s so nice!
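
The up-to-date check boils down to comparing timestamps; a sketch of the decision (std::filesystem stands in here for the engine's own platform layer):

#include <filesystem>

// Build only if the target is missing, or if the source asset or the
// builder executable is newer than the last build.
bool ShouldBuild( const std::filesystem::path& i_source,
    const std::filesystem::path& i_builder, const std::filesystem::path& i_target )
{
    namespace fs = std::filesystem;
    if ( !fs::exists( i_target ) )
        return true; // never built before
    const auto targetTime = fs::last_write_time( i_target );
    return ( fs::last_write_time( i_source ) > targetTime )
        || ( fs::last_write_time( i_builder ) > targetTime );
}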

There are big advantages to having the list of assets to build in its own file: it is quick and easy to update, and you don’t have to search through other code. It’s also easy to spot bugs, and to add new features.

Of course this list isn’t everything that needs to be built: there are licenses and settings files that need to be copied over into the playable game folder, but the system checks for those as well. It saves so much time! Especially since we have two different platforms and two different build configuration types, which means we have to build and test for four total different build configurations. When you have to wait several minutes every time you build, every time you test a simple change, it very quickly adds up to taking hours! I’m so grateful for this new build system. Thanks JP for having us do it! 🙂

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 12: Binary Mesh Assets

Download this week’s game with the simple one-click download below! Once downloaded, unzip, play the executable, enjoy!

NancyNewren_ExampleGame 12

What’s new
Different main mesh: a Christmas cactus for my friend Felicia! Bring on the happy!

Controls
Main object: Use arrow keys to move main object left/right/up/down
Camera:  
Left/Right (A/D), Up/down (W/S), Zoom in/out (Q/E)
(When moving the camera things in the world will appear to move opposite. Thus when you move camera left, the world will appear to move right.)

About the Project

The advantage of a human-readable file is that you can read, understand, and debug it just by looking at it, but it takes longer to load and is a bigger file. So the idea is to store and load the assets in binary for two reasons: 1) it is more efficient (faster) to load from binary, and 2) it is more economical (smaller files) to store them that way. So our build and source files stay human-readable, since a human needs to be able to interpret and debug them, but our run-time files are binary, because by then we’ve done all our testing and debugging and only the computer needs to be able to read them.

There are four pieces of data that each mesh needs: the number of indices, the indices, the number of vertices, and the vertices. I chose to write the number of indices first (since the binary reader needs to know how many indices there are so it can jump to the right place in the data), then the indices, since they take up less space and are easier to read in a binary format, being integers. Next comes the number of vertices, again because the reader needs it to know the size of the memory, and then the vertices themselves. I could have written number of indices, number of vertices, indices, and then vertices, or any number of other configurations, but regardless of the order, the counts always need to be stored before the index and vertex data.
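
As a sketch, the build-time write looks something like this (the 16-bit counts and the sVertex layout are illustrative assumptions):

#include <cstdint>
#include <fstream>

struct sVertex { float x, y, z; uint8_t r, g, b, a; }; // illustrative layout

// Write each count immediately before the data it describes
void WriteBinaryMesh( std::ofstream& io_file,
    const uint16_t* i_indices, const uint16_t i_indexCount,
    const sVertex* i_vertices, const uint16_t i_vertexCount )
{
    io_file.write( reinterpret_cast<const char*>( &i_indexCount ), sizeof( i_indexCount ) );
    io_file.write( reinterpret_cast<const char*>( i_indices ), sizeof( uint16_t ) * i_indexCount );
    io_file.write( reinterpret_cast<const char*>( &i_vertexCount ), sizeof( i_vertexCount ) );
    io_file.write( reinterpret_cast<const char*>( i_vertices ), sizeof( sVertex ) * i_vertexCount );
}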

Here is my floor’s binary file with the above listed order:

By default the human-readable files are in OpenGL format. Previously I swapped the UVs and winding order after loading the human-readable files on D3D, but since I want a quick load at run-time, that swapping now happens while building the binary. So while the human-readable files remain the same, the built meshes differ and cannot be swapped across platforms; it does, however, make loading a binary file platform-independent. This is roughly how I load the binary data:
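
In sketch form (mirroring the write order above; i_fileContents is the buffer the whole file was read into, and sVertex is as in the write sketch):

// No per-element parsing: read the file once, then point into the buffer
void InterpretBinaryMesh( const void* i_fileContents )
{
    const char* currentOffset = static_cast<const char*>( i_fileContents );
    const uint16_t indexCount = *reinterpret_cast<const uint16_t*>( currentOffset );
    currentOffset += sizeof( indexCount );
    const uint16_t* const indices = reinterpret_cast<const uint16_t*>( currentOffset );
    currentOffset += sizeof( uint16_t ) * indexCount;
    const uint16_t vertexCount = *reinterpret_cast<const uint16_t*>( currentOffset );
    currentOffset += sizeof( vertexCount );
    const sVertex* const vertices = reinterpret_cast<const sVertex*>( currentOffset );
    // ... hand indices/vertices straight to the platform-specific buffer creation ...
}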

Lastly, enough talk! Let’s see the two advantages of the binary files in action: I made an object with an index count of 50,436 and a vertex count of 33,616. Here’s how the two formats compare:

                          Human-Readable File    Binary
Size of file              4.61 MB                886 KB
Time to load (seconds)    0.256                  0.002234

Acknowledgements

Thanks to Zeno, who once again helped me with some of the C++ basics I’ve forgotten. It’s been a long time since I did file I/O with C++, and he was very helpful.

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 11: Made Mesh Assets with Maya, Added Textures to 3D Objects

Download this week’s game with the simple one-click download below! Once downloaded, unzip, play the executable, enjoy!

NancyNewren_ExampleGame 11

What’s new
Gameplay movement is the same but faster, and I updated the 3D objects’ colors and shape and added textures! Time for winter!

Controls
Main object: Use arrow keys to move main object left/right/up/down
Camera:  
Left/Right (A/D), Up/down (W/S), Zoom in/out (Q/E)
(When moving the camera things in the world will appear to move opposite. Thus when you move camera left, the world will appear to move right.)

About the Project

This week I created a Maya plugin, based on code provided by our professor, that exports objects created in Maya to the Lua-based human-readable file format I made last week; the exported meshes are then imported into the game.

The Maya plugin code was a project we added to our solution called MayaMeshExporter (MME). After importing it into my solution and assigning the proper property sheets, as well as setting the path environment variables for Maya and for the Maya plugin, no other settings were required (i.e. no project build dependencies or references). This is because, while the game relies on the output from Maya via the plugin for its mesh data, the output of the MayaMeshExporter is the plugin for Maya, not the mesh data itself. So once the exporter is built we can manually export our objects, and then build the solution to have the MeshBuilder project “build” the mesh files. Essentially, as long as we have files for our meshes at build time, either from the MME or made by hand, we have what we need for the game. And since the MME doesn’t rely on any of the projects in our solution, no references were needed.

I think it’s neat that you can use Visual Studio to debug running programs. I did this with my Maya plugin: I put a breakpoint inside initializePlugin, which Maya calls when it loads the plugin.
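
For reference, the entry point looks something like this (the vendor/version strings and the registration step are illustrative):

#include <maya/MFnPlugin.h>
#include <maya/MObject.h>
#include <maya/MStatus.h>

// Maya calls this when it loads the plugin: a natural place for a breakpoint
MStatus initializePlugin( MObject io_object )
{
    MFnPlugin plugin( io_object, "Nancy Newren", "1.0", "Any" );
    // ... register the exporter's file translator with Maya here ...
    return MStatus::kSuccess;
}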

I’ve used this (attaching the debugger to a process) frequently when debugging code for Unity- and Unreal-based projects. It’s incredibly helpful to be able to step through your code this way.

With the MME working, I had a plugin to export data. So I created my plane and a new object for the player game object in Maya, set some vertex colors, and exported. This is what I had:

Maya creates more data than I was using: tangents, bitangents, texcoords, and normals. I chose to export all of them, since it was simple to write them into my human-readable file, and the mesh import just ignores them. I did this so that if I choose to use these things in my game in the future, it’s one less step I’ll have to code. That turned out to be helpful when I added textures to my 3D objects…

The last thing I did was add textures to the meshes. This was a fairly straightforward process, if one with several steps, since I had done it previously for the UI elements, and it didn’t take that long. The change touched code from graphics (the renderer thread, meshes, and the shaders), to the game graphics objects (where the 3D objects are stored), to the game itself, but I’m really liking the result!

After applying textures to the meshes the game looks like this:

And… it’s ready for snowfall!

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 10: Mesh Asset Now Loads from File Using Lua, Still in 3D!

This week’s game. One-click download, unzip, play, enjoy! 😀

NancyNewren_ExampleGame 10

What’s New
The only gameplay changes from last week are the colors. Enjoy! 😀

Controls
Main object: Use arrow keys to move
Camera:  
Left/Right (A/D), Up/down (W/S), Zoom in/out (Q/E)
(When moving the camera things in the world will appear to move opposite. Thus when you move camera left, the world will appear to move right.)

About the Project

This week I moved the creation of mesh assets from hard-coded data to Lua files… and then played with updating the colors just by editing the Lua files. It was fun! (At least the last part. Ha ha!) Having Lua files for the assets is really neat because it opens up the possibility of other tools and programs creating the asset files. More than making asset files data-driven, though, the idea of the raw asset file is to be human-readable: easy to read, understand, and, most importantly, debug. In the future I’ll convert the human-readable file into a binary one, but the important thing is that when you run into a problem with an asset, you want a human-readable file you can quickly and easily read and understand, so you can get straight to debugging.

For instance, this is my floor mesh file:
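
A minimal hand-written sketch of the format (illustrative values):

return
{
    vertices =
    {
        { position = { -1.0, 0.0, -1.0 }, color = { 1.0, 1.0, 1.0, 1.0 } },
        { position = { -1.0, 0.0,  1.0 }, color = { 1.0, 1.0, 1.0, 1.0 } },
        { position = {  1.0, 0.0,  1.0 }, color = { 1.0, 1.0, 1.0, 1.0 } },
        { position = {  1.0, 0.0, -1.0 }, color = { 1.0, 1.0, 1.0, 1.0 } },
    },
    indices = { 0, 1, 2, 0, 2, 3 },
}

(This is Lua, so it reads as data: an array of vertices, each grouping a position and a color, plus an array of indices.)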


The order of the indices in the index buffer matters, so it was an easy decision to make the indices an array-type table. For the vertices: since the order of the indices matters, the vertices, though they can be listed in any order, must be listed in a predictable one, so the vertices table also needed to be an array (a Lua dictionary-style table has no predictable order). I decided to group each vertex’s values together (i.e. a vertex is defined as a position and a color), as I believed this would make debugging simpler when there are over 100 vertices.

My final decision was to use an array-type table for both position and color values: positions in math are always ordered (x, y, z), and anyone who knows graphics, on seeing 3 (or 4) color values, expects (r, g, b, (a)), so I didn’t feel the need to make the color and position tables dictionaries, though it could have been done. I felt it was more readable to see the values without extra “x =” and “a =” cluttering the tables.

Additionally

The Lua asset loading took longer than I expected to implement. I thought the hardest part would be switching meshes from pointers to handles (since they now load from file like the textures), but it turned out to be reading in the Lua files! It’s hard to keep track of the stack! I got so worried I’d lose track of it that I finally drew myself a picture of all the load and pop points, and then coded them right on the first try! Booyah!
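
The pattern that picture encoded, roughly (a sketch using the Lua C API, with error handling omitted; the key is that every push is matched by a pop):

#include <cstdint>
#include <lua.hpp>

// Assumes the mesh table is at the top of the stack when called
void ReadIndices( lua_State* L )
{
    lua_getfield( L, -1, "indices" );          // push the indices table
    const auto indexCount = luaL_len( L, -1 ); // number of array entries
    for ( lua_Integer i = 1; i <= indexCount; ++i )
    {
        lua_rawgeti( L, -1, i );               // push indices[i]
        const auto index = static_cast<uint16_t>( lua_tointeger( L, -1 ) );
        // ... store index in the mesh's index buffer ...
        lua_pop( L, 1 );                       // pop indices[i]
    }
    lua_pop( L, 1 );                           // pop the indices table
}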

When I started implementing the Lua file loading, I confused myself quite completely. Zeno straightened me out, though, so I didn’t waste any time. I’ve also struggled a bit throughout the semester remembering all the C++ syntax, as it had been about three years since I last coded in it regularly. While I have been improving all semester, I still got stuck on a couple of things, but Zeno was right there with the pieces of C++ syntax I needed, right when I needed them.

One More Thing!

Since the Lua files store color as values in [0.0f, 1.0f] and the vertex struct expects uint8_t’s, there is a simple conversion: float * 255. But since computers truncate rather than round, you can end up with numbers like 234.99, and the uint8_t will store 234, not 235. An incredibly simple fix: after the multiplication, just add 0.5f. Then 234.99 becomes 235.49, which truncates to 235. Done and done.
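
As a one-line helper (a sketch):

#include <cstdint>

// Round-half-up conversion from a normalized [0.0, 1.0] channel to uint8_t
inline uint8_t NormalizedFloatToUint8( const float i_value )
{
    return static_cast<uint8_t>( ( i_value * 255.0f ) + 0.5f );
}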

Cool Stuff!, EAE 6320-001 GameEngII Game Engine Project

Assignment 9: Now in 3D!

NancyNewren_ExampleGame 09

(Simple one-click download. Once downloaded, unzip, play the executable, enjoy!)

Movement

Main object: Use arrow keys to move left/right and up/down
Camera:  
Left/Right (A/D), Up/down (W/S), Zoom in/out (Q/E)
(When moving the camera things in the world will appear to move opposite. Thus when you move camera left, the world will appear to move right.)

About the Project

This week I completed the transformation of the game to 3D. To accomplish this I added a third dimension, depth, and then added three matrix transformations to take 3D objects from their local space (at their origin and facing forward), to world space (where the object sits relative to the origin of the world), to camera space (where the object in the world sits relative to the observer, or camera).

Local space is essentially where a 3D mesh’s vertices are in relation to its origin; we place the object facing forward by convention, so we automatically know how to rotate things and how they should appear without even seeing them first.

World space is where an object is in the world of the game: if we have a city then we have streets and buildings, and we may use the same mesh, with the same local space, to represent all those buildings, but in the world they can be all over the map. World space places an object on the map of the game.

Finally we have an observer: world space really doesn’t mean anything until we have one. In 2D, what we draw to screen is the world we’ve made, but in 3D what we draw depends entirely on the perspective we are looking from, since we are taking something with three dimensions and flattening it to two. Without an observer there’s no way to know how to flatten the world, so there must be one; we call the observer the camera. Since the camera is looking at the world, when the camera moves the world appears to move in the opposite direction (up:down, left:right, forward:back, clockwise:counterclockwise, etc.). To keep the math simple, and since moving the camera is just moving the world inversely to the camera, we take the world to the camera.

The last transform we need then is the one to flatten the world to render on screen.

Here’s how the platform-independent transform from local space to world space (localToWorld) looks in the shader:

float4 verPosWorld = MultiplyMV(g_transform_localToWorld, float4(i_position, 1.0));
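
Extending that line, the whole chain in the vertex shader looks something like this (the other two uniform names are my shorthand here, not necessarily the exact ones):

float4 verPosWorld = MultiplyMV( g_transform_localToWorld, float4( i_position, 1.0 ) );
float4 verPosCamera = MultiplyMV( g_transform_worldToCamera, verPosWorld );
o_position = MultiplyMV( g_transform_cameraToProjected, verPosCamera );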

The last thing I did was implement depth buffering, so that instead of each mesh simply covering up whatever was drawn before it, meshes can cover parts of each other. This lets the plane and the cube render intersecting (without depth buffering, the cube or the plane would simply cover the other up):

Additionally…

I chose to make the plane double-sided: I used the same four vertices but gave the index buffer 12 indices (instead of six), so that you can see the plane from underneath as well as from on top, while the plane is still flat (no sides).

I didn’t have time for any optional challenges this week. I had wanted to do camera rotation, but I ended up spending my extra hours on a Visual Studio bug. I was relieved to discover that the bug wasn’t in my implementation, but still the time was gone. Maybe next time.

Cool Stuff!, EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 8: Moving Towards 3D

NANCYNEWREN_EXAMPLEGAME 08
(Simple one-click download. Once downloaded, unzip, play the executable, enjoy!)
Controls: Use the typical WASD to move the trapezoid and see what shapes you make!

ABOUT THE PROJECT

For this week’s project I created a basic 2D mesh that I will extend to 3D next week. I started by copying the 2D sprite and shader code directly and then adding/changing things to make it a mesh. Because this required a lot of steps, I did my best to work in chunks as small as possible and to test that my code worked on every platform and every configuration after each update. I believe this saved me big headaches: there were two occasions when one platform wasn’t working, but because the changes were fresh, and I knew exactly what I’d just changed to cause the bug, I was able to resolve both fairly quickly.

To move the object I used a very simple physics equation. In UpdateSimulationBasedOnInput() I set the velocity to m_speed if a key for that direction was pressed (i.e. if ‘A’ was pressed then velocity.x = -m_speed), to zero when not pressed, and to zero if opposing direction keys were pressed at the same time. Then in UpdateSimulationBasedOnTime(), position += velocity * time (because this is the function that is given the time variable, while the other function is where input updates are received). As with most games, the simulation (or game) update happens more slowly than rendering (in this case the simulation updates every ~0.066 seconds), which is why the movement appears jumpy. You could obviously get around this by speeding up the simulation, but in most games the update rates still differ, so the ideal solution is to calculate a predicted position and send that position to the renderer. This can be done in the submit-to-renderer step, as that function is given the time since the last simulation update, so a more accurate prediction can be made. (I have done this in the next assignment.)
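
A sketch of the two updates (member and helper names assumed):

// Input updates arrive here: set velocity, don't move anything yet
void cExampleGame::UpdateSimulationBasedOnInput()
{
    m_velocity = eae6320::Math::sVector( 0.0f, 0.0f, 0.0f );
    if ( UserInput::IsKeyPressed( 'A' ) ) m_velocity.x -= m_speed;
    if ( UserInput::IsKeyPressed( 'D' ) ) m_velocity.x += m_speed;
    if ( UserInput::IsKeyPressed( 'W' ) ) m_velocity.y += m_speed;
    if ( UserInput::IsKeyPressed( 'S' ) ) m_velocity.y -= m_speed;
    // Opposing keys cancel to zero thanks to the paired -=/+=
}

// The time variable arrives here: integrate position
void cExampleGame::UpdateSimulationBasedOnTime( const float i_elapsedSecondCount )
{
    m_position += m_velocity * i_elapsedSecondCount; // position += velocity * time
}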

I also created, though it was not required, a 3DObject class to manage the mesh, effect, and other variables needed for rendering a 3D object.

One last note: when rendering, because we always want our UI elements (2D sprites) to be seen, we always draw our 3D objects/meshes first. This ensures that our UI (player HUD, menus, etc.) is always visible and accessible to the player.

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 7: Platform Independent Shaders

NancyNewren_ExampleGame 07

Hold space to change the humerus texture and find your funny!

(Simple one-click download. Once downloaded, unzip, play the executable, enjoy!)

About the Project

I thought this assignment was fun. I like working with macros, and since we can’t have separate per-platform implementation files for shaders like we do for our other code, I got to use C macros to make my shaders platform-independent. Declaring variables is simple enough. For instance, in my platform-independent shader, with the macros, this is how I declare a texture:

DeclareTextureObj( g_color_texture, 0 );

The functions were a bit trickier, but this is how I sample a color from the texture:

const float4 sampledColor = TEXTURE( g_color_texture, i_texcoord, g_color_samplerState );

OpenGL has a reserved, required variable “gl_Position”; to manage this I set up a macro so that in my shader all I have to do is this:

o_position = float4( i_position.x, i_position.y, 0.0, 1.0 );

and the macro resolves “o_position” to “gl_Position”.
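
For illustration, the macros might expand along these lines per platform (a sketch; the actual definitions in my engine differ in details):

#if defined( EAE6320_PLATFORM_D3D )
    #define DeclareTextureObj( i_name, i_slot ) Texture2D i_name : register( t##i_slot )
    #define TEXTURE( i_texture, i_texcoord, i_samplerState ) i_texture.Sample( i_samplerState, i_texcoord )
#elif defined( EAE6320_PLATFORM_GL )
    #define DeclareTextureObj( i_name, i_slot ) layout( binding = i_slot ) uniform sampler2D i_name
    // GLSL samplers carry their own state, so the sampler-state argument is unused
    #define TEXTURE( i_texture, i_texcoord, i_samplerState ) texture( i_texture, i_texcoord )
    #define o_position gl_Position
#endif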

I also decided to complete the optional challenge of enabling alpha transparency in my shaders. This was a very simple task: the render state is updated based on a bit in the render-state variable, so change that bit to 1 and you have enabled alpha transparency. I did this for my kitty-corner images.

The last thing I did was move and resize my images, as well as update the user-input mechanic so that while you hold the space bar you see one image, and when you release it, it changes back.

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 6: Textures and Reference Counting (again)

NancyNewren_ExampleGame 06

(Simple one-click download. Once downloaded, unzip, play the executable, enjoy!)

Press space to change the humerus texture and find your funny!

About the Project

The first thing I did was add files to the Graphics project to manage textures in code, and then add a TextureBuilder project (to build the textures, which are then loaded into the cTexture class from the files added to the Graphics project). There were no instructions given on how to do this. Normally I’d panic, because on previous assignments I’ve really struggled with this part, but this time I went in with the attitude that I could figure it out, and I did! There were two obvious references to add to TextureBuilder: DirectXTex (because TextureBuilder calls its functions), and AssetBuildLibrary (glaringly obvious, since one of the TextureBuilder classes inherits from one of AssetBuildLibrary’s). With those added there were still 10 linker errors, but I knew the missing reference didn’t belong on TextureBuilder, because the linker errors were complaining about AssetBuildLibrary (ABL), which needed a reference to the Platform project. It may seem strange that ABL didn’t need it in our previous assignment, but that’s because the projects that use ABL were adding Platform as a reference to themselves. Once I added Platform as a reference to ABL (where it actually should have been linked in the first place), I was able to remove the superfluous Platform reference from the other projects.

The other fun thing to figure out was the build dependency for TextureBuilder. AssetBuildExe uses TextureBuilder to build the textures, but AssetBuildExe doesn’t actually need TextureBuilder while it is being built; it only needs TextureBuilder when it is executed. And AssetBuildExe gets executed by BuildExampleGameAssets. That means that, even though AssetBuildExe uses TextureBuilder to build the textures, it’s actually BuildExampleGameAssets that has the build dependency on TextureBuilder, because BuildExampleGameAssets is what executes AssetBuildExe, which then uses TextureBuilder.

I added textures to the sprites for this assignment. Since I added textures, I needed UV coordinates to draw the textures on the sprite. Dealing with the UVs was fairly simple, as I assign them right where I assign the sprite’s vertices in normalized device coordinates. For both platforms, u goes 0->1. But for v, in D3D the values go 1->0 from top to bottom, and OpenGL is the opposite. So where I assign the x,y screen-space coordinates in the sprite, I assign the u,v coordinates as well, and the user doesn’t have to do anything for the u,v. Since I did this inside the scope where the x,y values are defined, there was no code sharing between the platforms (since I’m dealing with vertex winding and now the differing coordinates for uv, it was simpler to keep them separate). But all this is “under the hood,” so to speak, for the game programmer, so all I specified for the sprites was exactly the same as for the last version of the game.
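
A sketch of that per-platform branch (platform macros assumed; v follows the convention described above):

// Where the sprite's vertex data is filled in: u is 0 -> 1 on both platforms
#if defined( EAE6320_PLATFORM_D3D )
    const float v_top = 1.0f;
    const float v_bottom = 0.0f;
#elif defined( EAE6320_PLATFORM_GL )
    const float v_top = 0.0f;
    const float v_bottom = 1.0f;
#endif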

The other thing I did was use a handle for loading my textures. The handle ensures we don’t load the same asset from file more than once (using the path as the unique identifier); since loading from file is an expensive operation, this makes our game run more efficiently. When another object asks for that asset, instead of reloading it, we just pass a reference. This is why we had to increment the reference count for the handle manually. To think of it in other terms: we were passing the graphics thread the handle, not the texture itself, so instead of increasing the number of references on the texture, we increase it on the handle. A way to increment the reference count could really have been provided by the handle, but I think this was not done in order to emphasize that the handle isn’t the object itself.

I did, however, use the handle-provided way to decrement the reference count after rendering was done. The reason is that when the reference count reaches zero, the memory won’t be released properly unless you decrement through the handle’s own mechanism.

The last thing I did was add a space bar press and time change to two of the textures. It’s quite humerous.

Struggles and Helps

This entire project was much more involved than I anticipated, and I really struggled with the last requirements. I spent a long time on the first part: determining project dependencies and setting up my shaders correctly for drawing textures on my sprites. But all the thought I put into it let me really help one of my classmates understand the difference between build dependencies and references, so it was worth it.

I definitely wouldn’t have finished the project on time without Arpit’s help. I could not get submitting textures to work; my game was completely broken, and he helped me get it working so I could complete the assignment. He stayed up late to help me after he’d already finished his own assignment.