Ph.D. Adventures!

COVID-19 Craziness (Ph.D. Year 1, Spring Week 10-13)


Crazy couple of weeks! I went from a laid-back, lovely spring break and celebrating my wedding anniversary early because we didn’t have the kids that weekend (week 10) to the craziness that is the COVID-19 pandemic: online courses, increased workload, being locked indoors, homeschooling my stepchildren, trying to keep up with research (which we’ve had to redesign due to the pandemic), and major depressive episodes (week 13). Man, I’m tired.

For the Interested Reader:


Eliane and I are in the middle of redesigning our research project and updating our IRB application in light of the pandemic, school closures, and recent stay-at-home orders. We can no longer perform our field observations, as schools are shut down. I am continuing to work on the PhET simulation redesign based on our HCI and educational design theory, to test in a lab study, most likely as an A/B test.

Homeschooling two kids as a Ph.D. student is no joke. I’m pretty proud of what I’ve been able to accomplish despite losing half of my days, and I didn’t quit any of my classes. However, there has been a marked decrease in performance in my most time-consuming class. I simply don’t have time for it. I have been in conversations with my professor and he has given me some leeway, but not as much as I really need. He was really nice though, and told me he thought I was doing well in his class, especially considering my extra constraints since the pandemic. I am most likely switching that course to CR/NC to relieve some of the stress I’ve been feeling, which is in alignment with the advice I received from my doctorate mentor.

Eliane and Rogelio both independently sent me an email about the same fellowship which is meant to serve as an augmentation to other funding. I’m going to start working with them to apply for that.

What’s next

  • Finish up work on the IRB application. We only have a few sections left now.
  • Continue making updates to PhET
  • Apply for the SigHPC fellowship
  • Survive the rest of the semester
  • Reach out to my therapist


Not enough time in the day/week to accomplish all the extra tasks that have been put on my plate since the pandemic hit the U.S.

COVID-19 Craziness, and Life

Homeschooling my kids has thrown a huge hurdle in my path. I spend mornings and early afternoons with them, basically not accomplishing anything of my own (though I do attempt it), and then spend the rest of the day trying to fit my stuff in, including some self-care, and trying to get enough sleep. It’s not really working for me, but we don’t have any alternatives.

I’ve been struggling with a depressive episode triggered by the fallout of the pandemic (the pandemic itself hasn’t worried me, it’s been all the repercussions in my life and the lives of people I love that has caused it). So to look on the bright side, it’s a great time and opportunity to work on some old issues that resurfaced and to reflect on how far I’ve come! Depression isn’t a norm in my life anymore and I know that this will pass. In the meantime, I recognize it would be wise to get some professional support so I can stay safe and be the best mom/wife/sister/friend possible. I can’t do that if I don’t take care of myself.

I told Eliane I really want to focus more on games research and so I’m going to be working with Rogelio as my advisor, and that I want her on my committee. She was super supportive and I’m looking forward to having her on my committee as all my interactions with her have been extremely positive and she knows a lot about educational technologies which will be great insights to have. She was going to fund me over the summer before that conversation, but she has other students she’s working with that she may throw that funding behind now (which I completely understand and support), so I most likely won’t have a job this summer. I haven’t decided if that’s good or bad, but it’s definitely an added stress. My husband is an essential worker, but if he gets sick… well, I’m praying he stays well! 

Right as the pandemic started to spread in the U.S., I (made my kids) play the board game Pandemic. As a mathematics undergraduate, I learned about the basic SIR (Susceptible, Infected, Recovered) models used to study infectious disease. I wanted my kids to understand how infectious diseases spread, and that there are things that people, public health specialists, medical professionals, as well as scientists and innovators, can do to help when a pandemic happens. It wasn’t much of a success the first time; however, I played again with my daughter a few days later, and I am proud to say that we beat every outbreak quite handily.

In times of crisis, it’s important to focus on things that bring you hope and joy, and to find gratitude in the little moments. I love that playing games was recommended by health organizations as a way to cope with this crisis! #GamesForChange

The earthquakes that hit Utah really shook our home and did some damage, as well as really upsetting a member of my family. I feel like part of my job as wife/mom is calming down all the anxiety and stir-craziness that’s happening. So to anyone who is struggling: remember the good things, be grateful for the small things, be hopeful for the future of things, and remember it’s okay to have a bad day.

“Everything works out in the end. If it hasn’t worked out, it’s not the end!” -Unknown


Cool Stuff!, Ph.D. Adventures!

Jump on Those Spikes! (Ph.D. Year 1, Spring Week 9)


We went to the classroom this week and did an initial observation, just to get a feel for what it is like in the classroom and to get an idea of what it will be like when we’re in there gathering data. Lots of coursework, and some personal goals I’ve been working towards, are taking up a lot of time. I have now collected all 40 charms in Hollow Knight and have received a blessing that gives me soul when I sit on a bench. Right now I’m sitting on the bench outside the last boss.

For the Interested Reader:


I listed all of my accomplishments above, so I’ll just go into a little more detail about Hollow Knight. The White Palace, which one must complete in order to receive all of the charms, is a platforming challenge nightmare. For fun I read a walk-through of the White Palace, and instead of directions like “go here, do this, move left, jump right,” the walk-through (besides suggesting different charm combinations, which was useful) literally just said that the White Palace is a very difficult platforming challenge and that you will find yourself very irritated unless you have a lot of patience.

So I got to practice my patience. Ha ha. While going through tough challenges like the White Palace, I want to emulate the kind of calmness that I want my children to have. So if I found myself getting too upset, I walked away from the game.

Some of what makes the White Palace difficult is that, in Hollow Knight’s platforming challenges, you never know where an implicit checkpoint will be. If you fall or die, you start back from the last one, and implicit checkpoints only occur where the knight can stand still out of danger. There are very few spots like that in the White Palace, with long, difficult challenges everywhere between. However, I did feel that the designers did a good job of placing these implicit checkpoints.

While I was playing the White Palace on Sunday, with my brother and husband watching, I tried to stay alive while they tried to figure out a good strategy for me to proceed. It worked! And it made the challenges a little more fun, as my brother and husband also pointed me to strategies that led to certain death: jump into those spikes! Which I did.

What’s next

  • Mid-semester update to Dr. Eliane Wiese
  • Create some redesigns for the polarity simulation
  • Course work
  • Personal goals


I know my course work is suffering because of personal goals that have come up. I’m lacking the motivation to work on my course work because of these new personal and professional goals.

Ph.D. Adventures!

More Here, Less There (Ph.D. Year 1, Spring Week 5)


Last week I began examining the relationship between usability heuristics and educational technologies design. Celebrated my husband’s birthday with a fun trip out of town. I caught up on my homework that was due at the end of the week, and now need to do all the coursework that is due this week. It’s shaping up to be a very busy week. I finally beat The Collector boss on Hollow Knight (he killed me a lot).

For the Interested Reader:


Are we missing simple usability considerations in the design of technology? Given the task of examining educational technology design, I decided the best place to start would be the simplest: examining the relationship between usability heuristics and educational technologies.

Celebrated my husband’s birthday with a fun trip out of town. I didn’t take any pictures, but we had tons of fun with the kiddos.

I still feel like I’m recovering from last semester, though the further we progress into this semester, the less angry I feel about courses. I’m not feeling my assignments.

What’s next

  • Make notes and hypotheses about how simple usability heuristics can be molded for educational technologies.
  • Coursework (two big projects, daily homework, and an exam)
  • Personal goals


I’m an all-in kind of gal, and so I’m having a hard time putting more focus on my research (for which you receive zero feedback on “grades” during the semester; you just get one at the end) and not doing all I can in my classes (for which you are constantly told how “good” you are, which I feel is really more a reflection of how well the teacher is teaching, or how interested the students are despite the teacher, than of how “good” a student really is). So I find myself doing less all around rather than more here and less there.

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Final Project: Matching Game!

My FINAL game: simple one-click download below, unzip, play, enjoy!

NancyNewren_ExampleGame FinalProject!

What’s new
It’s a game!

Main object: Use arrow keys to move the 3D object to its matching card.

About the Game

Okay, so it’s not pretty, but sometimes you just need to get to proof of concept; beautifying the game can come later.

When I was making educational games for Waterford Research Institute I made a lot of mini games for preK curriculum: letter/word association, number recognition, letter matching, and even an ethics game. I really enjoyed making these simple-to-play games for 4-5 year olds. They were fun enough that even my older play testers (20-year-olds) liked them. However, I never got to make a shape matching game. But now I have!

This was fun to make, especially since it was in my engine, but slightly more complicated than I anticipated.

The first thing was paring down the game to its minimum essentials: for instance, having animations wouldn’t matter if I didn’t have a game first. So what was essential? Having four objects with four matching cards; being able to detect a right and wrong guess; and, upon such guesses, having a response (moving on to the next object for a right guess, feedback to let you know it was wrong otherwise). Once that was determined, I simply needed to set up the graphics objects and then create the basic right/wrong logic with simple collision detection.

Everything we needed to draw to the screen was already done, but I had to write all of the game logic myself, though admittedly the player controls and the essential game loop code (like the update functions in Unity) had already been supplied in previous assignments.

I don’t think I had noticed before how heavily a game relies on its graphics components, though. Without those, regardless of any clever logic you may have written, you don’t have much of a game. Finding, creating, editing, and importing all the assets into my engine took up a majority of my time.

Honestly, I think the most difficult thing was just tracking all the different assets. In game engines such as Unreal and Unity it is easier to track them, I think, because there are prefabs and the editor to help you create your arrays of assets. However, creating the c3DObject class to manage everything for my 3D objects (mesh, effect, texture, transform, speed, and even id) made it much simpler to program the game. When I was trying to detect whether or not an object matched a card, all I had to do was give them both the same id in their c3DObject instance. Which brings up something: the cards, though flat, were still meshes. As I didn’t want them drawn like UI elements, but to interact with the objects in world space, I treated them as 3D objects. The only things I’d add to the engine would be a UIElement class (akin to the c3DObject class, but fancier, since I’d need to track scaling vs. anchoring, etc.) and the idea of prefabs. It’d really make things easier to track with those in place. Well, I’d also add an animation project to make animations simpler too.
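Roughly, the id-matching idea looks like this (a simplified sketch, not the engine’s real class; the member list and the Matches helper are my illustration of the description above):

```cpp
// Sketch only: forward declarations stand in for the real engine types.
struct cMesh;
struct cEffect;
struct cTexture;

class c3DObject
{
public:
    cMesh* m_mesh = nullptr;
    cEffect* m_effect = nullptr;
    cTexture* m_texture = nullptr;
    float m_speed = 0.0f;
    unsigned int m_id = 0; // a match object and its card share an id

    // Detecting a match is then just an id comparison.
    bool Matches(const c3DObject& i_other) const
    {
        return m_id == i_other.m_id;
    }
};
```

With everything an object needs bundled in one place, the game logic never has to look up a mesh, effect, and texture separately.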

The only time I wished I’d done something differently in previous assignments was the way I stored graphics elements in my game logic. Before I even started coding the game, I did some refactoring in my game logic to make it easier to track my graphics elements.

I created two effects for right and wrong guesses that tint and make the cards transparent. Then it was a simple matter of just swapping out the regular effect (which everything 3D was drawn with) with a right/wrong effect. That decision also made it easy to animate the ending flash as I just swapped out right and regular effects. If I’d passed the color to the effect to draw right/wrong it would have affected all my objects, and not just the one I wanted to draw.

The other difficulty was randomizing the order of everything while keeping track of it all: picking four (of seven) random objects to be the match objects, making sure the four cards were the four matching ones, and then drawing the four matching cards in a random order on screen. Here’s how I chose my random objects:
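One way to do that selection looks roughly like this (a sketch with a hypothetical PickRandomObjects name, not the game’s exact code): shuffle all the object indices, keep the first four as the match objects, then shuffle a copy of those ids again for the on-screen card order.

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Pick i_count distinct object indices out of i_total, in random order.
std::vector<size_t> PickRandomObjects(const size_t i_total, const size_t i_count)
{
    std::vector<size_t> indices(i_total);
    std::iota(indices.begin(), indices.end(), 0); // 0, 1, ..., i_total - 1
    std::mt19937 rng{std::random_device{}()};
    std::shuffle(indices.begin(), indices.end(), rng);
    indices.resize(i_count); // keep the first i_count shuffled indices
    return indices;
}
```

Re-shuffling a copy of the returned ids gives the card order, while the original vector keeps track of which objects are in play.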

The collision was interesting. I didn’t do what fancy game engines do: detect whether bounding boxes have collided. What I did was much, much simpler! Knowing that my local meshes’ origins were all at the origin meant that their positions were all in world space. And as I didn’t care when their boundaries collided, but rather when their local origins were within a certain range of each other, all I needed to do was check whether a matching object was within a certain distance of the card. To further optimize, as square roots are inefficient, I only checked the distance squared, I only checked the player object against cards that hadn’t been matched, and I didn’t check for collisions at any time when player movement input was deactivated. I added the DistanceSquared method to the Math::sVector class.

float distSquared = eae6320::Math::DistanceSquared(i_pObj1->m_rigidBodyState.PredictFuturePosition(i_elapsedSecondCount_sinceLastUpdate), i_pObj2->m_rigidBodyState.PredictFuturePosition(i_elapsedSecondCount_sinceLastUpdate));

return distSquared <= m_distAmountSquared;
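For context, the helper itself boils down to something like this (a sketch; the struct here only approximates the post’s Math::sVector, it isn’t the engine code):

```cpp
// A stand-in for Math::sVector, just enough for the distance check.
struct sVector
{
    float x = 0.0f, y = 0.0f, z = 0.0f;
};

float DistanceSquared(const sVector& i_lhs, const sVector& i_rhs)
{
    const float dx = i_lhs.x - i_rhs.x;
    const float dy = i_lhs.y - i_rhs.y;
    const float dz = i_lhs.z - i_rhs.z;
    // No square root: callers compare against a squared threshold instead.
    return (dx * dx) + (dy * dy) + (dz * dz);
}
```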

The last thing I did was add in the sprite images upon a correct solution. This had multiple purposes: 1) it hid the no longer drawn match object, 2) It hid the sudden movement of the next object to match to the starting player position, and 3) It gave some fun, and funny, feedback to the player that they are on the right track.

If I had had more time (I did this all in about 15-20 hours), I would have added:

  1. Movement animations
    1. For instance: Locking the “match” object to the center of the card before checking for a match, then moving it back to the beginning
  2. Setting boundaries the player can’t move outside of
  3. Beautifying the game: better match object models and cards, a better background too
  4. Add fun audio
  5. I also really wanted to do lighting

I enjoyed how it turned out though, and so did my play testers. 🙂

What I learned this semester

What I think the goals of the assignments were

I think the most important goal of the assignments was to teach us what good interfaces can do for us. For instance once we set up our one interface which behind the scenes dealt with our different platforms, we didn’t write any platform dependent code. That is very cool! We didn’t have to think about what each platform might need, instead we only connected with our interfaces and let the interfaces do their job.

We learned about building good asset pipelines. At the beginning we only had shaders, but by the end we had textures, meshes, and shaders. We separated out our algorithms from the data. I really liked this (especially creating the binary files!). Doing this makes it much faster and easier to change our assets as data is easier to update than code, and doesn’t require a recompile! We only need the built asset and that can plug-in to our code.

Another big point was to write code that is human readable, understandable, and easy to debug. Ultimately, whether it is someone else or yourself, someone is going to have to read your code to understand how to code against/interface with it. If the programmer can’t understand and debug your code (which is really sad when you’re the programmer of your own code and you can’t do this!), they’re going to have to either spend a long time understanding it or rewrite it, which, for a large amount of code, takes a lot of time. But if you make it human-understandable, then they should quickly understand what the code is doing and be able to code against/interface with it quickly, which is very important!

What I got out of this class

I learned so much this semester. I used to be scared whenever I had to set up build dependencies and references; I never felt like I could do it on my own, or really understood the differences between them and why they mattered. But I don’t feel afraid anymore! In fact, I learned it well enough that I was able to explain the differences to someone else and help him troubleshoot his code, and to recognize when another of my classmates had done it incorrectly in his own code. I say that not to point out that he did something wrong, as we all make mistakes, but to say that I now know enough to know when it is done incorrectly. And that is also not to say that it’s always easy, but I feel confident that I can figure it out, and that I can NOT PANIC when I get linker errors! 🙂

I learned…

A lot about graphics for a non-graphics class (I actually understand, on a very high-level, what the graphics card does now).

A different way to get, essentially, shared pointers without the overhead of std::shared_ptr: reference counting.
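The idea, roughly (a simplified sketch with illustrative names, not the class we actually wrote): the count lives inside the object itself, and the object deletes itself when the last reference is released.

```cpp
#include <cstdint>

class cTexture
{
public:
    uint16_t GetReferenceCount() const { return m_referenceCount; }
    void IncrementReferenceCount() { ++m_referenceCount; }
    void DecrementReferenceCount()
    {
        if (--m_referenceCount == 0)
        {
            delete this; // last reference released: clean up
        }
    }

private:
    ~cTexture() = default; // forces release through DecrementReferenceCount()
    uint16_t m_referenceCount = 1; // the creator holds the first reference
};
```

Compared with std::shared_ptr there is no separate control block and no atomic bookkeeping; the trade-off is that every holder must remember to increment and decrement by hand.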

Oooh, and after class one day I asked JP how shaders compute absolute value so quickly, and so I became aware of the graphics card’s intrinsic functions.

Also, D3D has different debug output in VS than OpenGL.

The difference between handles and pointers.

A side effect of all that we did in the code, for me, was using the C++11 standard, as I’ve never worked on projects that used it before (we were in the dark ages, I suppose 😉).

The, unfortunately short, discussion we had about lighting was very interesting. I hadn’t thought about digital lighting as taking away light, but it made sense.

I also think I would enjoy doing graphics more than I thought I would. Every time we got into talking about graphics I got really excited. The math is pretty basic, but can also be really interesting. I really thought I would hate the discussion on lighting, but I found it fascinating.

My thoughts on good engineering architecture and design

I usually like to try to walk the line between very structured and “design as you go.” However, I knew there would be a lot of refactoring from assignment to assignment, and there’s nothing I hate more than having to undo complicated architecture; since I also didn’t know what the end goal was, I purposely chose to err on the side of “design as I go.” Otherwise I would have drawn out a more architected structure. There was only one assignment where I wished I had architected more in a previous assignment, but even in that case my friend, who chose more architecture in the previous assignment, only finished about an hour before me, and I know he spent more time making the architecture in the first place. So it was basically a wash.

I’ll say this about good architecture: while I was programming my game, I made a decision early on to have GetMatchObject(const unsigned int i_index) and GetCard(const unsigned int i_index) functions. This made it so that later on, when I changed how I retrieved cards, I only had to change it in one place: my “interface” function.

A good software architecture allows for flexibility in adding new items to the code, but isn’t so abstract and complicated that it is unclear what the interfaces are good for. I.e., it’s easy to read, understand, and debug, and it allows for quick and easy iterations.

A good design will be platform independent, so that you only have to write code once! A bad design will have multiple files with only small changes in each for every platform (whether the platforms are OSes, websites, or databases). A good design will allow for quick creation of new types that fit into the same interface. A bad design will require a new object or interface for each new type. A good design is faster to refactor. A bad design requires gutting the interior for even small changes. On the other hand, a good design isn’t over-architected either. For instance, there is no need to create a reusable interface (no matter how neat) if only one type will ever use it. Good design is about being aware of which parts of the code need to be reusable, what needs to be interfaceable, and what can be done top-down, as it were, because it’s not going to be used more than once.


Thanks to Zeno for letting me bounce ideas off of him, and my Dad for proofing my write-up.

Many thanks also to my play testers who really just had fun playing: Zeno Saviour, Monika and Erik Tolman, my sister Deborah, and my brother Zeke (who tested it on Linux with Wine and it worked!).

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 13: Build System!

This week’s game: simple one-click download below, unzip, play, enjoy!

NancyNewren_ExampleGame 13

What’s new
Nothing from the player’s perspective. Just built using my new build system!

Main object: Use arrow keys to move main object left/right/up/down
Left/Right (A/D), Up/down (W/S), Zoom in/out (Q/E)
(When moving the camera things in the world will appear to move opposite. Thus when you move camera left, the world will appear to move right.)

About the Project

I freaking love my new build system! It saves so much time! I have a file where I list all the graphics assets that need to be built for the game. The build system checks this list to see whether each built asset already exists and, if it does, whether there is a newer source version or whether the builder itself is newer; it only builds when necessary. That means that once I build everything (which can take several minutes), it takes less than a second to build again, because my system is being smart. It’s so nice!
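The check itself boils down to comparing timestamps, roughly like this (a sketch using std::filesystem as a stand-in; ShouldBuild is a hypothetical name, not the project’s actual function):

```cpp
#include <filesystem>

// Rebuild a target if it doesn't exist yet, or if either the source
// asset or the builder executable is newer than the built target.
bool ShouldBuild(const std::filesystem::path& i_sourceAsset,
                 const std::filesystem::path& i_builtAsset,
                 const std::filesystem::path& i_builder)
{
    namespace fs = std::filesystem;
    if (!fs::exists(i_builtAsset))
    {
        return true; // never built before
    }
    const auto builtTime = fs::last_write_time(i_builtAsset);
    return (fs::last_write_time(i_sourceAsset) > builtTime)
        || (fs::last_write_time(i_builder) > builtTime);
}
```

The builder check matters: if the TextureBuilder itself changes, every texture it produced is stale even though no source asset was touched.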

There are big advantages to having a list of all the assets to build in their own file: it is quick and easy to update, and you don’t have to search through other code. It’s also easy to see any bugs, and to add new features as well.

Of course this list isn’t everything that needs to be built: there are licenses and settings files that need to be copied over into the playable game folder, but the system checks for those as well. It saves so much time! Especially since we have two platforms and two build configuration types, which means we have to build and test four different build configurations in total. When you have to wait several minutes every time you build, every time you test a simple change, it very quickly adds up to hours! I’m so grateful for this new build system. Thanks JP for having us do it! 🙂

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 12: Binary Mesh Assets

Download this week’s game with the simple one-click download below! Once downloaded, unzip, play the executable, enjoy!

NancyNewren_ExampleGame 12

What’s new
Different main mesh: a Christmas cactus for my friend Felicia! Bring on the happy!

Main object: Use arrow keys to move main object left/right/up/down
Left/Right (A/D), Up/down (W/S), Zoom in/out (Q/E)
(When moving the camera things in the world will appear to move opposite. Thus when you move camera left, the world will appear to move right.)

About the Project

The advantage of a human-readable file is that you can read, understand, and debug it just by looking at it, but human-readable files take longer to load and are bigger. So the idea is to load/store the assets in binary for two reasons: 1) it is more efficient (faster) to load from binary, and 2) it is more economical (the files are smaller) to store them. So our regular build and source files are human-readable, since a human needs to be able to interpret and debug them, but our run-time files are binary, because we’ve done all our testing and debugging and now only the computer needs to be able to read them.

There are four pieces of data that each mesh needs: the number of indices, the indices, the number of vertices, and the vertices. I chose to put the number of indices first (since the binary read-in needs to know how many indices there are so it can jump to the next right place in the data), then the indices, since they take up less space and are easier to read in a binary format as they are integers. Next I chose to put the number of vertices, again because the read-in needs to know how many vertices there are so we know the size of the memory, and then the vertices themselves. I could have done number of indices, number of vertices, indices, and then vertices, or a number of other configurations, but regardless of the order I chose, the counts would always need to be stored before the index and vertex data.
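In code, writing that order out looks roughly like this (a sketch with illustrative types, not the actual MeshBuilder code; the sVertex layout is assumed):

```cpp
#include <cstdint>
#include <sstream>
#include <vector>

// An assumed vertex layout: position plus an 8-bit-per-channel color.
struct sVertex
{
    float x, y, z;
    uint8_t r, g, b, a;
};

// Write order: 1) index count, 2) indices, 3) vertex count, 4) vertices.
void WriteBinaryMesh(std::ostream& io_out,
                     const std::vector<uint16_t>& i_indices,
                     const std::vector<sVertex>& i_vertices)
{
    const uint32_t indexCount = static_cast<uint32_t>(i_indices.size());
    const uint32_t vertexCount = static_cast<uint32_t>(i_vertices.size());
    io_out.write(reinterpret_cast<const char*>(&indexCount), sizeof(indexCount));
    io_out.write(reinterpret_cast<const char*>(i_indices.data()),
                 indexCount * sizeof(uint16_t));
    io_out.write(reinterpret_cast<const char*>(&vertexCount), sizeof(vertexCount));
    io_out.write(reinterpret_cast<const char*>(i_vertices.data()),
                 vertexCount * sizeof(sVertex));
}
```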

Here is my floor’s binary file with the above listed order:

By default the human-readable files are in OpenGL format. Previously, I was swapping the UVs and winding order for D3D after loading from the human-readable files. But since I want a quick load at run time, this swapping now happens during the build of the binary. So while the human-readable files remain the same, the built meshes are different and cannot be swapped across platforms. This does, however, make loading a binary file platform-independent. This is how I load the binary data:
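The load is just a matter of reading the whole file into memory once and walking a pointer through it in the same order the builder wrote it (a sketch with illustrative names, not the engine’s exact code):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Same assumed vertex layout as on the write side.
struct sVertex
{
    float x, y, z;
    uint8_t r, g, b, a;
};

void ReadBinaryMesh(const std::vector<char>& i_fileContents,
                    std::vector<uint16_t>& o_indices,
                    std::vector<sVertex>& o_vertices)
{
    const char* pData = i_fileContents.data();
    uint32_t indexCount = 0;
    std::memcpy(&indexCount, pData, sizeof(indexCount));
    pData += sizeof(indexCount);
    o_indices.resize(indexCount);
    std::memcpy(o_indices.data(), pData, indexCount * sizeof(uint16_t));
    pData += indexCount * sizeof(uint16_t); // jump past the index data
    uint32_t vertexCount = 0;
    std::memcpy(&vertexCount, pData, sizeof(vertexCount));
    pData += sizeof(vertexCount);
    o_vertices.resize(vertexCount);
    std::memcpy(o_vertices.data(), pData, vertexCount * sizeof(sVertex));
}
```

This is why the counts have to come before their data: each count tells the loader how far to advance the pointer to reach the next section.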

Lastly, enough talk! Let’s show the two advantages of the binary files: I made an object with an index count of 50,436 and a vertex count of 33,616. Here’s how they compare:

                         Human Readable File    Binary
Size of file             4.61 MB                886 KB
Time to load (seconds)   0.256                  0.002234


Thanks to Zeno, who once again helped me with some of the C++ basics I’ve forgotten. It’s been a long time since I did file I/O with C++, and he was very helpful.

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 10: Mesh Asset Now Loads from File Using Lua, Still in 3D!

This week’s game. One-click download, unzip, play, enjoy! 😀

NancyNewren_ExampleGame 10

What’s New
The only gameplay changes from last week are the colors. Enjoy! 😀

Main object: Use arrow keys to move
Left/Right (A/D), Up/down (W/S), Zoom in/out (Q/E)
(When moving the camera things in the world will appear to move opposite. Thus when you move camera left, the world will appear to move right.)

About the Project

This week I moved the creation of mesh assets from hard-coded to Lua files… and then played with updating the colors just by editing the Lua files. It was fun! (At least the last part. Ha ha!) Having the Lua files for the assets is really neat because it opens up the possibility of having other tools and programs create the asset files. More than making asset files data-driven, though, the idea of the raw asset file is to be human readable, to make it easy to read, understand, and most importantly debug. In the future I’ll take the human readable file and make a binary version, but the important thing is that when you run into a problem with an asset, you want a human readable file that you can quickly and easily read and understand, so you can quickly and easily get to debugging.

For instance, this is my floor mesh file:

The order of the indices in the index buffer matters, so it was an easy decision to make the indices an array-type table. As for the vertices: though they can be listed in any order, the order they are listed in must be predictable, since the indices refer to them, so the vertices table also needed to be an array (a Lua dictionary-type table has no predictable order). I decided to group each vertex’s values together (i.e., a vertex is defined as a position and a color), as I believed this would make debugging simpler when there are over 100 vertices.

My final decision was to use an array-type table for both position and color values, as position in math is always ordered (x,y,z), and anyone who knows graphics, upon seeing three (or four) color values, expects (r,g,b,(a)). So I didn’t feel the need to make the color and position tables dictionaries, though it could have been done. I felt it was more readable to see the values without extra “x=” and “a=” cluttering the tables.


The Lua asset loading took longer than I expected to implement. I thought the hardest part would be switching meshes from pointers to handles (since they now load from file like the textures), but it turns out it was reading in the Lua files! It’s hard to keep track of the stack! I got stuck worrying I’d lose track of the stack, so I finally drew myself a picture of all the load and pop points and coded them right on the first try! Booyah!

When I started the implementation of the Lua file loading, I confused myself quite completely. Zeno helped straighten me out, though, so I didn’t waste any time. I have also struggled a bit throughout the semester remembering all the C++ syntax, as it has been about three years since I last coded in it regularly. While I have been improving all semester, I still got stuck on a couple of things, but Zeno was right there with those pieces of C++ syntax right when I needed them.

One More Thing!

Since the Lua files store color as values in [0.0f, 1.0f], and the vertex struct expects uint8_t’s, there is a simple conversion: float * 255. But since computers truncate rather than round, you can end up with numbers like 234.99, and the uint8_t will store 234, not 235. An incredibly simple way around this is, after doing your multiplication, to just add 0.5f. Then 234.99 becomes 235.49, which truncates to 235. Done and done.
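As a tiny helper (the name is mine, just for illustration), the whole trick is one line:

```cpp
#include <cstdint>

// Multiply into [0, 255], add 0.5f, and let the uint8_t conversion
// truncate: the net effect is round-to-nearest.
uint8_t ColorFloatToUint8(const float i_normalizedValue)
{
    return static_cast<uint8_t>((i_normalizedValue * 255.0f) + 0.5f);
}
```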

Cool Stuff!, EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 8: Moving Towards 3D

(Simple one-click download. Once downloaded, unzip, play the executable, enjoy!)
Controls: Use the typical WASD to move the trapezoid and see what shapes you make!


For this week’s project I created a basic 2D mesh that I will extend next week to 3D. I started by copying the 2D sprite and shader code directly, and then adding/changing things to make it a mesh. Because this required a lot of steps, I did my best to work in chunks as small as possible and to always test that my code worked on every platform and every configuration after each update. I believe this saved me big headaches: there were two times when one platform wasn’t working, but because my changes were fresh and I knew the immediate changes that had caused the bug, I was able to resolve them fairly quickly.

To move the object I used a very simple physics approach: in UpdateSimulationBasedOnInput() (the function where input updates are received), I set the velocity to m_speed if a key was pressed for that direction (e.g. if ‘A’ was pressed then velocity.x = -m_speed), to zero when no key was pressed, and to zero if opposing direction keys were pressed at the same time. Then in UpdateSimulationBasedOnTime() (the function where we’re given the time variable), position += velocity * time. As with most games, the simulation (or game) update happens more slowly than rendering (in this case the simulation updates every ~0.066 seconds), which is why the movement appears jumpy. You could obviously get around this by speeding up the simulation, but in most games the update rates still differ, so the ideal solution is to calculate a predicted position and send that position to the renderer. This can be done in the submit-to-renderer step, since that function is given the time since the last simulation update, so a more accurate prediction can be made. (I have done this in the next assignment.)
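The input/time split and the extrapolation step can be sketched like this (a minimal, self-contained sketch; only m_speed, velocity, position, and the two Update function names come from the post, the rest is assumed):

```cpp
#include <cassert>

struct sVector2 { float x = 0.0f, y = 0.0f; };

// Minimal sketch of the movement logic described above.
struct GameObject
{
    sVector2 position;
    sVector2 velocity;
    float m_speed = 1.0f;

    // Map key state to a velocity: zero if neither or both opposing keys.
    void UpdateSimulationBasedOnInput(bool left, bool right)
    {
        if (left == right)
            velocity.x = 0.0f;
        else
            velocity.x = right ? m_speed : -m_speed;
    }

    // Called at the fixed simulation rate (~0.066 s in this assignment).
    void UpdateSimulationBasedOnTime(float dt)
    {
        position.x += velocity.x * dt;
        position.y += velocity.y * dt;
    }

    // At submit-to-renderer time, extrapolate from the time elapsed since
    // the last simulation update to smooth out the jumpy motion.
    sVector2 PredictPosition(float timeSinceLastSimUpdate) const
    {
        return { position.x + velocity.x * timeSinceLastSimUpdate,
                 position.y + velocity.y * timeSinceLastSimUpdate };
    }
};
```

The key design point is that the renderer never waits for the next simulation tick; it draws the extrapolated position, so movement looks smooth even at a ~15 Hz simulation rate.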

I also created, though it was not required, a 3DObject class to manage the mesh, effect, and other variables needed for rendering a 3D object.

One last note I wanted to mention: when rendering, because we always want our UI elements (2D sprites) to be seen, we always draw our 3D objects/meshes first. This ensures that our UI (player HUD, menus, etc.) is always visible and accessible to the player.
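The draw-order rule can be illustrated with a toy render pass (illustrative only; the real renderer submits GPU draw calls, not strings):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Submit all 3D meshes first, then the 2D sprites/UI, so HUD and menus
// are drawn last and therefore appear on top of the world geometry.
std::vector<std::string> BuildDrawList(const std::vector<std::string>& meshes,
                                       const std::vector<std::string>& sprites)
{
    std::vector<std::string> drawCalls;
    for (const auto& m : meshes)
        drawCalls.push_back("mesh:" + m);    // world geometry first
    for (const auto& s : sprites)
        drawCalls.push_back("sprite:" + s);  // UI last, always visible
    return drawCalls;
}
```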

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 6: Textures and Reference Counting (again)

NancyNewren_ExampleGame 06

(Simple one-click download. Once downloaded, unzip, play the executable, enjoy!)

Press space to change the humerus texture and find your funny!

About the Project

The first thing I did was add files to the Graphics project to manage textures in code, and then add a TextureBuilder project (to build the textures, which are then loaded into the cTexture class from the files added to the Graphics project). There were no instructions given on how to do this. Normally I panic, because on previous assignments I’ve really struggled with this part, but this time I went in with the attitude that I could figure it out! And I did! There were two obvious references to add to TextureBuilder: DirectXTex (because TextureBuilder calls its functions) and AssetBuildLibrary (glaringly obvious, since one of the TextureBuilder classes inherits from one of AssetBuildLibrary’s classes). With those added there were still 10 linker errors, but I could tell the missing reference didn’t belong on TextureBuilder, because the unresolved symbols were in AssetBuildLibrary (ABL) itself: ABL needed a reference to the Platform project. It may seem strange that ABL didn’t need it in our previous assignment, but that’s because the projects that use ABL were adding Platform as a reference to themselves. Once I added Platform as a reference to ABL (where it actually should have been linked in the first place), I was able to remove the superfluous Platform references from the other projects.

The other fun thing to figure out was the build dependency for TextureBuilder. AssetBuildExe uses TextureBuilder to build the textures, but it doesn’t need TextureBuilder at build time, only when it is executed. And AssetBuildExe gets executed by BuildExampleGameAssets. That means that even though AssetBuildExe is what uses TextureBuilder, it’s actually BuildExampleGameAssets that has the build dependency on TextureBuilder, because BuildExampleGameAssets is what executes AssetBuildExe, which in turn runs TextureBuilder.

I added textures to the sprites for this assignment. Since I added textures, I needed UV coordinates to draw them on the sprites. Dealing with the UVs was fairly simple, as I just assigned them alongside the vertices of my sprite in normalized device coordinates. For both platforms u goes 0->1 left to right. But for v, in D3D the values go 1->0 top down, and OpenGL is the opposite. So where I was assigning the x,y screen-space coordinates in the sprite, I assigned the u,v coordinates as well, so the user doesn’t have to do anything for them. Since I did this inside the same scope where I defined the x,y values, there was no code sharing between the platforms. (Since I am dealing with vertex winding and now the different normalized device coordinates for UV, it was simpler to keep them separate.) But all this is “under the hood,” so to speak, for the game programmer, so all I specified for the sprites was exactly the same as in the last version of the game.
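Since u matches on both platforms but v is flipped, converting a v value from one platform’s convention to the other’s is a one-liner (hypothetical helper, not the actual engine code):

```cpp
#include <cassert>

struct sUV { float u, v; };

// u is identical on D3D and OpenGL (0 at left, 1 at right), but the v axis
// runs in opposite directions between the two, so converting a texture
// coordinate from one platform's convention to the other's just mirrors v.
sUV FlipVForOtherPlatform(sUV uv)
{
    return { uv.u, 1.0f - uv.v };
}
```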

The other thing I did was use a handle for loading my textures. The handle ensures we don’t load the same asset from file more than once (using the path as the unique identifier); since loading from file is an expensive operation, this makes the game run more efficiently. When another object asks for that asset, instead of reloading it, the system just passes a reference. This is why we had to increment the reference count on the handle manually: we were passing the graphics thread the handle, not the texture itself, so instead of increasing the number of references on the texture, we increase it on the handle. A way to increment the reference count through the handle could have been provided, but I think this was left out to emphasize that the handle isn’t the object itself.

I did, however, use the handle-provided way to decrement the reference count after rendering was done. The reason is that when the reference count reaches zero, if you haven’t decremented it through the handle’s own mechanism, the memory won’t be released properly.
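The load-once, reference-counted behavior can be sketched with a tiny cache (all names here are hypothetical, a simplification of the idea rather than the actual engine classes):

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch of handle-based asset loading: the file path is the unique key.
// A second Load() of the same path bumps the reference count instead of
// re-reading the file, and the asset is only freed when the count returns
// to zero via the cache's own Decrement() path.
struct TextureCache
{
    std::map<std::string, int> refCounts;
    int filesLoaded = 0; // counts actual (expensive) disk loads

    // Returns a "handle"; here, simply the path itself.
    std::string Load(const std::string& path)
    {
        if (refCounts[path]++ == 0)
            ++filesLoaded; // the expensive file read happens only once
        return path;
    }

    void Decrement(const std::string& handle)
    {
        if (--refCounts[handle] == 0)
            refCounts.erase(handle); // safe to release the texture now
    }
};
```

The design point is the one from the post: callers count references on the handle, and release must go through the handle’s decrement path so the cache knows when the underlying texture can be freed.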

The last thing I did was add a space bar press and time change to two of the textures. It’s quite humerous.

Struggles and Helps

This entire project was much more involved than I anticipated, and I really struggled with the last requirements. I spent a long time on the first part: determining project dependencies and setting up my shaders correctly for drawing textures. But because of all the thought I put into it, I was able to really help one of my classmates understand the difference between build dependencies and references, so it was worth it.

I definitely wouldn’t have finished the project on time without Arpit’s help. I could not get submitting textures to work. My game was completely broken and he helped me get it working so I could complete the assignment. He stayed up late to help me do this after he’d completed his assignment.

EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 3: Graphics Platform Independence, User Input

NancyNewren_ExampleGame 03

This game has TWO sprites with different effects! Aww, the awesomeness!

(Simple one-click download. Once downloaded, unzip, play the executable, enjoy!)

About the Project

In this project I made the graphics functions platform independent by creating a cView class. This class handled initializing, cleaning up, clearing, and presenting. Then all platform dependent code and variables went into cView.[platform].cpp files.

Thus, when rendering, clearing the back buffer is now a single platform-independent call on the view.

I used a color struct to pass colors, and used this to create the pretty gray background (as opposed to black).

sColor lightGray(0.8f, 0.8f, 0.8f);  //alpha is set to 1.0f when left blank
UpdateClearColor(lightGray);        //The UpdateClearColor can be called at anytime to update the clear color
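A hedged sketch of how sColor and UpdateClearColor might fit together (the constructor defaulting alpha to 1.0f comes from the snippet above; the cView shape and member names are assumed, not the actual assignment code):

```cpp
#include <cassert>

// Color struct matching the usage above: alpha defaults to 1.0f when omitted.
struct sColor
{
    float r, g, b, a;
    sColor(float r_, float g_, float b_, float a_ = 1.0f)
        : r(r_), g(g_), b(b_), a(a_) {}
};

// Platform-independent view interface; in the real project the
// platform-specific clear calls would live in cView.[platform].cpp.
class cView
{
public:
    // Can be called at any time to change the clear color.
    void UpdateClearColor(const sColor& color) { m_clearColor = color; }
    sColor GetClearColor() const { return m_clearColor; }
private:
    sColor m_clearColor{0.0f, 0.0f, 0.0f}; // defaults to opaque black
};
```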

To create a sprite, I allowed the user to just send in a center position, width, and height, as opposed to several vertices (simplified):

eae6320::Graphics::VertexFormats::sSpriteVertex center1(0.0f, 0.5f);
sprite1.Initialize(center1, 0.5f, 0.5f);

To create an effect the user need only pass the name of the shader file to the effect:

eae6320::Graphics::sEffect effect1("shader1");

I also added a second sprite and effect so that I could draw in another sprite with different shader data.

Hmm… Interesting Error

We all got stumped on an interesting linker error I hit early on while making the graphics code platform independent. I have no idea what caused it, and it certainly didn’t feel interesting at the time, as it sucked up a bunch of my time. I had actually given up on it for the night (I put macros into Graphics.cpp and moved on to the rest of the assignment) when, after finishing the remaining requirements, some problem-solving determination caught hold, inspiration struck, and I finally hit upon a solution: I renamed functions, shuffled code around, and, as the last step, moved the Graphics.h header to be included before the <Engine/Results/cResult.h> header in my cView class. Once I swapped their positions it worked! I cleaned up the code, and then, to test things, I moved it back, and guess what: it still worked! I couldn’t recreate the linker error. The cause was perhaps a couple of things, and it’s possible I’ll never know where or what, but it goes to show that with three friends, determination, and a cool head, you can solve a problem even when you don’t know its exact cause!

Getting it Done

I was initially stuck deciding how to implement the platform-independent Graphics.cpp, but once I settled on an approach, the rest practically wrote itself. That’s also about when Arpit, Zeno, and I congregated, and the work went faster, until it got really late, and then it slowed, but it was fun.

Thanks to Zeno and Arpit for helping all night. We helped each other through the entire project, and there is no way I would have completed it on time without them. Zeno is very knowledgeable: he really helped me with structures, architecture, and lots of random C++ nitty-gritty questions (it’s great discussing those C++ ideas with him). Arpit is very good at debugging: he helped me solve my shader errors and kept a cool head as I panicked. 🙂 Chen was helpful too: he stopped in for a bit and tried to help me resolve the aforementioned linker error. Everything they all had me try really helped me rule out what wasn’t wrong, so that hours later I was finally able to fix it. I have some pretty cool peers.

I love what I do and I love that I get to work with such talented, intelligent, creative, and willingly helpful people. That’s the games industry! From my work at EA, in the EAE program, and in the serious game industry: I haven’t found a better group of incredibly competent, highly skilled people. I love my game dev peeps. They’re the best!

I feel tempted to do a hashtag: #OneLuckyGirl