Cool Stuff!

Summer Camps — Continue Learning Resources

Hey, kiddos!

I loved having you in my camps. Below you’ll find the resources I mentioned for continuing to learn. Keep creating!

Beginning Programmers

More Advanced Game Making and Game Design

YouTube Channels on Game Development

Unity Game Engine

  • Unity Learning (projects and tutorials from Unity)
  • You can also access Unity Learning from the Unity Hub > Learn tab
  • Unity on YouTube


  • Gamasutra (advanced, blog for game developers)
  • NancyGames! (You’re already here! Game design, development, and meaningful play)


Cool Stuff!, Ph.D. Adventures!

Jump on Those Spikes! (Ph.D. Year 1, Spring Week 9)


We went to the classroom this week and did an initial observation, just to get a feel for what it’s like in the classroom and an idea of what it will be like when we’re in there gathering data. Lots of course work and some personal goals I’ve been working toward are taking up a lot of time. I have now collected all 40 charms in Hollow Knight and have received a blessing that gives me soul when I sit on a bench. Right now I’m sitting on the bench outside the last boss.

For the Interested Reader:


I listed all of my accomplishments above, so I’ll just go into a little more detail about Hollow Knight. The White Palace, which one must complete in order to receive all of the charms, is a platforming nightmare. For fun I read a walkthrough of the White Palace, and instead of directions like “go here, do this, move left, jump right,” the walkthrough (besides suggesting different charm combinations, which was useful) literally said that the White Palace is a very difficult platforming challenge and that you will find yourself very irritated unless you have a lot of patience.

So I got to practice my patience. Ha ha. While going through tough challenges like the White Palace, I want to model the kind of calmness I want my children to have. So if I found myself getting too upset, I walked away from the game.

Some of what makes the White Palace difficult is that, in its platforming challenges, you never know where Hollow Knight will place an implicit checkpoint. If you fall, you restart from the previous checkpoint, and implicit checkpoints only occur where the knight can stand still out of danger. There are very few spots in the White Palace where the knight can do this, with long, difficult challenges everywhere between. Still, I felt the designers did a good job of placing these implicit checkpoints.

While I was playing the White Palace on Sunday, with my brother and husband watching, I tried to stay alive while they tried to figure out a good strategy for me to proceed. It worked! And it made the challenges a little more fun, as my brother and husband also pointed me to strategies that led to certain death: jump into those spikes! Which I did.

What’s next

  • Mid-semester update to Dr. Eliane Wiese
  • Create some redesigns for the polarity simulation
  • Course work
  • Personal goals


I know my course work is suffering because of new personal and professional goals that have come up; I’m lacking the motivation to work on it.

Cool Stuff!, Ph.D. Adventures!

So Far So Good! (Ph.D. Year 1, Spring Week 7)


I stuck with my research time and was able to make an outline for my literature review and recoded some of the articles within it. I’ve also set out a timeline for the rest of the semester for the simulation educational research I’m doing with Dr. Wiese. And I went to Nevada for the weekend to visit a family friend. I also now have 38 of the 40 charms in Hollow Knight.

For the Interested Reader:


I listed my accomplishments in the TL;DR. One thing I didn’t mention is that I started doing HackerRank challenges in C++, pretty much one challenge a day. It’s been a lot of fun. Once I finish a challenge I ask myself if I can do it better: can I do it in place, can I use a different data structure, and so on. I think through those questions since that’s usually how my technical interviews have gone, and it’s also fun just to see if there is an improvement to be made.

What’s next

  • I need to create a plan for Dr. Wiese for what we’re going to do over the next three weeks.
  • I need to look into creating an IRB for my research with Dr. Wiese.
  • There is an article I need to read called “Fostering the Intelligent Novice.”
  • Figure out how to do citations in LaTeX.
  • Stick with my research times and keep plugging away at my literature review.
  • As always, coursework.
  • Personal goals.




Cool Stuff!, Game Dev Adventures!

MY FAVORITE SEASON! (Ph.D. Year 1, Week 5)


I wrote a very rough draft for the AISL grant and narrowed down the research questions and aims for my systematic review, along with a pretty comprehensive list of keywords and databases to search. I need to get the AISL proposal written up, and fast. The deadline is next week, and I need to somehow fit that in with my course load. Where has September gone? On the bright side, it’s officially my favorite season, and my husband and I just passed our half-year wedding anniversary.

The Full Story:


It’s officially my favorite season! YEAH AUTUMN! Pumpkins, cute jackets and boots, and bring on the hot chocolate! My husband and I just passed our six-month anniversary. (We’re not very good at celebrating on specific days, but we do celebrate!). We started our marriage with his favorite season — we married on the spring equinox — and now we get to celebrate the rest of our first year kicking it off with my favorite season. I picked a good wedding day. 

Last week Rogelio made me aware of the NSF’s AISL program and potential funding. AISL is “Advancing Informal STEM Learning,” which is right up my games alley! He asked me to write up a 1.5-page draft proposal. We first have to compete with others from the University of Utah (each university can only submit three proposals for AISL funding). The final is due next week, on Wednesday, Oct 2nd! Wish us luck.

With the help of my advisor, Rogelio, I narrowed down my systematic review questions, created aims, and with the help of others from the QED lab, I now also have a pretty comprehensive list of databases to search and keywords to search on. (Thanks QED lab!)

What’s next

Write up a hero narrative for the AISL. Get all my notes from the systematic review into one place (they are a bit all over the place right now). Somehow fit in my coursework and study for my upcoming midterms.


I somehow have to fit writing up the AISL and doing my research in amongst many large projects from my courses. I can see now why taking three courses, two seminars, and doing research is a bit much. I am definitely feeling burnout, mostly because one course’s workload (just one) ate my weekend. 😦


Cool Stuff!, EAE 6320-001 GameEngII Game Engine Project

Assignment 9: Now in 3D!

NancyNewren_ExampleGame 09

(Simple one-click download. Once downloaded, unzip, play the executable, enjoy!)


Main object: use the arrow keys to move left/right and up/down.
Camera: left/right (A/D), up/down (W/S), zoom in/out (Q/E).
(When moving the camera, things in the world will appear to move in the opposite direction. Thus when you move the camera left, the world will appear to move right.)

About the Project

This week I completed the transformation of the game to 3D. To accomplish this I added a third dimension: depth. Then I added matrix transformations to take 3D objects from their local space (at their origin and facing forward), to world space (where the object is in relation to the origin of the world), to camera space (where the object in the world is in relation to the observer, or camera).

Local space is essentially where a 3D mesh’s vertices are in relation to its origin. We place the object facing forward by convention, so we automatically know how to rotate things and how they should appear without even seeing them first.

World space is where an object is in the world of the game: if we have a city then we have streets and buildings, and we may use the same mesh to represent all those buildings, at the same local space, but in the world they can be all over the map. World space gives us an object in the map of a game.

Finally we have an observer: world space really doesn’t mean anything until we have one. In 2D, what we draw to the screen is the world we’ve made, but in 3D what we draw depends entirely on the perspective we’re looking from, since we’re taking something with three dimensions and flattening it to two. Without an observer there’s no way to know how to flatten the world. In this case we call the observer the camera. Since the camera is looking at the world, when the camera moves the world appears to move in the opposite direction (up:down, left:right, forward:back, clockwise:counterclockwise, etc.). To keep the math simple, and since moving the camera is equivalent to moving the world inversely to the camera, we take the world to the camera.
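The “take the world to the camera” idea can be sketched in a few lines. This is my own illustration for a translation-only camera, with made-up names, not the course engine’s code:

```cpp
#include <cassert>

// Minimal 3D vector for illustration (names are mine, not the engine's)
struct Vec3 { float x, y, z; };

// For a translation-only camera, "taking the world to the camera" is the
// inverse of the camera's own transform: subtract the camera's position.
// Move the camera +1 in x and every point lands 1 further in -x, which is
// exactly the "world appears to move opposite" effect described above.
Vec3 WorldToCamera(const Vec3& worldPos, const Vec3& cameraPos)
{
    return { worldPos.x - cameraPos.x,
             worldPos.y - cameraPos.y,
             worldPos.z - cameraPos.z };
}
```

A real camera also inverts its rotation (for an orthonormal basis, the transpose), but the principle is the same: apply the camera’s inverse transform to the whole world.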

The last transform we need then is the one to flatten the world to render on screen.

Here’s how the platform independent transform from local space to world space (localToWorld) in the shader looks:

float4 verPosWorld = MultiplyMV(g_transform_localToWorld, float4(i_position, 1.0));
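For reference, a CPU-side version of that matrix-times-column-vector multiply might look like the following. The types and row-major layout here are my own illustration, not the engine’s actual math library:

```cpp
#include <cassert>

struct Vec4 { float x, y, z, w; };

// Row-major 4x4 matrix; m[row][column]
struct Mat4 { float m[4][4]; };

// Matrix * column vector, mirroring what MultiplyMV does in the shader:
// each output component is the dot product of a matrix row with the vector
Vec4 MultiplyMV(const Mat4& a, const Vec4& v)
{
    const float in[4] = { v.x, v.y, v.z, v.w };
    float out[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += a.m[r][c] * in[c];
    return { out[0], out[1], out[2], out[3] };
}
```

Note the w component of 1.0 in the shader line above: it is what lets the fourth column of the matrix act as a translation.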

The last thing I did was implement depth buffering, so that instead of the last mesh drawn simply covering up the previous one, meshes can cover up parts of each other. This allows us to render the plane and the cube intersecting (without depth buffering, either the cube or the plane would cover the other up):


I chose to make the plane double-sided: I used the same four vertices but sent the index buffer 12 indices (instead of six), so that you can see the plane from underneath as well as on top, while the plane stays flat (no sides).
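A sketch of what such a double-sided index buffer could look like; the vertex numbering and winding convention here are illustrative, not the project’s actual data:

```cpp
#include <cassert>
#include <cstdint>

// Four vertices (0..3) of a flat quad. With a fixed winding convention,
// the top face lists its two triangles in one order and the underside
// lists the same triangles reversed, so the quad is visible from both
// sides without adding any vertices.
const uint16_t kQuadIndices[12] =
{
    0, 1, 2,   // top, triangle 1
    0, 2, 3,   // top, triangle 2
    2, 1, 0,   // underside, triangle 1 (reversed winding)
    3, 2, 0,   // underside, triangle 2 (reversed winding)
};
```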

I didn’t have time for any optional challenges this week. I had wanted to do camera rotation, but I ended up spending my extra hours on a Visual Studio bug. I was relieved to discover that the bug wasn’t in my implementation, but still the time was gone. Maybe next time.

Cool Stuff!, EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 8: Moving Towards 3D

(Simple one-click download. Once downloaded, unzip, play the executable, enjoy!)
Controls: Use the typical WASD to move the trapezoid and see what shapes you make!


For this week’s project I created a basic 2D mesh that I will extend to 3D next week. I started by copying the 2D sprite and shader code directly and then adding and changing things to make it a mesh. Because this required a lot of steps, I did my best to work in chunks as small as possible and to always test that my code worked on every platform and every configuration after each update. I believe this saved me big headaches: twice, one platform wasn’t working, but because the changes were fresh and I knew exactly what I’d just changed, I was able to resolve the bugs fairly quickly.

To move the object I used a very simple physics equation: in UpdateSimulationBasedOnInput() I set the velocity to m_speed if a key was pressed for that direction (i.e. if ‘A’ was pressed then velocity.x = -m_speed), to zero when no key was pressed, and to zero if opposing direction keys were pressed at the same time. Then in UpdateSimulationBasedOnTime(), position += velocity * time (because this is the function where we’re given the time variable; the other function is where input updates are received). As with most games, the simulation (or game) update happens more slowly than rendering (in this case the simulation updates every ~0.066 seconds), which is why the movement appears jumpy. You could obviously speed up the simulation to get around this, but in most games the update rates still differ, so the ideal solution is to calculate a predicted position and send that position to the renderer. This can be done when submitting to the renderer, as that function is given the time since the last simulation update, so a more accurate prediction can be made. (I have done this in the next assignment.)
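A minimal sketch of that update-plus-prediction scheme. The two function names come from the description above, but the signatures and the one-axis simplification are mine, not the real engine’s interface:

```cpp
#include <cassert>

struct MovingObject
{
    float position = 0.0f;   // one axis for brevity
    float velocity = 0.0f;
    float m_speed  = 2.0f;   // units per second (illustrative value)

    // Input update: +/-m_speed while a direction key is held, zero when
    // no key (or both opposing keys) are held at once
    void UpdateSimulationBasedOnInput(bool negativeKey, bool positiveKey)
    {
        if (negativeKey == positiveKey) velocity = 0.0f;   // none, or both
        else if (negativeKey)           velocity = -m_speed;
        else                            velocity =  m_speed;
    }

    // Fixed-rate simulation update (~0.066 s per tick in this assignment)
    void UpdateSimulationBasedOnTime(float dt)
    {
        position += velocity * dt;
    }

    // At render submission: extrapolate past the last simulation tick so
    // the faster render rate doesn't look jumpy
    float PredictedPosition(float timeSinceLastSimUpdate) const
    {
        return position + velocity * timeSinceLastSimUpdate;
    }
};
```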

I also created, though it was not required, a 3DObject class to manage the mesh, effect, and other variables needed for rendering a 3DObject.

One last note: when rendering, because we always want our UI elements (2D sprites) to be seen, we always draw our 3D objects/meshes first. This ensures that our UI (player HUD, menus, etc.) is always visible and accessible to the player.

Cool Stuff!, EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 2: Effect and Sprite Encapsulation, GPU Debuggers

NancyNewren_ExampleGame 02

This game is a simple sprite animation. Sit back and enjoy the ambiance!

(Simple one-click download. Once downloaded, unzip, play the executable, enjoy!)

About the Project

In this project I encapsulated the effect and sprite code into structs. I chose a struct, but a class would have been identical in code, as the only difference between the two is that a struct’s members are public by default while a class’s are private by default. The purpose was to eliminate duplicate code and to ensure any differences in code were intentional.

The process of encapsulation wasn’t exactly straightforward, as it was complicated by the fact that I am publishing to essentially two different platforms: one with Direct3D and the other with OpenGL. So in addition to creating the typical .h and .cpp files, I also created structName.[platform].cpp files, which held the larger portions of code that were dissimilar between the platforms, thus creating a platform-independent interface and two platform-specific implementations. In the case of the effect, there was an additional fourth file which handled the platform-independent parts of binding, initializing, and cleaning up. The sprite did not require this fourth file, as there were not enough similarities between initializing, drawing, and cleaning up to warrant any platform-independent functions.

I utilized Notepad++’s Compare plugin a lot in this assignment. It made it very quick and simple to view the differences alongside the similarities. All large differences were moved into platform-dependent files, while small differences were kept in the same file but wrapped in preprocessor macros to turn the code on and off depending on the platform build.
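The macro pattern for small platform differences might look something like the following. The macro names here are placeholders of my own, not the project’s actual defines, and for this sketch I define the Direct3D symbol myself (in the real project the build configuration defines it):

```cpp
#include <cassert>
#include <string>

// Placeholder macro: in the real project, each build configuration
// defines its own per-platform symbol. Here we pretend this translation
// unit is being compiled for the Direct3D build.
#define EXAMPLE_PLATFORM_D3D

#if defined( EXAMPLE_PLATFORM_D3D )
    // Direct3D-only code is compiled in
    const char* const g_shaderExtension = ".hlsl";
#elif defined( EXAMPLE_PLATFORM_GL )
    // OpenGL-only code is compiled in
    const char* const g_shaderExtension = ".glsl";
#endif
```

The appeal of this pattern is that the small per-platform difference sits right next to the shared code, so the file still reads as one implementation.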

With the effect and sprite code now encapsulated within sEffect and sSprite respectively, the code in Graphics.[platform].cpp to bind and draw is now this:

// Bind the shading data
// Draw the geometry
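The actual calls behind those two comments aren’t shown above; a plausible shape for the encapsulated loop, with hypothetical type and method names, would be:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Hypothetical sketch: the real members and method names may differ.
// The point is that all per-platform work now lives inside the structs,
// so this loop itself is platform independent.
struct sEffect
{
    int bindCount = 0;
    void Bind() { ++bindCount; }   // would bind shaders and render state
};

struct sSprite
{
    int drawCount = 0;
    void Draw() { ++drawCount; }   // would submit the sprite's geometry
};

void RenderFrame(std::vector<std::pair<sEffect*, sSprite*>>& pairs)
{
    for (auto& p : pairs)
    {
        p.first->Bind();    // Bind the shading data
        p.second->Draw();   // Draw the geometry
    }
}
```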

The color and color animation were updated and made smooth by using the equation I mentioned last time:

0.5*sin(time) + 0.5;

The only real differences left between the Graphics.[platform].cpp files involve the view for D3D, and clearing the image buffer before binding and drawing. These could easily be resolved by pulling that code out into the Graphics.cpp file and surrounding the platform-specific parts with macros. While it would be possible to separate them into separate files, I don’t believe any of the differences are large enough to warrant that, with one exception: the InitializeView() function. But it is at the end of the file, and with a well-placed comment I believe it could still remain there behind a macro.

Lastly, I added a second triangle to make a rectangle, being careful to maintain the proper winding order.

Though the idea of encapsulation is pretty straightforward, going through the code and separating it out into its interfaces and implementations took the majority of the time.

Thanks to Zeno, who helped me when my platform-specific implementations weren’t working, answered C++11 standards questions for me (this is the first time I have used them), and was someone I could bounce ideas off of. In particular, I completely forgot that the platform-specific files had to be excluded from the other platform’s build. I had literally just done that for some files two days ago, but that’s why I think it’s more efficient to work alongside other engineers, even while working on separate projects. They frequently can help you quickly jump the hurdle right in front of you that you’re not seeing because your brain is too busy trying to solve the problem at the end of the track.


Graphics debugging can be difficult (as is anything visual, where you don’t get results until run-time), but VS has a cool built-in GPU debugger for D3D, and for OpenGL I chose to use the open-source RenderDoc. For the last part of this project I took images of my game using the graphics debuggers, although I couldn’t get all the images I wanted.

VS Graphics Analyzer for D3D

Here are two from the VS Graphics Analyzer. (You’ll notice that the pipeline view is empty; every time I tried to use it, the Graphics Analyzer would crash.)

RenderDoc for OpenGL Graphics Debugging

This is literally the farthest I got with RenderDoc. The moment I try to open a log file I get the RenderDoc Error Reporter. On another computer the program crashed before the game could even load.

Why They Didn’t Work

I tried the VS GPU Analyzer and RenderDoc on three different computers, and none of them worked: my Windows 8.1 laptop, my Windows 7 desktop, and my work computer, which is Windows 10. I suspect the firewalls and restrictions on my work computer kept the graphics analyzers from being able to do their injections. It seems that the VSGA and RenderDoc aren’t compatible with the libraries we’re using on Windows 8.1 and 7, and there doesn’t seem to be a fix. However, if you’re on Windows 10 (without strict restrictions on it), then I believe these tools are a great asset for graphics debugging.

Optional Challenge: Draw a House

The question was “Can you figure out how to draw a house?” The answer: yes.

Really, you can draw anything you want out of triangles if you’re willing to take the time to code it all in. All of these were hardcoded, and even though I thought I had winding order down, I still made a couple of mistakes the first time (it does help to write it out first!). I also did a couple more simple shapes: an arrow and a pinwheel.

Of course in a real game you aren’t drawing the triangles out by hardcoding their positions. There are pipelines that handle drawing and rendering images to the screen; but way down below all those cool graphics: triangles.

Cool Stuff!, EAE 6320-001 GameEngII Game Engine Project, Game Dev Adventures!

Assignment 1: Project Setup and Simple Animation


This game is a simple triangle animation. The player need not do anything but enjoy the beautiful movement of this simple shape.

(Simple one-click download. Once downloaded, unzip, play the executable, enjoy!)

About the Project

This was a fun one to program. It was very straightforward and when it came time to do some math, I was excited! The object was to simply animate a triangle on screen using the given solution files and graphics project.

To animate the triangle I took the sine of time to give it a smooth vacillation. Sine’s range is [-1,1], though, so to animate the color I essentially took the absolute value of sine and varied the green value between 0 and 1, as color is represented as RGBA values in [0,1]. Because I didn’t want to call a math library for a simple absolute value, I calculated abs(sin()) using the mathematical definition of absolute value:

f(x) = x if x >=0
f(x) = -x if x<0

Thus in my code I have something like this:

absSin = sin >=0 ? sin : -1*sin;

Now this does give a sharp point to the animation, but I wasn’t interested in smoothness at every point along the graph and liked the quick turnaround. However, if I had wanted a smooth animation, I would have halved the peak size and moved the graph above the x-axis (to get that [0,1] range) as follows:

moreDifferentSin = 0.5*sin(time) + 0.5;

Instead of moving the triangle around, I decided to make it shrink and grow by multiplying the x position of all vertices by abs(sin(time)). Since the middle vertex was already at (0,0), it remained in place. The result: continuous similar triangles of varying size and color.
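Pulled together as plain functions (the helper names are mine, not the assignment’s code), the animation math looks like:

```cpp
#include <cassert>
#include <cmath>

// abs(sin(time)) via the piecewise definition, without a library abs call
float AbsSin(float time)
{
    const float s = std::sin(time);
    return (s >= 0.0f) ? s : -s;   // sharp turnaround at zero crossings
}

// The smooth alternative: shift and scale sin into [0, 1]
float SmoothSin(float time)
{
    return 0.5f * std::sin(time) + 0.5f;
}

// Scaling a vertex coordinate by the same factor for every vertex yields
// similar triangles; a vertex at the origin stays fixed
float ScaleCoordinate(float coordinate, float time)
{
    return coordinate * AbsSin(time);
}
```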

I definitely struggled to get the project to build, but I learned quite a bit about setting up dependencies and building while doing it. I was even able to quickly help some classmates as their errors came up, because I did pretty much everything I could do wrong, so now I recognize the errors and know what to do about them. The class discussion boards were definitely helpful in resolving this, but it was especially the two hours I spent with my professor, JP, after posting two discussions myself that finally got my build working. He did this late on a Saturday night, and that is definitely going the extra mile.

I first built on my Windows 8 laptop, knowing that the project code was written for Windows 8+. My classmate Cornelia also tested the project for me on Windows 7, but that was a VM and it didn’t work. JP asked me to test on my desktop, and I found that it worked! I had already installed all the necessary programs and SDKs previously, and didn’t need to install anything else to get it working except the game itself. It was exciting to see it work on my Windows 7 machine: for all configurations and all platforms (we’re using OpenGL and D3D). Now I don’t have to upgrade. Hooray! 😉

Working with the Graphics Project

We were given the entire solution minus the graphics project, which we then had to add to the rest of the solution, adding the appropriate references and dependencies to it and to the projects that call its functions within their .cpp files. I added Graphics as a reference to ExampleGame_, and Applications.

I did not add it to the Tools project, however, even though Graphics is referenced by the ShaderBuilder files, because all references to Graphics were enums, which aren’t necessary to link. For example, in Tools/ShaderBuilder/cShaderBuilder.cpp it reads:

Graphics::ShaderTypes::eType shaderType = Graphics::ShaderTypes::Unknown;

This references the Graphics library, but doesn’t call any function used in the .cpp file, so it’s not necessary to add the reference.

Expectation of Class

Based on the class so far, I expect to learn quite a bit about programming graphics. JP (my professor) was the first person ever to explain how to tell which way triangles are wound in a way that I completely understood, and understood with confidence. I’ve also learned the difference between building a solution and building a project; I never understood before why you would do one over the other. Essentially, you build a project when you want to build just it and its dependencies, and you build a solution to build everything in the solution, whether or not it is listed as a dependency. This is especially helpful to know if you have projects that rely on each other’s output but don’t need each other to build. I don’t think I’ve ever had a setup where this was true before, but it makes sense that in large projects you wouldn’t want to set dependencies on other projects just so they build together. The upside is that if you change one project, you only need to build that project and whatever depends on it, and nothing else; in very large projects that can save a lot of time! I’m definitely expecting to learn many more C++ programming insights and Visual Studio shortcuts, and to become a much more proficient programmer.

Cool Stuff!, Helping Hand, Ludology

10 Ways to Improve Your Game Cameras

Game cameras can involve some of the trickiest coding you’ll do. That’s why when John Nesky’s “50 Common Game Camera Mistakes” talk from GDC 2014 went live on YouTube (the talk is embedded below), I immediately watched it! We can all benefit from Nesky’s lessons for refining our cameras.

When I first started watching Nesky’s talk, I didn’t think he actually had 50 mistakes, nor that he could get to all of them in an hour-long talk! Not only did he do both, I also realized he was right that these are only some of the issues we face. Which means we should keep the conversation going!

So here are my 10 Ways to Improve Your Game Cameras as takeaways from Nesky’s talk:

  1. Use a bigger FOV for heaven’s sake! 🙂
  2. If you can use a simple camera, do it!
  3. The player’s intent should supersede camera scripts.
  4. Use the camera to give the player subtle hints, but don’t be overbearing.
  5. Don’t use quick camera transitions in place of cuts. Either cut, slow down the transition, or find an in-between.
  6. If rotating, the camera shouldn’t rotate in place (on its own axes); it should rotate around the avatar.
  7. WATCH FOR SIMULATION SICKNESS (SS). It’s a disconnect between movement you see and movement you feel. Any movement seen and not felt can cause SS. Have options to turn off extra movement if you want to leave it in the game.
  8. Avoid jerky camera movements and constant camera angle changes, especially during combat! It not only leads to SS, but can also be disorienting to players as controlling movement changes with camera angles. It also doesn’t look as pretty, ahem, as aesthetically pleasing.
  9. Allow players to invert controls. A significant portion of players want inverted controls (like me!); you’ll lose them otherwise.
  10. Implement, test, iterate, test, test, test! Repeat.
  11. Re-Watch John Nesky’s talk as needed. 😉

I just love GDC talks. There are always so many of them at the event that you literally cannot attend them all, so thanks to GDC for posting this treasure publicly. Thanks to John Nesky for being willing to share his own mistakes so the rest of us don’t have to suffer… as much! And for the rest of us I wish a big: Good luck!