It is with great reluctance that I’ve decided to suspend development on my 3D game engine, at least for the near future. I know this will be a disappointment to anyone following the progress, and I did not make the decision lightly. After spending some time thinking about it, though, I believe it’s the right move. It’s just really hard to justify the amount of time and effort this project requires when there are much better solutions available off-the-shelf.

In the little less than two years I’ve worked on this project, I think I made decent strides. Honestly, I was not working on it full-time; mostly just on weekends, or here and there when I had time after work. My initial goal was to have something functional in about six months to a year. It took a little longer, but eventually I did have the basic features of an engine more or less working. I will list what was implemented below, but it was enough to load in some textured models and fly a camera around. Not ground-breaking work by any means, but I built nearly everything myself, and I was happy about that. Still, there was an enormous amount of work remaining to get to a state where a game could actually be produced. In the end, it was just too much.

So before I give my personal advice, I’ll break down how I got to this point. First, let’s look at all the books I read as research for this project:

Game Engine Architecture – Jason Gregory
Game Coding Complete, Fourth Edition – Mike McShaffry
Real-Time Rendering, Third Edition – Tomas Akenine-Moller
Game Engine Design And Implementation – Alan Thorn
3D Game Engine Programming – Stefan Zerbst
Game Engine Gems, Volume One – Eric Lengyel
3D Game Engine Architecture – David H. Eberly
3D Game Engine Design –  David H. Eberly
3D Math Primer for Graphics and Game Development – Fletcher Dunn
Mathematics for 3D Game Programming and Computer Graphics, Third Edition – Eric Lengyel
Introduction to 3D Game Programming with DirectX 11 – Frank Luna
Practical Rendering and Computation with Direct3D 11 – Jason Zink
Beginning DirectX 11 Game Programming – Allen Sherrod
Game Programming Patterns – Robert Nystrom
Game Physics Engine Development – Ian Millington
Physics for Game Developers – David M Bourg
Physics for Game Programmers – Grant Palmer
Game Physics Pearls – Gino van den Bergen
Shadow Algorithms Data Miner – Andrew Woo
The C++ Programming Language, 4th Edition – Bjarne Stroustrup
C++ Primer (5th Edition) –  Stanley B. Lippman
C++ Primer Plus (6th Edition) – Stephen Prata
More Effective C++ – Scott Meyers

If you are considering engine development, please read on; either way, I believe all of the above books are worth a look. I’ve also read a variety of online sources, from tutorials to official documentation and more. That list may look huge, but I am an avid reader and have read (and listened to) a number of non-technical books in that time-span as well.

Next, let’s look at what features were actually implemented and working (or mostly working):

  • Win32 app template (message pump)
  • DirectX 11 based rendering engine
  • DirectInput mouse and keyboard controls
  • Custom math library (Vector, Point, Matrix, view/projection)
  • Node-based hierarchical scene graph
  • Frustum culling (sphere and point tests, minimal bounding sphere)
  • Generic XML parser (from scratch, kind of hacky and slow)
  • COLLADA importer (only basic vertex and UVs)
  • Custom binary 3D model format (for speed)
  • Camera object (spectator mode)
  • Texture mapping and Skybox (cube mapping)
  • Directional, ambient, and specular lighting
  • Normal and shadow mapping shaders
  • Command-line console window logging
  • Architectural glue and probably some odds-and-ends.

So what wasn’t finished? Way too much to list. I don’t want to get down on myself too hard, because I think I did accomplish something here. Certainly some of this stuff was not easy and, if anything, it has made me a better coder. Even in the current state, I guess you could make a Pong game or a simple walk-through demo or the like. Also keep in mind this was a part-time excursion; had I been dedicated to it full-time, it probably would have taken only a few months (not counting research time). While I started with a very clean style and deliberate design, by the end I was kind of hacking things together, and some features (like the shadow mapping) became deeply intertwined with the core rendering code. Of course, this was very much a learning experience, and I planned to go back and clean things up later (which may or may not ever happen, we’ll see).

OK, so by now you are probably getting the idea. As for my advice: if you are trying to make a game (or game-like interactive experience), don’t bother making a custom engine. Everyone basically told me this from the start, and I didn’t listen. It’s sort of like when you see people doing stunts on TV and they tell you “don’t do this at home.” But they did it, and it made them rich and famous. So maybe I was looking for glory, or hoping that finishing a big project like this would take me to another level. In a way, I guess it did. I’m just not sure it was the best use of my limited time. What I mean to say is that knowledge is valuable, and I feel I learned a lot. However, unless you really want to become a professional engine programmer, there is not much reason to attempt this.

The crux of the matter is that all-in-one engines like Unity and Unreal are just too complex and mature to expect to compete with. That’s not to say a small team or a single hero-coder can’t produce something great. They can. However, using a pre-built engine cuts out a *lot* of time and risk. Had I been working on a project for a client, I would never have even contemplated a custom engine. It just doesn’t make sense. Both Unreal and Unity have huge teams of expert engineers working solely on engine development. Unless you work for a huge AAA studio with a monster budget, I just can’t see coming up with something better in any reasonable amount of time. Even some huge companies (like Capcom and Square Enix) are dropping in-house engine development in favor of Unreal 4. Not a good sign for indies or bedroom coders working on their own engines.

But let’s say those big engines aren’t for you. There are a number of open-source engines and libraries out there to help you along. Engines like Torque or C4 look promising if you want a full-featured solution, and there is always OGRE if you don’t mind cobbling together a bunch of different libraries yourself. If you are interested in web publishing, there are options like three.js for WebGL. With source access you can add the features you need or fix critical bugs. Really, there are too many options to ignore.

For the last few weeks I’ve been playing with Unreal Engine 4, and it basically does everything I need and a whole lot more. At $19/month plus a small royalty (if you sell your title), making a AAA-looking game has never been more affordable. Unity may still have a leg up on ease-of-use and the range of platforms it scales to, but for high-end development I think Unreal is probably the one to go with. Having the source code is a huge boon, and the price makes it attainable to almost anyone. Plus, with the visual scripting, Unreal can even be approachable for artists and designers. I’m still in the early stages of getting acquainted with the engine, but so far I am pleasantly surprised. I plan to follow up with some UE4 posts and tutorials in the future, so don’t think this is the end of my blogging career or anything of that sort.

With all that said, there are still some things I’d like to see in engines that don’t exist today. You typically can’t use the editor in stereo 3D (assuming the engine even supports 3D natively), motion-control support for placing objects is non-existent, image and model editing is usually very rudimentary (you must switch back and forth to other apps), and real-time code editing and previewing can be limited. So there is certainly room for improvement, and a case could be made for writing plugins for, or forks of, existing projects. Reinventing the wheel just to make incremental improvements, though, may not be the best path.

At this point I don’t think I would call engine development a “waste of time.” As a learning experience, it was fun and rewarding, and I certainly hope I didn’t just throw away two years of my life. All the work and research brought me to where I am now, so it was somewhat worthwhile. It just pains me to think how far along I might have been had I jumped on Unreal 4 when the subscription model was first announced, or spent more time with Unity over the years. If you actually want to build (and finish) a game, it’s a pretty clear choice to use an off-the-shelf solution. I’m sorry, guys. I wanted to prove everyone wrong, but they were right. Make games, not engines.

Finally, I’m sure the question on your mind is “will you release the source?” I’m considering it, but there are some hacked bits in there and I’m not sure it’s a great example to work from. More likely I will pull out portions or classes from the project and use them in future blog posts highlighting a particular algorithm or technique. The full source itself is probably less useful than other material that’s already out there. In any case, I hope you have learned something from the two years and 25 posts on this project. If you click the (somewhat hidden) three-line icon at the top right of the blog, you can see a list of all the updates in this series for a little history (if you weren’t following the whole time). Cheers, and happy coding.

 

It’s been some time since the last 3D engine update, but I’m still sticking with it. Currently I am working on getting a physics engine implemented. The video you see above is the first glimpse of this custom physics engine. Obviously it’s ultra basic right now, but it’s a start. The algorithm is based on a Verlet integrator, and the code runs on the GPU using DirectCompute.
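
For reference, the core of the integrator is pretty simple. Below is a minimal CPU-side sketch of a position-Verlet step; the real version runs in a compute shader, and the Particle layout and gravity constant here are just placeholders for illustration:

#include <vector>

// Illustrative particle layout; the actual GPU buffers differ.
struct Particle {
    float x, y, z;             // current position
    float prevX, prevY, prevZ; // position from the previous step
};

// One position-Verlet step: velocity is implicit in (pos - prevPos).
void verletStep(std::vector<Particle>& particles, float dt)
{
    const float gravity = -9.8f; // simple constant acceleration on Y
    for (Particle& p : particles)
    {
        float newX = 2.0f * p.x - p.prevX;
        float newY = 2.0f * p.y - p.prevY + gravity * dt * dt;
        float newZ = 2.0f * p.z - p.prevZ;

        p.prevX = p.x; p.prevY = p.y; p.prevZ = p.z;
        p.x = newX;    p.y = newY;    p.z = newZ;
    }
}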

To be honest, it’s pretty hacked together right now, and the bounds/bouncing behavior is hard-coded. But, hey, it’s something! I also tweaked the style of the demo to more closely align with other demos from researchers (and the clean flat-shaded style looks a bit more classy).

I’m planning on expanding this significantly, but I still have a lot of work ahead of me. I’m also questioning whether GPGPU is the way to go for this, especially now that 8-core CPUs are available to consumers (and that’s what I’m running at the moment). There are also the added difficulty and the limitations of DirectCompute compared to straight C++. But if there are big gains to be had on the GPU, I would obviously like to go that route. Definitely more testing to do here, but progress is being made. Stay tuned.
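
For anyone curious, the CPU side of a DirectCompute step is fairly small. Here’s a rough sketch of binding a compute shader and kicking off one simulation step (the shader, UAV, and the thread-group size of 256 are assumptions for illustration, not my exact code):

#include <d3d11.h>

// Dispatch one physics step with DirectCompute.
// Assumes the compute shader, UAV and constant buffer were created elsewhere.
void dispatchPhysicsStep(ID3D11DeviceContext* context,
                         ID3D11ComputeShader* physicsCS,
                         ID3D11UnorderedAccessView* particleUAV,
                         ID3D11Buffer* frameConstants,
                         UINT particleCount)
{
    context->CSSetShader(physicsCS, nullptr, 0);
    context->CSSetConstantBuffers(0, 1, &frameConstants);
    context->CSSetUnorderedAccessViews(0, 1, &particleUAV, nullptr);

    // Assuming [numthreads(256, 1, 1)] in the shader.
    UINT groups = (particleCount + 255) / 256;
    context->Dispatch(groups, 1, 1);

    // Unbind the UAV so the rendering pass can read the buffer afterwards.
    ID3D11UnorderedAccessView* nullUAV = nullptr;
    context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
}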


While I implemented frustum culling a little while ago, I never actually coded a proper bounding volume. For the bounding test I was using a sphere, but I just set the radius to some hard-coded value. This was fine when I had a bunch of similar-sized cubes on screen; however, it broke down once I started importing models of varying sizes. This week I decided to do something about it.

After a quick Google search for bounding spheres, I ended up on Wikipedia. There was some good information there regarding minimal bounding spheres (also known as the smallest enclosing ball or minimal enclosing ball), and it presented a few different algorithms. After reading the whole page I became very interested in the “bouncing bubble” algorithm, especially due to the spiffy gif animation showing it in action. The brief explanation seemed easy to follow compared to the other methods, and the error was only around 1-2%, which is much lower than Ritter’s algorithm (which can be as high as 20%). So far so good.

Unfortunately I found next to nothing on the internet describing the method. What I did find was a post from last year on StackOverflow by someone in the same situation as me. This led me to the original paper describing the method, which was for sale for $18 on Grin.com. Honestly I don’t mind paying for quality information if it’s going to help improve my engine. The article was fairly easy to follow and provided just enough information to base an implementation on. At first I was disappointed that no source code was given, but I managed to pull it together using the pseudo-code. All in all I think I spent about 4 hours on it (including research), so I’m happy to have gotten it working so quickly.

Below is my bouncing bubble implementation. Keep in mind I have not profiled this code, so it may not be fully optimized. However, I’m only doing the calculation when a 3D model is loaded, so it shouldn’t really affect runtime performance at all.

// Bouncing bubble approximation of the minimal bounding sphere.
// Note: 'Vertex' stands in for the engine's vertex type; it only needs a Vector3D 'position' member.
BoundSphere RenderCore::calculateBoundSphere(const vector<Vertex>& vertices){
	Vector3D center = vertices[0].position;
	float radius = 0.0001f;
	Vector3D pos, diff;
	float len, alpha, alphaSq;
	vector<Vertex>::const_iterator it;

	// Two refinement passes, then a final conservative pass (i == 2).
	for (int i = 0; i < 3; i++){
		for (it = vertices.begin(); it != vertices.end(); ++it){
			pos = it->position;
			diff = pos - center;
			len = diff.length();
			if (len > radius){
				if (i < 2){
					// Grow the sphere and pull its center toward the outlying point.
					alpha = len / radius;
					alphaSq = alpha * alpha;
					radius = 0.5f * (alpha + 1 / alpha) * radius;
					center = 0.5f * ((1 + 1 / alphaSq) * center + (1 - 1 / alphaSq) * pos);
				} else {
					// Final pass: expand just enough to enclose the point exactly.
					radius = (radius + len) / 2.0f;
					center = center + ((len - radius) / len * diff);
				}
			}
		}
	}

	return BoundSphere(center, radius);
}

Also, I should remind myself to do benchmarks more often. After I added all the normal mapping and shadow work, performance has taken a big hit. Before, I was getting over 2,400 fps; now I’m only getting around 1,600 fps. Still acceptable, but it could probably be better for such a simple scene. The culling definitely helps, but I may have to look into other optimizations like batching and instancing if I want to reach the speeds I need for my project. I’ll probably get started on the physics engine soon and then optimize later if I can’t hit my 120 fps target. Cheers.

 

After struggling for a bit with the shadow mapping implementation, I finally have something presentable. I followed a tutorial from Microsoft and thought I understood what was happening. However, it required a lot of changes in the rendering code, and it took a little while to get things working. Even once it was somewhat functional, I still had issues with what they call shadow acne. It looked really bad on the self-shadowing side of objects. I tried tweaking all the values I could (e.g. the bias in the shader), but I was not able to get it to look right. Finally, I just set things up so that a polygon does not shadow itself. This happens to look fine, since the lighting equations already darken polygons facing away from the light anyhow. I’m not sure this will hold up in every case, but with the simple models I’m testing with it’s fine.
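
For reference, another knob worth trying besides a shader-side bias is the rasterizer’s built-in depth bias when rendering the shadow map. I didn’t go this route in the end, and the bias values below are placeholder guesses that would need tuning per scene:

#include <d3d11.h>

// Create a rasterizer state with slope-scaled depth bias for the shadow pass.
ID3D11RasterizerState* createShadowRasterState(ID3D11Device* device)
{
    D3D11_RASTERIZER_DESC desc = {};
    desc.FillMode = D3D11_FILL_SOLID;
    desc.CullMode = D3D11_CULL_BACK;
    desc.DepthClipEnable = TRUE;
    desc.DepthBias = 1000;              // constant bias in depth-buffer units (placeholder)
    desc.DepthBiasClamp = 0.0f;
    desc.SlopeScaledDepthBias = 1.5f;   // extra bias on steep, self-shadowing polygons (placeholder)

    ID3D11RasterizerState* state = nullptr;
    device->CreateRasterizerState(&desc, &state);
    return state; // bind with RSSetState() while rendering the shadow map
}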

I’m at the point where I really want to get the physics engine up and running. Still have some more research to do, so I bet it will take some time before I have something more to show. But it’s coming along.

 

After a few days of hacking away at the code, I’ve got a new video up. In this update I have added normal mapping and specular lighting. I did have a few setbacks while working on the shaders, made even more difficult by the fact that I was basically “flying blind” without a debugger. It seems that the Express version of Visual Studio does not support shader debugging, and neither does Nvidia’s debugging tool. Very sad, and I may (at some point) have to upgrade to the Pro version. I’ll probably hold out for a little while longer, and I did end up figuring the problems out this time.

With the specular lighting shader, the issue I had was doing the matrix multiply and output on a float4 instead of a float3. To be honest, I am not quite sure why it didn’t work, but switching things around seemed to help. The change I made is as follows:

output.normal = normalize(mul((float3x3)worldMatrix, normal.xyz));

Instead of:

output.normal = normalize(mul(worldMatrix, normal));

Again, I’m not entirely sure why that works. My best guess is that multiplying the full float4 (with w set to 1) also picks up the translation part of the world matrix, while the float3x3 cast applies only rotation and scale, which is what a normal actually needs. With the normal mapping there was a lot of math and many changes involved, and it became difficult to find which piece was broken. As it turns out, the math was (mostly) correct, but the values I was sending to the constant buffer were not being updated. From what I could tell, the pixel shader could not read the constant buffer (cbuffer) even though the vertex shader could. In hindsight, I suspect I simply never bound the buffer to the pixel-shader stage, since constant buffers are bound per stage. Rather than chase that down, I passed the values along as outputs from the vertex shader instead of reading the cbuffer directly in the pixel shader. In addition, trying to pass bool or int values to the pixel shader would not work, so, as a hack, I copied the same int value into all three components of a float3 and passed that instead. I bet I can figure this one out eventually, but I’ve already spent all day on this and I’m happy just to see something working on the screen.
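
For the record, binding the same buffer to both stages from the C++ side would look roughly like this (the struct, slot number, and names are made up for illustration; this isn’t my engine’s actual code):

#include <d3d11.h>

// Illustrative constant-buffer layout (must stay a multiple of 16 bytes).
struct LightingConstants
{
    float lightDirection[4];
    float lightColor[4];
};

// Update the lighting constant buffer and bind it to BOTH shader stages.
void updateLightingCBuffer(ID3D11DeviceContext* context,
                           ID3D11Buffer* lightingBuffer,
                           const LightingConstants& data)
{
    // Copy the new values into the GPU buffer (created with D3D11_USAGE_DEFAULT).
    context->UpdateSubresource(lightingBuffer, 0, nullptr, &data, 0, 0);

    // Bind to slot b1 for the vertex shader...
    context->VSSetConstantBuffers(1, 1, &lightingBuffer);
    // ...and, crucially, also for the pixel shader.
    context->PSSetConstantBuffers(1, 1, &lightingBuffer);
}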

Next up I will probably try to model something more substantial than a cube so I have something better to look at.

Spent the last couple of days adding skybox support to the engine. Currently it’s a little hard-coded, but it does seem to be working well. I also bumped the field of view (FOV) up to 90 degrees (from 45) so you can see more of the sky. I wanted to make sure I was only using my own artwork for this engine demo, and unfortunately it would have been rather difficult to take panoramic photos myself, so I generated the sky texture using Terragen. Somewhat of a cheat, but I did technically “create” it myself, so I can live with that.

Ran into one issue that had me stumped for about a day. Basically, I was following a tutorial for implementing the skybox. It seemed like I followed everything well, but I was getting this wild fish-eye type of distortion on the sky. As it turned out, it was my custom matrix transpose function that was at fault. The tutorial used “XMMatrixTranspose()” but I re-implemented this myself. The trouble was, I was swapping around the values without creating a temporary copy first. Meaning, I was doing something like this:

m12 = m21;
...
m21 = m12;

Clearly that won’t work. The interesting part is that I got stuck on it one night and couldn’t figure out what was wrong. I knew it had something to do with the matrix math, but I wasn’t sure what. So I left it alone and went about my day. Then, while washing my face in the bathroom, I had a “eureka!” moment and the answer came to me. I’m not sure how that works; maybe some sub-process in my mind keeps chipping away at problems in the background. But I’m glad I finally caught the error.
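
For completeness, the fix is simply to exchange each pair through a temporary, or just let std::swap handle it. A sketch using the same member naming as above (the surrounding Matrix class is my own, so the names are only illustrative):

#include <utility> // std::swap

// In-place transpose of a 4x4 matrix with members m11..m44.
void Matrix::transpose()
{
    std::swap(m12, m21);
    std::swap(m13, m31);
    std::swap(m14, m41);
    std::swap(m23, m32);
    std::swap(m24, m42);
    std::swap(m34, m43);
    // Diagonal entries (m11, m22, m33, m44) stay where they are.
}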

There are a few things I’d like to try next. First off, the lighting model could be improved a great deal. Right now it’s just a hard-coded ambient and directional light. I’d like a more flexible system where any type of light can be added to the scene graph and picked up by the shaders. Support for normal mapping would be cool, as well as specular and emissive textures. I’d also like to run some initial experiments toward a physics system, or at least learn more about compute shaders and how they integrate into the pipeline. Plus, better artwork. Stay tuned.

While getting models loaded was pretty exciting, I ended up dealing with major load times on the demo. Granted, my XML parsing code is probably slow as all hell, but I don’t think COLLADA is really designed for real-time engine use. With simple plane and cube shapes the loading wasn’t that bad, but with my soda can model (around 600 triangles) the loading was nearing 10 seconds (totally unacceptable). I can only imagine what would happen with a really complex model. Something had to be done.

So I decided to switch to a binary format containing only exactly what I need to pump into DirectX (the vertices, normals, UVs, and indices). I created a separate console application that converts *.DAE files into my new binary format, then added engine support for loading the binary file instead of COLLADA. The gain was HUGE. Now when running the exe there is no noticeable lag at all. I guess I knew I would need to do this at some point, but the wait times were too much to bear any longer. Glad to have found a good solution.

Here are some snippets of code to show how to save variables as binary data:

#include <fstream>
using namespace std;

float someValue = 0.12345f;
ofstream outputFile;
// Note: the wide-string path overload is a Visual C++ extension, which is fine for this Windows-only engine.
outputFile.open(L"output.bin", ios::out | ios::binary);
outputFile.write((char*)&someValue, sizeof(float));
outputFile.close();

And then you can read this value later by doing:

float someValue;
ifstream inputFile;
inputFile.open(L"output.bin", ios::in | ios::binary);
inputFile.read((char*)&someValue, sizeof(float));
inputFile.close();

Actually, not that difficult at all. The benefits are decreased loading times and smaller file sizes. The cons are that you now have another step in the asset pipeline and that the files are no longer human-readable. A fair trade, I would say.
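
Extending the same idea, the converter can just write counts followed by the raw arrays. Something along these lines is how one could write a whole mesh in the same spirit (the Vertex layout and the count-then-data format here are assumptions for illustration, not my actual file format):

#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

struct Vertex { float px, py, pz, nx, ny, nz, u, v; }; // illustrative layout

// Write a simple "counts + raw data" mesh file.
void writeMesh(const std::string& path,
               const std::vector<Vertex>& vertices,
               const std::vector<uint32_t>& indices)
{
    std::ofstream out(path, std::ios::out | std::ios::binary);

    uint32_t vertexCount = static_cast<uint32_t>(vertices.size());
    uint32_t indexCount  = static_cast<uint32_t>(indices.size());

    out.write(reinterpret_cast<const char*>(&vertexCount), sizeof(vertexCount));
    out.write(reinterpret_cast<const char*>(&indexCount),  sizeof(indexCount));
    out.write(reinterpret_cast<const char*>(vertices.data()), vertexCount * sizeof(Vertex));
    out.write(reinterpret_cast<const char*>(indices.data()),  indexCount  * sizeof(uint32_t));
}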

engine zero coke can

What you see above is a custom model I made in 3ds Max, exported as a COLLADA *.dae file, and imported into my DirectX engine. I figured I’d start with something simple, like a soda can, and I plan to make a lot more models going forward. Although I hadn’t touched Max in years, I found it to be a comfortable experience and was able to put the model together in a few hours.

Now, actually getting that model into DirectX was a different story. First off, the COLLADA documentation is vast, yet it fails to explain basic things about the format. The examples they show all make sense, but a real model is considerably more complex. To make matters worse, their forum was a ghost town; I found lots of people with the same basic questions I had, whose threads had sat unanswered for months (or years). That said, I was eventually able to figure it out through a lot of testing and trial and error. It really goes to show that you can build the best system in the world, but if the documentation is lacking and the community is thin, then it’s not worth jack.

On top of that, there was a small bug in my XML parsing code that was messing up the attributes. Some of the simple models I tried (a plane and a cube) worked, but the soda can didn’t. It took a while to track this problem down, since Visual Studio appeared to hang whenever I tried to debug. It’s really scary to reach the point where you *need* the debugger desperately and it’s not there. While I thought the engine was crashing, it was actually just caught up in my slow parser, and when I waited for about 5-10 minutes it finally came back to life (thankfully I only needed to reach that one breakpoint to see what the issue was).

Next up, I ran into some issues with the model orientation and texturing. Since 3ds Max uses a Z-up coordinate system and DirectX is Y-up, this needed some special care. I would have thought the COLLADA exporter would handle this, but apparently not. The fix is to swap the Y and Z positions of each vertex. This affects the winding order as well, so if you don’t want your mesh to turn inside-out, you also need to change the order of the indices when you create the index buffer. For example, a triangle of “0, 1, 2” becomes “0, 2, 1”. Finally, I had to negate the V component of the UV coordinates so that the texture looked correct.
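
In code, the conversion ends up being just a few lines per vertex and triangle. A simplified sketch of the kind of conversion the importer does (the Vertex struct and field names are illustrative, not my exact types):

#include <cstdint>
#include <utility>
#include <vector>

struct Vertex { float px, py, pz, nx, ny, nz, u, v; }; // illustrative layout

// Convert a COLLADA (Z-up) mesh into the engine's Y-up layout.
void convertZUpToYUp(std::vector<Vertex>& vertices, std::vector<uint32_t>& indices)
{
    for (Vertex& v : vertices)
    {
        std::swap(v.py, v.pz);   // swap Y and Z positions
        std::swap(v.ny, v.nz);   // ...and normals
        v.v = -v.v;              // negate V (with wrap addressing this flips the texture vertically)
    }

    // Swapping an axis mirrors the mesh, so restore the winding order:
    // triangle (0, 1, 2) becomes (0, 2, 1).
    for (size_t i = 0; i + 2 < indices.size(); i += 3)
        std::swap(indices[i + 1], indices[i + 2]);
}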

All in all, I am pretty happy considering I wrote the importer basically from scratch. I would like to try some more complex models, but I will have to figure out what I want to build next. Since I am doing all this work myself, I’d like to use the engine to showcase my own artwork rather than just download assets from the internet. Maybe I will build a refrigerator to put the soda in, or some more everyday products.

If you like what you read, post a comment and let me know how I’m doing. Cheers.

OGRE 3D Instancing

After some more testing, it looks like OGRE is not the savior it seemed to be yesterday. While the static geometry boosted frame rates greatly, it’s only useful for, well, static objects, meaning the models can’t move or animate. I did find another option, instancing, which initially looked promising. It allows large numbers of identical objects to be rendered much faster than drawing them individually. Sounds good.

The implementation seemed complex at first, but then I found the InstanceManager, which simplified things a whole lot. However, after getting it working, I wasn’t as impressed with the performance. Just rendering the same 13k motionless cubes, I was getting a little over 100 fps. When I added a rotation animation to the cubes, the speed dropped to around 33 fps. Certainly this is still better than the naive implementation, but it’s nowhere close to where I want to be.

To be completely upfront, my computer is not a powerhouse. I’m still running a Core 2 Duo @ 3GHz and GTX 470s in SLI. It’s getting a little old, I know, but it can still play modern games like Titanfall. Maybe I’m expecting too much; I don’t know at this point. I think I will just go back to developing my engine and worry about performance optimization later. Even so, this was an interesting investigation at least.

OGRE 3D Static Cubes

Looks like I spoke too soon. While OGRE was getting pretty slow with the naive implementation, I found what they call StaticGeometry, a system that batches together lots of similar meshes that don’t move (great for my cube example project). With this feature added, the frame rate has skyrocketed to over 2,600 fps. Most impressive. Keep in mind a blank DirectX window on my machine gets around 3,600 fps, so hitting around 2,600 with over 13,000 cubes is very nice. That still doesn’t help me with my physics simulation, since static objects won’t cut it. But it does at least give me a good benchmark for what is possible on my development hardware.
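
For anyone wanting to try the same thing, the StaticGeometry setup is only a handful of calls. Roughly something like this (the mesh name, counts, and positions are placeholders, not my exact test code):

#include <Ogre.h>

// Batch a grid of cube entities into one StaticGeometry region (OGRE 1.x).
Ogre::StaticGeometry* buildStaticCubes(Ogre::SceneManager* sceneMgr, int count)
{
    Ogre::StaticGeometry* geom = sceneMgr->createStaticGeometry("CubeField");

    for (int i = 0; i < count; ++i)
    {
        Ogre::Entity* cube = sceneMgr->createEntity(
            "Cube" + Ogre::StringConverter::toString(i), "cube.mesh"); // placeholder mesh name

        Ogre::Vector3 pos(static_cast<Ogre::Real>(i % 100) * 2.0f, 0.0f,
                          static_cast<Ogre::Real>(i / 100) * 2.0f);
        geom->addEntity(cube, pos);
    }

    geom->build(); // bakes everything into large static batches
    return geom;
}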