It hit me like a ton of bricks. I was sitting in my Game Design class, listening to this
indie game developer (DMD alum Mike Highland) talk about the process of developing a game for the iPhone. At one point in his presentation, he mentioned the issue of getting his game to run within a reasonable frame rate range (they were at 20 fps, when a minimum of 30 fps was desired). I straightened up in my seat; a bad frame rate is most certainly an issue that I'm going to encounter as I create my real-time hair simulation. How did Mike deal with the issue? I cringed. "Originally, we were working with OpenGL directly...but as soon as we moved over to cocos2d (a 2D engine that abstracts OpenGL), we bumped our frame rate up dramatically." Boom! I couldn't help but think about my ridiculous "Frankenstein's Viewer". Here I was, screwing around with OpenGL at the lowest level, thinking that it would give me more control over the mesh objects in my scene (head, collider, scalp). But, while I can make a heck of a Viewer for 277, the focus of this project is
not to figure out which rotation matrices are needed to move a camera around the center of a scene (all that translate-to-the-origin, rotate, and then translate-back nonsense). In addition, I've only implemented
some of the methods out there for storing meshes, and the ones that I have are not very efficient. It's time I re-evaluated my means of storing data and creating user tools. I need to implement my ideas with higher-end libraries (you know, the kind that
already have the classes and methods that I outlined two posts ago). So, to conclude, I wasted a weekend making something that I don't want to use. Its outward appearance is a bit "eh", its controls are finicky and not very user-friendly, and its underlying structure is complicated and inefficient.
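For the record, the camera trick I was wrestling with boils down to composing three matrices: translate the scene's center to the origin, rotate, and translate back. Here's a minimal sketch, assuming a math library like GLM (which my Viewer doesn't actually use; the function name is mine):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rotate the camera by `angle` radians about `axis`, pivoting around the
// scene center `c`: move c to the origin, rotate, then move it back.
glm::mat4 orbitAbout(const glm::vec3& c, float angle, const glm::vec3& axis) {
    glm::mat4 toOrigin = glm::translate(glm::mat4(1.0f), -c);
    glm::mat4 spin     = glm::rotate(glm::mat4(1.0f), angle, axis);
    glm::mat4 andBack  = glm::translate(glm::mat4(1.0f), c);
    return andBack * spin * toOrigin; // matrices apply right-to-left
}
```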
Time for Plan B (the original plan that I should have been implementing all along): Qt + OpenMesh. Cat also got back to me about how the low dynamic head animations are performed:
"We (use) a morphable model as a shape prior. Our face template contains a fixed number of vertices (~8K) and is in a one-to-one correspondence with the vertices of the morphable model. ...We initialize the morphable model by fitting it to a set of user defined points. The model is then automatically refined using the correspondences generated by Surfel Fitting." (taken from Dominik's paper about reconstructing dynamic facial expressions; Cat is working with Dominik, by the way)
What does this mean for me? Two things: (1) the head models that I will be given are dense, and (2) the positions of their vertices change from one frame to the next. How the vertex positions are calculated doesn't really concern me; as long as I can grab them at each frame, I'm happy. Also, if Cat stores her mesh data using OpenMesh, and I get OpenMesh to work on my machine...think of how easy that data grabbing would be! OpenMesh also has a built-in decimation algorithm, which might prove useful.
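To make that concrete, here's a rough sketch of what per-frame data grabbing plus decimation might look like with OpenMesh. The file name and the 2K target are placeholders I made up, and the assumption that meshes arrive as per-frame OBJ files is a guess on my part:

```cpp
#include <OpenMesh/Core/IO/MeshIO.hh>
#include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
#include <OpenMesh/Tools/Decimater/DecimaterT.hh>
#include <OpenMesh/Tools/Decimater/ModQuadricT.hh>

typedef OpenMesh::TriMesh_ArrayKernelT<> Mesh;

int main() {
    Mesh mesh;
    // Hypothetical per-frame file; in reality I'd read whatever Cat hands me.
    if (!OpenMesh::IO::read_mesh(mesh, "head_frame_000.obj"))
        return 1;

    // Grab every vertex position for this frame.
    for (Mesh::VertexIter v_it = mesh.vertices_begin();
         v_it != mesh.vertices_end(); ++v_it) {
        Mesh::Point p = mesh.point(*v_it);
        // ...feed p[0], p[1], p[2] to the hair sim here.
    }

    // Built-in decimation: quadric-error edge collapses, shrinking the
    // ~8K-vertex head down to a (made-up) 2K-vertex version.
    OpenMesh::Decimater::DecimaterT<Mesh> decimater(mesh);
    OpenMesh::Decimater::ModQuadricT<Mesh>::Handle hModQuadric;
    decimater.add(hModQuadric);
    decimater.initialize();
    decimater.decimate_to(2000);
    mesh.garbage_collection();
    return 0;
}
```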
That being said, I'll have to push back next week's plan and repeat last week's agenda. Let's try that again, shall we? Also,
hello arcball. Why did my professors not teach me about you? Or was I just not paying attention...
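For anyone else who slept through that lecture: the arcball idea, as I understand it, is to project the 2D mouse position onto an imaginary sphere sitting over the viewport; the points where a drag starts and ends then define a rotation axis (their cross product) and angle (from their dot product). A quick sketch, assuming mouse coordinates already normalized to [-1, 1]:

```cpp
#include <cmath>
#include <glm/glm.hpp>

// Map normalized screen coords (x, y in [-1, 1]) onto the arcball sphere.
glm::vec3 toSphere(float x, float y) {
    float d2 = x * x + y * y;
    if (d2 <= 1.0f)
        return glm::vec3(x, y, std::sqrt(1.0f - d2)); // inside: lift onto sphere
    return glm::normalize(glm::vec3(x, y, 0.0f));     // outside: clamp to rim
}

// Rotation carrying the drag-start point p0 to the drag-end point p1.
// (Degenerate when p0 == p1; a real implementation would skip that case.)
void dragToRotation(const glm::vec3& p0, const glm::vec3& p1,
                    glm::vec3& axis, float& angle) {
    axis  = glm::normalize(glm::cross(p0, p1));
    angle = std::acos(glm::clamp(glm::dot(p0, p1), -1.0f, 1.0f));
}
```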