Monday, October 26, 2009

Narrowing Down Algorithms

Modeling
  • Transition between different Levels-of-Detail (LODs) or Wisps
  1. Strands (highest) --> Clusters --> Strips (lowest)
  2. Use (a modified) Adaptive Wisp Tree (AWT) to facilitate transitions
  • Interpolation of sparse guide hairs (points connected by curves) to create density
  1. Guide hairs grown based on scalp geometry
  2. Guide hairs loaded from a file
Animating
  • Particle dynamics (treat control points of guide hairs like particles)
  1. Verlet integration (see the sketch after this outline)
  2. Runge-Kutta integration
  • Collision detection using a polygonal mesh boundary around the head
Rendering
  • Anisotropic reflections
  1. Marschner model
  2. Look-up textures
  • Opacity shadow maps
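
As a quick reference for the particle-dynamics bullet above, here's a minimal sketch of what a Verlet step plus Jakobsen-style distance constraints might look like for a single strand. Everything here (Particle, SimulateStrand, the 5 relaxation iterations) is an illustrative placeholder of mine, not code from the project:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct Particle {
    Vec3 pos;     // current position
    Vec3 oldPos;  // position at the previous time step
    bool pinned;  // root particles stay attached to the scalp
};

// One time step for a single strand whose segments should stay restLen long.
void SimulateStrand(std::vector<Particle>& strand, Vec3 gravity,
                    float drag, float dt, float restLen)
{
    // 1. Verlet integration: x_new = x + (x - x_old)*(1 - drag) + a*dt^2
    for (size_t i = 0; i < strand.size(); ++i) {
        Particle& p = strand[i];
        if (p.pinned) continue;
        Vec3 next = p.pos + (p.pos - p.oldPos) * (1.0f - drag) + gravity * (dt * dt);
        p.oldPos = p.pos;
        p.pos = next;
    }
    // 2. Jakobsen-style relaxation: nudge neighboring particles back to restLen apart.
    for (int iter = 0; iter < 5; ++iter) {
        for (size_t i = 0; i + 1 < strand.size(); ++i) {
            Particle& a = strand[i];
            Particle& b = strand[i + 1];
            Vec3 delta = b.pos - a.pos;
            float len = std::sqrt(delta.x*delta.x + delta.y*delta.y + delta.z*delta.z);
            if (len < 1e-6f) continue;
            float diff = (len - restLen) / len;
            if (!a.pinned) a.pos = a.pos + delta * (0.5f * diff);
            if (!b.pinned) b.pos = b.pos - delta * (0.5f * diff);
        }
    }
}

Runge-Kutta would replace step 1 with a higher-order integrator; the constraint pass would stay the same.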

Collision Detection using Rays

Ray tracing isn't just for rendering scenes, you know! It also comes in handy for 3D collision detection (as opposed to, say...projecting your collider onto all 3 coordinate planes and trying to call pointInPoly for each projected face...not like I tried that or anything...). I know this is just one hair strand, but the ability to maintain speed at this stage is encouraging. No more snapping either! (There's a sketch of the ray test I mean below.)

(yes, I know...his one eye creeps me out too)
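
For the curious, the per-triangle test that this kind of ray-based collision check boils down to is ordinary ray-triangle intersection. Below is a sketch using the well-known Möller-Trumbore method (illustrative only, not a dump of my actual code); a particle's motion can be tested by casting a ray from its old position toward its new one and comparing the hit distance t against the length of that segment.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true (and the distance t along the ray) if orig + t*dir hits the
// triangle (v0, v1, v2).
bool RayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
{
    const float EPS = 1e-7f;
    Vec3 e1 = sub(v1, v0);
    Vec3 e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;  // ray is parallel to the triangle
    float invDet = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * invDet;            // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * invDet;          // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * invDet;                 // distance along the ray
    return t >= 0.0f;
}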

Upcoming Tasks:
  • Using Connelly's HairQuad class as a road map, begin devising a way of organizing hair strands into groups that collide with each other [Update: I don't think I'll combine hair strands into groups quite yet. I'd rather create classes for the other two LODs (hair clusters and hair strips) first; maybe they can all be subclasses of the same virtual class (HairLOD or something like that - see the sketch after this list). Then I'll look into ways of transitioning between the three, perhaps based on proximity or occlusion, which will require reading through the Adaptive Wisp Tree paper again. So I'm rewriting this task as: implement self-intersection for hair strands, then create HairCluster and HairStrip classes, both with collision (self and collider) capabilities like the HairStrand class.]
  • Try connecting the hair strand control points with curves (Bezier, Hermite, B-splines?)
  • Continue to collaborate with Cat --> use OpenMesh objects as input for colliders (I'm currently using a simple .obj parser instead)
  • Write the "Background" section of my proposal [Update: done, but I'll probably keep revising it]
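
Since I mentioned the HairLOD idea in the first task above, here's a rough sketch of how that interface might look. The class names mirror what I wrote in the update, but the method list is just my guess at what the three LODs will need to share; nothing is final:

// The derived classes would each implement the dynamics, collision, and
// drawing for their own representation.
class Collider;  // whatever mesh-based collider class ends up being used

class HairLOD {
public:
    virtual ~HairLOD() {}
    virtual void Update(float dt) = 0;                      // particle dynamics step
    virtual void ResolveSelfCollisions() = 0;               // strand/strand or wisp/wisp tests
    virtual void ResolveCollisions(const Collider& c) = 0;  // tests against the head collider
    virtual void Draw() const = 0;
};

class HairStrand  : public HairLOD { /* highest LOD: a chain of control points */ };
class HairCluster : public HairLOD { /* middle LOD: a wisp of strands */ };
class HairStrip   : public HairLOD { /* lowest LOD: a flat textured strip */ };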

Sunday, October 25, 2009

Collider-philic Strands

Collision detection is working! Unfortunately, the hair strand seems to want to stick to the collider. I'm going to look into this more...

(just one "free" control point)


(collider-philic strand!)

UPDATE: I have fixed the issue! Silly me, I was projecting all of the "free" control points onto the collider mesh whenever just one collided (hence all of them suddenly snapped into place). I've adjusted my code to snap just the colliding particle:

(getting better...)

Alas, the snap is still a bit too forceful, and I don't really know what's causing it. It seems almost contradictory: on the one hand, the snap means that my code has registered a collision, but on the other, the abruptness of the snap seems to imply that some extra steps are needed to get the particle a little closer to the collider before it actually collides. So my eyes are telling me one thing, but my code (the math) is telling me another. I trust my eyes more, honestly. ~__~
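
For the record, the per-particle fix described in the update above looks roughly like this. Collider::Project() is a stand-in for whatever query my collider class ends up exposing, and Particle/Vec3 are the same illustrative types as in the Verlet sketch in the post above:

#include <vector>

struct Vec3 { float x, y, z; };
struct Particle { Vec3 pos, oldPos; bool pinned; };

class Collider {
public:
    // Hypothetical query: returns true if p is inside the mesh and, if so,
    // fills in the nearest point on the surface.
    bool Project(const Vec3& p, Vec3& surfacePoint) const;
};

// Only the particle that actually penetrates the collider gets moved; the rest
// are left to the normal Verlet/constraint update.
void ResolvePenetrations(std::vector<Particle>& strand, const Collider& collider)
{
    for (size_t i = 0; i < strand.size(); ++i) {
        Particle& p = strand[i];
        if (p.pinned) continue;
        Vec3 onSurface;
        if (collider.Project(p.pos, onSurface)) {  // true only when p.pos is inside
            p.pos = onSurface;                     // snap just this particle to the surface
            p.oldPos = p.pos;                      // optionally zero its Verlet velocity too
        }
    }
}

Whether to reset oldPos is a design choice: resetting it kills the particle's velocity (which can make the strand cling to the collider), while leaving the tangential part of the velocity intact lets the strand slide along the surface instead.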

Friday, October 23, 2009

A Strand of Progress

Good one! Using Connelly's Hair class as a road map (in addition to Jakobsen's paper), I've constructed my own (albeit crude) HairStrand class. Here's the little guy in action (gravity is turned on in the negative y direction, and the drag coefficient is set to 0.5):
Cat is currently helping me to get a grip on manipulating OpenMesh objects (specifically how to grab vertex positions from a TriMeshT object - it looks like I'll probably need to use iterators). Unfortunately, the library is not very well documented and definitely not very intuitive (for me, anyway). So, with that going on in the background, my next step consists of colliding this little guy with a polygonal mesh. I have the function written up and ready to go, but I'm betting it won't cooperate on the first test run. For testing these interactions, I'm using a simple Fl_Gl_Window object (vs. Qt).
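
For future reference, grabbing vertex positions from a TriMeshT really does come down to iterators. Going by the OpenMesh documentation, it should look something like the snippet below; depending on the OpenMesh version, the access call is either mesh.point(*v_it) or mesh.point(v_it.handle()):

#include <vector>
#include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>

typedef OpenMesh::TriMesh_ArrayKernelT<> Mesh;

// Copy every vertex position out of an OpenMesh triangle mesh.
std::vector<Mesh::Point> GetVertexPositions(const Mesh& mesh)
{
    std::vector<Mesh::Point> positions;
    positions.reserve(mesh.n_vertices());
    for (Mesh::ConstVertexIter v_it = mesh.vertices_begin();
         v_it != mesh.vertices_end(); ++v_it)
    {
        positions.push_back(mesh.point(*v_it));  // Mesh::Point is a 3-component vector
    }
    return positions;
}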

Wednesday, October 14, 2009

Short-Term Goals

Since the alpha review is Friday, I need to start stepping up my game. So, by the end of the day on Friday, I hope to have accomplished the following:
  1. Dissect Connelly's "Brush" code into classes, methods, and procedures.
  2. Hack QtViewer (which I have decided to use in place of Cat's SimpleViewer) so that the vertex positions of the viewed mesh change as the user rotates it around. At the moment, the viewer performs "rotations" by adding transformation matrices to the GL modelview matrix. This changes how OpenGL interprets the mesh's vertices (so they appear rotated when the scene is rendered), but it does not change their position values directly. I need some way of simulating animated vertex positions, and I think rotating the vertices themselves is a good approximation (see the sketch below).
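
To make item 2 a bit more concrete: the idea is to apply the rotation to the vertex data itself (instead of only pushing it onto the modelview stack), so that downstream code sees positions that really change from frame to frame. A bare-bones sketch with made-up names, hard-coding the y axis for simplicity:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Rotate every vertex about the y axis by 'angle' radians, in place.
// A real viewer would build the rotation from the mouse drag instead.
void RotateVerticesY(std::vector<Vec3>& verts, float angle)
{
    const float c = std::cos(angle);
    const float s = std::sin(angle);
    for (size_t i = 0; i < verts.size(); ++i) {
        const float x = verts[i].x, z = verts[i].z;
        verts[i].x =  c * x + s * z;
        verts[i].z = -s * x + c * z;  // y is unchanged for a rotation about y
    }
}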

Monday, October 12, 2009

Errors, Errors...

This is why it is not a good idea to try and run Linux things on Windows. Upon compiling, the following repeats itself over and over:
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(413) : warning C4003: not enough actual parameters for macro 'max'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(421) : warning C4003: not enough actual parameters for macro 'min'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(452) : warning C4003: not enough actual parameters for macro 'min'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(457) : warning C4003: not enough actual parameters for macro 'max'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(413) : error C2059: syntax error : ''
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(499) : see reference to class template instantiation 'OpenMesh::VectorT' being compiled
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(413) : error C2334: unexpected token(s) preceding ':'; skipping apparent function body
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(418) : error C2143: syntax error : missing ')' before '}'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(418) : error C2143: syntax error : missing '}' before ')'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(418) : error C2059: syntax error : ')'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(418) : error C2143: syntax error : missing ';' before '}'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(418) : error C2238: unexpected token(s) preceding ';'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(421) : error C2143: syntax error : missing ';' before 'inline'
One of the target areas is lines 413-418:
/// return the maximal component
inline Scalar max() const
{
  Scalar m(Base::values_[0]);
  for(int i=1; i<DIM; ++i) if(Base::values_[i]>m) m=Base::values_[i];
  return m;
}
I found one reference to a VectorT_inc.hh issue when I searched for help online. And it was in German. Of course it was.

I encountered these errors when trying to convert Cat's SimpleViewer (the one she sent me to use) into a format that Visual Studio could understand. Granted, I did get a simpler version (called QtViewer) to work (i.e. I have Qt and OpenMesh built, dangit). I suppose I can use the QtViewer for now, but...I'm really limited in terms of sticking buttons and callbacks in where I want them. This is getting incredibly messy, and I haven't even touched hair yet. So sad, so sad. Here's a screenshot of the QtViewer in action:

It provides some great camera manipulation callbacks that would certainly do the trick. Should I just give up on using Cat's SimpleViewer?

Edit (2/17/10): to fix this problem, I inserted the following right before the inline Scalar max() const function (line 413):
#undef min
#undef max
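
(The root cause is that windows.h defines min and max as macros, which then mangle OpenMesh's min()/max() member functions. Another common workaround, which I haven't tried here, is to keep the macros from being defined in the first place:)

#define NOMINMAX   // must appear before anything that includes windows.h
#include <windows.h>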

Tuesday, October 6, 2009

Back up the bus...

It hit me like a ton of bricks. I was sitting in my Game Design class, listening to this indie game developer (DMD alum Mike Highland) talk about the process of developing a game for the iPhone. At one point in his presentation, he mentioned the issue of getting his game to run within a reasonable frame rate range (they were at 20 fps, when a minimum of 30 fps was desired). I straightened up in my seat; a bad frame rate is most certainly an issue that I'm going to encounter as I create my real-time hair simulation. How did Mike deal with the issue? I cringed. "Originally, we were working with OpenGL directly...but as soon as we moved over to cocos2d (a 2d engine that abstracts OpenGL), we bumped our frame rate up dramatically." Boom!

I couldn't help but think about my ridiculous "Frankenstein's Viewer". Here I was, screwing around with OpenGL at the lowest level, thinking that it would give me more control over the mesh objects in my scene (head, collider, scalp). But while I can make a heck of a Viewer for 277, the focus of this project is not to figure out what rotation matrices are needed to move a camera around the center of a scene (all that translate-into-global-space, rotate, then translate-back-to-local-space nonsense). In addition, I've only implemented some of the methods out there for storing meshes, and the ones that I have are not very efficient.

It's time I re-evaluated my means of storing data and creating user tools. I need to implement my ideas with higher-end libraries (you know, the kind that already have the classes and methods I outlined two posts ago). So, to conclude, I wasted a weekend making something that I don't want to use. Its outward appearance is a bit "eh", its controls are finicky and not very user friendly, and its underlying structure is complicated and inefficient.

Time for Plan B (the original plan that I should have been implementing all along): Qt + OpenMesh. Cat also got back to me about how the low dynamic head animations are performed:
"We (use) a morphable model as a shape prior. Our face template contains a fixed number of vertices (~8K) and is in a one-to-one correspondence with the vertices of the morphable model. ...We initialize the morphable model by fitting it to a set of user defined points. The model is then automatically refined using the correspondences generated by Surfel Fitting." (taken from Dominik's paper about reconstructing dynamic facial expressions; Cat is working with Dominik, by the way)
What does this mean for me? Two things: (1) the head models that I will be given are dense, and (2) the positions of their vertices change from one frame to the next. How the vertex positions are calculated doesn't really concern me; as long as I can grab them at each frame, I'm happy. Also, if Cat stores her mesh data using OpenMesh, and I get OpenMesh to work on my machine...think of how easy that data grabbing would be! OpenMesh also has a built in decimation algorithm, which might prove useful.
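
Note to self on that decimation point: going by the OpenMesh documentation, using the Decimater with the quadric error module looks roughly like the sketch below. I haven't run this, and the module template parameters seem to differ between OpenMesh versions, so treat it as a guess rather than working code:

#include <cstddef>
#include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
#include <OpenMesh/Tools/Decimater/DecimaterT.hh>
#include <OpenMesh/Tools/Decimater/ModQuadricT.hh>

typedef OpenMesh::TriMesh_ArrayKernelT<> Mesh;

// Decimate a mesh down to roughly targetVerts vertices using the quadric module.
void DecimateHead(Mesh& mesh, std::size_t targetVerts)
{
    OpenMesh::Decimater::DecimaterT<Mesh> decimater(mesh);
    OpenMesh::Decimater::ModQuadricT<Mesh>::Handle modQuadric;
    decimater.add(modQuadric);                     // register the quadric error module
    decimater.module(modQuadric).unset_max_err();  // let it act as the priority module
    decimater.initialize();
    decimater.decimate_to(targetVerts);            // collapse edges until the target is reached
    mesh.garbage_collection();                     // actually remove the deleted elements
}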

That being said, I'll have to push back my plan for this next week and repeat last week's agenda. Let's try that again, shall we? Also, hello arcball. Why did my professors not teach me about you? Or was I just not paying attention...
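
(For my notes, the arcball idea in a nutshell: map the mouse's start and end positions onto a virtual sphere, then rotate about the cross product of the two mapped vectors by the angle between them. A bare-bones sketch, with the function names made up by me; the resulting axis/angle can be handed to glRotatef after converting the angle to degrees, or turned into a quaternion:)

#include <cmath>

struct Vec3 { float x, y, z; };

// Map a mouse position in [-1,1] x [-1,1] window coordinates onto a unit
// sphere; points outside the sphere get pushed onto its silhouette.
Vec3 MapToSphere(float x, float y)
{
    float d2 = x * x + y * y;
    if (d2 <= 1.0f) return {x, y, std::sqrt(1.0f - d2)};
    float inv = 1.0f / std::sqrt(d2);
    return {x * inv, y * inv, 0.0f};
}

// Rotation between two drag points: axis = p0 x p1, angle = acos(p0 . p1).
void ArcballRotation(Vec3 p0, Vec3 p1, Vec3& axis, float& angleRadians)
{
    axis.x = p0.y * p1.z - p0.z * p1.y;
    axis.y = p0.z * p1.x - p0.x * p1.z;
    axis.z = p0.x * p1.y - p0.y * p1.x;
    float d = p0.x * p1.x + p0.y * p1.y + p0.z * p1.z;
    if (d > 1.0f) d = 1.0f;    // clamp against floating-point drift
    if (d < -1.0f) d = -1.0f;
    angleRadians = std::acos(d);
}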

Some Progress

I still have a lot of bugs to work out, but it's a start. Frankenstein indeed [shudders].

Saturday, October 3, 2009

Frankenstein's Viewer

I'm going to try and "Frankenstein" together my own Viewer using FLTK, GLUT, and OpenGL. First of all, I already know how to build/use these libraries. Secondly, I have access to several existing programs that implement them (my "body parts" for the monster, if you will). Yay for 277, 460, and 462! I do feel bad about not using the viewer that Cat supplied me with (it is probably very well-written). If my Linux skills were stronger, I probably would. But, since the Viewer is not the focus of my project, I want to get it out of the way a.s.a.p., and if that means using what I know, then so be it. Yukari, who is also working with Cat to make realistic eye movements, contacted me regarding how to set up Qt on a Linux machine; I have no idea what to tell her. O__O

Alright then. Let's take a look at what ingredients I'll need in order to make the Viewer:
Face class:
- has a collection of vertices
- has a normal
- has a color
- has a Draw() function

ObjParser class:
- has a collection of vertices
- has a collection of Faces
- has a Clear() function for resetting the contents of its collections

Mesh class:
- has a collection of vertices
- has a collection of Faces
- has a "Visible" boolean
- has a Draw() function (calls each Face's Draw())

Head class:
- extends Mesh
- (has animations)

Collider class:
- extends Mesh
- has a PointInCollider() function (returns whether a point is inside and, if so, the shortest projection out of the collider)

HeadCollider class:
- extends Collider (which extends Mesh)
- has a dictionary that maps its vertices to vertices in a Head
- has a PointInCollider() function (returns whether a point is inside and, if so, the shortest projection out of the collider)

Scalp class:
- extends HeadCollider

HairGroup class:
- has Strands, Clusters, and Strips
- has an Update() function
- has a Draw() function
- project focus

HairManagerEngine (HairRenderer) class:
- has a Scalp
- has a HairGroup
- has a GrowHair() (set_geometry()) function
- has an init() function (for GPU work)
- has an Update() (update_geometry()) function (calls its HairGroup's Update())
- has a Draw() function (calls its HairGroup's Draw())
- project focus

Camera class:
- has a position
- has an up vector
- has a focus point
- has rotate functions
- has zoom functions

TreeNode class:
- has a parent TreeNode
- has a collection of children TreeNodes

GeometryNode class:
- extends TreeNode
- has a Mesh
- has a Draw() function (calls its Mesh's Draw())

TransformNode class:
- extends TreeNode
- has a transformation matrix
- has functions for editing its transformation matrix

SceneTree class:
- has a collection of GeometryNodes
- has a collection of TransformNodes
- has TreeNode insertion and deletion functions

Window class:
- has a HairManagerEngine (gets it from Viewer)
- has a SceneTree (gets it from Viewer)
- has a collection of Meshes (gets it from Viewer)
- has a Draw() function (calls each GeometryNode's Mesh's Draw() and its HairManagerEngine's Draw())
- has callback functions

Viewer class:
- has a light (a vector position)
- has an ObjParser
- has a Scalp
- has a Head (should match up with its scalp)
- has a collection of Colliders
- has a HairManagerEngine
- has a Camera
- has a SceneTree
- has a Window
- has callback functions
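
To sanity-check that this outline hangs together as actual code, here's how a few of the core ingredients might look in C++. All of the signatures are guesses at this point (and the immediate-mode GL matches the "loop over each face" answer in the Q&A below); nothing here is final:

#include <vector>
#include <GL/gl.h>   // on Windows, include windows.h before this

struct Vec3 { float x, y, z; };

// One polygon: indices into its Mesh's vertex list, plus a normal and a color.
class Face {
public:
    std::vector<int> vertexIndices;
    Vec3 normal;
    Vec3 color;

    void Draw(const std::vector<Vec3>& verts) const {
        glBegin(GL_POLYGON);
        glNormal3f(normal.x, normal.y, normal.z);
        glColor3f(color.x, color.y, color.z);
        for (size_t i = 0; i < vertexIndices.size(); ++i) {
            const Vec3& v = verts[vertexIndices[i]];
            glVertex3f(v.x, v.y, v.z);
        }
        glEnd();
    }
};

class Mesh {
public:
    std::vector<Vec3> vertices;
    std::vector<Face> faces;
    bool visible;

    virtual ~Mesh() {}
    virtual void Draw() const {  // calls each Face's Draw()
        if (!visible) return;
        for (size_t i = 0; i < faces.size(); ++i) faces[i].Draw(vertices);
    }
};

class Collider : public Mesh {
public:
    // Intended signature only: returns whether p is inside the collider and,
    // if so, fills in the shortest translation that pushes p back outside.
    bool PointInCollider(const Vec3& p, Vec3& pushOut) const;
};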

Viewer class capabilities:
- import an .obj (or .mesh?) scalp into the scene (just one)
  • a corresponding Scalp object should be created and inserted into the SceneTree
  • ConvertObjToScalp() function to do this
  • uses Decimate() function
- import an .obj (or .mesh?) head into the scene (just one)
  • a corresponding Head object should be created and inserted into the SceneTree
  • ConvertObjToHead() function to do this
  • a Collider object should be generated from the Head's geometry
  • ConvertHeadToCollider() function to do this
  • uses Decimate() function
- import an .obj Collider into the scene (as many as desired)
  • a corresponding Collider object should be created and inserted into the SceneTree
  • ConvertObjToCollider() function to do this
- remove Scalp, Head, or Colliders from the scene (delete them from the SceneTree)
  • Node deletion functions to do this
- move around the scene with a Camera
  • Camera rotate and zoom functions to do this
- light the scene with a point light that can be rotated around the center of the scene
- rotate the Head around
  • TransformNode transform matrix manipulation functions to do this
- (trigger Head animations)
- move and rotate Colliders (not the head collider, though) around
  • Transform matrix manipulation functions to do this
- add a HairGroup to the Scalp
  • GrowHair() function to do this

Viewer class Q&A:
Q: For high-poly objects, is there a more efficient way to render them than by looping through each face and telling OpenGL to draw it?
A: For now, no.

Q: If the .obj head is going to be animated in some way, should importing an .obj be eliminated in favor of importing some kind of .mesh object with associated keyframe data? This would involve collaborating with Cat to see how such a .mesh object might need to be drawn.
A: Since I have not heard back from Cat about this, I'm going to import an .obj head and give the user controls for rotating it. This should be good enough for approximating "yes" and "no" head animations.

Q: Should colliders be spheres, cubes, or meshes? Should collisions be determined by space partition instead?
A: For now, the plan is to make the head collider a decimated version of the original head.
Like a good mad scientist, I even sketched out how I want my monster to look: