Monday, December 14, 2009

Remaining Goals (and Timeline)

Edit (2/5/10): I removed goals that no longer apply to the project --> Remember Option 2 from the post that precedes this one? I'm going to take it, except I'm also going to assume that such a method of point cloud translation/rotation extrapolation already exists. In other words, my project pipeline takes as input the position/orientation of a single point located at the center of the cloud that represents the cloud's net movement. If I have time towards the end of the semester (which...I hate to say it, is doubtful), I will re-add developing the extrapolation method as a goal.

Each goal is listed with the time required for completion and its priority (on a scale of 1 to 10):

  • Grow key hairs out of the scalp mesh according to a procedurally generated vector field; the user will input the local axes of the scalp mesh (2 days; priority 8)
  • Adjust key hair soft body values until they behave more like real hairs (1 day; priority 10)
  • Develop (or find) and test an algorithm that will allow me to extract a single "net" transform from a cloud of moving vertices (14 days; priority 9)
  • Integrate a mesh decimation library (perhaps OpenMesh) into my code base to eliminate the need for external decimation in Maya (7 days; priority 6)
  • Bring the Bullet code over to my OpenGL framework (i.e. render Bullet's physics bodies myself instead of letting Bullet's Demo Application class do everything) (7 days; priority 10)
  • Tessellate the key hairs using B-splines (14 days; priority 5)
  • Use interpolation ("multi-strand interpolation") between key hairs to grow and simulate filler hairs (14 days; priority 8)
  • Develop (or find) and test an algorithm for linking the vertices of the decimated head/scalp mesh to those of the original (non-decimated) head/scalp mesh (14 days; priority 10)
  • Using these links, move the vertices of the decimated head/scalp mesh to match the movements of the original head/scalp mesh's vertices (3 days; priority 10)
  • Build strips (and clusters) around each control and filler hair (5 days; priority 9)

  • Implement the Marschner reflectance model to render/color a polygonal strip of varying width (21 days; priority 8)
  • Implement the Kajiya shading model to render/color a polygonal strip of varying width (based on results, choose the "better" method, or somehow combine the two) (21 days; priority 7)
  • If the chosen model doesn't color wider strips realistically, modify the renderer so that it interprets 1 wide strip as X skinny strips (perhaps using an opacity/alpha map) (14 days; priority 6)
  • Implement the (modified) chosen model to render/color a 3d polygonal cluster (10 days; priority 8)
.........

I'll continue to fill in and modify this list over the next week...I'm not really sure how things will be going for me by the time I get to the rendering part of the project, so it's difficult to plan that far ahead. :-/ I found a (semi) new resource for rendering, though - NVIDIA Real Time Hair SIGGRAPH Presentation. Unfortunately, this presentation is a bit discouraging in the sense that most of it showcases the latest GPU technology, which I can't tap into at the moment (nor do I think I ever could, since they mention Direct3D 11! I only have 9...).

Perhaps it is time for a different approach (yet again)

Greetings, friends. Over the weekend, I've been trying to use Bullet to do the following:
  • Break down an input head mesh (.obj) into multiple convex hulls (check!)

  • Deform the vertices of the hulls and tell every other colliding object in the scene about the deformation so that they will collide with the deformed convex hulls instead of the original ones (check!)

  • Add key hair soft bodies to the scene according to the topology of another .obj mesh (check!)

  • Anchor the key hair soft bodies to the convex hulls (check!)

  • Deform the vertices of the hulls and tell the key hair soft bodies about the deformation so that they will collide with the deformed convex hulls instead of the original ones (not so much...)
I can anchor the key hair soft bodies to the convex hulls just fine, but the key hairs are completely oblivious to any vertex deformations that take place. I've scoured the source code and searched the forums for a way to force the key hairs to recognize the vertex deformations, but this just doesn't seem to be an implemented feature at the moment. What confuses me the most is that I can shoot spheres (rigid bodies) at the deformed convex hulls and they will bounce off as expected (they recognize the vertex deformations just fine). But, for some reason, the hairs (soft bodies) are not as smart.

The way I see it, there are 3 directions I can take from here:
  1. Keep looking for a way to hack the soft body key hairs so that they recognize the vertex deformations - if the rigid bodies can do it, then there has to be a way for the soft bodies to do it!

  2. Take a step back and contextualize what I'm trying to accomplish. This hair is going to go on a virtual character who will nod and shake his/her head. Even though all I have access to is individual vertex positions over time (i.e. key frame data), perhaps there is a way to extrapolate rotation data for the entire mesh by analyzing the net movement of its vertices over time. Bullet will be happy to rotate the convex hulls for me, and the key hair soft bodies can recognize this kind of movement on a larger scale (a transform applied to the convex hulls instead of their vertices). I did some testing to prove this:


    (In this screen shot, I'm moving my mouse back and forth to call ->setImpulse() on the convex hull - there's just 1 convex hull and only 60 hairs to maintain simulation speed, but you get the point. The key is that ->setImpulse() looks at the convex hull as a single body to which a single transformation matrix is applied - individual vertex positions are not being altered.)

  3. Create my own soft body key hairs and give them some thickness. At the moment I am modeling/simulating the key hairs as collections of 2d links with soft body physics applied to them (i.e. some deformation and bounciness is permitted). Please note I was not 1337 enough to create these myself - I modified one of Bullet's soft body primitives (rope). If I were to bring these links into the 3rd dimension, however, maybe this would make their collisions more robust and, as a result, force them to recognize the tiny vertex deformations of the convex hulls. I think I could go about doing this by either (a) joining together some rigid body cylinders or (b) joining together some soft body trimeshes (this would involve building a trimesh "cylinder" out of soft body patches). The downside to (a) is that I lose the deformation and springiness of soft body physics. As we all know, hair bounces just slightly and does indeed stretch slightly as well (maybe not to the degree that I have them bouncing above, but it still yields a more realistic approach than stiff sticks connected together).
Options 2 and 3 are not easy endeavors (but which part of this project has been, eh? ;-) ). I am leaning towards 2, but I'm hesitant to take that approach because I'm not sure how to look at a cloud of vertices and extrapolate one transformation matrix to represent their net movement over one time step (a first stab at that extraction is sketched below). Perhaps I will ask one of my advisors about the 3 approaches...maybe they'll have a better idea of what's feasible.
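Here's that first stab, with all the usual caveats: it's a minimal sketch of the Kabsch (orthogonal Procrustes) algorithm, which finds the least-squares rigid transform between two matched point sets. I'm assuming the Eigen library here, plus vertex arrays from consecutive key frames that are the same length and in the same order; nothing below is hooked into Bullet yet.

#include <vector>
#include <Eigen/Dense>

// Find R (rotation) and t (translation) such that curr[i] ~= R * prev[i] + t
// in the least-squares sense. Assumes at least 3 non-degenerate points.
void extractNetTransform(const std::vector<Eigen::Vector3d>& prev,
                         const std::vector<Eigen::Vector3d>& curr,
                         Eigen::Matrix3d& R, Eigen::Vector3d& t)
{
    const size_t n = prev.size();

    // Centroids of both clouds.
    Eigen::Vector3d cPrev = Eigen::Vector3d::Zero();
    Eigen::Vector3d cCurr = Eigen::Vector3d::Zero();
    for (size_t i = 0; i < n; ++i) { cPrev += prev[i]; cCurr += curr[i]; }
    cPrev /= double(n);
    cCurr /= double(n);

    // Cross-covariance matrix of the centered clouds.
    Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
    for (size_t i = 0; i < n; ++i)
        H += (prev[i] - cPrev) * (curr[i] - cCurr).transpose();

    // SVD of H; the diagonal correction D guards against a reflection.
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d U = svd.matrixU();
    Eigen::Matrix3d V = svd.matrixV();
    Eigen::Matrix3d D = Eigen::Matrix3d::Identity();
    if ((V * U.transpose()).determinant() < 0.0)
        D(2, 2) = -1.0;

    R = V * D * U.transpose(); // net rotation over the time step
    t = cCurr - R * cPrev;     // net translation over the time step
}

If it works, the resulting R and t could be packed into a single btTransform and handed to the convex hulls, which is exactly the kind of movement the soft bodies already respect.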

For those of you who don't know, I am going to try to continue this project into next semester - in other words, apply for an "Incomplete" and finish it at a later point. This is a relief, seeing as the project, as it stands, is nowhere near ready for judging eyes. In order to successfully apply for an "Incomplete," however, I need to list my remaining goals for the project and give an approximate timeline for accomplishing them. My next post will consist of this timeline - seeing as I did a poor job of predicting how long my project goals would take to complete, I'm going to ask for some planning advice this time. I have my list of goals; I just don't know how much time I should allot to them. :-/

Expect an update later tonight!

Friday, December 11, 2009

Attaching Hair (First Attempt)

My original plan was to add one "key hair" (Bullet soft body rope) to the center of each tri-face in an input scalp mesh (.obj). However, with 256 ropes to create, Bullet was not happy. The following is what I got, which looks fine in a static screen shot, but believe me, the application was not running in real-time.
So, I knocked the number of hairs down (for testing purposes) to 10, changed some property values, and took a closer look.
As you can see, there are some "in-grown" key hairs that are growing into the head (the two sprouting out the back) instead of away from the head along a face normal. My next step will be to investigate why the faces of the scalp .obj are not being exported such that (pt2-pt1)^(pt3-pt1) points in the proper face normal direction. Then, from there, I'll begin to dabble in ways to speed things up...
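For reference, here's the convention I'm checking, as a tiny sketch using Bullet's btVector3 (the helper function and variable names are mine, not Bullet's):

#include <LinearMath/btVector3.h>

// Root and grow direction of a key hair for one triangle, using the
// (pt2 - pt1) x (pt3 - pt1) winding convention from above.
void keyHairSeed(const btVector3& pt1, const btVector3& pt2, const btVector3& pt3,
                 btVector3& root, btVector3& dir)
{
    root = (pt1 + pt2 + pt3) / btScalar(3.0);         // face center
    dir  = (pt2 - pt1).cross(pt3 - pt1).normalized(); // face normal
    // If the exporter flips the winding order, dir points into the head
    // ("in-grown" hairs); negating it (or fixing the winding) is the cure.
}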

EDIT: The in-grown hairs issue was my own fault...coding typo, if you will. Here's what I have after that fix (and with restitution bumped up to give the hairs a bit of stiffness near their roots).
But the question still remains... what is the maximum number of key hairs that Bullet can handle in real time? I'll need to start with that number of hairs and interpolate between them to create the illusion of density. The only problem is, what if the maximum number is something like 5? Eeek!

Monday, December 7, 2009

Hair Attachment Idea

(Click for full view)

There we go...

Problem resolved. How?
  • Restarted my computer.
  • Used a less efficient (but more stable) function for recalculating the bounding structure of the convex hull pieces (refitTree() instead of partialRefitTree()) at each time step.
I feel kind of silly for not being able to pinpoint the problem faster...oh well. Here's a video showing the deformable head collider (a composition of multiple convex hull shapes) in action!

The blue portion shows the mesh (.obj) that I fed into the program. As you can see, the convex hull approximation isn't bad at all! Here's a screenshot of the input mesh:
Obviously I had to decimate the original (the one Cat gave me) to obtain this semi-dense starting point. The convex hull decomposition library seems robust enough to generate hulls given the original .obj as input. However, it takes a long time to do so, so I'll use this semi-reduced version for testing purposes. Additionally, there is still the issue of creating a mapping from the vertices of the convex hulls to the vertices of the original mesh (in order for the head collider to accommodate deformations in the original). But for now, it's time to move on to making those LODs! I'm going to look into Bullet's btSoftBody class to see if it can help at all...
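For anyone trying to reproduce the deformation step, here's roughly what the per-frame refit looks like. Big caveat: this sketch assumes a btBvhTriangleMeshShape built over vertex memory that the application owns (via btTriangleIndexVertexArray); my actual collider is a composition of convex hulls, so treat this as an illustration of the refitTree() idea rather than my exact code.

#include <btBulletDynamicsCommon.h>

// Deform the collider's vertices for this frame, then rebuild the shape's
// bounding structure over the new positions.
void deformCollider(btBvhTriangleMeshShape* shape, btScalar* vertices, int numVerts)
{
    btVector3 aabbMin( BT_LARGE_FLOAT,  BT_LARGE_FLOAT,  BT_LARGE_FLOAT);
    btVector3 aabbMax(-BT_LARGE_FLOAT, -BT_LARGE_FLOAT, -BT_LARGE_FLOAT);

    for (int i = 0; i < numVerts; ++i) {
        // ...displace vertices[3*i + 0..2] for this frame, then grow the AABB...
        btVector3 v(vertices[3*i], vertices[3*i + 1], vertices[3*i + 2]);
        aabbMin.setMin(v);
        aabbMax.setMax(v);
    }

    // refitTree() recomputes the whole tree; partialRefitTree() is faster
    // but was the unstable one for me.
    shape->refitTree(aabbMin, aabbMax);
}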

Sunday, December 6, 2009

Bullet: A Blessing and a Curse

Hello again. Things are progressing very slowly, but I still wanted to show you what I've been up to. Remember the issue about the cubes that passed right through the head mesh collision shape? Well, I've scoured the Bullet forums and even posted my own questions regarding the issue, but the problem is still mostly unresolved. However, in the process of trying to figure out what was going on, I've come across a method of optimization for creating the head collision shape. Using an open-source convex hull decomposition library (see here), I've managed to break down an originally semi-dense head mesh OBJ into several convex hull shapes, which I can then combine together (unfortunately, converting the original extremely dense OBJ takes a *very* long time - but I'm hoping the convex hull decomposition step will replace my need for a decimation via Maya step in the application pipeline):
Notice how this knocks things down from one huge AABB to multiple smaller AABBs, which is a better approximation for the head mesh and will hopefully speed things up later on. Unfortunately, attempting to recompute the smaller AABBs at each frame (as the head mesh deforms) has become tricky. I was able to do it easily with just one, but with multiple, Bullet throws errors. I don't think it should take me too long to resolve the issue, though.

I think the most probable explanation for why some cubes *still* want to pass through the head mesh (even though it has been optimized) is a computation bug with co-normal collisions. When I shoot cubes along the normal of a particular face, that's when (I think, anyway) the collision fails to register. One solution could be to generate a small box at the center of each face, but that just means even more shapes to check, so I don't know. I suppose some particles going inside the collider isn't a bad thing *if* they can come back out (i.e. 1-way collisions). However, Bullet doesn't have that feature built in (I would have to code it myself). At the end of the day, I feel kind of silly for spending so much time on something that isn't even hair...so I think I'll just put this issue aside for now. Things don't look very promising in terms of finishing the project by the 18th, but I'll keep putting it as one of my top course priorities.

Another update to come later tonight / tomorrow morning!

Tuesday, November 24, 2009

Implementing Bullet Collisions

I apologize for the lack of updates (again, I had to devote most of my time to another CIS564 project - http://www.youtube.com/watch?v=YQApmArwUSk). I am going to try my best to catch up on hair sim work this week/weekend over TG break. I do have some tiny things to show in the meantime, however. Using one of the Bullet demos as a starting point (and the power of the Bullet forums - I'm so glad that this library is well documented and supported...yes, I'm talking to you, OpenMesh!), I have put together a mini project that lets me shoot "particles" at an "animated" head collider! Check out the footage below.
The keen observer will note, however, that some of the cube "particles" pass right through the head mesh instead of bouncing off. I'm not exactly sure why this is - I've posted a question on the forums and will get back to you when I receive a reply. So, hopefully I have come close to solving the issue of "which collision detection method is most efficient" (again, by letting more experienced programmers take care of it for me). The next step will be to decide where to draw the line between Bullet and my own particle dynamics code (i.e. how to integrate the two). For example, I could let Bullet take care of the initial particle movements/collisions and then append my own special iterative constraints (including where / how far the particles are allowed to move). I think I will go with that plan of integration for now...

This brings me to another simultaneous task that I'd like to complete this week - developing a class hierarchy (including OpenGL representations) for different hair LODs. Here are the 3 major players: strips, clusters, and strands. I already have a rough strand class working...it shouldn't be too difficult to use it as a starting point for the other 2 LODs. Also, now that I'm working with Bullet, I can start to think of how to structure these in terms of collision shapes that Bullet is familiar with (spheres, boxes, cylinders, and others). So, I could see a strand being a collection of collision spheres constrained to each other, while a cluster would be a collection of constrained collision cylinders (or spheres with a radius equal to that of the cylinders that will be drawn around the base skeleton). The strip is a bit trickier - again, perhaps I could just use collision spheres held together in a kind of net. There's also the question of whether or not all 3 LODs are strictly necessary (as far as this project is concerned); I'd love to just use 2 of the 3. I found this video from a Pixar paper about hair simulation / rendering in The Incredibles:

They're just using strips here, and already (with some rendering tricks and interpolation) things are looking good.
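While I'm thinking about it, here's a rough sketch of the hierarchy I have in mind; every name here (HairLOD and friends) is a placeholder, and the method bodies are stubs:

#include <vector>

struct Particle { float pos[3]; float prevPos[3]; }; // one simulation node

class HairLOD {
public:
    virtual ~HairLOD() {}
    virtual void update(float dt) = 0; // Bullet / particle dynamics step
    virtual void draw() const = 0;     // OpenGL representation
protected:
    std::vector<Particle> nodes;       // shared simulation skeleton
};

class HairStrand : public HairLOD {    // chain of collision spheres
public:
    void update(float dt) override { /* integrate + constraints */ }
    void draw() const override { /* GL line strip along nodes */ }
};

class HairCluster : public HairLOD {   // constrained collision cylinders
public:
    void update(float dt) override { /* ... */ }
    void draw() const override { /* cylinder extruded around nodes */ }
};

class HairStrip : public HairLOD {     // spheres held together in a net
public:
    void update(float dt) override { /* ... */ }
    void draw() const override { /* textured quad strip */ }
};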

So, to summarize, here are the things that I'd like to at least have started by next week:
  • Resolve the disappearing particles issue in Bullet
  • Construct a class hierarchy for the 3 LODs, including both OpenGL representations (how does OpenGL draw them?) and Bullet representations (how does Bullet see them?)
  • Bring everything back over to my OpenGL/FLTK testing ground (as opposed to Bullet's)
  • Populate the scalp with 1 of the 3 (or all 3) LODs
  • Begin looking into methods of interpolation across the scalp geometry (the Nalu demo comes to mind)
Ready, set, break!

Sunday, November 8, 2009

Which Pile of Bunnies is Better?

I spent some time today messing with the Bullet / ODE demos, specifically comparing and contrasting collision detection performance and mesh-mesh interactions between the two. This has left me with two very large piles of Stanford bunnies:
(Bullet Bunnies)

(ODE Bunnies)

I know, I know...you can't really tell which is the better engine by just looking at snapshots, but based on dropping these guys on top of each other (and shooting cubes around, among other things), I think I'll try to plug Bullet in first. Its collision performance looks a bit smoother (which also seems to be the consensus reached in various programming forums that I visited).

...Although, Pixar has been known to use ODE. No, no...Bullet for now. I'll let you know how it goes.

Wednesday, November 4, 2009

Sorry, I was making a game...

This past weekend was dedicated entirely to completing a CIS564 project (http://www.youtube.com/watch?v=o3x2L8oXaKY), so no progress was made on my hairs. However, the following statistics may give you a hint as to my plans for the rest of this week/weekend.
See, this way...I don't need to worry about whether or not my collision detection algorithm is efficient (though I'm still proud that I was able to implement it). Hmm, for some reason this decision to use a pre-made library for the sake of a higher quality output sounds familiar (Qt and OpenMesh *cough, cough*).

Monday, October 26, 2009

Narrowing Down Algorithms

Modeling
  • Transition between different Levels-of-Detail (LODs) or Wisps
  1. Strands (highest) --> Clusters --> Strips (lowest)
  2. Use (a modified) Adaptive Wisp Tree (AWT) to facilitate transitions
  • Interpolation of sparse guide hairs (points connected by curves) to create density
  1. Guide hairs grown based on scalp geometry
  2. Guide hairs loaded from a file
Animating
  • Particle dynamics (treat control points of guide hairs like particles)
  1. Verlet integration
  2. Runge-Kutta integration
  • Collision detection using a polygonal mesh boundary around the head
Rendering
  • Anisotropic reflections
  1. Marschner model
  2. Look-up textures
  • Opacity shadow maps

Collision Detection using Rays

Ray tracing isn't just for rendering scenes, you know! It also comes in handy for 3d collision detection (as opposed to, say...projecting your collider onto all 3 coordinate planes and trying to call pointInPoly for each projected face...not like I tried that or anything...). I know this is just one hair strand, but the ability to maintain speed at this stage is encouraging. No more snapping either!

(yes, I know...his one eye creeps me out too)
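For the curious, the core of the ray-based test is a completely standard ray-triangle intersection. Here's a minimal, self-contained sketch in the Moller-Trumbore style; Vec3 is a stand-in, not my actual vector class:

#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true (and the hit distance t along the ray) if the ray from 'orig'
// in direction 'dir' hits triangle (v0, v1, v2).
bool rayTriangle(const Vec3& orig, const Vec3& dir,
                 const Vec3& v0, const Vec3& v1, const Vec3& v2, float& t)
{
    const float EPS = 1e-7f;
    Vec3 e1 = v1 - v0, e2 = v2 - v0;
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;   // ray is parallel to the triangle
    float inv = 1.0f / det;
    Vec3 s = orig - v0;
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;   // outside the triangle
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;                     // hit distance along the ray
    return t > EPS;
}

Casting a segment from each particle's old position to its new one against the collider's faces is what keeps fast-moving particles from tunneling through.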

Upcoming Tasks:
  • Using Connelly's HairQuad class as a road map, begin devising a way of organizing hair strands into groups that collide with each other [Update: Actually, I don't think I'll combine hair strands into groups quite yet - I'd like to create classes for the other 2 LODs (hair clusters and hair strips) first. Maybe they can all be subclasses of the same virtual class (HairLOD or something like that). Then I'll look into ways of transitioning between the 3 (maybe based on proximity or occlusion - this will require reading through the Adaptive Wisp Tree paper again). So, I'm going to rewrite this task as: Implement self-intersection for hair strands. Then, create HairCluster and HairStrip classes, both with collision (self and collider) capabilities like the HairStrand class.]
  • Try connecting the hair strand control points with curves (Bezier, Hermite, B-splines?)
  • Continue to collaborate with Cat --> use OpenMesh objects as input for colliders (I'm currently using a simple .obj parser instead)
  • Write the "Background" section of my proposal [Update: done, but I'll probably keep revising it]

Sunday, October 25, 2009

Collider-philic Strands

Collision detection is working! Unfortunately, the hair strand seems to want to stick to the collider. I'm going to look into this more...

(just one "free" control point)


(collider-philic strand!)

UPDATE: I have fixed the issue! Silly me, I was projecting all of the "free" control points onto the collider mesh whenever just one collided (hence all of them suddenly snapped into place). I've adjusted my code to snap just the colliding particle:

(getting better...)

Alas, the snap is still a bit too forceful, and I don't really know what's causing it. It seems almost contradictory: on the one hand, the snap means that my code has registered a collision, but on the other, the abruptness of the snap seems to imply that some extra steps are needed to get the particle a little closer to the collider before it actually collides. So, my eyes are telling me one thing, but my code (the math) is telling me another. I trust my eyes more, honestly. ~__~
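For completeness, here's the shape of the fixed collision pass. Vec3 and Collider are hypothetical stand-ins for my actual types; pointInCollider() is assumed to report whether a point is inside and hand back the shortest projection out of the mesh (the same idea as the PointInCollider() function in my viewer class notes):

#include <vector>

struct Vec3 { float x, y, z; };
inline Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// Stand-in collider: the real test would query the mesh.
struct Collider {
    bool pointInCollider(const Vec3& p, Vec3& pushOut) const { return false; } // stub
};

void resolveCollisions(std::vector<Vec3>& controlPoints, const Collider& collider)
{
    for (size_t i = 1; i < controlPoints.size(); ++i) { // index 0 is the fixed root
        Vec3 pushOut;
        if (collider.pointInCollider(controlPoints[i], pushOut))
            controlPoints[i] = controlPoints[i] + pushOut; // move only this particle
    }
}

The old bug was the equivalent of applying pushOut to every free point whenever any one of them collided.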

Friday, October 23, 2009

A Strand of Progress

Good one! Using Connelly's Hair class as a road map (in addition to Jakobsen's paper), I've constructed my own (albeit crude) HairStrand class. Here's the little guy in action (gravity is turned on in the negative y direction, and the drag coefficient is set to 0.5):
Cat is currently helping me to get a grip on manipulating OpenMesh objects (specifically how to grab vertex positions from a TriMeshT object - it looks like I'll probably need to use iterators). Unfortunately, the library is not very well documented and definitely not very intuitive (for me, anyway). So, with that going on in the background, my next step consists of colliding this little guy with a polygonal mesh. I have the function written up and ready to go, but I'm betting it won't cooperate on the first test run. For testing these interactions, I'm using a simple Fl_Gl_Window object (vs. Qt).
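In the meantime, here's the gist of the strand update, as a minimal sketch of the Jakobsen-style scheme (position Verlet plus a few rounds of distance-constraint relaxation). The structure and constants are mine, not Connelly's:

#include <cmath>
#include <vector>

struct Node { float x[3]; float old[3]; }; // current and previous positions

void stepStrand(std::vector<Node>& pts, float segLen, float dt)
{
    const float gravityY = -9.8f, drag = 0.5f;

    // Position Verlet with drag: x += (x - xOld) * (1 - drag) + a * dt^2.
    for (size_t i = 1; i < pts.size(); ++i) {           // pts[0] stays rooted
        for (int k = 0; k < 3; ++k) {
            float vel = (pts[i].x[k] - pts[i].old[k]) * (1.0f - drag);
            pts[i].old[k] = pts[i].x[k];
            pts[i].x[k] += vel + (k == 1 ? gravityY : 0.0f) * dt * dt;
        }
    }

    // Relaxation: nudge each neighboring pair back toward its rest spacing.
    for (int iter = 0; iter < 3; ++iter) {
        for (size_t i = 0; i + 1 < pts.size(); ++i) {
            float d[3], len2 = 0.0f;
            for (int k = 0; k < 3; ++k) { d[k] = pts[i+1].x[k] - pts[i].x[k]; len2 += d[k]*d[k]; }
            float len = std::sqrt(len2);
            if (len < 1e-6f) continue;                  // avoid divide-by-zero
            float corr = (len - segLen) / len;
            for (int k = 0; k < 3; ++k) {
                if (i == 0) pts[1].x[k] -= d[k] * corr; // root is pinned
                else {                                  // split the correction
                    pts[i].x[k]   += d[k] * corr * 0.5f;
                    pts[i+1].x[k] -= d[k] * corr * 0.5f;
                }
            }
        }
    }
}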

Wednesday, October 14, 2009

Short-Term Goals

Since the alpha review is Friday, I need to start stepping up my game. So, by the end of the day on Friday, I hope to have accomplished the following:
  1. Dissect Connelly's "Brush" code into classes, methods, and procedures.
  2. Hack QtViewer (which I have decided to use in place of Cat's SimpleViewer) so that the vertex positions of the viewed mesh change as the user rotates it around. At the moment, the viewer performs "rotations" by applying transformation matrices to the GL modelview matrix. This changes how OpenGL interprets the mesh's vertices (thus they appear to be rotated when the scene is rendered), but does not change their position values directly. I need some way of simulating animated vertex positions, and I think this is a good approximation (see the sketch below).
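Here's the sketch promised in item 2: each frame, bake a small rotation about the y-axis directly into the vertex array, so that downstream code sees genuinely changing vertex positions. The function name is mine, and this is only a crude stand-in for real key-frame data:

#include <cmath>

// Rotate an interleaved xyz vertex array about the y-axis, in place.
void rotateVerticesY(float* verts, int numVerts, float radians)
{
    const float c = std::cos(radians), s = std::sin(radians);
    for (int i = 0; i < numVerts; ++i) {
        float x = verts[3*i], z = verts[3*i + 2];
        verts[3*i]     =  c * x + s * z; // x' =  x cos r + z sin r
        verts[3*i + 2] = -s * x + c * z; // z' = -x sin r + z cos r
    }
}

Calling this with a tiny angle every frame gives a steady "head turn" whose vertex data actually moves, which is what the hair code will eventually consume.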

Monday, October 12, 2009

Errors, Errors...

This is why it is not a good idea to try and run Linux things on Windows. Upon compiling, the following repeats itself over and over:
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(413) : warning C4003: not enough actual parameters for macro 'max'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(421) : warning C4003: not enough actual parameters for macro 'min'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(452) : warning C4003: not enough actual parameters for macro 'min'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(457) : warning C4003: not enough actual parameters for macro 'max'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(413) : error C2059: syntax error : ''
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(499) : see reference to class template instantiation 'OpenMesh::VectorT' being compiled
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(413) : error C2334: unexpected token(s) preceding ':'; skipping apparent function body
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(418) : error C2143: syntax error : missing ')' before '}'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(418) : error C2143: syntax error : missing '}' before ')'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(418) : error C2059: syntax error : ')'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(418) : error C2143: syntax error : missing ';' before '}'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(418) : error C2238: unexpected token(s) preceding ';'
1>c:\path_to_my_project\openmesh\core\geometry\VectorT_inc.hh(421) : error C2143: syntax error : missing ';' before 'inline'
With one of the target areas being lines 413-418:
/// return the maximal component
inline Scalar max() const
{
  Scalar m(Base::values_[0]);
  for(int i=1; i<DIM; ++i) if(Base::values_[i]>m) m=Base::values_[i];
  return m;
}
I found one reference to a VectorT_inc.hh issue when I searched for help online. And it was in German. Of course it was.

I encountered these errors when trying to convert Cat's SimpleViewer (the one she sent me to use) into a format that Visual Studio could understand. Granted, I did get a simpler version (called QtViewer) to work (i.e. I have Qt and OpenMesh built, dangit). I suppose I can use the QtViewer for now, but...I'm really limited in terms of sticking buttons and callbacks in where I want them. This is getting incredibly messy, and I haven't even touched hair yet. So sad, so sad. Here's a screen shot of the QtViewer in action:

It provides some great camera manipulation callbacks that would certainly do the trick. Should I just give up on using Cat's SimpleViewer?

Edit (2/17/10): to fix this problem, I inserted the following right before the inline Scalar max() const function (line 413):
#undef min
#undef max

Tuesday, October 6, 2009

Back up the bus...

It hit me like a ton of bricks. I was sitting in my Game Design class, listening to this indie game developer (DMD alum Mike Highland) talk about the process of developing a game for the iPhone. At one point in his presentation, he mentioned the issue of getting his game to run within a reasonable frame rate range (they were at 20 fps, when a minimum of 30 fps was desired). I straightened up in my seat; a bad frame rate is most certainly an issue that I'm going to encounter as I create my real-time hair simulation. How did Mike deal with the issue? I cringed. "Originally, we were working with OpenGL directly...but as soon as we moved over to cocos2d (a 2d engine that extrapolates OpenGL), we bumped our frame rate up dramatically." Boom! I couldn't help but think about my ridiculous "Frankenstein's Viewer". Here I was, screwing around with OpenGL at the lowest level, thinking that it would give me more control over the mesh objects in my scene (head, collider, scalp). But, while I can make a heck of a Viewer for 277, the focus of this project is not to figure out what rotation matrices are needed to move a camera around the center of a scene (all that translate-into-global-space, rotate, then translate-back-to-local-space nonsense). In addition, I've only implemented some of the methods out there for storing meshes, and the ones that I have are not very efficient. It's time I re-evaluated my means of storing data and creating user tools. I need to implement my ideas with higher-end libraries (you know, the kind that already have the classes and methods that I outlined 2 posts ago). So, to conclude, I wasted a weekend making something that I don't want to use. Its outward appearance is a bit "eh", its controls are finicky and not very user-friendly, and its underlying structure is complicated and inefficient.

Time for Plan B (the original plan that I should have been implementing all along): Qt + OpenMesh. Cat also got back to me about how the low dynamic head animations are performed:
"We (use) a morphable model as a shape prior. Our face template contains a fixed number of vertices (~8K) and is in a one-to-one correspondence with the vertices of the morphable model. ...We initialize the morphable model by fitting it to a set of user defined points. The model is then automatically refined using the correspondences generated by Surfel Fitting." (taken from Dominik's paper about reconstructing dynamic facial expressions; Cat is working with Dominik, by the way)
What does this mean for me? Two things: (1) the head models that I will be given are dense, and (2) the positions of their vertices change from one frame to the next. How the vertex positions are calculated doesn't really concern me; as long as I can grab them at each frame, I'm happy. Also, if Cat stores her mesh data using OpenMesh, and I get OpenMesh to work on my machine...think of how easy that data grabbing would be! OpenMesh also has a built-in decimation algorithm, which might prove useful.

That being said, I'll have to push back my plan for this next week and repeat last week's agenda. Let's try that again, shall we? Also, hello arcball. Why did my professors not teach me about you? Or was I just not paying attention...

Some Progress

I still have a lot of bugs to work out, but it's a start. Frankenstein indeed [shudders].

Saturday, October 3, 2009

Frankenstein's Viewer

I'm going to try and "Frankenstein" together my own Viewer using FLTK, GLUT, and OpenGL. First of all, I already know how to build/use these libraries. Secondly, I have access to several existing programs that implement them (my "body parts" for the monster, if you will). Yay for 277, 460, and 462! I do feel bad about not using the viewer that Cat supplied me with (it is probably very well-written). If my Linux skills were stronger, I probably would. But, since the Viewer is not the focus of my project, I want to get it out of the way a.s.a.p., and if that means using what I know, then so be it. Yukari, who is also working with Cat to make realistic eye movements, contacted me regarding how to set up Qt on a Linux machine; I have no idea what to tell her. O__O

Alright then. Let's take a look at what ingredients I'll need in order to make the Viewer:
Face class:
- has a collection of vertices
- has a normal
- has a color
- has a Draw() function

ObjParser class:
- has a collection of vertices
- has a collection of Faces
- has a Clear() function for resetting the contents of its collections

Mesh class:
- has a collection of vertices
- has a collection of Faces
- has a "Visible" boolean
- has a Draw() function (calls each Face's Draw())

Head class:
- extends Mesh
- (has animations)

Collider class:
- extends Mesh
- has a PointInCollider() boolean function (that returns the shortest projection out of the collider)

HeadCollider class:
- extends Collider Mesh
- has a dictionary that maps its vertices to vertices in a Head
- has a PointInCollider() boolean function (that returns the shortest projection out of the collider)

Scalp class:
- extends HeadCollider

HairGroup class:
- has Strands, Clusters, and Strips
- has an Update() function
- has a Draw() function
- project focus

HairManagerEngine (HairRenderer) class:
- has a Scalp
- has a HairGroup
- has a GrowHair() (set_geometry()) function
- has an init() function (for GPU work)
- has an Update() (update_geometry()) function (calls its HairGroup's Update())
- has a Draw() function (calls its HairGroup's Draw())
- project focus

Camera class:
- has a position
- has an up vector
- has a focus point
- has rotate functions
- has zoom functions

TreeNode class:
- has a parent TreeNode
- has a collection of children TreeNodes


GeometryNode:
- extends TreeNode
- has a Mesh
- has a Draw() function (calls its Mesh's Draw())


TransformNode:
- extends TreeNode
- has a transformation matrix
- has functions for editing its transformation matrix


SceneTree class:
- has a collection of GeometryNodes
- has a collection of TransformNodes
- has TreeNode insertion and deletion functions


Window class:
- has a HairManagerEngine (gets it from Viewer)
- has a SceneTree (gets it from Viewer)
- has a collection of Meshes (gets it from Viewer)
- has a Draw() function (calls each GeometryNode's Mesh's Draw() and its HairManagerEngine's Draw())
- has callback functions

Viewer class:
- has a light (a vector position)
- has an ObjParser
- has a Scalp
- has a Head (should match up with its scalp)
- has a collection of Colliders
- has a HairManagerEngine
- has a Camera
- has a SceneTree
- has a Window
- has callback functions

Viewer class capabilities:
- import an .obj (or .mesh?) scalp into the scene (just one)
  • a corresponding Scalp object should be created and inserted into the SceneTree
  • ConvertObjToScalp() function to do this
  • uses Decimate() function
- import an .obj (or .mesh?) head into the scene (just one)
  • a corresponding Head object should be created and inserted into the SceneTree
  • ConvertObjToHead() function to do this
  • a Collision object should be generated from the Head's geometry
  • ConvertHeadToCollider() function to do this
  • uses Decimate() function
- import an .obj Collider into the scene (as many as desired)
  • a corresponding Collider object should be created and inserted into the SceneTree
  • ConvertObjToCollider() function to do this
- remove Scalp, Head, or Colliders from the scene (delete them from the SceneTree)
  • Node deletion functions to do this
- move around the scene with a Camera
  • Camera rotate and zoom functions to do this
- light the scene with a point light that can be rotated around the center of the scene
- rotate the Head around
  • TransformNode transform matrix manipulation functions to do this
- (trigger Head animations)
- move and rotate Colliders (not the head collider, though) around
  • Transform matrix manipulation functions to do this
- add a HairGroup to the Scalp
  • GrowHair() function to do this

Viewer class Q&A:
Q: For high-poly objects, is there a more efficient way to render them than by looping through each face and telling OpenGL to draw it?
A: For now, no.

Q: If the .obj head is going to be animated in some way, then perhaps importing the .obj should be eliminated in favor of importing some kind of .mesh object with associated keyframe data. This involves collaboration with Cat to see how this .mesh object might need to be drawn.
A: Since I have not heard back from Cat about this, I'm going to import an .obj head and give the user controls for rotating it. This should be good enough for approximating "yes" and "no" head animations.

Q: Should colliders be spheres, cubes, or meshes? Should collisions be determined by space partition instead?
A: For now, the plan is to make the head collider a decimated version of the original head.
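To sanity-check the scene-graph portion of this outline, here's a minimal compileable sketch. Mesh and Matrix4 are stand-ins for the real classes, and the draw traversal is only stubbed in:

#include <vector>

struct Mesh { void draw() const { /* loop over Faces */ } };
struct Matrix4 { /* 4x4 transform */ };

class TreeNode {
public:
    virtual ~TreeNode() {}
    virtual void draw() const {
        for (const TreeNode* c : children) c->draw(); // recurse into children
    }
    void addChild(TreeNode* c) { c->parent = this; children.push_back(c); }
protected:
    TreeNode* parent = nullptr;
    std::vector<TreeNode*> children;
};

class GeometryNode : public TreeNode {
public:
    void draw() const override { mesh.draw(); TreeNode::draw(); }
private:
    Mesh mesh;
};

class TransformNode : public TreeNode {
public:
    void setTransform(const Matrix4& m) { xform = m; }
    void draw() const override {
        // push xform (e.g. glPushMatrix / glMultMatrixf), draw children, pop
        TreeNode::draw();
    }
private:
    Matrix4 xform;
};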
Like a good mad scientist, I even sketched out how I want my monster to look:
(Click to enlarge)

Monday, September 28, 2009

Plans for the Coming Week

09/28/2009 – 10/04/2009: Begin building UI viewer. Continue reading the research papers that I have collected up to this point (and putting them into the Excel database).

Saturday, September 26, 2009

Got the Starting Scalp!

I haven't been able to read much this week, so I'm going to stall by showing some pictures of the starting scalp that Cat sent me. Of course, right off the bat I noticed that the scalp model had too many polygons to be a good starting point for me. However, this made me think about the advantages and disadvantages of starting with a low-poly scalp vs. a high-poly scalp when "growing" control strands.
The 2 Options:
(1) start with a low-polygon scalp with a control strand for each poly face (this leads to sparse control strands).

(2) start with a high-polygon scalp with a control strand for only certain poly faces (this amount could be controlled by a percentage variable).

Pitfalls:
1: (1) doesn't allow much control over hair placement or length [if I eventually implement some kind of color map to grow control strands with different lengths, then more polys implies a higher resolution, more precise "length map"]. In addition, if higher hair volume or more control over the hair is desired, new control strands can only be created by interpolation between existing control strands (costs computation time). Also, (1) requires some extra steps (see below) before a user's scalp model can be imported into the program and used for growing control strands.

2: (2) might allow for more control over hair placement, control, and volume [i.e. by changing the percentage of how many polys have control strands growing out of them], but a high-poly scalp could lead to increased computation time, since I would have to iterate through each poly face in the scalp and, based on a threshold, decide whether or not a control strand should grow out of that face. With the low-poly scalp, there are fewer poly faces to visit and no need to check a threshold value, since every face gets a control strand. In addition, sparse control strands are probably more ideal for creating the "cluster" and "strip" LODs; what's the point of having a lot of control strands at my disposal when lower LODs eliminate the need for those extra control strands? A low-poly scalp also comes in handy for calculating hair-scalp collisions. (The seeding loop for either option is sketched below.)
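For what it's worth, the seeding loop is basically the same for both options; here's a hedged sketch where a density of 1.0 reproduces option (1) and anything lower acts as option (2)'s percentage variable. Face is a hypothetical stand-in for my parsed .obj face type:

#include <cstdlib>
#include <vector>

struct Face { /* vertex indices, centroid, normal, ... */ };

// Pick the faces that will grow a control strand; density is in [0, 1].
std::vector<const Face*> pickStrandFaces(const std::vector<Face>& scalpFaces, float density)
{
    std::vector<const Face*> seeds;
    for (const Face& f : scalpFaces) {
        if (std::rand() / float(RAND_MAX) <= density)
            seeds.push_back(&f); // a control strand will grow from this face
    }
    return seeds;
}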
I think I'm going to lean toward (1) for now. In that case, I'd need to follow a pipeline similar to the following in order to get the starting scalp into an acceptable format:

(Click to enlarge)

This pipeline isn't too tedious, so perhaps it won't be that much of a drawback. If I have time later on during the course of the project, I could try to automate it (certainly there are algorithms out there for mesh decimation, right?).

Monday, September 21, 2009

Hair Pipeline Idea (a.k.a. Nalu is bald)

Hmmm...
Apparently Nalu has some bald spots, not to mention coarse spaghetti-like hair. Maybe the thinness of her hair is really a shadow map side-effect, but it made me think about ways of adding volume to the model used in the Nalu demo. More interpolation might work; adding, say, 4 strands between adjacent key strands, instead of 2. As an alternative, I thought of replacing all of her hair strands with strips (more surface area would mean more volume). But then I would probably end up with a cool-looking, but kind of unnatural (and difficult to render realistically without texture maps) result, like this:
Since strips seemed like a no-go, I thought of using clusters (a.k.a. cylinders) to pump up the volume (lol) instead. These are all different Levels-of-Detail, by the way: "Strands", "Clusters", and "Strips". I couldn't decide which LOD to choose, so I skimmed through one of the papers talking about them, hoping that the paper would favor one over the others. That's when I realized that the 3 are usually used simultaneously. Areas that can't be seen as well (such as hair covered by other hair) are represented as strips, while the next layer up is represented as clusters, and finally, the visible hairs on top stay as strands. This in mind, I sketched up a pipeline idea for my project:
(Click to enlarge)
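One concrete note on the interpolation step in that pipeline: a filler strand can be built by blending the control points of the three guide strands around it with fixed barycentric weights (w0 + w1 + w2 = 1), in the spirit of the Nalu demo's multi-strand interpolation. A minimal sketch, with Vec3 standing in for my vector type and all three guides assumed to have the same number of control points:

#include <vector>

struct Vec3 { float x, y, z; };

std::vector<Vec3> interpolateFiller(const std::vector<Vec3>& g0,
                                    const std::vector<Vec3>& g1,
                                    const std::vector<Vec3>& g2,
                                    float w0, float w1, float w2)
{
    std::vector<Vec3> filler(g0.size());
    for (size_t i = 0; i < g0.size(); ++i) { // blend each control point
        filler[i].x = w0*g0[i].x + w1*g1[i].x + w2*g2[i].x;
        filler[i].y = w0*g0[i].y + w1*g1[i].y + w2*g2[i].y;
        filler[i].z = w0*g0[i].z + w1*g1[i].z + w2*g2[i].z;
    }
    return filler;
}

Scattering random weight triples over each scalp triangle would give as many filler strands per patch as the frame rate allows.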

Saturday, September 19, 2009

Hmm, I wonder what they used here...

That would make for some convincing hair! (Render times are a different story...)

Another "Base Case": Connelly's Project


So maybe a combo of Nalu + Blonde-haired Disco Ball...

Plans for the Coming Week

09/21/2009 – 09/27/2009: Continue finding and reading research papers related to hair simulation (both real-time and not). Make an Excel spreadsheet that matches up papers (including when they were written) with the algorithms they discuss. Meet with Joe, Norm, or others to examine algorithms discussed in both NVIDIA book excerpt (Nalu demo) and Connelly’s write-up. Decide whether or not they make a good “base case”. Obtain access to a Linux box (or Windows machine) and choose a C++ environment (if Windows machine, preferably Visual Studio).

Project Abstract (1st Draft)

Simulating hair on a virtual character remains one of the most challenging aspects of computer graphics. Numerous techniques currently exist for modeling, animating, and rendering believable hairstyles.

With the emergence of new-generation GPUs, real-time hair simulation techniques are beginning to adopt more detail-promoting methodologies. Computations that once took hours, such as hair-hair collisions, now take minutes; likewise, computations that used to take minutes have been condensed to times favorable for real-time results. This general trend is visible across all three subcategories of hair simulation: modeling, animating, and rendering.

This project seeks to use current advances in real-time hair simulation to create two realistic (in terms of modeling, animating, and rendering) hairstyles, one male and one female, for virtual characters performing simple head movements, such as nods and turns. Since “main” characters are involved, the chosen simulation technique should be a merger of both computationally-friendly and detail-promoting methods. The end product would ideally be used in any context where real-time, low dynamic movement and rendering of hair are required.

A Good "Base Case"?

For my crude "base case" simulation, I was thinking of using the same algorithms that NVIDIA used for their Nalu demo.

But I don't know...when she gets close to the camera...she looks kinda freaky. O___O

Friday, September 18, 2009

Organizing Techniques

---------------------------------------------------
MODELING:


(1) low polygon mesh

(2) LODs (level-of-detail)

a. strips
  • parametric surfaces (can use offset function to make them curly)
  • polygonal triangles to mark density/space in between guide hairs
b. clusters
  • GCs (general cylinders) [did not seem to work very well for tightly pulled-back hair]
c. strands
  • offset function to make them curly
d. wisps

(3) hierarchical hair structure (applicable to most LODs)

a. parent GC subdivided into several child GCs
b. AWT (adaptive wisp tree)
c. auto, manual, minor clumping
d. adaptive subdivision/merging based on what can be seen

(4) sparse guide hairs (curves)

a. interpolated to create hair density
b. dynamically added or removed

(5) vector fields


(6) multiple layers (main, under, fuzz, stray)


(7) 2D texture synthesis (2d feature maps)


(8) super-helices


(9) growing hairs based on scalp geometry (one at every normal)


---------------------------------------------------
ANIMATING (Eulerian vs. Lagrangian):

(1) FFD (free form deformation) lattice


(2) mass-spring-damper
(point masses vs. cylinders)

(3) mass-spring-hinge


(4) cantilever beam


(5) multi-body open chain


(6) fluid dynamics


(7) metaballs (for collisions with model)


(8) control vertices treated as particles

a. Verlet integration
b. Euler integration
c. Runge-Kutta integration

(9) spheres/pearls (for collisions with model)


---------------------------------------------------
RENDERING:

(1) polygon tessellation


(2) maps

a. texture
b. bump
c. alpha

(3) global illumination

a. self-shadowing
  • using opacity-based volume rendering (i.e. voxels)
b. back lighting
  • (opacity) shadow maps

(4) local illumination

a. anisotropic reflections
  • reflection maps
  • Marschner model
  • dual scattering approximation
b. hair segments as GL lines (with anti-aliasing)
c. alpha blending to give illusion of thickness

(5) Renderman


[might be helpful to paint the scalp a dark color so bald patches are less noticeable]


---------------------------------------------------
Techniques in bold are ones that I'd love to somehow include in my project, or at least investigate in depth. Many of them have already been used in a real-time context; others have not, but could potentially be modified/merged with other techniques to do so.

What can be done to speed up the process (goal: real-time for the last 2)?
- fewer primitives
- use approximations
- use the GPU

A Helpful Chart (maybe)

I keep seeing this chart in a lot of SIGGRAPH presentations. Maybe it will come in handy later on when I decide how to improve on my "base case" simulation (<-- at this point, I'm thinking mass-spring-damper or Verlet integration with spheres for collision detection).