Monday, September 28, 2009
Plans for the Coming Week
09/28/2009 – 10/04/2009: Begin building the UI viewer. Continue reading the research papers that I have collected up to this point (and putting them into the Excel database).
Saturday, September 26, 2009
Got the Starting Scalp!
I haven't been able to read much this week, so I'm going to stall by showing some pictures of the starting scalp that Cat sent me. Of course, right off the bat I noticed that the scalp model had too many polygons to be a good starting point for me. However, this made me think about the advantages and disadvantages of starting with a low-poly scalp vs. a high-poly scalp when "growing" control strands.
The 2 Options:
(1) start with a low-polygon scalp with a control strand for each poly face (this leads to sparse control strands).
(2) start with a high-polygon scalp with a control strand for only certain poly faces (this amount could be controlled by a percentage variable).
Pitfalls:
1: Option (1) doesn't allow much control over hair placement or length [if I eventually implement some kind of color map to grow control strands of different lengths, more polys would mean a higher-resolution, more precise "length map"]. In addition, if higher hair volume or more control over the hair is desired, new control strands can only be created by interpolating between existing ones (which costs computation time). Also, (1) requires some extra steps (see below) before a user's scalp model can be imported into the program and used for growing control strands.
2: Option (2) might allow more control over hair placement and volume [i.e., by changing the percentage of polys that have control strands growing out of them], but a high-poly scalp could increase computation time, since I would have to iterate through each poly face in the scalp and decide (based on a threshold) whether or not a control strand should grow out of it. With the low-poly scalp, there are fewer poly faces to visit and no threshold to check, since every face gets a control strand. In addition, sparse control strands are probably better suited to creating the "cluster" and "strip" LODs; what's the point of having a lot of control strands at my disposal when lower LODs eliminate the need for them? A low-poly scalp also comes in handy for calculating hair-scalp collisions.
I think I'm going to lean toward (1) for now. In that case, I'd need to follow a pipeline similar to the following in order to get the starting scalp into an acceptable format:
[Pipeline sketch (click to enlarge)]
This pipeline isn't too tedious, so perhaps it won't be that much of a drawback. If I have time later on during the course of the project, I could try to automate it (certainly there are algorithms out there for mesh decimation, right?).
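To make the trade-off concrete, here's a rough C++ sketch of the two growing strategies. The `Face`/`StrandRoot` types and the per-face placement are just illustrative stand-ins, not my actual data structures:

```cpp
#include <cstdlib>
#include <vector>

// Illustrative stand-ins for the real mesh/strand types.
struct Vec3 { float x, y, z; };
struct Face { Vec3 centroid, normal; };          // one scalp polygon
struct StrandRoot { Vec3 position, direction; };

// Option (1): low-poly scalp, one control strand per face.
// No threshold test; every face gets a strand.
std::vector<StrandRoot> growPerFace(const std::vector<Face>& scalp) {
    std::vector<StrandRoot> roots;
    roots.reserve(scalp.size());
    for (const Face& f : scalp)
        roots.push_back({f.centroid, f.normal});
    return roots;
}

// Option (2): high-poly scalp, strands on only a percentage of faces.
// Every face still has to be visited and tested against the threshold,
// which is the extra cost mentioned above. A real version would want
// stratified or blue-noise sampling rather than plain rand().
std::vector<StrandRoot> growByPercentage(const std::vector<Face>& scalp,
                                         float percentage) {  // 0..1
    std::vector<StrandRoot> roots;
    for (const Face& f : scalp) {
        float r = static_cast<float>(std::rand()) / RAND_MAX;
        if (r < percentage)
            roots.push_back({f.centroid, f.normal});
    }
    return roots;
}
```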
Monday, September 21, 2009
Hair Pipeline Idea (a.k.a. Nalu is bald)
Hmmm...
Apparently Nalu has some bald spots, not to mention coarse, spaghetti-like hair. Maybe the thinness of her hair is really a shadow-map side effect, but it made me think about ways of adding volume to the model used in the Nalu demo. More interpolation might work: adding, say, 4 strands between adjacent key strands instead of 2. As an alternative, I thought of replacing all of her hair strands with strips (more surface area would mean more volume). But then I would probably end up with a cool-looking but kind of unnatural (and difficult to render realistically without texture maps) result, like this:
Since strips seemed like a no-go, I thought of using clusters (a.k.a. cylinders) to pump up the volume (lol) instead. These are all different Levels-of-Detail, by the way: "Strands", "Clusters", and "Strips". I couldn't decide which LOD to choose, so I skimmed through one of the papers discussing them, hoping that it would favor one over the others. That's when I realized that the three are usually used simultaneously: areas that can't be seen as well (such as hair covered by other hair) are represented as strips, the next layer up is represented as clusters, and the visible hairs on top stay as strands. With this in mind, I sketched up a pipeline idea for my project:
[Pipeline sketch (click to enlarge)]
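Before shelving the interpolation idea, here's a minimal sketch of what "adding N strands between adjacent key strands" could look like, blending vertex by vertex. The types and the simple linear blend are my own assumptions, not the Nalu demo's actual code:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + t * (b.x - a.x),
             a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z) };
}

// Generate `count` evenly spaced strands between two adjacent key strands,
// blending vertex by vertex (both assumed to have equal vertex counts).
// count = 2 is roughly what the demo does; count = 4 would add volume.
std::vector<std::vector<Vec3>> interpolateStrands(
        const std::vector<Vec3>& keyA,
        const std::vector<Vec3>& keyB,
        int count) {
    std::vector<std::vector<Vec3>> strands;
    for (int i = 1; i <= count; ++i) {
        float t = static_cast<float>(i) / (count + 1);
        std::vector<Vec3> s;
        s.reserve(keyA.size());
        for (std::size_t v = 0; v < keyA.size(); ++v)
            s.push_back(lerp(keyA[v], keyB[v], t));
        strands.push_back(s);
    }
    return strands;
}
```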
Saturday, September 19, 2009
Plans for the Coming Week
09/21/2009 – 09/27/2009: Continue finding and reading research papers related to hair simulation (both real-time and not). Make an Excel spreadsheet that matches up papers (including when they were written) with the algorithms they discuss. Meet with Joe, Norm, or others to examine the algorithms discussed in both the NVIDIA book excerpt (Nalu demo) and Connelly's write-up, and decide whether or not they make a good "base case". Obtain access to a Linux box (or Windows machine) and choose a C++ environment (if Windows, preferably Visual Studio).
Project Abstract (1st Draft)
Simulating hair on a virtual character remains one of the most challenging aspects of computer graphics. Numerous techniques currently exist for modeling, animating, and rendering believable hairstyles.
With the emergence of new-generation GPUs, real-time hair simulation techniques are beginning to adopt more detail-promoting methodologies. Computations that once took hours, such as hair-hair collisions, now take minutes; likewise, computations that used to take minutes now run at rates suitable for real-time results. This general trend is visible across all three subcategories of hair simulation: modeling, animating, and rendering.
This project seeks to use current advances in real-time hair simulation to create two realistic (in terms of modeling, animating, and rendering) hairstyles, one male and one female, for virtual characters performing simple head movements, such as nods and turns. Since "main" characters are involved, the chosen simulation technique should merge computationally friendly and detail-promoting methods. The end product would ideally be usable in any context that requires real-time rendering of hair under low-dynamic movement.
A Good "Base Case"?
For my crude "base case" simulation, I was thinking of using the same algorithms that NVIDIA used for their Nalu demo.
But I don't know...when she gets close to the camera...she looks kinda freaky. O___O
Friday, September 18, 2009
Organizing Techniques
---------------------------------------------------
MODELING:
(1) low polygon mesh
(2) LODs (level-of-detail)
a. strips
- parametric surfaces (can use offset function to make them curly)
- polygonal triangles to mark density/space in between guide hairs
- GCs (generalized cylinders) [did not seem to work very well for tightly pulled-back hair]
- offset function to make them curly
(3) hierarchical hair structure (applicable to most LODs)
a. parent GC subdivided into several child GCs
b. AWT (adaptive wisp tree)
c. auto, manual, minor clumping
d. adaptive subdivision/merging based on what can be seen
(4) sparse guide hairs (curves)
a. interpolated to create hair density
b. dynamically added or removed
(5) vector fields
(6) multiple layers (main, under, fuzz, stray)
(7) 2D texture synthesis (2d feature maps)
(8) super-helices
(9) growing hairs based on scalp geometry (one at every normal)
---------------------------------------------------
ANIMATING (Eulerian vs. Lagrangian):
(1) FFD (free form deformation) lattice
(2) mass-spring-damper (point masses vs. cylinders)
(3) mass-spring-hinge
(4) cantilever beam
(5) multi-body open chain
(6) fluid dynamics
(7) metaballs (for collisions with model)
(8) control vertices treated as particles [see the sketch after this list]
a. Verlet integration
b. Euler integration
c. Runge-Kutta integration
(9) spheres/pearls (for collisions with model)
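Items (2), (8a), and (9) would probably end up working together, so here's a rough, untuned sketch of one way they could combine: Hooke's-law springs between adjacent control vertices, a damped Verlet step, and a sphere standing in for the head during collisions. Every constant and type here is an illustrative placeholder:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
static float length(const Vec3& v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

struct Particle { Vec3 pos, prevPos; };  // one control vertex

// One simulation step for a single strand of control vertices:
// mass-spring forces + damped Verlet integration + sphere collision.
void stepStrand(std::vector<Particle>& strand, float dt,
                const Vec3& headCenter, float headRadius) {
    const Vec3  gravity = {0.0f, -9.8f, 0.0f};
    const float k       = 100.0f;  // spring stiffness (illustrative)
    const float restLen = 0.1f;    // rest distance between vertices
    const float damping = 0.98f;   // crude velocity damping

    // Accumulate spring forces between adjacent control vertices.
    std::vector<Vec3> force(strand.size(), gravity);
    for (std::size_t i = 0; i + 1 < strand.size(); ++i) {
        Vec3 d = strand[i + 1].pos - strand[i].pos;
        float len = length(d);
        if (len > 1e-6f) {
            Vec3 spring = d * (k * (len - restLen) / len);  // Hooke's law
            force[i]     = force[i] + spring;
            force[i + 1] = force[i + 1] - spring;
        }
    }

    // Damped Verlet step; vertex 0 is the root, pinned to the scalp.
    for (std::size_t i = 1; i < strand.size(); ++i) {
        Vec3 vel  = (strand[i].pos - strand[i].prevPos) * damping;
        Vec3 next = strand[i].pos + vel + force[i] * (dt * dt);  // unit mass
        strand[i].prevPos = strand[i].pos;
        strand[i].pos = next;

        // Sphere collision: push penetrating vertices back to the surface.
        Vec3 offset = strand[i].pos - headCenter;
        float dist  = length(offset);
        if (dist < headRadius && dist > 1e-6f)
            strand[i].pos = headCenter + offset * (headRadius / dist);
    }
}
```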
---------------------------------------------------
RENDERING:
(1) polygon tessellation
(2) maps
a. texture
b. bump
c. alpha
(3) global illumination
a. self-shadowing
- using opacity-based volume rendering (i.e. voxels)
- (opacity) shadow maps
(4) local illumination
a. anisotropic reflections [see the shading sketch after this list]
- reflection maps
- Marschner model
- dual scattering approximation
b. alpha blending to give illusion of thickness
(5) RenderMan
[might be helpful to paint the scalp a dark color so bald patches are less noticeable]
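For (4a), the classic Kajiya-Kay model seems like the natural starting point: lighting is driven by the strand tangent rather than a surface normal. A minimal CPU-side sketch follows; the vector type and the equal diffuse/specular weighting are my assumptions, and a real version would live in a shader:

```cpp
#include <algorithm>
#include <cmath>

// Assumed minimal vector type; a real project would use its math library.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Kajiya-Kay anisotropic hair shading. t = strand tangent, l = light
// direction, v = view direction; all three must be normalized.
float kajiyaKay(const Vec3& t, const Vec3& l, const Vec3& v,
                float shininess) {
    float tl = dot(t, l);  // cosine of angle between tangent and light
    float tv = dot(t, v);
    float sinTL = std::sqrt(std::max(0.0f, 1.0f - tl * tl));
    float sinTV = std::sqrt(std::max(0.0f, 1.0f - tv * tv));

    // Diffuse term: brightest when the light is perpendicular to the strand.
    float diffuse = sinTL;
    // Specular term: cos(thetaL - thetaV) raised to a shininess exponent.
    float specular = std::pow(std::max(0.0f, tl * tv + sinTL * sinTV),
                              shininess);
    return diffuse + specular;  // weight the two terms as needed
}
```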
---------------------------------------------------
Techniques in bold are ones that I'd love to somehow include in my project, or at least investigate in depth. Many of them have already been used in a real-time context; others have not, but could potentially be modified/merged with other techniques to do so.
What can be done to speed up the process (goal: real-time for the last two stages, animating and rendering)?
- fewer primitives
- use approximations
- use the GPU
A Helpful Chart (maybe)