Tuesday, February 16, 2010

Extrapolating control node data from a point cloud

Here's an idea. If one can build skeletons for motion capture markers, then wouldn't it be possible to extrapolate master control node data during a head/face motion capture session?

Say you have your actor with markers on their face (I need to find out the brand of reflective stickers used by Creaform's 3D scanners, because these would work very well on small surface areas such as faces and hands). You capture a snapshot of the actor's face, and the captured markers become the "point cloud". Using either Vicon or some other program, you calculate the center of the captured markers/points. Then you construct a "skeleton" for the cloud by parenting every marker back to a node at the calculated center, which acts like the skeleton root.

And there you have it: a way of simultaneously obtaining detailed face deformations (which can be applied to a base template in the form of keyframed vertex movements - what Cat is working on) and master control node position/orientation data over time (which can be used in my proposed GUI setup), all in a single mocap session.
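To make the root-fitting step concrete: the centroid of the captured markers gives the root's position, and the rigid rotation that best aligns a frame's markers back to the reference snapshot gives its orientation. Here is a minimal NumPy sketch of that idea using the Kabsch (SVD) alignment method — this is an illustration of the math, not Vicon's actual pipeline, and the function name `fit_root` is hypothetical:

```python
import numpy as np

def fit_root(reference, frame):
    """Estimate the master control node pose of a marker cloud.

    reference: (N, 3) marker positions from the snapshot pose
    frame:     (N, 3) the same markers in a later capture frame
    Returns (R, t): root rotation matrix and translation such that
    frame ~= reference @ R.T + t.
    """
    # Centroids act as the "skeleton root" position in each pose.
    c_ref = reference.mean(axis=0)
    c_frm = frame.mean(axis=0)

    # Kabsch: SVD of the cross-covariance of the centered clouds.
    H = (reference - c_ref).T @ (frame - c_frm)
    U, _, Vt = np.linalg.svd(H)

    # Guard against an accidental reflection (det = -1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    t = c_frm - R @ c_ref
    return R, t
```

Running this per captured frame against the snapshot yields exactly the position/orientation curve the master control node would need, while the residual per-marker motion (after removing the root transform) is the face deformation part.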

1 comment:

  1. lol nice thoughts, but stick to the hair :>

    I think your stickers would work if the cameras were moved; the problem is they're too high up,
    and the stickers aren't 3D, so you lose them easily if you turn away from the camera.
    But if you're static, the Vicon cameras do pick them up really, really well. I was surprised when I tested them.
