Kinect and Mocap in ZBrush

Ofer!

It would seem you could support the Kinect in ZBrush and allow us to do simple facial capture and mocap for our creations :) See these two examples! Please beat Autodesk to the punch.

http://www.youtube.com/watch?v=bQREhd9iT38

http://squaretrianglecircle.com/post/l.a.noire+mocap/1549/1

Thanks

Nick

I don't see that happening anytime soon. Letting your head rotate the view around an object is simple when you're working with lower-polygon meshes, but caching the millions, or billions, of polygons we use in ZBrush to memory every time you want to rotate would kill it. Also, you can only get the super-high polycounts because you can't rotate and sculpt at the same time. As far as I know, Mudbox doesn't support 3Dconnexion devices either, for the same reason, and letting your head do the rotating would be basically the same thing.
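
To put rough numbers on that (back-of-envelope assumptions only, not how ZBrush actually stores things internally):

```python
# Rough memory estimate for caching a dense sculpt for rotation.
# Assumes ~1 vertex per quad and 32-bit floats for position + normal (illustrative only).
BYTES_PER_FLOAT = 4
FLOATS_PER_VERTEX = 6  # xyz position + xyz normal

for polys in (10_000_000, 100_000_000, 1_000_000_000):
    verts = polys  # roughly 1:1 for dense quad meshes
    gib = verts * FLOATS_PER_VERTEX * BYTES_PER_FLOAT / 2**30
    print(f"{polys:>13,} polys -> ~{gib:5.1f} GiB for positions and normals alone")
```

That works out to something like 20 GiB for a billion-poly sculpt before counting anything else, which is why continuously re-caching the full-resolution mesh for head-tracked rotation isn't realistic.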

I suppose I could see rotating your model with your hands, with a Kinect or something, but I don't know that I really want to have to turn on another device and move my hand away from my screen/tablet and keyboard just so I can rotate, then go back to them so I can work some more.

Eh, my two cents.

I think you're missing the greater potential. Rotating the model while modeling would be fine, but it's a minor workflow tidbit.

With the technology in the Kinect, there is the opportunity to create the kind of revolution in 3D narrative storytelling that Pixologic made possible in the realm of sculpting for anyone with a Wacom tablet.

It would be the most significant thing in 3D since ZBrush itself.

In addition to basic skeletal mocap, there is enough data in the depth map to extract facial data as well. With a clever approach to combining information from the depth and RGB streams, along with Pixologic's knack for innovation, I think there is great potential.

For instance, since it may be assumed that the person sculpting would often be the person controlling the mocap, the system could adapt to the correspondence between expressions and morph targets, for not just one but all sculpts. Imagine being able to pull an expression from a completely different model sculpted years prior, or automatically creating morph-target starting points by moving your face into the desired position and then tweaking to perfection, with an automatic link to that mocap expression in the future.
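
Just to sketch the kind of math involved (hypothetical data and naming, nothing Pixologic has shown): once a captured expression has been registered to the sculpt's topology, matching it against existing morph targets is essentially a small least-squares fit of blend weights.

```python
import numpy as np

def fit_morph_weights(neutral, targets, captured):
    """Fit blend weights so that neutral + sum_i w_i * (targets[i] - neutral)
    approximates the captured expression.

    neutral:  (V, 3) neutral-face vertex positions
    targets:  (K, V, 3) morph-target vertex positions
    captured: (V, 3) captured expression, already registered to the mesh
    """
    deltas = (targets - neutral).reshape(len(targets), -1).T   # (3V, K)
    residual = (captured - neutral).reshape(-1)                # (3V,)
    weights, *_ = np.linalg.lstsq(deltas, residual, rcond=None)
    # Crude clamp; a proper solver would constrain the fit directly.
    return np.clip(weights, 0.0, 1.0)

# Toy usage with stand-in data:
V, K = 500, 8
neutral = np.random.rand(V, 3)
targets = neutral + 0.1 * np.random.rand(K, V, 3)
captured = neutral + 0.1 * np.random.rand(V, 3)
print(fit_morph_weights(neutral, targets, captured))
```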

ZBrushers should be making their own stories for people to see, instead of languishing as cogs in VFX-house cubicles (no offense, I was one for a good while).

I realize that many folks here are only interested in sculpting, but this would be a storyteller's boon, well beyond the scope of the current Pixologic community.

Yes, very true Ghost, but there are ways around it. My idea would be to use Transpose Master to get the low-res mesh. Then you tag your video with joints, assign corresponding joints to the Transpose Master low-res model, and bind.

Do your animation, record it, etc., then transfer that movement to the high-poly mesh for the rendered frames.
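
The "tag joints and bind" part is basically standard skinning math. Here's a minimal sketch, assuming you already have per-vertex skin weights on the low-res proxy and per-frame joint transforms from the Kinect's skeletal tracking (this isn't any real Transpose Master API, just the idea):

```python
import numpy as np

def skin_lowres(rest_verts, weights, rest_joints, posed_joints):
    """Linear blend skinning of a low-res proxy mesh.

    rest_verts:   (V, 3) proxy vertices in the bind pose
    weights:      (V, J) per-vertex joint weights, each row summing to 1
    rest_joints:  (J, 4, 4) joint transforms in the bind pose
    posed_joints: (J, 4, 4) joint transforms for the captured frame
    """
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])              # (V, 4)
    posed = np.zeros((V, 3))
    for j in range(weights.shape[1]):
        # Matrix taking bind-pose space to the captured pose for joint j.
        skin_mat = posed_joints[j] @ np.linalg.inv(rest_joints[j])
        posed += weights[:, j:j+1] * (homo @ skin_mat.T)[:, :3]
    return posed
```

The posed proxy is what would then get pushed back up to the high-res sculpt, which is the same general idea Transpose Master already uses to transfer a pose from a low-res working mesh back to the detailed subtools.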

I could see it working really well for simple facial animation and…

FACE sculpting. How cool would it be not to have to sculpt a mouth closed or an eyebrow furrowed by hand? Just tag your face mesh and your video feed, and you can sculpt with your own face.

Of course, driving movement with it would be cool too: tilting your head one way or the other to rotate the model, or voice recognition for certain tools.

It will happen; I'm just hoping ZBrush gets there first.

Here is a new video of someone modeling, this time with the user wearing a VR helmet to see through it.

http://www.youtube.com/watch?v=WDlvn3voblQ

Alright, I found a company doing it!

http://www.ipisoft.com/gallery.php

Go buy them, ZBrush! Please.

Ahh and these guys too!

http://oasis.xentax.com/index.php?content=downloads

There has been a lot of this at kinecthacks.net.

It's very exciting that actual products are materializing as fast as they are. As simple and imperfect as that iPi software is, it's such a huge step forward. Forget about the hardware; the fact that the physical space can be a quarter the size of what a multi-camera setup would need opens mocap up to so many more people.

At ILM the mocap stage was very large, and the actual area it could capture was fairly small in comparison. Now that I live in LA, I had been looking to find a warehouse just to be able to set up a similar mocap stage (with a bed in the corner). This is making that unnecessary.

I can't wait to see what Pixologic can do with it. I have a feeling that for non-realtime mocap, someone is going to really be able to leverage the RGB data in conjunction with the depth for very detailed facial capture. With the RGB projected onto the rough 3D surface, I imagine it would be fairly simple to pull the eye and mouth shapes from the RGB image, as well as finer detail and folds.
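
That projection step is simple enough in principle. A sketch, assuming the depth frame has already been registered to the RGB frame and using placeholder pinhole intrinsics rather than a real Kinect calibration:

```python
import numpy as np

def depth_to_colored_points(depth_m, rgb, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """Back-project a registered depth frame into a colored 3D point cloud.

    depth_m: (H, W) depth in meters, 0 where invalid
    rgb:     (H, W, 3) color frame already aligned to the depth frame
    fx, fy, cx, cy: pinhole intrinsics (placeholder values, not a real calibration)
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    valid = z > 0
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)  # (N, 3)
    colors = rgb[valid]                                        # (N, 3)
    return points, colors
```

Once the color is sitting on that rough surface, eye and mouth contours traced in the RGB image map straight back to the corresponding 3D points.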