I think you’re missing the greater potential. Rotating the model while modeling would be fine, but it’s a minor workflow convenience.
With the technology in the Kinect, there is an opportunity to spark the kind of revolution in 3D narrative storytelling that Pixologic sparked in sculpting for anyone with a Wacom tablet. It would be the most significant thing in 3D since ZBrush itself.
In addition to basic skeletal mocap, there is enough data in the depth map to extract facial data as well. With a clever approach to combining information from the depth and RGB streams, along with Pixologic’s knack for innovation, I think there is great potential.
For instance, since it may be assumed that the person sculpting would often also be the person driving the mocap, the system could learn the correspondence between that person’s expressions and morph targets, not just for one sculpt but for all of them. Imagine being able to pull an expression onto a completely different model sculpted years prior, or automatically creating a morph-target starting point by moving your face into the desired position and then tweaking it to perfection, with an automatic link back to that mocap expression in the future.
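To make that idea concrete, here is a minimal sketch of how the expression-to-morph-target fit could work, assuming the hard part (retargeting the Kinect’s depth-derived face mesh onto the sculpt’s topology) is already solved. None of this is Pixologic’s API; the function names and the plain least-squares approach are just illustrative.

```python
import numpy as np

def fit_expression_weights(neutral, targets, captured):
    """Least-squares fit of morph-target weights to a captured expression.

    neutral:  (V, 3) vertex positions of the neutral sculpt
    targets:  list of (V, 3) morph-target vertex positions
    captured: (V, 3) captured expression on the same topology
    Returns one weight per morph target, clamped to [0, 1].
    """
    # Each column is one morph target's offset from neutral, flattened to 3V.
    deltas = np.stack([(t - neutral).ravel() for t in targets], axis=1)
    offset = (captured - neutral).ravel()
    w, *_ = np.linalg.lstsq(deltas, offset, rcond=None)
    return np.clip(w, 0.0, 1.0)  # keep weights in a sane sculpting range

def seed_morph_target(neutral, targets, weights):
    """Reconstruct a 'starting point' sculpt from fitted weights --
    the automatic morph-target seed described above."""
    deltas = np.stack([t - neutral for t in targets], axis=0)  # (K, V, 3)
    return neutral + np.tensordot(weights, deltas, axes=1)

# Because the weights live in morph-target space rather than vertex space,
# one captured frame could drive any sculpt sharing the same target set --
# which is the cross-model reuse the paragraph above imagines.
```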
ZBrushers should be making their own stories for people to see, instead of languishing as cogs in VFX-house cubicles (no offense, I was one for a good while).
I realize that many folks here are only interested in sculpting, but this would be a boon for storytellers, reaching well beyond the scope of the current Pixologic community.