I am trying to understand/research/develop a workflow for turning digital figures into physical models through a 3D printing service, but I have some questions.
Here is what I think I understand so far:
- Create a low to medium res figure in a rest pose (da Vinci-like pose) in Maya, XSI, or Silo.
- Create a rig in Maya to later pose the figure.
- Import the model into ZBrush to add high-quality detail, then create a normal map and a displacement map.
- Import the model and displacement map back into Maya and use the rig to pose the figure (a rough sketch of how the displacement map gets hooked up in Maya follows this list).
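For that last step, my understanding is that the displacement map just gets wired into the shading network, so it only takes effect at render time. A minimal Maya Python sketch of that hookup, assuming a map file called dispMap.exr and a placeholder shading group name (both hypothetical):

```python
import maya.cmds as cmds

# File texture node that reads the displacement map baked in ZBrush.
file_node = cmds.shadingNode('file', asTexture=True, name='dispMapFile')
cmds.setAttr(file_node + '.fileTextureName', 'dispMap.exr', type='string')

# Displacement shader node driven by the map's alpha channel.
disp = cmds.shadingNode('displacementShader', asShader=True, name='dispShader')
cmds.connectAttr(file_node + '.outAlpha', disp + '.displacement')

# Attach to the shading group assigned to the figure
# ('figureSG' is a placeholder for whatever group the mesh actually uses).
cmds.connectAttr(disp + '.displacement', 'figureSG.displacementShader')
```

The key point for my questions below is that nothing here changes the actual geometry; the detail only appears when the renderer tessellates the surface.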
Here is where my questions begin. At this point the normal process would be to use the displacement map during the rendering stage to output an animation or still, because a program like Maya could never interactively handle the millions of polygons ZBrush creates. But I need to get the high-res detail stored in the displacement map back onto the low/medium-res posed figure's geometry in order to send it to a 3D printing device (I don't think a displacement map itself can be used?).
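To make concrete what I mean by getting the detail "back onto the geometry": conceptually, each vertex would be pushed along its normal by the displacement value sampled at its UV coordinate. A rough pure-Python/numpy sketch of that idea (all array names are hypothetical, and a real tool would subdivide the mesh first and filter the samples):

```python
import numpy as np

def bake_displacement(verts, normals, uvs, disp_map, scale=1.0):
    """Push each vertex along its normal by the displacement value
    sampled at its UV coordinate (nearest-neighbour sampling).

    verts, normals: (N, 3) arrays; uvs: (N, 2) array in 0..1;
    disp_map: (H, W) grayscale array of displacement values.
    """
    h, w = disp_map.shape
    # Map UVs to pixel indices, clamping so we stay inside the image.
    px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    d = disp_map[py, px]              # one displacement scalar per vertex
    return verts + normals * d[:, None] * scale
```

On a dense enough mesh this would, as I understand it, reproduce the sculpted detail as real polygons, which is what a printer would need.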
So I guess the three major questions I have at this point are:

- Can a displacement map be converted back into geometry?
- Can a 3D printer accept the millions of polygons ZBrush produces?
- After the low/medium-res model with the displacement map is posed, can it be sent back to ZBrush, converted back into geometry, and then saved out to a file format a 3D printer accepts? (See the STL sketch after this list.)
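On the file-format part of that last question: as far as I know, STL is the common interchange format that printing services take. Just to show how simple the format is, here is a minimal ASCII STL writer (the triangle data is hypothetical; real exporters use the binary variant, since ASCII gets enormous at ZBrush polygon counts):

```python
def write_ascii_stl(path, triangles, name='figure'):
    """Write triangles as ASCII STL.

    triangles: iterable of (normal, v0, v1, v2), where each element
    is a 3-tuple of floats (one normal vector and three vertices).
    """
    with open(path, 'w') as f:
        f.write('solid %s\n' % name)
        for n, v0, v1, v2 in triangles:
            f.write('  facet normal %f %f %f\n' % n)
            f.write('    outer loop\n')
            for v in (v0, v1, v2):
                f.write('      vertex %f %f %f\n' % v)
            f.write('    endloop\n')
            f.write('  endfacet\n')
        f.write('endsolid %s\n' % name)
```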
Looking forward to any advice or knowledge. Thank you!