Max/ZBrush and displacement map help

In my current workflow, I create a very basic box model in Max to block out proportions, then bring it into ZBrush and do a high-poly sculpt. Once that high-poly sculpt is complete, I retopologize it and bring the retopologized mesh back into Max for rigging and animation. Is there a way to get the high-poly sculpt detail (via a displacement map) onto the retopologized mesh in Max?

I can't use the first subdivision level of the high poly because that isn't the low-poly mesh I'll be using (I use the retopologized mesh).

I tried setting up the UVs on the retopologized mesh and then merging in the high poly from ZBrush to render out a displacement map (via the Light Tracer), but there doesn't seem to be an option to render a displacement map in Max.

When you do your retopology, you should use the Projection feature. This projects the high-poly detail onto the highest subdivision level of your new mesh. After you've finished retopology, make an adaptive skin from the new model and select that skin. Go to subdivision level 1, UV the model, and create your displacement map.
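If it helps to picture what Projection is doing, here's a rough Python sketch of the idea. This is not ZBrush's actual code or API (ZBrush doesn't expose one for this); I'm assuming a flat retopo grid and a height function standing in for the sculpted high-poly surface, where a real implementation would ray-cast each vertex against actual triangles:

```python
# Conceptual sketch of projection: snap each vertex of the subdivided
# retopo mesh onto the high-poly surface along its normal. Because the
# base mesh here is a flat grid, every normal is just +Z, so "project"
# reduces to looking up the surface height at (x, y).
import math

def high_poly_surface(x, y):
    # Stand-in for the sculpted detail we want to transfer.
    return 0.1 * math.sin(x * math.pi * 4) * math.sin(y * math.pi * 4)

def project_vertices(vertices):
    """Move each retopo vertex along its normal until it lies on the
    high-poly surface (normals assumed to be +Z for this flat grid)."""
    return [(x, y, high_poly_surface(x, y)) for (x, y, _z) in vertices]

# A subdivided retopo plane: a 5x5 grid of vertices at z = 0.
grid = [(i / 4, j / 4, 0.0) for j in range(5) for i in range(5)]
projected = project_vertices(grid)
print(projected[6])  # an interior vertex, now carrying the sculpt's height
```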

Thanks! Are there any tutorials for this workflow?

OK, so I figured out how to apply the projection. Is there any way to extract the model with its subdivision history, so that the first subdivision level is essentially density 1 on the adaptive skin and the highest is level 5 or so? It seems like I need that subdivision history in order to render out the displacement map.

Download xNormal (it's free): http://www.xnormal.net/.

Export your highest subdivision level from ZBrush to a folder. Export your retopologized Max base mesh to the same folder. Use xNormal to create normal, displacement, and ambient occlusion maps. You can learn the software in about 15 minutes; it's that easy.
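For anyone curious what a baker like xNormal is doing under the hood, here's a rough Python sketch of the idea. This is not xNormal's actual code; I'm assuming the simplest possible case, a flat low-poly plane whose UVs map 1:1 to the texture and a height function standing in for the high-poly sculpt, where a real baker would ray-cast each texel's surface point against the high-poly triangles:

```python
# Conceptual sketch of a displacement bake: for every texel, measure how
# far the high-poly surface sits above (or below) the low-poly surface
# along its normal, then store that signed distance as grayscale with
# mid-gray (128) meaning "no displacement".
import math

def high_poly_height(x, y):
    # Stand-in for the sculpted detail: a gentle bump pattern.
    return 0.1 * math.sin(x * math.pi * 4) * math.sin(y * math.pi * 4)

def bake_displacement(size=64, depth_scale=0.1):
    """Return a size x size grayscale image of signed distances,
    normalized so that -depth_scale..+depth_scale maps to 1..255."""
    image = []
    for ty in range(size):
        row = []
        for tx in range(size):
            u, v = (tx + 0.5) / size, (ty + 0.5) / size
            # On a flat plane the normal is +Z, so the signed distance
            # is simply the height of the high-poly surface here.
            d = high_poly_height(u, v)
            gray = int(round(128 + 127 * max(-1.0, min(1.0, d / depth_scale))))
            row.append(gray)
        image.append(row)
    return image

disp = bake_displacement()
print(disp[0][:8])  # first few texels of the top row
```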

Make sure you have the latest version of xNormal, which is now Max compatible.

Cheers,
Matt

That sounds like a very good solution for creating your high-poly mesh and topology mesh in different apps. I'm just curious about working with displacement: what would you describe as a good workflow for creating an animated character while keeping control over the displaced result? Would you import a dense mesh, split it up, and constrain each part to its respective joints so that you get visual feedback on when the mesh is intersecting? And what would the workflow be when you also have displacement for muscle bulges or facial expressions? That can't be easily represented, right?