ZBrush UV workflow question

Hello all. I would like to get suggestions on a UV workflow. I’m new to ZBrush and I love it. Just trying to see if this proposed workflow makes sense.

I’m making a mobile game. The target polycount for each character is around 15K triangles. My tools are ZBrush, Photoshop, and Blender.

Here is what I think my workflow should be, based on the fact that I’m FAR more comfortable starting with a high-poly sculpt built from ZSpheres than starting with a base mesh in Blender. I know that the starting point matters, so that’s my starting point.

Here’s my proposed workflow:

  1. Begin with an intuitive high-poly sculpt in ZBrush until the character looks the way I want it to look.

  2. Manually retopo in ZBrush using ZSpheres

  3. Unwrap the retopo mesh in ZBrush using UV Master

  4. Transfer the UVs from the retopo mesh back onto the high-poly mesh (is this possible?)

  5. Paint directly on the high-poly mesh, editing certain parts in Photoshop if needed.

  6. Transfer that texture map onto the retopo mesh

  7. Finally export the retopo mesh with the texture map into Blender for rigging.
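
(Just so I understand the Blender end of step 7, here’s a minimal sketch I’d try, assuming an FBX export and Blender 2.8+; the file path is a placeholder and this is only one way to do it.)

```python
# Import the exported character and sanity-check the ~15K triangle budget.
# Assumes Blender 2.8+ with the default FBX importer enabled; path is a placeholder.
import bpy

bpy.ops.import_scene.fbx(filepath="/path/to/retopo_character.fbx")

obj = bpy.context.selected_objects[0]   # the importer leaves imported objects selected
mesh = obj.data
mesh.calc_loop_triangles()
print(obj.name, "triangles:", len(mesh.loop_triangles))
```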

Is this a reasonable workflow? I haven’t completed anything past sculpting the high-poly mesh. I want to have a good overview of the road ahead before I start walking down it. Thoughts / warnings / suggestions?

I’d skip step #4. You can’t transfer UVs between models with completely different topology (at least not in ZBrush; other programs can try to do it based on closest vertex positions, but the end result will likely still be a mess because of the difference).

The high-poly sculpt doesn’t actually require UVs if you’re just polypainting. You can export the model as an OBJ/FBX with the polypaint (vertex color) included as part of the file, and only worry about giving UVs to the final retopologized model. Then you can load both the high- and low-resolution models into a map-baking application (xNormal, Substance Painter, probably even Blender) to bake out your maps. This will save you the trouble of having to physically project geometric detail and subdivision levels between two different models in ZBrush, and it will give you more control over the maps used in the game (the tangent basis used for the normal maps, custom vertex normals, etc.).
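
As a quick sanity check before baking, it can help to confirm that the polypaint actually survived the export as vertex colors. A minimal sketch, assuming Python with the trimesh package installed (the file name is a placeholder):

```python
# Confirm the exported sculpt carries per-vertex color (ZBrush polypaint).
import trimesh

sculpt = trimesh.load("highpoly_sculpt.obj", process=False)  # placeholder path
print("color kind:", sculpt.visual.kind)   # 'vertex' means per-vertex colors were found
print("vertex count:", len(sculpt.vertices))
```

If the color kind comes back as None, the export didn’t include the polypaint and the bake would come out blank.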

Thank you for the quick reply, @Cryrid. Do you have a link to any good tutorials or resources for baking maps, and for how this will work when transferring the paint from the high-poly sculpt to the retopo version? I guess that’s the part where I don’t have enough of a conceptual understanding of how to put this all together. If the UVs are not the same, then I need conceptual help understanding how to paint on a high-poly version and then get that work to show up on the low-poly retopo.

Any pointers to good resources would be greatly appreciated!

If the UVs are not the same, then I need conceptual help understanding how to paint on a high-poly version and then get that work to show up on the low-poly retopo.

I think you’re imagining that it needs to transfer the color detail by baking it to an image map and hoping the other UV layout is similar, but that isn’t the case. Instead, baking programs cast rays between the two models and create maps specifically for the game model based on what the rays see.

The baking tool starts a set distance away from the low-resolution model and casts rays inwards from there. When a ray intersects the high-resolution model, it records that surface detail (normal direction, polypaint color, etc.) and saves that info into a texture map laid out with the low-poly model’s UV coordinates.

So only the game model needs UVs. The sculpt and the game model just need to occupy the same space (in other words, if you load them both into the same program, they should overlap each other). The rays handle the rest.
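
If it helps to see that idea spelled out, here’s a rough Python sketch of the same ray cast using the trimesh library. The file names, the 0.05 start offset, and sampling per vertex instead of per texel are all my own simplifications; a real baker samples every texel of the low-poly UV layout and writes the result into an image.

```python
import numpy as np
import trimesh

low = trimesh.load("lowpoly_game_mesh.obj", process=False)   # retopo mesh (has UVs)
high = trimesh.load("highpoly_sculpt.obj", process=False)    # sculpt with polypaint as vertex colors

# Start a little outside the low-poly surface and cast rays back inwards,
# the way a baker does from its cage / ray-distance envelope.
offset = 0.05
origins = low.vertices + low.vertex_normals * offset
directions = -low.vertex_normals

locations, index_ray, index_tri = high.ray.intersects_location(
    ray_origins=origins, ray_directions=directions, multiple_hits=False)

# For every ray that hit the sculpt, grab the polypaint color of the triangle
# it struck (crudely averaging the three corners instead of interpolating).
baked = np.zeros((len(low.vertices), 4), dtype=np.uint8)
baked[:, 3] = 255
hit_corner_colors = high.visual.vertex_colors[high.faces[index_tri]]
baked[index_ray] = hit_corner_colors.mean(axis=1).astype(np.uint8)

# A real baker would now write these samples into a texture using the low-poly
# UVs; here they're just attached as vertex colors so the transfer is visible.
low.visual = trimesh.visual.ColorVisuals(mesh=low, vertex_colors=baked)
low.export("lowpoly_with_transferred_color.ply")
```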


Personally, I’d recommend starting with a short demonstration to make sure you understand the process:

  1. Start with a simple sphere, give it some UVs, and save it out as an OBJ to act as the low-resolution game mesh.
  2. Take an identical sphere, but delete the UVs and subdivide it a few times. Polypaint something fast and basic onto it; you can even add a little sculpt detail to the surface if you want. Save this object to act as the high-resolution sculpt.
  3. If you're using xNormal, load the sculpt in the "High Definition Meshes" section (uncheck the 'Ignore per-vertex-color' option so the polypaint data won't be ignored), and then load the low-poly game model in the "Low Definition Meshes" section.
  4. Under the "Baking Options" section, turn on the "Maps to Render: Bake Highpoly's Vertex Colors" option.
  5. Press the "Generate Maps" button and watch it transfer the sculpt's information into texture maps meant for the game model.

You should see it work right away. From there you can dive into tutorials on adjusting the ray distance, the ray direction (based on the mesh’s vertex normals), using cages or blocker meshes, and other options for even finer control over the end result.
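
If you’d rather try the same thing inside Blender instead of xNormal, Cycles’ “selected to active” bake does the equivalent ray cast and can be scripted. This is a rough sketch only, under several assumptions (Blender 2.8+, the object names, a vertex-color layer called "Col", a 2048 px image); adjust it to match your scene:

```python
import bpy

high = bpy.data.objects["HighPolySculpt"]   # sculpt with polypaint imported as vertex colors
low = bpy.data.objects["LowPolyGame"]       # retopo mesh with UVs

# Material on the sculpt that feeds its vertex colors into Base Color.
mat_hi = bpy.data.materials.new("PolypaintMat")
mat_hi.use_nodes = True
vcol = mat_hi.node_tree.nodes.new("ShaderNodeVertexColor")
vcol.layer_name = "Col"                     # assumed name of the imported color layer
bsdf = mat_hi.node_tree.nodes["Principled BSDF"]
mat_hi.node_tree.links.new(vcol.outputs["Color"], bsdf.inputs["Base Color"])
high.data.materials.clear()
high.data.materials.append(mat_hi)

# Material on the game mesh holding the image the bake will land in; the
# active image texture node is where Cycles writes the result.
img = bpy.data.images.new("BakedColor", 2048, 2048)
mat_lo = bpy.data.materials.new("BakeTarget")
mat_lo.use_nodes = True
tex = mat_lo.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = img
mat_lo.node_tree.nodes.active = tex
low.data.materials.clear()
low.data.materials.append(mat_lo)

# Select the sculpt, make the game mesh active, and bake selected-to-active.
bpy.context.scene.render.engine = 'CYCLES'
for obj in bpy.data.objects:
    obj.select_set(False)
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low

bpy.ops.object.bake(
    type='DIFFUSE',
    pass_filter={'COLOR'},          # color only, no lighting baked in
    use_selected_to_active=True,    # cast rays from the low poly to the sculpt
    cage_extrusion=0.05,            # ray start distance; depends on scene scale
    margin=8,
)

img.filepath_raw = "//baked_color.png"   # saved next to the .blend file
img.file_format = 'PNG'
img.save()
```

The cage_extrusion value plays the same role as the ray distance discussed above: too small and the rays miss the sculpt, too large and they hit the wrong surface.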

That was exactly the conceptual explanation that I needed. Thanks a ton! This points me in the right direction.