
ZBrush VS Substance Painter: Normal Map Creation

Hi everyone.

I have been looking to streamline my modelling/texturing workflow to save time.

Workflow & Context

I have been considering using ZBrush to capture my high-res detail in a normal map for my low-res mesh. Currently I use Substance Painter for this, as well as for all my texturing.

These are my reasons for wanting to swap from Painter to ZBrush for normal map creation:

  • I’ve heard that the projections are ‘smarter’ and less prone to errors in tight areas, such as eyelids and mouths

  • I won’t have to export my high res out of ZBrush. This will be wonderful for particularly dense meshes that like to crash Painter when baking maps, among other problems

  • Because exporting isn’t required, I won’t have to run Decimation Master, saving time there too.

Any thoughts on this workflow would be appreciated, and feel free to correct me if I got anything wrong.

BUT HERE’S THE PROBLEM:

I have tested this workflow and everything works as expected except for one thing: I can’t get all the detail from my highest subdiv level into the normal map.

Nothing is actually missing; the normal map just seems slightly blurred and lower resolution than the result I get from Substance Painter. It also looks lumpy in some areas.

I have tried 4k and 8k maps to no avail.

I have watched a few tutorials of people using Painter with normal maps generated in ZBrush. These people either don’t notice the slight loss of detail or don’t seem to mind. Or perhaps, for them, the convenience of the process is worth it?

Some people might think I’m being overly fussy but I can’t justify switching my current workflow for one that saves time but achieves less detail.

Am I doing something wrong?
Is Substance Painter simply better at this than ZBrush?

If you have any experience with this I would love to hear your thoughts! Any and all comments are much appreciated!

Are you using Adaptive Scan Mode? That’s supposed to provide more accuracy in highly detailed areas, at the expense of longer calculation times.

Honestly though, I’d recommend sticking with an external baker like Substance, Marmoset, or xNormal.

You’re going to run into issues with ZBrush that I don’t think you’ll be able to fully solve, because the control simply isn’t there. Normal maps rely heavily on the vertex normals of the lowpoly model; ZBrush doesn’t use them. Since there is no control over smoothing groups and no way to export custom user normals, there is no way to ensure that the mesh’s normals won’t shift during the final triangulation by the renderer. Likewise, tangent-space normal maps rely on the tangent basis being synced between the baker and the renderer; ZBrush doesn’t let you choose which method is used to calculate it. I don’t know if ZBrush even gives you options for anti-aliasing sample quality or output bit depth.
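To illustrate what “tangent basis” means here: it’s the per-vertex frame (tangent, bitangent, normal) that the map’s red/green/blue values are decoded against, and it’s derived from the mesh’s UVs. Below is a minimal Python sketch of the per-triangle tangent math that calculators like MikkTSpace start from; the function name and simplifications are mine, not any tool’s actual API. Tools differ in how they average and orthogonalize these per vertex, so if the renderer rebuilds the basis differently than the baker did, the map gets decoded against the wrong frame and you see seams and shading gradients.

```python
import numpy as np

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    """Per-triangle tangent from positions and UVs: the raw math every
    tangent-basis calculator starts from. Tools then average, split, and
    orthogonalize these per vertex in different ways, which is why the
    baker and renderer must use the same method."""
    e1, e2 = p1 - p0, p2 - p0            # position edges
    duv1, duv2 = uv1 - uv0, uv2 - uv0    # matching UV edges
    det = duv1[0] * duv2[1] - duv2[0] * duv1[1]
    if abs(det) < 1e-12:                 # degenerate UVs: no defined tangent
        return np.zeros(3)
    t = (e1 * duv2[1] - e2 * duv1[1]) / det
    return t / np.linalg.norm(t)

# A triangle mapped straight onto UV space: the tangent follows +U, i.e. +X.
p = [np.array(v, float) for v in ((0, 0, 0), (1, 0, 0), (0, 1, 0))]
uv = [np.array(v, float) for v in ((0, 0), (1, 0), (0, 1))]
print(triangle_tangent(*p, *uv))         # -> [1. 0. 0.]
```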

I wouldn’t say the projections are smarter in tight spaces either; they’re just playing an entirely different game, using the difference between subdivision levels to create their data as opposed to ray-casting. You’re going to run into a completely different set of problems as a result. ZBrush can only use a single model by design, meaning the lowpoly model needs to be subdivided enough to contain all the sculpted detail. Since game-res topology is geared more toward optimization and animation, it isn’t always ideal topology to sculpt on, which means you’ll likely have to do an actual geometric projection at some point to transfer the sculpt’s detail onto the final topology. It’s at this step that you’ll run into projection issues that will require you to spend time manually cleaning things up.

You’re also going to encounter issues with stacked geometry (belts, buckles, wraps, clothing in general, those horns on your dinosaur, etc.), since the ZBrush bake strongly prefers these things being fused together at the surface (and fusing them would make the model immensely harder to revise later on). You also lose the ability to perform other time-saving tricks like using floating geometry.
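To make the “different game” concrete, here’s a toy Python sketch of what a single texel boils down to in a ray-cast baker like Painter, Marmoset, or xNormal. The `intersect` function is a hypothetical stand-in for the baker’s internal ray/mesh query, not a real API; ZBrush skips this step entirely, diffing subdivision levels instead.

```python
import numpy as np

def bake_texel(surface_point, vertex_normal, intersect, max_dist=0.1):
    """One texel of a ray-cast bake, greatly simplified. The baker shoots
    a ray from the lowpoly surface along the interpolated vertex normal
    (outward, then inward) and keeps the nearest highpoly hit. `intersect`
    takes (origin, direction, max_dist) and returns (distance, hit_normal)
    or None."""
    best = None
    for direction in (vertex_normal, -vertex_normal):
        hit = intersect(surface_point, direction, max_dist)
        if hit is not None and (best is None or hit[0] < best[0]):
            best = hit
    if best is None:
        return vertex_normal  # nothing in range: the texel stays 'flat'
    # Floating geometry works for free here: a ray hits whatever is nearest,
    # fused to the surface or not. A real baker would now convert this
    # world-space normal into the lowpoly's tangent space before writing it.
    return best[1]
```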

For me it’s in the same boat as using ZRemesher and UV Master for your final result. If it’s just a simple model and you’re only striving for “good enough”, then these methods can eke past that passing grade of acceptability and save you some time in the process. But for the best possible results, I’m of the mindset that it can be better (and even quicker) in the long run to learn the more advanced workflows.


Hello Cryrid.

Thanks so much for your informative response! This is great info and answered my question.

I did use Adaptive Scan in my testing, and to be honest I couldn’t really notice a difference. I’m not sure whether or not it was used in my screenshots, though.

This ZBrush workflow might not be the time saver I was hoping it would be, as I still need to export my model to and from Maya to create UVs on my low res; I much prefer the control that Maya offers.
The issue of exporting the high res still remains, but at the end of the day Decimation Master does a wonderful job of making this easier.

Once again thanks for the quick reply and great info, very much appreciated!!!