Hello to all the ZBrush users out there. I would like to pose a question about the use of displacement maps. Does anyone actually ever get a 90-100% successful result in their chosen 3D app (Maya, 3ds Max, etc.)? I ask after viewing the ZBrush forum for the best part of two years. I see a lot of very good work, but the vast majority is rendered from ZBrush, with very little, if any, being rendered in other 3D apps. I myself use ZBrush and Maya, and can get a result, but these results are far from perfect. So, over to you guys: what are your thoughts on this subject?
Map rendering, creation, and organization is all a bit of an art, since it involves a fundamental transition from 3D geometry to a 2D map representing geometry (or light's interaction with geometry). Add to this the complexity of how other rendering systems interpret these maps, and it's little wonder it's hard to get perfect results. The question, perhaps more rationally approached, is whether we can get acceptable results.
I think, for many uses, the answer is still yes, but not without work. I find myself needing to do a lot of hand-crafting and fiddling to get the right maps, and this becomes proportionally more difficult the bigger the transition from higher to lower geometry. The reason is that the bigger the jump from hi-res to low-res, the more distortion is introduced into the mapping, whether it is a normal, displacement, or bump map, or even the high-frequency detail in a color map.
Currently, almost all forms of UVW mapping introduce some form of distortion. Even "tile"-based mappings introduce some distortion, because often the angle of the polygon edges is only a representation of a smooth surface and not really the joining of two planes.
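To make "distortion" a bit more concrete: one common rough metric is the ratio between a face's area in 3D and its area in UV space. If that ratio is uniform across the mesh, texels are spent evenly; where it varies wildly, map detail gets stretched or compressed. A minimal sketch in plain Python (hypothetical triangle data, not tied to any particular package):

```python
import math

def tri_area_3d(a, b, c):
    # Half the magnitude of the cross product gives the 3D triangle area.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return math.sqrt(cx * cx + cy * cy + cz * cz) / 2.0

def tri_area_uv(a, b, c):
    # Unsigned 2D area via the shoelace formula.
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def stretch_ratio(p3d, puv):
    # 3D area per unit of UV area. A uniform value across faces means
    # even texel density; large variation means mapping distortion.
    return tri_area_3d(*p3d) / tri_area_uv(*puv)

# Two faces with the same 3D size but different UV sizes: the second
# packs the same surface into a quarter of the UV area.
face_a = ([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
          [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)])
face_b = ([(0, 0, 1), (1, 0, 1), (0, 1, 1)],
          [(0.5, 0.5), (0.75, 0.5), (0.5, 0.75)])
print(stretch_ratio(*face_a))  # 4.0
print(stretch_ratio(*face_b))  # 16.0
```

In practice the UV tools in ZBrush, Maya, etc. are doing far more sophisticated versions of this bookkeeping, but the idea is the same: displacement detail baked into the squeezed face is sampled at a quarter of the density.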
My techniques involve:

1. Evaluate which technique for creating UVWs will give me the cleanest (i.e. least distorted) mapping while still giving me the freedom to correct maps and UVs.

2. Evaluate, based on my renderer, time requirements, and the type of image I want, what degree of geometry reduction I can tolerate. I often don't take the lowest "level 1" I can find if I don't need that much reduction, because of the distortion involved going from hi-res -> displacement map + low-res -> micropoly displacement through UVs back to hi-res. I try to pick the best balance here, since each of these transitions will introduce distortion of the original sculpt.

3. Based on 1 & 2, create and tweak the best, least-distorting UV map for the usage and rendering system I'm going for.

4. Generate displacement or normal maps, either through ZB or other tools like xNormal.

5. Test & tweak.

6. Repeat 5 until acceptable. If after a few reps I don't get there, I often go back to 1 and ask myself if I'm choosing the right things to control the distortion of the mapping.
-K
One thing that I can certainly recommend is utilizing 32-bit displacement mapping in Maya for increased accuracy. You get a huge file on disk, but the payoff is worth it, plus you don't have to fiddle with alpha gain and offset values in the shader since the scale is baked right into the map. Other than that, you are only limited by your subdivision level at render time. I will often add normal or bump maps to help enhance the displacement map as well.
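The reason the 32-bit route skips the alpha fiddling: a float map stores the actual signed displacement value directly, while an 8- or 16-bit map can only hold 0..1 and relies on a scale and offset in the shader to reconstruct signed displacement, losing precision to quantization along the way. A small illustration in plain Python (not Maya-specific; the numbers are made up):

```python
import struct

def encode_8bit(d, scale, offset):
    # 8-bit map: remap signed displacement into 0..255 using a
    # shader-style scale ("alpha gain") and offset ("alpha offset").
    return max(0, min(255, round((d - offset) / scale * 255)))

def decode_8bit(q, scale, offset):
    # Shader-side reconstruction of the displacement value.
    return q / 255.0 * scale + offset

def roundtrip_f32(d):
    # 32-bit float map: the value itself is stored; scale is baked in.
    return struct.unpack('<f', struct.pack('<f', d))[0]

d = -0.3137                # some sculpted displacement in scene units
scale, offset = 2.0, -1.0  # map range set up to cover -1..+1

q = encode_8bit(d, scale, offset)
print(decode_8bit(q, scale, offset))  # close to d, but visibly quantized
print(roundtrip_f32(d))               # exact to float32 precision
```

The quantization error in the 8-bit path is what shows up as stepping or banding on smooth displaced surfaces, and it is also why the gain/offset values have to be set just right; the float map sidesteps both problems at the cost of file size.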
If you would like to know more, Scott Spencer has written a great ZBrush-to-Maya pipeline guide, as well as the book ZBrush Character Creation, published by Sybex and available at your local bookstore or through Amazon.com. If you haven't read and worked through the process in either of those resources yet, then I highly recommend that you do. Using these documents as a starting point and guide, you can get renders out of Maya that look just as they do in ZBrush.
Best of luck to you!
You might be able to use Mudbox on Mac when it ships in February.