ZBrushCentral

The thing about normal maps

Hi all :)

I recently found this thread on CGTalk:

http://cgtalk.com/showthread.php?t=129627

about normal maps and displacement maps.

Ok, first of all, I noticed that on the official ZBrush site they list normal maps and displacement maps as separate features. All this while I thought they were the same thing, but apparently not :P Can anyone share some knowledge about that?

Secondly, in that thread they mentioned that displacement maps have three different ways of offsetting the vertices (or is it pixels? I'm confused :( ): tangent space, object space, and world space. I'd like to know which space ZBrush exports in.

I've read on some forums that objects exported from ZBrush don't deform well in certain 3D software (LightWave, in the case I read about). Is that specific to that software? I'm not sure whether Maya or Max deform them well, but maybe we can start our discussion about the normal map thing here?

It sounds like you’re still not quite “getting it,” but I’ll try to help.

Displacement maps allow the rendering engine to actually move points, based upon the grayscale values of the map. This can change an object’s silhouette. Normal maps rotate a point’s surface normal, but the point isn’t actually moved. This means that although the silhouette of the object doesn’t change, the features within the silhouette are remarkably lifelike.
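The distinction above can be sketched in a few lines of Python (hypothetical helper names, not any real engine's API): displacement actually moves the point, while a normal map leaves the geometry alone and only swaps the shading normal used for lighting.

```python
# Minimal sketch of the distinction above (hypothetical helpers,
# not a real engine API).

def displace(point, normal, height, scale=1.0, midpoint=0.5):
    """Displacement: actually move the vertex along its normal.
    'height' is the map's grayscale value in [0, 1]; values above
    the midpoint push out, values below pull in."""
    offset = (height - midpoint) * scale
    return tuple(p + n * offset for p, n in zip(point, normal))

def decode_normal(rgb):
    """Normal map: the geometry is untouched; the 8-bit RGB texel
    is just decoded back into a shading normal in [-1, 1]."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)

# A white texel pushes the vertex out by half the scale:
print(displace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 1.0))  # (0.0, 0.0, 0.5)
# The "flat" normal-map color decodes to roughly +Z:
print(decode_normal((128, 128, 255)))
```

Note how `decode_normal` never touches a vertex at all, which is exactly why the silhouette can't change.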

There are three ways that displacement maps can be applied by the rendering engine. They can move the existing points. This is generally not too useful because you need to have a very high resolution model to get good detail. They can subdivide the object at render time and THEN move the points. This is a lot better, because you can animate a low resolution model and only deal with high res at render time. Lastly, the engine can displace the rendered pixels. This is the best form of displacement, since it’s no longer related to the points at all.
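The second method (subdivide first, then displace) is easy to see in a tiny 1D sketch, using made-up helper names: a coarse "mesh" of two points has no room for detail until it is subdivided.

```python
# A tiny 1D sketch of "subdivide at render time, THEN displace"
# (hypothetical helpers, not any renderer's actual API).

def subdivide(points, levels=1):
    """Insert a midpoint between each pair of neighboring points."""
    for _ in range(levels):
        out = [points[0]]
        for a, b in zip(points, points[1:]):
            out.append((a + b) / 2.0)
            out.append(b)
        points = out
    return points

def displace_1d(points, heightmap):
    """Offset each point by the heightmap sampled at its position."""
    return [p + heightmap(p) for p in points]

coarse = [0.0, 1.0]                 # two vertices: no room for a bump
fine = subdivide(coarse, levels=2)  # now five vertices
bump = lambda x: 0.1 if 0.4 < x < 0.6 else 0.0
print(displace_1d(fine, bump))      # only the middle point moves
```

Displacing `coarse` directly would miss the bump entirely, which is the whole argument for subdividing before displacing.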

Tangent space, object space, and world space apply only to normal maps. ZBrush is capable of creating both tangent space and object space maps. ZBrush also offers specific settings for how the normal map colors are handled, since different rendering engines interpret the colors in their own way.

Object Deformation in other programs is dependent upon the mesh structure. Parts of the mesh that have many closely-placed edge loops tend to deform better than parts of the mesh with no edge loops. ZBrush will always create a mesh with the most uniform polygon distribution possible. This is great for still work, but not always what you want for animation. The Edge Loop feature can be used in ZBrush to add those extra loops to the topology, so that the model can be animated with smooth deformations.

A displacement map changes the actual geometry of the mesh, pushing it in/out depending on the grayscale value. The mesh needs to be finely tessellated for good results.
A normalmap stores the direction that each pixel on a mesh is facing and encodes it in an RGB texture. This information can then be applied to a low-res proxy mesh to recreate the lighting of its high-res counterpart. A normalmap does not make any changes to the geometry; it only modifies lighting at the texel level. With further pixel shader trickery you can create a parallaxing normalmap, which takes an additional depth parameter (from a grayscale map) and scrolls the normalmap lookup based on the camera's viewpoint. That way the surface seems to have real depth.

Object and Tangent space describe different ways of calculating the normal of a texel. In object space, each texel is encoded using the same coordinate system as the object that it is derived from. In tangent space, a local coordinate system is used to compute the direction of the light. You can think of it as a bend modifier that’s applied on top of the existing face. We already know the direction of the face - a tangent space normal map now bends the light in a slightly different direction based on the normal of the face.
Tangent space normalmaps can be shared across multiple surfaces (although it is usually not recommended), because they only modify the direction of an existing normal. Object space normalmaps cannot be shared across surfaces, because the lighting information is stored in an absolute format.
You can easily see the difference if you compare the tangent space and object space normalmaps for the same mesh - the object space map uses all the colors of the rainbow, whereas the tangent space map is mostly shades of blue.
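The "bend modifier" idea above can be shown concretely. This is a sketch with invented helper names: a tangent-space texel is rotated by the surface's local TBN (tangent, bitangent, normal) basis, whereas an object-space texel would already be the final answer.

```python
# Sketch: turning a tangent-space normal-map texel into an
# object-space normal via the surface's TBN basis.
# (Hypothetical helpers, not any engine's real API.)

def decode(rgb):
    """Map an 8-bit RGB texel back to a vector in [-1, 1]."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)

def tangent_to_object(texel, tangent, bitangent, normal):
    """Rotate a tangent-space normal into object space: each texel
    component scales one axis of the local surface basis."""
    tx, ty, tz = texel
    return tuple(tx * t + ty * b + tz * n
                 for t, b, n in zip(tangent, bitangent, normal))

# The "flat" tangent-space color decodes to roughly (0, 0, 1):
# no perturbation, so the result is just the surface normal itself.
flat = decode((128, 128, 255))
print(tangent_to_object(flat, (1, 0, 0), (0, 1, 0), (0, 0, 1)))
```

This is also why a tangent-space map can be reused on another surface: the same perturbation just gets applied to that surface's own basis.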

ZBrush supports both object and tangent space, depending on which options you choose.

Hey, thanks aurick :)

Wow, how little I knew about these things. And I thought I'd figured it all out months back.

So now I sort of understand why people use the words “normal” and “displacement” together: mixing both is how you achieve (or fake) the best detail from a somewhat low-poly model.

There are three ways that displacement maps can be applied by the rendering engine. They can move the existing points. This is generally not too useful because you need to have a very high resolution model to get good detail. They can subdivide the object at render time and THEN move the points. This is a lot better, because you can animate a low resolution model and only deal with high res at render time. Lastly, the engine can displace the rendered pixels. This is the best form of displacement, since it’s no longer related to the points at all.

For this one, is there a way we can preset the method we want in ZBrush? Or is it more something for the programmer to set up in 3D game programming?

you guys rock man!

ZBrush cannot dictate the ways that your software renders an object. All that ZBrush can do is give you the options to output a good quality map. In fact, the displacement map looks exactly the same regardless of the rendering engine that will be applying it. All rendering engines are unique, and it’s up to you to understand the capabilities of your particular engine. For some engines, you might get the best results using displacement mapping for medium frequency details and bump or normal mapping for high frequency details. Other engines will be different.

However, as a rule you will find that game engines do not use displacement mapping at all. It’s normally tangent space normal mapping, and even then is usually only used for the most important elements in a scene – such as the characters. The thing to remember about game engines is that memory is at a premium. Every texture requires memory, and every time the texture doubles in size it requires 4 times as much memory (1x1=1, 2x2=4, 4x4=16 – you see what I mean). The more textures that are used and the higher quality that they are, the fewer objects can be placed in the scene in order to stay within the memory budget. Since normal maps are a second texture that is used on the object, you can see why normal map usage is limited even though it gives such great results. Of course, the game industry changes as fast as computing technology does. So what holds true right now probably won’t be accurate in just six months. :) We’re seeing normal maps appearing more and more frequently in games, and there’s already talk of real-time engines that can support displacements.
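The memory arithmetic in the post above is easy to verify with a quick sketch (assuming an uncompressed RGBA texture; real engines use compressed formats, but the scaling is the same):

```python
# Sketch of the texture-memory math above: doubling a square
# texture's side quadruples its footprint.

def texture_bytes(side, bytes_per_pixel=4):
    """Uncompressed RGBA footprint of a side x side texture."""
    return side * side * bytes_per_pixel

for side in (256, 512, 1024, 2048):
    print(f"{side}x{side}: {texture_bytes(side) / 1024 / 1024:.1f} MB")

# Each doubling costs 4x, exactly the 1, 4, 16 progression above:
print(texture_bytes(2048) // texture_bytes(1024))  # 4
```

A 2048x2048 RGBA map is 16 MB uncompressed, so adding a normal map on top of a color map really does double an object's texture budget.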

Hopefully someone can help here… I’m having problems rendering normal maps in mental ray for Maya. They work perfectly in Maya’s standard renderer.

I’ve connected the file node’s outColor to the material’s normalCamera input.
Any clues? Is there an extra step I should be doing for mental ray? Thanks in advance.