ZBrushCentral

Pixologic Release - ZMapper (PC) - Now Included in ZBrush 3.1

Thank you very much:+1: :+1: :+1:

my test

Attachments

Snap3.jpg

I have been wanting this FOREVER.

Greatest program and greatest online community ever is here

Although I’m not finished with this model, I wanted to make a quick test to see how the ZMapper plug-in would work with Lightwave 8.5.
The image on the left is the low poly model in ZBrush at around 16,000 polys. The image on the right is the Lightwave render using the TB_NormalMap.p plugin. There are a few anomalies in the mesh, but I think that’s because I have a little bit of overlapping mesh… that’s a no-no. Overall I’m really happy with it.

Hi!

This is a great tool! :slight_smile:
Thanks a lot Pixolator. :+1:

I made a quick and rough test in C4D. There are some issues visible in the ears. Any tips on how to fix these?
Cheers / Alex

Attachments

test.jpg

Badtastic - Please post questions here. What you are seeing is the result of your settings not being set correctly for Cinema 4D. As there is no Cinema 4D preset at this time, this will require trial and error and the support of the Cinema 4D community. :slight_smile:

Cheers,

Ryan

Here is mine. I used the Maya tangent space best quality settings.

I used my bump map in the alpha slot and set as a displacement before NM generation time. I think it said it rendered over 9 million polys.

This is a 4096 X 4096 normal map!

tort_NM_wire_01.gif

Peace,

NickZ.

I love this tool! Thanks ZBrush.

Ok, I thought I would write a quick tutorial on Max 8 and also post my Max 8 settings.

So do your normal stuff. Make your low poly model, sub-d it up in ZBrush. Then with your model at its lowest subdivision level, run ZMapper. Load up my presets, which can be found here

http://www.jessebrophy.com/images/ZMapperCustomConfig_Max_8.zmp

The more samples and divisions you use, the better the map. Now export your normal map. Load up Max and import your low poly model. Now add a material to the model. In the bump slot, add a Normal Bump map. In the normal slot that comes up, load in your newly made normal map. Add a light and render. There is no need to change any of the normal settings as long as it is set to tangent.

Let me know how it works and if I should change anything in this brief description. Good luck.

Can someone clearly explain the difference between bump and normal maps in non-realtime rendering?

There’s a chapter about it in the PDF, but to be honest… I don’t get it :o
I don’t see any difference…
Do you get more volume information using a normal map instead of a bump map for cinematics?

Sebcesoir - I can’t explain the difference but I can tell you what I was told and what has helped me.

A normal map can do everything that a bump map does to the same level of quality that a bump map does it.

However, there are things that a normal map can do that a bump map can not.

Normal maps are not necessarily better than bump maps in terms of quality.

They are better in terms of what they are able to do.

Hope that helps. :slight_smile:

Thanks for all your great work!

Ryan

Sebcesoir - I shall try to answer this as I understand it. I could be wrong so please feel free to point out my mistakes.

When you use a height map as a bump map in a 3D program or an engine, it is essentially converted into a normal map. The engine (I’ll use the word engine from here on to describe any rendering package or game engine) converts the height map into a normal map by looking at the height change from pixel to pixel and determining an angle. So a normal map is, essentially, the angle created by comparing each pixel to its neighbors. This happens on the fly in 3D programs, but if you can remove this computation at render time, in theory, you can speed things up.

For example, in a 3x3 map the middle pixel has to compare all its neighboring pixels to determine its angle. This creates a normal map at the same resolution as the bump map, 3x3 in the end, but some fidelity is lost because each angle is derived only from the surrounding pixels, and the pixels on the perimeter have fewer neighbors to determine their angles from. So a normal map computed with ZMapper, which gets its detail from the high-res geometry itself, has much more detail than one created from a same-size bump map in, say, Photoshop with Nvidia’s plugin.
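The pixel-to-pixel comparison described above can be sketched in a few lines of Python. This is only a minimal illustration of the idea; the function name and the use of NumPy's gradient are my own choices, not anything from ZMapper or the manual:

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    # Slopes from comparing each pixel to its neighbors
    # (finite differences along the two image axes).
    dy, dx = np.gradient(height.astype(np.float64))
    # A steeper slope tilts the normal further from straight up.
    nx = -dx * strength
    ny = -dy * strength
    nz = np.ones_like(dx)
    n = np.stack([nx, ny, nz], axis=-1)
    # Normalize: renderers expect unit-length normals.
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# A perfectly flat height map gives normals pointing straight up (0, 0, 1).
flat = np.zeros((3, 3))
normals = height_to_normal_map(flat)
```

Note how every output pixel depends only on its neighbors in the height map, which is exactly where the fidelity loss described above comes from.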

I hope that made some sense and was somewhat correct.

I’ll try to give it a shot. Maybe it’ll help or just make you more confused.

In short, a normal map is a bump map on steroids.

Displacement map:
A displacement map is a greyscale image that is used to “actually” change the shape of the model. This is probably the best method to use, but there’s one big drawback. At render time, the model must be subdivided close to or right at the same number of polygons that your high-poly model was at when you created your displacement map. The map then moves the surface in 3D space to create the needed detail. The downside is that it takes FOREVER to render your model, making it unusable in games.
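As a rough sketch of what “moving the surface” means: displacement pushes each vertex along its normal by the greyscale height value. The function below is hypothetical, purely for illustration:

```python
def displace_vertex(position, normal, height, scale=1.0):
    # Move the vertex along its (unit) normal by the greyscale
    # height value, which is what a displacement map does at render time.
    return tuple(p + n * height * scale for p, n in zip(position, normal))

# A vertex on a flat surface, pushed outward by a mid-grey height of 0.5:
moved = displace_vertex((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5)  # (0.0, 0.0, 0.5)
```

This is also why the mesh must be subdivided so heavily: you can only displace the vertices you actually have.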

Bump map:
A bump map is a greyscale image as well, used to “fake” the way light and shadow fall upon a surface normal. It’s great for making it look like there are bumps, wrinkles, and cracks in a surface. The way light is calculated using the greyscale image is done by using the pixel information to calculate its “vector” (location in 3D space). With a greyscale image, though, this info is “relative” to the actual normal itself.
This is why you have to use the same greyscale image map with the same model that was used to create it (unlike a normal map). In other words, this vector information is “stuck” to the actual surface normal.
You can notice this in some games. If you see bullet holes or craters in a wall, it really looks like there’s depth to it. But if you walk up close to the wall and look at it at an angle, you’ll see that the wall still looks flat. This is what I mean when I say that this bump map information is “relative” to the actual normal itself.

Normal map:
This works much in the same way as a bump map except that it uses the Red, Green, and Blue color channels of the image to calculate the location vectors ( X, Y, Z ) in 3D space. This will tell the renderer how light and shadow will be calculated when you render your image. This method is better because it allows much more detailed normal vector information to be encoded.
This information, unlike bump map information, is not relative to the surface normal (is not stuck to the surface of the normal), which is why you can apply this normal map image to a different model. Still, it has its limitations. In other words, you can’t take a normal map image from a highly detailed monster character and apply it to a stick figure type of model and expect the same results.
If you’re playing the same game, looking at the same wall but this time the craters or bullet holes are created using a normal map, you can walk up to the wall and look at it at an angle and it will look like the wall really has craters or holes in it.
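The RGB-to-vector encoding mentioned above is simple to decode. Here is a minimal sketch assuming the common 8-bit mapping of 0..255 onto -1..1 (my own illustration, not something from the manual):

```python
def decode_normal_pixel(r, g, b):
    # Map each 0..255 color channel to a -1..1 vector component.
    to_component = lambda c: (c / 255.0) * 2.0 - 1.0
    return (to_component(r), to_component(g), to_component(b))

# The familiar flat "normal map blue" (128, 128, 255) decodes to
# a vector pointing almost exactly straight out of the surface.
x, y, z = decode_normal_pixel(128, 128, 255)
```

Three full channels of direction per pixel is why a normal map can encode far more detail than a single greyscale height value.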

There are still a few things that I’m not completely sure of myself, but I hope that this is the basic understanding of it.

Comment and question.
Obviously this is not what we call “light reading”.
However page 88 has the following footnote.

13 Such as your author, who would dearly love to explain the details, but will restrain himself as he understands that sentences like, “renderers generally assume that a surface normal is pre-normalized, so using a non-unit vector in a normal map can affect the final light intensity,” do not necessarily inspire fascination in the user community.

People have different ideas of what is funny.
Have to admit choking and gagging on a sip of
Diet Pepsi when I hit this.
Question, who wrote this?

Pixolator, obviously.
Hope you’ve recovered from half-choking on your drink…I for one found the manual easy to read, and this one sentence made me smile.

haha almost sounds like something mr. petroc would say.

but have been wondering myself who has written it…very well done!

Bump maps are not affected by directional lighting, but normal maps are (i.e. moving a light in your scene to light an object with only a bump map will not affect the shadows of the “bumps.” If you use a normal map, the shadows caused by the detail in the normal map will change accordingly with the positions of your lights… soooo, it’s more realistic.)
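The lighting math behind this (and behind the footnote quoted earlier about non-unit normals) is just a dot product. A minimal sketch, with names of my own choosing:

```python
def lambert_intensity(normal, light_dir):
    # Diffuse (Lambert) intensity: dot product of surface normal and
    # light direction, clamped so light from behind contributes zero.
    return max(sum(n * l for n, l in zip(normal, light_dir)), 0.0)

light = (0.0, 0.0, 1.0)
facing = lambert_intensity((0.0, 0.0, 1.0), light)    # 1.0
# A non-unit normal scales the result, over-brightening the surface --
# the effect the manual's footnote warns about:
too_long = lambert_intensity((0.0, 0.0, 2.0), light)  # 2.0
```

Because the per-pixel normal enters this dot product directly, moving the light changes the shading of every encoded bump, which is what makes normal maps respond to light direction.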

Is there any way to use a low res mesh that isn’t the base mesh for the high res? For example, use a poly reduction program to generate your low poly after you have finished the high res, and then use your high res and newly generated low res to get a normal map?

sumpm1 yes, Chapter 3 and Section 5.6 of the manual.

this is great, i normally make my meshes, then take them into ZBrush for detailing, then once that’s done, go back to my lower res, re-export that, and re-import into my 3D app to test my renders. This will save oh so much time.

for the love of god pixologic, MAKE US A GAME ENGINE!

Wow, that is a pretty extensive manual!

Thanks

I think the coolest feature is that you can bake your bumpmap into the normalmap. This is impossible to do anywhere else. I have been trying to find a way to do this for a long time. Instead of doing all of your fine details in geometry, you can do them quickly in the bumpmap and keep them!