Hi, I'm a ZBrush newbie, so bear with me if this stuff seems obvious. I would like to take a photographic source, say a picture of a concrete wall with cracks and pits and such, and turn it into a wall texture for a per-pixel-lighting game engine: the photo itself as the diffuse channel, and a normal map derived from the photo as the normal channel. There are other ways of doing this (the nvidia normal map tool, etc.), but I would really like to try a ZBrush-based approach to see if it produces better results.
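For context, the kind of conversion I'm after is the usual one where the greyscale image is treated as a height field and a normal is derived from the local slope at each pixel, which is more or less what I understand the nvidia tool to be doing. Here's a rough sketch of that idea in Python/numpy (the filenames and the strength value are just placeholders), just to show what I'm hoping ZBrush can improve on:

```python
import numpy as np
from PIL import Image

# Treat the greyscale photo as a height field in the range 0.0 - 1.0.
# ("wall_grey.png", "wall_normal.png" and strength are placeholders.)
height = np.asarray(Image.open("wall_grey.png").convert("L"), dtype=np.float32) / 255.0

strength = 2.0  # how strongly brightness differences are read as slope; tune to taste

# Finite differences approximate the surface slope at each pixel.
# For a 2-D array, np.gradient returns (d/dy, d/dx).
dy, dx = np.gradient(height * strength)

# The per-pixel normal is perpendicular to that slope: (-dx, -dy, 1), normalized.
normal = np.dstack((-dx, -dy, np.ones_like(height)))
normal /= np.linalg.norm(normal, axis=2, keepdims=True)

# Remap from [-1, 1] into 0-255 RGB, the usual tangent-space normal map encoding.
rgb = np.round((normal * 0.5 + 0.5) * 255.0).astype(np.uint8)
Image.fromarray(rgb, "RGB").save("wall_normal.png")
```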
I have tried two approaches in ZBrush and neither has worked. I'll describe the problems with each, but if there is another way I haven't considered, please let me know!
1. Photograph -> Greyscale -> Z channel in ZBrush
I would like to use a greyscale version of the photograph (prepared in Photoshop) as a basis for further tweaks in ZBrush, but I can't figure out how to get ZBrush to load the greyscale image into its Z channel. Essentially what I want to do is this:
a. import RGB image -> RGB channels in ZBrush
b. import greyscale image -> Z channel in ZBrush
c. edit the Z channel in Zbrush
d. export Z channel information as a normal map
I can’t figure out how to do b.
The other method I tried was this:
2. Greyscale image -> import as Alpha in ZBrush -> polymesh
but no matter what I do (various smoothing/resolution settings when generating the polymesh, etc.), the mesh always comes out pock-marked, with little spots where vertices are inexplicably much higher or lower than their neighbors, even though no such micro-peaks or valleys exist in the source image.
Even if I fixed that problem, though, how would I align the mesh back up with the original photograph so I could see it while making further edits, and so that the normal map it exports at the end lines up correctly with the diffuse texture?
Thanks for any help.