ZBrushCentral

Questions and Troubleshooting for Displacement Exporter

Yes, it does. Ideally you have at least one pixel in your normal map for every polygon in your highest mesh level. Obviously having more than one pixel to represent each polygon is better still, but if you have fewer, the detail is lost and cannot be represented in the map.
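As a rough sanity check of that one-pixel-per-polygon rule, here is my own back-of-the-envelope Python (not from the Pixologic docs): a square map of side R holds R x R pixels, so you can solve for the smallest map that covers your poly count.

```python
import math

def min_map_resolution(polygon_count):
    """Smallest square power-of-two map size that gives at least
    one pixel per polygon (total pixels = resolution ** 2)."""
    side = math.sqrt(polygon_count)          # pixels needed per axis
    return 2 ** math.ceil(math.log2(side))   # round up to a power of two

# A 1-million-polygon top subdivision level needs at least a 1024 map:
print(min_map_resolution(1_000_000))  # -> 1024
```

So a 2048 map comfortably covers meshes up to about 4 million polygons; beyond that you start losing detail.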

This is me paraphrasing some Pixologic docs.

I usually bypass trying to super divide my mesh and use a bump map for the high frequency details because of said instability. With an extra gig I can see really exploiting this.

I even found some tools worked fine until I saved them, then tried to reopen them on the same system only to get an insufficient memory error and it closed. Such are the perils of only 2 gig : )

I suppose this isn't as applicable to normal-mapped models, but being able to bake bump info into the normal map in upcoming versions of Zmapper is going to really change that.

S

Ahhh, and you know, I still model heavily in Maya : )
So that has a lot to do with it : P I suppose I will break out the calculator next time and model with ZBrush poly limits in mind.

S

Hello everybody.

Spaz8,

I see your point, and I know that there are many renderers that support micro displacements, huge polygon datasets, etc., but Mental Ray does lots of things that are part of my working method, and I cannot switch to another renderer without sacrificing tons of work, mainly to do with workflow. Also, I like the look of MR renders, and I haven't seen a better skin shader in any other renderer that I can use with Maya.

The main point I was trying to make in my last post was that I cannot (yet) justify using multi displacements, because the amount of detail I can produce in ZB with one map is already in excess of what I can render in MR. An analogy would be having an old computer with 16 MB of RAM and a 1200 dpi scanner producing an image that is a zillion meg. She just cannae handle it, Captain!! Ahem, sorry, but do you get my point?

Scott,

… 8, 9, 10, coming, ready or not! I love a game of hide and seek! Now, for the reason I stated above, I won't ask too much about multi displacements at this point, but I think I understand the gist, having read all the posts.

I have, however, found by using the MR contour shader to show the wireframe of the displaced objects that parametric and spatial yield identical results (the poly counts match too). The only parameter that seems to govern the spatial method is Min Subdivisions, which mirrors N Subdivisions for parametric. I am assuming that you still have Feature Displacement turned off, as in your video. Give it a try.

I have been using .map files for a while now, mainly because it is the only way to get Maya to read 32-bit files, but thanks for the tip.

Excellent advice on the poly count, thank you.

Ktaylor,

Thank you for your great description of multi-displacements. Who knows, if I get a new machine or start using a different render engine I may be able to use it :frowning: But you said something in your post that reminded me of a technique I was working on.

The basic idea: say you have a finished head with several sub-d levels and you want to add a body (I bet loads of people have wanted to do this). First, make a good displacement map of the head. Then export the level 1 geometry as .obj, import it into Maya, add your body (merging points at the neck), then re-export the mesh. The big trick is to take all the UVs for the body and squash them into a corner of the displacement map that is 50% gray. This way they will get zero displacement in ZB. Back in ZB, import the new mesh, subdivide, and apply the displacement map. You can now continue modelling!
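The UV-squashing step boils down to scaling the body's UVs down and parking them in one corner of UV space. A minimal Python sketch of that remap (my own illustration with a hypothetical helper, not a ZBrush or Maya command; it assumes UVs in the usual 0-1 range):

```python
def squash_uvs(uvs, corner=(0.0, 0.0), scale=0.01):
    """Scale a list of (u, v) pairs down and move them into one
    corner of UV space, so every body UV samples the same tiny
    region of the displacement map (which should be 50% gray there)."""
    cu, cv = corner
    return [(cu + u * scale, cv + v * scale) for u, v in uvs]

body_uvs = [(0.2, 0.8), (0.95, 0.1)]
print(squash_uvs(body_uvs))  # all UVs now within 0.01 of the corner
```

In practice you would do the equivalent scale-and-move in Maya's UV Texture Editor; the point is only that every body UV ends up inside the 50% gray patch.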

Bear in mind that this is a very short version of the technique and I haven’t yet worked out all the glitches but I have had successful results. I think that a combination of multi-displacements and this one could be very powerful. I will be posting a workflow on that soon.

By the way, that was an amazing coincidence with testing the displacements! I went through a few techniques before settling on that one (using displacements-to-polygons and the heightfield tool, but they were not accurate enough). You know, we could be living parallel lives and not even know it!!! Enough of that now, as I don't want to scare you.

On a slight tangent, I have another technique that uses the ZBrush modelling tools to help lay out perfect UVs (it is particularly good for heads). Now, the thing is that I would like to put this on the 'ZBrush in production pipelines' board, but it doesn't have images yet, so you guys tell me where you think is the best place?

Cheers dudes, Shadowfiend.

Use the Highend3D post tutorial link… It has a WYSIWYG editor that lets you build your tut online, creates a forum for it where people can ask questions, and gives you an easy way to update and maintain the tut!

As for Spatial, I find it gives me better control, since I can be assured a tri edge is 1 pixel/unit long or less… This was especially apparent on the dinosaurs I did, but frankly, with 32-bit maps the quality is such that in some cases I didn't even need to subdivide back to the 1-mil point of my ZBrush mesh.

Spatial is in MR just for backwards compatibility. From what I understand, it is the same as LDA.

EDIT

To clarify the Length attribute and its purpose: it will not add subdivision across the board, but if you have a surface with spikes or protrusions it guarantees that the edges tessellated there are no longer than what you specified. It helps produce a higher-quality surface in the render… You will see the benefit a lot on the aforementioned scales and spiky surfaces. Parametric is a uniform subdivision that breaks a surface up into patches and offers no way to adapt the tessellation. Length is adaptive to the needs of the specific displacement.
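The uniform-versus-adaptive distinction can be sketched in a few lines. This is a toy 1D illustration of my own, not mental ray's actual tessellator: parametric halves every edge a fixed number of times, while a length criterion keeps halving only where the displaced edge is still longer than the limit.

```python
def parametric_subdivide(edge_len, n):
    """Uniform: every edge is halved n times, regardless of displacement."""
    return edge_len / (2 ** n)

def length_subdivide(edge_len, displaced_scale, max_len):
    """Adaptive: keep halving until the *displaced* edge length
    (the edge stretched by the displacement) is under the limit."""
    splits = 0
    while edge_len * displaced_scale > max_len:
        edge_len /= 2
        splits += 1
    return splits

# A flat region (stretch ~1x) needs few splits; a spike that
# stretches edges 4x needs more, and only that spike pays for them:
print(length_subdivide(1.0, 1.0, 0.1))  # -> 4
print(length_subdivide(1.0, 4.0, 0.1))  # -> 6
```

That is the whole point: the spiky areas get the extra tessellation, and the flat areas do not.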

END EDIT

I am curious, do you have a displacement approximation attached to your mesh?

S

So far I haven't used the displacement approx with a SubD approx. I thought I read somewhere that if you use the SubD approx it doesn't even evaluate the displacement approximation anyway. I'm not sure, but I think I read that somewhere.

So I usually use the parametric SubD node and just increase the divisions if I need to, usually to around 3 or 4, and this seems to render just fine for me. But I remember reading about the spatial parameter with the length adjustment, and it seems like there was a view-dependent option for when the subject gets farther away from the camera. So do you think this gives better results? I'd like to know more about it if you don't mind going into some more detail. I'll mess around with it to familiarize myself with it. Thanks for the info, guys.

Yeah, that's true. It came up a while back in the old Rendering ZBrush Displacements in Maya thread, and I think Sunit mentioned it before that too. It was a huge hurdle, since most of the early tutorials recommended both.
At the moment there is a bug in the Attribute Editor template for the Displacement Approximation. Robert Rusick was kind enough to repost this information on the Maya Highend3D list for me. If you feel comfortable hacking some of your core Maya MEL (no harm, just back them up ; )

This will address the issue where you cannot seem to change the Approx Style. It is also worth noting that if you add both approximations and then remove one, the node does not seem to actually be removed. I have no idea why this happens : /

So for now I always use a Subdivision Approximation. The difference, as I understand it, is that a Subdivision Approximation applies to the underlying mesh surface, whereas a Displacement Approximation applies to the displaced surface. Large areas without lots of detail get subdivided as much as areas with very fine details.
Please someone chime in here if I am mistaken though.

This is from the Mental Ray docs (link should open manual page on machines with Maya 6 installed)

Displacement mapped polygons are at first triangulated as ordinary polygons. The initial triangulation is then further subdivided according to the specified approximation criteria. The parametric technique subdivides each triangle a given number of times. All the other techniques take the displacement into account. The length criterion, for example, limits the size of the edges of the triangles of the displaced polygons and ensures that at least all features of this size are resolved.
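One consequence of "subdivides each triangle a given number of times" is worth spelling out for poly counts. This is my own arithmetic, not from the docs: each parametric level splits every triangle into four, everywhere on the mesh, detailed or not.

```python
def parametric_triangles(base_tris, n_subdivisions):
    """Each parametric subdivision level splits every triangle into four,
    so counts grow by a factor of 4 per level across the whole mesh."""
    return base_tris * 4 ** n_subdivisions

# A 10,000-triangle cage at N Subdivisions = 4:
print(parametric_triangles(10_000, 4))  # -> 2560000
```

That quadrupling is why N Subdivisions of 3 or 4 is usually plenty, and why adaptive criteria that spend triangles only where the displacement needs them can be cheaper.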

If you use Parametric, you will usually get the same tessellation as Spatial, Spatial just being a simpler version of the LDA method.
The benefit is that with the Length setting you can control the length of a tri edge, or the percentage of a pixel (when in view-dependent mode). This allows better surfaces when displacing irregular, bumpy, or jagged forms like horns, spikes, etc., as the renderer will not allow your edges to get stretched there and create weird kinks or "nickling" in the surface. The effect is sometimes subtle in a still frame; try rendering one of each, keeping the render in the view, then flip between them. It helps to use a displacement with protrusions too.
As with everything, though, what works for one render may not work as well for another, so both Spatial and Parametric are good approaches : ) I don't think one is really better than the other; I just tend to use Spatial.

Scott

PS: View dependent seems like a useful LOD tool if you are in motion. It won't bother to uber-divide something far from the camera.

Cool, man, thanks for all the info. I'll mess around some and see if I get any differences with the two approaches. Thanks again!

Ktaylor, I spent the last couple of days applying your multi-UV technique to a character I've been working on, and I have understood everything fine except for when I get back to Maya and start applying all of my .map files. For example, I am trying to apply the maps to the torso area of my character, and I have a main torso map plus the extra seam maps, which are the shoulders and neck. But when I apply the maps to their designated polygons (by going into the texture editor, selecting the faces, and applying a shader to them), only one section displaces when I render. Is there something I'm missing? Maybe you have some special network hooked up? Everything up to this point has been pretty straightforward and I feel so close to getting this thing down, but I just can't see how to bring it all together. If anyone could shed some light on this I would be grateful. I'm sure it's basic, I just can't seem to wrap my tiny brain around the concept!

Jeff

JerFoo,

Well, it sounds like you are doing everything right. You're correct in that it is straightforward; nothing fancy is going on.

Just so I understand you correctly you have

3 Displacement maps

1 Torso
1 Neck
1 Shoulders

You apply all three dismaps/shaders to the respective polys, but only the torso displaces in the render, not the shoulders and neck.

I would say double-check the maps that aren't rendering and make sure there isn't something wrong with them. For example, apply the torso dismap that does work to the shoulders and neck; if they displace (even though it will look messed up), then you'll know it's the maps for the shoulders and neck that are messed up.

If that doesn't work, then try applying the shader that does work to all the polys; if that works, then you'll know it's the shaders for the neck and shoulders that are busted.

That's all I can think of. It sounds like you have it down and just something is messed up.

Ktaylor, I should have edited my post earlier upon success, but I guess I was just too happy! Your walkthrough was perfect after all; it just required a little testing on my part, which was expected and necessary. I just want to thank you and everyone who has contributed to this thread. I am so happy to have this one down! Hopefully, now that I know what's going on, I can contribute back to this thread a little during my projects, like the rest of you.

Jeff

gah_1.jpg
I feel my question was answered in the massive amount of information already posted but I did not catch it, so I will post the problem directly:

When using the Multi Displacement 2 script, I build my dispMap and take it into Maya. However, the map does not have the detail I would expect from a 2048 (or maybe I'm just expecting too much). Anyway, after applying a bump map I get a bastardization between the high-poly and low-poly meshes. Needless to say, I am confused as to what I am doing wrong. Am I missing/skipping some mundane, tedious step?

Hey kurokawa,

What's the process you follow when applying your map and setting up your approximations in Maya? Can you show a render of the displacement map next to a ZBrush render?

Usually when you lose detail (if the map was generated correctly) it's because the mesh isn't subdividing enough at render time or the Alpha Gain is too low.
:slight_smile:

Scott

The shot in the bottom-left corner of the four images is a rendered version in Maya. Top right is from ZBrush. As for the process…

After using Multi Disp,
1) I opened Maya
2) imported the low-poly mesh
3) applied a lambert shader
4) applied the bump map using the dispMap.tif generated from MultiDisp2
5) rendered
6) was disappointed with the bastardized result

kurokawa -
I am not sure what your specific concern is here. Also, what is your direct intention? Rendering with bump maps or rendering with displacement maps? Are you rendering in mental ray or software renderer? Please be as specific as possible when posting questions. :slight_smile:

If you simply want to apply the map as a bump map, I would include in those steps smoothing your normals in Maya, Edit Polygons>Normals>Soften Harden> Soften All.

Best,

Ryan

In ZBrush, when I export my mesh, do I need to turn off GRP and turn on MRG? Why?

I am using RenderMan to render. Displacement Exporter covers Alpha Gain and Alpha Offset in Maya, but what about in RenderMan? What do I have to set for the gain and bias, and also the displacement bound and Kb?

This is the result. I exported my level 3 with Cage on and used Displacement Exporter, but the problem is that some parts are too puffy, not sharp enough.
This is the setup:
gain -> 0.03
bias -> 0.5
Displacement bound -> 2
Kb ->0.1

do displacement -> on
use shading normal -> on

test.jpg

By a Maya render I mean a render where you are trying to render displacement.
Are we doing Mental Ray or RenderMan here, or possibly both?

I'm confused, since it looks like you rendered the bump as displacement in Mental Ray or even the software renderer. I don't know RenderMan nearly as well as Jason Belec, but I think the gain and bias are equivalent to Alpha Gain and Offset in Mental Ray. Do a search on here for RenderMan threads; some users have posted their successful renders with settings.

: /

S

Hey andylou17

Yes you want to select Mrg so that all your UVs remain sewn when you export the mesh, if you dont select merge and you export the obj back into Maya you can open the UV window and you will find that every polygon now has a UV boarder on it, I guess its not a crime but in real time apps this can potentially slow down your rendering engine.