ZBrushCentral

millions of polys each subtool in maya

ZBrush can have many subtools, each with millions of polys. But how do you go about importing, say, a character with 20 subtools, each with 5 million polys, into a 3D program like Maya? Would you have to render each piece of geometry with a displacement map individually and composite everything together later? I’ve never understood this and was hoping someone could answer. Thanks :+1:

The idea is basically... hrm, let me try to put this in an ultra-concise way, or it could get wordy.

The idea is you generate your displacement map, texture map, bump, specular, all of your maps, from your high-res geometry. Pack in as much detail as you need, based on that highest-res level.

Then you import the lowest subdivision level of the geometry into your rendering/animating program, in your case Maya, and have all of those maps give back that high-res information by “projecting” it through your renderer.

The idea is to keep everything looking high res while the underlying support mesh stays low-res polygonal geometry. Maps are what make this possible.
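To make the “projecting” idea concrete, here’s a tiny Python sketch of what a renderer conceptually does with a displacement map: push each low-res vertex along its surface normal by the height sampled from the map. Everything here is hypothetical and simplified, it illustrates the idea, not Maya’s or mental ray’s actual API.

```python
# Conceptual sketch of render-time displacement: a low-res vertex is
# pushed along its (unit) normal by the height sampled from the map.
# All names and numbers are made up for illustration.

def displace(vertex, normal, height, scale=1.0):
    """Return the vertex moved along its normal by height * scale."""
    return tuple(v + n * height * scale for v, n in zip(vertex, normal))

# A flat vertex with an upward-facing normal and a sampled height of 0.25:
print(displace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.25))  # (0.0, 0.25, 0.0)
```

In practice the renderer does this per micro-polygon at render time, so the heavy geometry never has to exist in the scene file itself.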

This keeps the “overhead” in terms of rendering time, processing time and handling down to a minimum while still preserving all of the fine detail.

In the end, the eye is the judge, and it can be fooled into thinking the low-resolution mesh is actually high resolution because of all the displacement and texturing. In reality it’s just a low-res mesh with some fancy “tricks” applied via mapping, which lets all that magic happen without dealing with the millions of actual polygons that would otherwise have to be processed.

To sum that up, the processing requirements in terms of time taken (efficiency) are greatly reduced when your renderer only has to deal with a small number of polygons and can refer to the various maps for the rest. Instead of calculating how light bounces off each of millions and millions of polygons, and how they interact with all those other millions, the renderer only needs to calculate that for relatively few polygons plus the pixel information stored in the maps. *edit: keeping in mind that color, alpha, depth, shading and other information can all be stored compactly in a set of maps, saving the renderer from having to figure all that out from raw geometry; it can rely on the compact information in the maps to work out how to display things in the render.

Consider that for animation in things like film, each second is composed of tens of frames, so you’d want each frame to render as efficiently as possible since you’d need to render so many. And for game engines displaying all that geometry interacting with physics, you’d also want everything running smoothly so players get an acceptable experience.

You want things manageable and efficient while at the same time looking their best, and I’d say that all boils down to efficiency being the name of the game. Tried to keep it short, but I type fast so it’s still a lot of words; hope it’s clear. At least that’s how I see things.

hey, extra thanks so much for taking the time to answer, but to be honest I don’t think any of what you said was the answer I was after lol :o

I’ll try to explain some more.

If a character consisting of 20 different pieces of geometry is to be rendered, and each piece has a displacement map giving it its high-res details, then at render time Maya/mental ray would have to subdivide each of the 20 pieces up to around 3 million polys each to capture the detail in its displacement map. But that would probably cause Maya to crash on most computers, because that would be 60 million polys to render. So I’m thinking that if you absolutely need to keep those details, you would have to render out the pieces of your character’s geometry (which could be, say, highly detailed armour, shirt, pants, etc.) individually, then composite the frames over each other in another program like Photoshop, Fusion, After Effects, etc. Or is there another way around this? I hope my question is more understandable now :slight_smile:
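For what it’s worth, the arithmetic behind that worry can be sketched out: each subdivision level of a quad mesh roughly quadruples the poly count, so even a modest base cage explodes fast. The base count and level below are hypothetical, just to show the scale.

```python
# Rough subdivision arithmetic: each level roughly quadruples a quad
# mesh's face count. Base count and level are made-up examples.

def polys_after(base_polys, levels):
    """Approximate face count after `levels` rounds of subdivision."""
    return base_polys * 4 ** levels

base = 12_000                        # hypothetical low-res subtool
per_piece = polys_after(base, 4)     # 12,000 * 256 = 3,072,000
total = 20 * per_piece               # ~61 million across 20 subtools
print(per_piece, total)              # 3072000 61440000
```

Which is exactly why renderers tessellate displacement on the fly (and often per-bucket, discarding micro-polys as they go) rather than holding all of it in memory at once.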

Why don’t you use a combination of displacement and normal mapping? Use the displacement map for the major changes in geometry and the normal maps for the finer details, which are the things you want the very high poly counts for. You get a visual match to your high-detail model with far fewer polygons.
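A rough way to picture that split, with entirely made-up numbers: smooth the sculpted height data to get the broad shape (the part the displacement map carries), and treat the leftover high-frequency residual as what the normal or bump map would encode. Together they reconstruct the original detail.

```python
# Illustrative split of sculpted detail into a low-frequency part
# (displacement) and a high-frequency residual (normal/bump detail).
# The height profile and filter are hypothetical examples.

def smooth(heights):
    """Simple 3-tap average: keeps the broad, low-frequency shape."""
    out = []
    for i in range(len(heights)):
        lo = max(i - 1, 0)
        hi = min(i + 1, len(heights) - 1)
        out.append((heights[lo] + heights[i] + heights[hi]) / 3)
    return out

heights = [0.0, 0.9, 0.1, 0.8, 0.0]      # bumpy high-res profile
displacement = smooth(heights)            # broad shape -> displacement map
detail = [h - d for h, d in zip(heights, displacement)]  # fine detail -> normal map
# The two layers together reconstruct the original profile:
print(all(abs(d + f - h) < 1e-9
          for d, f, h in zip(displacement, detail, heights)))
```

The practical payoff is that the displacement part needs far less tessellation, while the residual is pure shading and costs almost nothing at render time.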

hey thanks Richard, good idea, that’s what I’m after: different ideas/solutions for how people go about rendering many objects and tens of millions of polys in a frame

Haha yeah, I think I misread your post as asking “why” we would want to do this instead of “how”. My bad.