ZBrushCentral

UV Confusion

I’m posting here because the ZBC forum seems to host some of the most knowledgeable and helpful people in the 3d community.

I’m having trouble ‘wrapping’ (pun intended) my head around the whole UV layout concept. Admittedly, I am a novice, entirely self-taught through the various free tutorials available on the web and a lot of trial and error, but I think I understand the overall pipeline. Please correct me if I am wrong, but it goes something like this:

Concept > Sculpt > Model > UV > Maps lights and Shaders > Rigging > Animation > Rendering > Compositing

The Concept - This still seems to be, for the most part, the prerogative of 2d art, though ZBrush has made conceptualizing in 3d a very real option.

The Sculpt - Here’s where, imo, ZBrush truly revolutionized the industry. You take a lump of hi-res geometry (built by your method of choice: ZSpheres, a box, etc.) and go nuts. Edge flow and poly count mean little. Before you say good edge flow and topology can be achieved in ZBrush, let’s just skip ahead.

The Model - This is the realm of the 3d package: Maya, 3DS, XSI, C4D, whatever. It is where you actually build the geometry with the intention of rigging, animation, and rendering. The method I’ve found to be the fastest and most useful is to take the low-res sculpt from ZB and use it in a reference layer in Maya to build your base mesh. Image planes take too much effort to set up, and getting decent reference images can sometimes be nearly impossible, time-consuming or, at the very least, costly.

UV Layout - Once you’re all done building, before you can really do anything with texture, displacement, normals or pretty much anything else, you need a UV layout. ** This is where I am extremely confused ** Why are UVs necessary? Why, if it is mathematically impossible to flatten most 3d surfaces into 2d without distortion, are we still doing this? Why not make a Maya (or other) plugin that can read a 2.5d image format and simply apply a polypainted 2.5d map (normal, displacement, etc.) directly to a model and have it render from that information? There are certainly enough 3d painting applications available to make this form of conversion legitimate.

I’ve Googled UV tutorials to death, and of all the elements of 3d imaging, this one, the most confusing imo, seems to have the least informational resources and the most requests for assistance. Browsing through just this ZBC forum produced a vast number of results for questions concerning UVs.

So far the best method for UV layout I’ve found is, oddly enough, Blender.

If someone could help me better understand this process or direct me to some decent information or tools, I would be most grateful. I did buy the UV learning kit from Digital Tutors. Unfortunately, I found it frustrating and painful to listen to. The teacher’s S’s can make your ears bleed :stuck_out_tongue: and the whole time I was left thinking “This can’t be the easiest/best way to do this.”

Please Help. Thank you.

I am no expert really but this is my humble view on this…

The computer needs to know how to wrap a 2D bitmap over a 3D object, so it needs the co-ordinates of each poly in relation to the bitmap.
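To make that concrete, here’s a minimal sketch (plain Python, hypothetical names, not any real renderer’s API) of what the lookup amounts to: each vertex carries a (u, v) pair in [0,1]², and the renderer converts it into a pixel address in the bitmap.

```python
def uv_to_pixel(u, v, width, height):
    """Convert a UV coordinate in [0,1]^2 to an integer pixel address.
    V is flipped because image rows usually run top-to-bottom while
    UV space runs bottom-to-top."""
    x = min(int(u * width), width - 1)
    y = min(int((1.0 - v) * height), height - 1)
    return x, y

# A triangle's three vertices, each with a UV pair assigned at layout time:
triangle_uvs = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]
for u, v in triangle_uvs:
    print(uv_to_pixel(u, v, 1024, 1024))
```

The UV layout step is deciding those (u, v) pairs for every vertex; the renderer just performs lookups like this (with interpolation across each face) at render time.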

As for your wider point as to why, I think it’s legacy really, early 3D on computers couldn’t really handle very high poly counts or texture sizes (thinking of games production as an example), so this method of mapping bitmap to 3D model was perfectly acceptable.

As for the future, with technology like ZBrush, being able to paint per pixol on multi million poly models, a new file format may arise which like you said stores the information differently, doing it the ZBrush poly painting way. There will be no need for Photoshop as all those same tools and layer blends will be available in 3D.

Could be a way off though. The reason these tools exist is to serve industries like the games industry. These still use bitmaps, and the games hardware needs to be mighty powerful to display multi million poly models, for the polypainting technique. So there will be the need to convert to 2D bitmap for a long time to come I believe. Which means UVs…

I’m sure others have different takes on this.

I go along with TrackZ’s view.

How should the computer know how to apply a given texture? Even if you paint directly on the model, the software somehow has to store the information that at the coordinates “x|y|z” it has to display the color “A”. Basically, that’s all UV mapping is about: you tell the software which pixel of the map goes on which coordinates.
ZBrush, too, saves that information in some way. That’s why you can export the results of your polypainting as a 2D texture map in the end.
The advantage of having a well-made UV map is that you can export your mesh to any application that can read the file format, and the texture will always be applied the same way. Based on the template, other people can texture your model even without having your mesh (theoretically; in reality it’s much easier to have the mesh so you can test what your texture will look like when it’s applied to the model). And by applying different texture/transparency/displacement maps, the model can take on completely different looks without any need to change the model itself.
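A toy sketch (made-up data, plain Python) of the two storage schemes being discussed: polypaint-style color stored per vertex on the mesh itself, versus a UV map where the mesh stores only lookup coordinates and any 2D image can drive the same model.

```python
# Polypaint-style storage: the color lives on the mesh itself,
# one RGB value per vertex. Exporting it means walking the mesh.
polypaint = {
    0: (200, 30, 30),   # vertex id -> RGB
    1: (30, 200, 30),
    2: (30, 30, 200),
}

# UV-style storage: the mesh only stores *where to look* in a shared
# 2D image; the colors themselves live in the (swappable) texture.
uvs = {0: (0.0, 0.0), 1: (0.5, 1.0), 2: (1.0, 0.0)}

def sample(texture, u, v):
    """Nearest-neighbour lookup in a row-major texture (list of rows)."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int((1.0 - v) * h), h - 1)
    return texture[y][x]

# Swapping in a different image changes the model's look without
# touching the mesh or its UVs at all:
diffuse = [[(255, 0, 0), (0, 255, 0)],
           [(0, 0, 255), (255, 255, 0)]]
for vid, (u, v) in uvs.items():
    print(vid, sample(diffuse, u, v))
```

This is why a shared UV layout lets other people texture your model from the template alone: the mapping is fixed, only the image varies.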

By the way: I do all my UV mapping with UVLayout; it’s a great tool and gives excellent results.

I get this part, and understand that prior to 3d painting applications, textures had to be painted by hand in 2d illustration software, which necessitated unwrapping and flattening the 3d object. With the tools available now, it is no longer necessary to ‘see’ the texture in 2d. Normal image formats can only store the x and y coordinates, but with the pixol’s ability to store the z as well, you might not be able to pull up and view your texture in an image viewer, but why would you need to? Just apply it to your model in 3d, paint it in 3d, render it in 3d. 2d should no longer be part of the process.
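For comparison, a pixol-style record might look something like this (a loose sketch only; ZBrush’s actual on-disk format is proprietary and the field names here are invented). The point is that, unlike an ordinary RGB pixel, each sample also carries depth and material, which is why such a canvas can’t be opened in a normal image viewer:

```python
from dataclasses import dataclass

@dataclass
class Pixol:
    """Loose sketch of a pixol-style sample: beyond the RGB color an
    ordinary pixel holds, it also carries depth (z) and material info."""
    r: int
    g: int
    b: int
    z: float        # depth of the surface under this canvas position
    material: int   # index into a material table

# A plain image format stores only (r, g, b) per (x, y); a 2.5d
# canvas stores a full record like this per (x, y).
sample = Pixol(180, 120, 90, z=4.25, material=2)
print(sample.z)
```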

As for the future, with technology like ZBrush, being able to paint per pixol on multi million poly models, a new file format may arise which like you said stores the information differently, doing it the ZBrush poly painting way. There will be no need for Photoshop as all those same tools and layer blends will be available in 3D.
The Future is now :slight_smile: Maybe I’m way off, but I see the technology as being available right now, just not implemented.

Could be a way off though. The reason these tools exist is to serve industries like the games industry. These still use bitmaps, and the games hardware needs to be mighty powerful to display multi million poly models, for the polypainting technique. So there will be the need to convert to 2D bitmap for a long time to come I believe. Which means UVs…

I’m not asking to be able to display multi-million poly models. The process would be essentially the same. A high res texture on a low res mesh. Just a different way of storing where the colors should go :slight_smile:

I appreciate your response and please don’t take this as an assault. Your reply helped me get things straight in my head. At this point I guess I’m just sounding off my opinions.

escha- Thanks, I’m looking into UVLayout, it just seems awfully expensive.

The future might be without UVs and texture maps, but if you want compatibility with other software you just have to put up with it, at least for now.

  • Mind you, I’m talking about real 3D models and not about 2.5D and rendered images. If you do your work only in ZB go ahead and ignore the UVs.

But in my opinion ZB’s painting tools simply can’t compete with Photoshop’s :wink:

I do think you have a point that the “UV” method of texturing is legacy, but the thing to realize is that, for the same reason ZBrush puts out Wavefront files, UVs are pretty fundamental to the way that not only 3d apps but also engines and other editors (especially for games) look at models for production. This could change over time, and should, since so much more is technically possible with the methods ZBrush employs in its application. I, for one, being a C4D user, wish there were some way to run the ZBrush editor as a plugin, so that the texturing functions of BodyPaint could be integrated with the rendering/hair/cloth/particle/etc. of C4D and the sculpting tools and mesh subdivision control of ZBrush in one app. ZBrush as a plugin would solve all of that in one easy blow.