ZBrushCentral

Why is the resolution of the colour map so poor after projection?

Hi everyone

I’m trying to ZRemesh a dense imported OBJ and then reproject the diffuse map back onto the remeshed model, but the resolution of the reprojected map is very poor.

This is my workflow:

  1. Import the original dense OBJ into ZBrush and run it through ZRemesher to bring the poly count down and improve the topology.
  2. Import the original object again and reapply its color map (so that I have something to project).
  3. Subdivide the remeshed version.
  4. Turn off Zadd and turn on Rgb. Append the meshes as SubTools and run Project All.
  5. The color map projects, but at very low resolution.

You can see the different models and the resulting low-res map in the attached images.

Can anyone give me some pointers as to where I’m going wrong?

Attachments

Untitled-1.jpg

Untitled-2.jpg

> Subdivide the remeshed version.

How many times are you subdividing it? If you only did it once, you’ll want to do it a few more times. One rule of thumb when going from texture maps to vertex colors (or vice versa) is to base the geometry resolution on the texture resolution (and always be generous with the number, since neither the UV space nor the vertex distribution is ever 100% utilized).

So if the original texture was 1024x1024, you’ll likely want at the very least 1 million vertices (subdividing an additional time couldn’t hurt). If it was 2048x2048, then you’ll want at least 4 million (and again, going past 8 million might be advisable).
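The rule of thumb above can be sketched as a quick back-of-envelope calculation. The 2x headroom factor and the "each subdivision level quadruples the poly count" behavior are assumptions based on typical quad meshes, not exact figures:

```python
# Rough rule of thumb: to hold a W x H texture as vertex colors, the mesh
# needs at least W * H vertices (one vertex per texel), plus headroom
# because neither UV space nor vertex distribution is fully utilized.

def min_vertices(width, height, headroom=2.0):
    """Generous vertex-count target for a given texture resolution."""
    return int(width * height * headroom)

def subdivisions_needed(base_polys, target_vertices):
    """Each subdivision level roughly quadruples a quad mesh's poly count
    (and vertex count, approximately); count the levels needed to reach
    the target."""
    levels, polys = 0, base_polys
    while polys < target_vertices:
        polys *= 4
        levels += 1
    return levels

print(min_vertices(1024, 1024))               # ~2 million with 2x headroom
print(min_vertices(2048, 2048))               # ~8 million with 2x headroom
print(subdivisions_needed(5000, 16_000_000))  # levels from a 5k-poly base
```

The 5,000-poly base here is a hypothetical ZRemesher output; plug in your actual post-ZRemesh poly count to see how many subdivision levels you need.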

Otherwise, now that you have a ZRemeshed version, you could give it UVs and then use a program like xNormal to bake the map from the original mesh to the new one, so that you don’t have to worry about the geometry at all, just the desired texture resolution.

Hey Cryrid

Thanks for replying. In answer to your question, I subdivided 6 times. The original texture was a 4K map, so I think I should have seen an improvement in the resolution.

I’m a Modo user, and I believe I can bake across by placing the relevant mesh in a background layer. I’ll also give xNormal a look.

Thanks!

A 4k map could need more than 16 million vertices to contain the same amount of information.
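A quick sanity check of the numbers in this thread. The 5,000-poly ZRemeshed base is a hypothetical example (the poster’s actual count isn’t given); the point is that 6 subdivision levels from a base like that would already exceed the texel count of a 4K map:

```python
# A 4K texture holds 4096 x 4096 texels.
texels_4k = 4096 * 4096
print(texels_4k)                         # 16,777,216 texels

base_polys = 5_000                       # hypothetical ZRemesher output
after_six_levels = base_polys * 4 ** 6   # each level ~quadruples the count
print(after_six_levels)                  # 20,480,000

# If this holds, the geometry is probably dense enough, which points at
# the projection source (texture vs. polypaint) rather than the poly count.
print(after_six_levels > texels_4k)      # True
```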

It’s also possible that ZBrush isn’t using the texture map for projection, but is instead converting the texture to polypaint on the original scanned mesh and projecting that. If that’s the case, you’d need to subdivide the original mesh as well, and you might even have to convert the texture to polypaint yourself before projecting.

Thanks Cryrid

Yes that all makes sense especially the polypaint conversion.

Thanks for all the suggestions 🙂