ZBrushCentral

textured 3d scan to multiple subtools

Hello everybody,

I have a 3D-scanned urban landscape (created with PhotoScan) with 2.5 million points total, multiple UVs, and textures. It's not manifold and a bit dirty (because it's a rough 3D scan). I want to optimize this model for realtime purposes. My plan was to tile the mesh into UV groups while preserving the UV information -> decimate and remesh the separate parts, and reproject the textures onto the optimized models.

But I'm stuck at grouping the mesh by UVs… nothing happens even after a while, and CPU load sits at just 14%.

Is there a better workflow? It's important to preserve the textures, because most of the detail is stored in them.

Thank you!

If the existing UVs are assigned per material, then you can turn on Preferences > Import > Import Mats as Groups and import directly from an OBJ. Alternatively, if you can get an FBX file of the scan, then the FBX plugin has an Import Mats as SubTools option which will split the mesh into subtools and automatically assign the textures.

Thank you,

but it seems this doesn't help me much. ZBrush creates 191 subtools… but I have 94 textures. It seems everything is split up, even parts that share the same texture. The UVs are laid out arbitrarily by the photogrammetry software; it's like a huge puzzle. Is there a script or something similar that automatically loads the correct texture for each subtool? :wink: I guess not. I'm trying to create a new UV channel in Max and bake one giant texture with "Render to Texture" to create polypaint and get the texture onto my optimized model… But at the 4096² texture size, I would need a 40000x40000 px texture…
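The 40000×40000 px estimate above can be sanity-checked with a quick back-of-the-envelope calculation (assuming, as the post implies, that all 94 textures are 4096² and would be packed into one square atlas without downscaling):

```python
import math

num_textures = 94   # texture maps produced by the photogrammetry software
tile_size = 4096    # each texture is 4096 x 4096 px

# Packing 94 square tiles into one square atlas needs
# ceil(sqrt(94)) = 10 tiles per side.
tiles_per_side = math.ceil(math.sqrt(num_textures))
atlas_size = tiles_per_side * tile_size

print(tiles_per_side, atlas_size)  # 10 40960
```

So the ~40000 px figure checks out (40960 px exactly); the only way to land on a usable atlas size is to downscale each tile, at the cost of texture detail.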

Any thoughts or hints on that? Thank you!

It's a real pain to work at 0.01 frames per second :wink:

What method did you use? If you can load the file into Max then you could export as an FBX file and import that into ZBrush.

I tried OBJ at first. Now I tried FBX and it works! Thanks!

But what now? I would like to project this puzzle onto new, clean geometry with the texture data on it… but how? Normally I would raise the polycount of each object drastically, project the texture to polypaint, merge everything, and project the polypaint onto the texture of the new geometry. But I would have to do this 190 times… is there an alternative? :slight_smile:

I don’t think there’s a way to easily automate the process. It wouldn’t be difficult to create a macro that subdivides each mesh a set number of times and then creates polypaint from the textures, but that would only be satisfactory if the subtools are all roughly the same size and you can be sure of getting enough resolution. You may be able to combine some subtools after transferring the textures to polypaint, but that will depend on how dense the meshes are.
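To illustrate the resolution concern: polypaint stores one colour per vertex, so capturing a 4096² texture without loss needs on the order of 4096 × 4096 ≈ 16.8M vertices, and each subdivision level roughly quadruples the vertex count. The arithmetic a per-subtool macro would have to do can be sketched like this (the base vertex counts are hypothetical, and an actual ZBrush macro would be written in ZScript, not Python; this only shows the calculation):

```python
def levels_needed(base_verts: int, texture_px: int) -> int:
    """Smallest number of subdivision levels so that
    base_verts * 4**levels >= texture_px * texture_px,
    i.e. roughly one vertex per texel."""
    target = texture_px * texture_px
    levels = 0
    verts = base_verts
    while verts < target:
        verts *= 4   # each subdivision level ~quadruples the vertex count
        levels += 1
    return levels

# Hypothetical vertex counts for a few subtools of different density:
for base in (5_000, 20_000, 80_000):
    print(base, levels_needed(base, 4096))
```

This is why uniform subdivision across 191 subtools of different sizes is risky: small subtools need far more levels than large ones to reach texel density, so a single fixed subdivision count either wastes memory or loses texture detail.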

I’m afraid that however you look at it there’s probably a fair amount of work involved.

I use PhotoScan. You’re talking about the crazy UVs it produces. PhotoScan picks a section from each camera and projects a section of the texture from that camera, which leads to disjointed, puzzle-like UVs and texture maps. The PhotoScan website describes a good way of cleaning this mess up.
Essentially, you import the model into ZBrush or another program and create new UVs (without moving the model), then import the model back into PhotoScan and re-apply the texture to the new UVs.
This gives you tidy UVs and understandable texture maps.

Thank you, this workflow really suits my needs! Now I have the problem of how to clean up architectural scans in ZBrush, but I’d better start a separate thread for that.