So I was sitting around today looking at all of the gorgeous Z3 beta images, and the former CS major in me began to wonder how this HD-mesh technology actually works. I’m sure pixolator’s not going to chime in here and give away any trade secrets, but I enjoy speculating on things like this. Z3 beta testers are claiming they can push 10 times as many polygons as they could in Z2 on the same machines, so there’s obviously some technical magic happening behind the scenes.
I came to the conclusion that the Z3 rendering core must use some sort of normal/bump trick to give the impression that you’re working with more polygons than you actually are. When sculpting in the digital realm, there are two main axioms for handling geometry: #1) if a change to your mesh alters the silhouette, displace it (i.e., use real geometry); #2) for medium- to high-frequency details that do not change the figure’s silhouette, use normal and/or bump maps.
My guess is that the Z3 rendering core intelligently decides whether your mesh has changed enough in a given area to warrant generating real geometry, and falls back on smoke and mirrors (normal/bump maps) everywhere else. I suspect this all happens behind the scenes; sculptors no doubt want the geometry they think they are sculpting, so at export time Z3 probably generates the real geometry using the same logic it uses to decide what is real and what is faked.
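Purely as a thought experiment, here’s a toy Python sketch of the kind of per-region decision rule I’m picturing (applying axioms #1 and #2 above). Every class, function name, and threshold here is my own invention for illustration; I have no idea what Pixologic actually does internally:

```python
from dataclasses import dataclass

# Toy illustration only: none of these names or numbers come from ZBrush.

@dataclass
class Patch:
    bounding_radius: float   # rough size of the sculpted region
    sculpt_offsets: list     # per-vertex displacement amounts from sculpting

def alters_silhouette(patch: Patch, threshold: float = 0.02) -> bool:
    """True if the sculpted offsets are large enough, relative to the patch's
    size, to visibly change the outline, so a normal/bump fake won't hold up."""
    peak = max((abs(o) for o in patch.sculpt_offsets), default=0.0)
    return peak / patch.bounding_radius > threshold

def choose_representation(patch: Patch) -> str:
    # Axiom #1: silhouette-changing edits get real geometry.
    # Axiom #2: everything else gets normal/bump smoke and mirrors.
    return "real geometry" if alters_silhouette(patch) else "normal/bump map"

# Quick example: a deep gouge vs. fine skin pores on a 10-unit patch.
print(choose_representation(Patch(10.0, [0.5, 1.2, 0.8])))   # -> real geometry
print(choose_representation(Patch(10.0, [0.01, 0.02])))      # -> normal/bump map
```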
Just my guess, but it seems the only logical explanation given the physical limits of the hardware on the market today… any thoughts?

