Torch, it all really comes down to workflow. Which one would you feel more comfortable with? As for losing scale on the mesh: store a morph target before you divide the mesh and project the details. When you go back to your lowest subD level, you'll need to switch to the morph target and then create your maps. This brings back the exact shape of your LP mesh, so you can make sure the tangents match the LP mesh you actually built.
The best thing to do is bake in a program that lets you use the same tangent space as the engine you're going to be rendering in. I.e., if you're building a mesh to work in Maya, then you should bake in Maya; if you're using Unity, UDK, or CryEngine, then you should bake in a program that can match the tangent basis of those engines. xNormal allows you to do this and is what we use for production work.
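To see why the baker and the engine have to agree on tangent space, here's a minimal sketch (numpy assumed, vectors and values are made up for illustration): a tangent-space normal map only stores the normal relative to the tangent basis (T, B, N) of each vertex. If the engine reconstructs that basis differently than the baker did (e.g., a flipped bitangent), the same pixel decodes to a different world-space normal and you get shading errors.

```python
import numpy as np

def decode_tangent_normal(rgb, T, B, N):
    """Decode a tangent-space normal map sample into world space
    using the given tangent basis vectors."""
    # Map texture values from [0, 1] back to [-1, 1]
    n = np.asarray(rgb, dtype=float) * 2.0 - 1.0
    # Rebuild the world-space normal from the basis
    world = n[0] * np.asarray(T, float) + n[1] * np.asarray(B, float) + n[2] * np.asarray(N, float)
    return world / np.linalg.norm(world)

# A "flat" map pixel (0.5, 0.5, 1.0) decodes straight along the surface normal
flat = decode_tangent_normal((0.5, 0.5, 1.0), (1, 0, 0), (0, 1, 0), (0, 0, 1))

# The same non-flat pixel decoded with the baker's basis vs. an engine
# whose bitangent points the other way gives two different normals:
baked  = decode_tangent_normal((0.75, 0.75, 0.9), (1, 0, 0), (0, 1, 0),  (0, 0, 1))
engine = decode_tangent_normal((0.75, 0.75, 0.9), (1, 0, 0), (0, -1, 0), (0, 0, 1))
```

That mismatch is exactly what syncing the baker to the target engine avoids.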
We also prefer to bake outside of ZBrush because ZBrush won't let you project across multiple subtools…at least it doesn't do a good job of it. Other bakers handle that part much better. Baking your diffuse, normal, and other maps in a traditional baker also lets you make sure they all align correctly. ZBrush has a tendency to have the diffuse and normal textures not line up, because it uses two different types of projection when creating the maps…at least if you store a morph target and project the way I mentioned above. While that ensures your normals line up correctly with the lowpoly mesh, the diffuse can still end up misaligned. So…basically, it comes down to what you're willing to sacrifice if you're generating maps inside of ZBrush.
ZBrush also doesn't have realtime lights or a normal map viewer, which to me is a real turn-off. Maybe soon, or maybe another nice plugin like ZMapper…even just as a visual tool for checking normals, it would be nice.
Eh, my 2 cents.