ZBrushCentral

Nudge in ZBrush not working properly

Being new to ZBrush, I try to follow along with what is shown in the tutorials. One thing that works differently for me is the Nudge tool in Spotlight. In the tutorial the image is distorted in the direction of the mouse strokes.

When I try it, no matter which direction I move the mouse, the distortion always pulls the image inward. So I can only make that part of the image look smaller.

Can somebody tell me what I am doing wrong?
I hope you can read my settings. I tried to enlarge the screenshots, but they display at the same size in this window.

Attachments

Nudge-setting-1.jpg

Nudge-setting-2.jpg

There’s nothing wrong with your settings.

Are you using a graphics tablet? If you start ZBrush using the eraser end of the pen, that end can get registered as the ‘drawing’ end for the session; the pen tip will then act in the way you describe.

The effect you describe is what you get when you hold Alt down as you draw - Alt is the key that inverts the current behaviour. Do you perhaps have Sticky Keys switched on, so that the Alt key is effectively held down?

Thanks Marcus, but I found the solution myself. For some obscure reason I could not change the brush. The active brush was the Pinch brush, which I had used just before this. Later I was able to switch to the Standard brush, and then it worked fine. Perhaps as a newbie I still have to get used to what is customary in ZBrush, but the ZBrush interface confuses me. Especially the following:

  • a new sculpt/mesh is automatically a new “tool”.
  • meshes on the canvas can only be put into edit mode immediately after adding them. Why can’t you just select a mesh and start editing it?
  • why is it not possible to build a ZSpheres skeleton and then add the mesh with the help of all the brushes? You can only use a few brushes, and they are not the ones you need to build a decent sculpture. Furthermore, I find it annoying that the ZClassroom tutorials use the words “simply” and “very easy”. When you start modeling in 3D, nothing is easy or simple. The rigging workflow especially is not easy. I tried to add a skeleton to the demo soldier. It almost worked, except for the feet: they are too close together to each connect to their own bone. Set the bind distance too low and not the whole foot is connected to the bone; set it too high and part of the other foot is also connected to the bone. After binding the mesh it is not possible to adjust which part of the mesh is bound to which bone. In Cheetah 3D you can adjust this. I switched because I thought sculpting with clay would be easier.
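The foot problem in the last bullet comes down to a trade-off in distance-based binding. As a toy sketch (this is not ZBrush's actual rigging algorithm, and all names here are made up for illustration), vertices bind to any bone within a radius: a small radius misses the toes, while a radius large enough for the toes also reaches across to the other foot.

```python
# Toy illustration of distance-threshold binding and why a single
# radius is awkward for two nearby feet. Not ZBrush's real algorithm.
from math import dist

def bind_vertices(vertices, bones, max_dist):
    """Return {vertex_index: [names of bones within max_dist]}."""
    binding = {}
    for i, v in enumerate(vertices):
        binding[i] = [name for name, pos in bones.items()
                      if dist(v, pos) <= max_dist]
    return binding

# Two foot bones close together, in 2D for simplicity.
bones = {"left_foot": (0.0, 0.0), "right_foot": (3.0, 0.0)}
verts = [(0.0, 0.0),   # left ankle
         (1.5, 0.0),   # left toe, far from its own bone
         (3.0, 0.0)]   # right ankle

# Small radius: the left toe (vertex 1) binds to nothing.
print(bind_vertices(verts, bones, max_dist=1.0))
# Large radius: the left toe now binds to BOTH feet.
print(bind_vertices(verts, bones, max_dist=2.0))
```

With `max_dist=1.0` vertex 1 gets no bone at all; with `max_dist=2.0` it gets both, which matches the "part of the other foot is also connected" symptom described above.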

Am I missing something?

Howdy,

Just to chip in from my own experience, it’s extremely helpful to think of ZBrush’s foundation as being more of a painting application. It’s a bit like taking a model, positioning it in Photoshop just how you like it, and then saving it as an image. That’s why models are referred to as tools: you’re using them to produce the final image seen on the canvas. You can draw a model out onto the screen and, by editing it, move it around the document and pose it as needed. But once that information is dropped to the document, it’s just like any other photo or image: a 2D grid of pixels instead of 3D vertices. And just as you can’t take a photo of a person into Photoshop, select just the person, and spin them around inside the image to see what’s written on their back, you won’t be able to do that with the document’s pixels in ZBrush either. You can always go back and edit the tool itself to continue sculpting, but the tool and the document are two different things.

It just so happens that ZBrush became absolutely amazing at editing those 3D tools, which is why a lot of us use it for the 3D side of things.

(It may sound strange, but there are several benefits to this. For example, you can model a brick with the detail of 10 million vertices and then copy the tool all over the canvas. Because that information just becomes part of an image, you can have the detail of 60+ million polygons’ worth of brick on your screen with the same impact on performance as your desktop wallpaper might have. A key difference with ZBrush is that its pixels are called pixols, and they store z-depth and material information in addition to the regular RGB values (ZBrush: painting with depth). So once you have that image of bricks, you can still change its material, play with the lighting, and extract height and normal maps from it. This makes ZBrush helpful both for creating textures and for creating alphas to customize your sculpting brushes. And even if you don’t touch the 2.5D stuff and just want to sculpt on a model, because ZBrush doesn’t require the same scene-management resources that typical 3D programs do, it lets you easily work with highly detailed models even on lesser hardware such as my own.)
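The pixol idea above can be sketched in a few lines of code. This is only a conceptual model, not ZBrush's internal format (the field names and the depth-compare rule are my assumptions): each canvas pixel carries color plus a z-depth and a material index, and dropping a tool onto the canvas is essentially a per-pixel depth test.

```python
from dataclasses import dataclass

@dataclass
class Pixol:
    """Conceptual 2.5D canvas pixel: color plus depth and material.

    Field names are illustrative, not ZBrush's actual data layout.
    """
    r: int
    g: int
    b: int
    depth: float      # z-distance from the viewer; smaller = nearer
    material_id: int  # index into a material table, used for re-shading

def drop(canvas_px: Pixol, new_px: Pixol) -> Pixol:
    """Depth-test when a tool is dropped onto the canvas:
    the nearer pixel wins, like a z-buffer."""
    return new_px if new_px.depth < canvas_px.depth else canvas_px

# A near red brick pixel dropped over a far blue background pixel.
bg = Pixol(0, 0, 255, depth=10.0, material_id=0)
brick = Pixol(255, 0, 0, depth=2.0, material_id=1)
result = drop(bg, brick)
print(result.depth, result.material_id)  # the nearer pixel survives
```

Because depth and material survive in the stored pixel, relighting, material swaps, and height-map extraction remain possible after the 3D geometry itself is gone, which is the point of the paragraph above.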

why is it not possible to build a ZSpheres skeleton and then add the mesh with the help of all the brushes?

ZSpheres have their own system going on, and probably aren’t well suited to heavy sculpting, since you can drastically change the underlying ZSpheres at a moment’s notice (or the algorithm used to calculate the resulting skin/topology, which any existing brush strokes might be dependent upon).

All that is required is to convert the ZSpheres into a regular 3D model; then you can add to the mesh with the help of all the brushes. This can be done by previewing the skin (‘a’ key) and hitting the Tool: Make PolyMesh3D button, or by going to Tool: Adaptive Skin: Make Adaptive Skin.

If you’re interested, Ryan Kingslien has a fresh 4-part video series on his blog at http://www.isculptstuff.com/ where he showcases his workflow of posing a ZSphere armature, converting it to a polymesh, and then using the other tools to sculpt it into form.