3d images in real 3d

This is a test I did with a simple image that I created in POV-Ray. Notice that the image shown in the photo is in real 3d?

I can use the same technique with objects imported from ZBrush into POV-Ray as well. I think I might try that next.

Anyone care to venture a guess how I did it? :smiley:

Render 2 pics with different camera angles and then have those printed on that cool photo paper with the plastic-prism kind of surface, or whatever you like to call this special paper, which looks funny but gives you a 3d illusion?
Lemo

Close, but those lenticular screens only give you 3d horizontally. Notice that in the pictures I can look toward the bottom or top of the train as well? :wink:

Since nobody else took a guess, Lemonnado was on the right track.

Instead of a lenticular screen I used a #300 Fresnel hexagonal lens array.

Within POV-Ray I set up a virtual array of 11,009 concave lenses between the camera and my subject matter. When rendered, this basically gives me 11,009 mini fisheye renderings, each one having a different view of the scene. I rendered the scene at a resolution of 4000 x 3200 pixels, printed it out on a Lexmark photo printer on Hammermill 110 lb heavy card stock, and placed the print in the frame behind the hexagonal lens array.
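
(For anyone wanting to duplicate the render size, it's just the usual width/height switches on the POV-Ray command line; the scene file name and the anti-aliasing value below are only placeholders, not necessarily what was used here.)

povray +Iscene.pov +W4000 +H3200 +A0.3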

When viewed, each lenslet of the hexagonal array magnifies a small portion of the "fisheye" rendering beneath it, and the portion that's magnified changes with your viewpoint, giving you a different view of the scene from each direction you look.

This was a technique for 3d photography developed by Gabriel Lippmann in 1908, but I wanted to try it with a computer generated image.

http://www.microlens.com/Pages/Lenticular%20History.html

Fantastic! Now we just have to create a crazy ZBrush layer that caters to that 8-). Where can one procure those lens sheets? Creating the lens array in XSI also seems possible… What a great idea! I bet the original looks stunning!
Thanks for the inspiration, I'd never heard about that before.
Cheers
Lemo

:cool: I would also be very interested in any information you can supply on just how to do this. I have heard of lenticular methods, but not what you are doing here, getting 3d in both directions. Just where can I get the materials? Do you know if anything is available for Lightwave? I guess I need some kind of plugin to create the image matrix, or can I just process the final images with some Photoshop filter/plugin? Sounds like this would need to be VERY specific to the lens characteristics, so I would think things need to be adjustable or standardized.

My intuition is that each image "cell" is just an upside down undistorted image that is the same diameter as each lens.

Do the interleaved cells need to be offset by eye-to-eye distance to get 3d? Can you do this with just 2 images, or do you need several to make a smooth transition?

Gregg

I was thinking if you could get clear-backed 3M "Scotchlite" sheet, it would have interesting lensing effects also.
Was that 110 lb. paper as heavy as it sounds?

No, it is 2.521 times heavier than you were thinking.

Sorry I'm so late in replying. I've been busy with some stuff.

I ordered the item #300 hexagonal lens array from Fresnel Technologies (http://www.fresneltech.com/). Including shipping and handling the lens array cost around $62.00, so it really wouldn’t be practical for mass production kind of stuff. Their catalog can be found here: http://www.fresneltech.com/pdf/FresnelLenses.pdf

When I get the chance I'll post the POV-Ray routine I used to generate the lenses in the program. Maybe that'll help.

Yes, but the images aren't inverted; they are right side up. That's why I had to make the POV-Ray lenses concave rather than convex.

No, the lenses in POV-Ray take care of everything automatically.

This is the POV-Ray code that I used to create the lenslets. Sorry it took so long for me to post.


// Loop counters and lenslet spacing
#declare X_Count  = 0;
#declare Y_Count  = 0;
#declare X_Offset = 0.045871559633027522935779816513761;  // horizontal pitch (one lenslet diameter)
#declare Y_Radius = 0.045871559633027522935779816513761;
#declare Y_Offset = 0;

// One concave lenslet: a short cylinder with a hemispherical
// depression carved out of one flat face
#declare Lenslet =
difference {
  cylinder { 0*z, (X_Offset/2)*z, X_Offset/2 }
  sphere { <0,0,0>, X_Offset/2 }   // radius of sphere
}

// First grid: 51 rows of 109 lenslets, the whole array sitting at z = -2
#while (Y_Count < 51)
  #while (X_Count < 109)
    object {
      Lenslet
      translate <-2.5+(X_Offset/2)+(X_Count*X_Offset), (3-(Y_Radius/2))-Y_Offset, -2>
      material {
        texture {
          pigment { color rgbf<1,1,1,1> }   // fully transparent
        }
        interior { ior 1.35 }               // refractive index of the lens material
      }
      no_shadow
      no_reflection
    }
    #declare X_Count = X_Count+1;
  #end
  #declare Y_Count  = Y_Count+1;
  #declare Y_Offset = Y_Offset+0.079131372549019607843137254901961;  // row spacing
  #declare X_Count  = 0;
#end

// Second grid: 50 rows of 110 lenslets, shifted half a lenslet in x and
// about half a row in y so the two grids interleave into a hexagonal packing
#declare Y_Count  = 0;
#declare Y_Offset = 0;
#while (Y_Count < 50)
  #while (X_Count < 110)
    object {
      Lenslet
      translate <-2.5+(X_Count*X_Offset), 2.938-Y_Offset, -2>
      material {
        texture {
          pigment { color rgbf<1,1,1,1> }
        }
        interior { ior 1.35 }
      }
      no_shadow
      no_reflection
    }
    #declare X_Count = X_Count+1;
  #end
  #declare Y_Count  = Y_Count+1;
  #declare Y_Offset = Y_Offset+0.079131372549019607843137254901961;
  #declare X_Count  = 0;
#end
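
For comparison, a convex lenslet (the kind discussed earlier, which would flip each little sub-image upside down) could be built the same way. This is only an illustrative sketch and isn't part of the scene that was rendered:

// Hypothetical plano-convex lenslet, for comparison only.
// Half of the sphere bulges out of the cylinder's front face instead of
// being carved out of it, so each sub-image would come out inverted.
#declare Convex_Lenslet =
merge {
  cylinder { 0*z, (X_Offset/2)*z, X_Offset/2 }
  sphere { <0,0,0>, X_Offset/2 }
}

(merge is used rather than union so there's no visible internal surface where the two transparent shapes overlap.)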

Nice work. Wish I had found this a year ago. How did you figure out the spacing/positioning between the virtual lens array and the camera? There are now cheaper arrays available. This could be a viable production method.

Since I used a virtual orthographic camera the distance didn’t matter.
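
For anyone setting this up themselves, an orthographic camera in POV-Ray is declared roughly as below. The location and the right/up extents here are just example values sized to the ~5 x 4 unit lens grid laid out by the loop code above (a 5:4 view also matches the 4000 x 3200 render), so adjust them to your own scene:

camera {
  orthographic
  location <0, 1, -10>   // anywhere in front of the lens plane at z = -2
  look_at  <0, 1,  0>
  right 5*x              // width of the parallel view in scene units
  up    4*y              // height of the parallel view in scene units
}

With an orthographic camera the right and up vectors set the view size directly, which is why the distance from the camera to the lens array doesn't matter.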

Might be easier to remove some bricks from a wall and put a toy train in the hole.

It's not as hard as it seems. So many people view interlacing (for lenticular or fly's-eye lenses) as really difficult. Well, it's not. All it takes is a little reading and some experimenting with your equipment.