
GPU Rendering (3D Studio Max question)

I've got another question… what about animations? How do I make Gelato render, say, frames 0 to 200 and then build them up into a 30fps NTSC uncompressed AVI?

Talk soon,
ArkangelFX

Will the Max plug-in handle that for you?

Interesting bit about the speed of Gelato on the 5200. I was playing with Aqsis, then converting the RIB files and running them through Gelato, and even on that older card Gelato was still outpacing Aqsis. Aqsis is pretty darn slow, though, from what I've seen of it.

Hi,
This past weekend I tried Gelato on my 5200 and voilà! 12 seconds for the vase-file render… Gelato seems to run much faster than V-Ray and mental ray. The only thing is that when I export a time segment like 100 to 200, I get each of those frames in a separate .pyg, and I have to render each frame separately using a batch file…
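
In case it's useful, my batch boils down to something like this little Python loop (just a rough sketch: it assumes Gelato's command-line renderer is on the PATH as "gelato", and the frame*.pyg naming is only what my export happens to produce):

    # Render every exported .pyg with Gelato, one frame after another.
    import glob
    import subprocess

    for pyg in sorted(glob.glob("frame*.pyg")):      # assumed naming scheme
        print("rendering", pyg)
        subprocess.run(["gelato", pyg], check=True)  # stop if a frame fails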

Now my question is… once I have the 100 frames in .tiff format, is there any app that can make a 30fps AVI or MOV from my frames? Or how do film studios put it all together?

Talk soon,
ArkangelFX

If I understood correctly, you use Max. When you have a hundred frames named frame0000.tiff, frame0001.tiff, etc., go to the Rendering menu on the menu bar (the menu itself, next to File and Edit, not the Render dialog); all the way down you'll find the RAM Player.

Click it and a new window will open. Click the Open Channel A button, browse to the folder with the frames, click frame0000.tiff, and tick "Sequence" at the bottom of the dialog; it should detect how many frames there are. Click Open and it will ask how much memory to use (I never change it and keep the defaults). It loads the animation into memory, and from there you click Save and save it as an .avi or .mov or whatever you like :)

(Don't know if the explanation is really clear, hehe… if not, let me know :) )
Hope this helps :)

Is there something like this for ATI cards as well?
Greets

First of all, thanks very much for the advice! I think there are some other solutions for GPU rendering on ATI cards, but I couldn't find out how… Besides that, ATI and NVIDIA both sell graphics cards designed for non-realtime rendering…

Talk soon,
ArkangelFX

Would QuickTime (Pro) be able to do that?

Just something that I have always wondered.

Don't really know… I just tried Vegas 6, but when I import the .tiffs and place them one after another… each frame lasts like 2 seconds… :/

Talk soon,
ArkangelFX

PS: I've tried another GPU renderer… RTSquare… but the quality is really poor!

Sounds more like a slideshow maker than a video encoder.

I did some searching and came up with mencoder.

http://web.njit.edu/all_topics/Prog_Lang_Docs/html/mplayer/encoding.html
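
From those docs, turning a numbered TIFF sequence into a 30fps AVI looks roughly like the call below. Untested sketch, driven from Python here; the frame pattern, the mjpeg codec choice, and TIFF support in the mf:// reader are my assumptions, so check the manual (converting the frames to PNG first is a safe fallback).

    # Hand a numbered image sequence to mencoder and encode a 30fps AVI.
    import subprocess

    subprocess.run([
        "mencoder",
        "mf://frame*.tiff",           # mencoder's image-sequence input
        "-mf", "fps=30:type=tiff",    # play the stills back at 30 frames/sec
        "-ovc", "lavc",               # encode the video with libavcodec
        "-lavcopts", "vcodec=mjpeg",  # motion-JPEG keeps per-frame quality high
        "-o", "output.avi",
    ], check=True)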

Edit: maybe you could ask about QuickTime here… http://discussions.apple.com/category.jspa?categoryID=122

Thanks for that tip! I'll give mencoder a try when I have some time…
I still need to figure out how to solve my rendering-time problem…
The thing is that I don't want to lose quality, but 30 hours for a 5-minute clip is outrageous…

Talk soon,
ArkangelFX

Having Max generate all of those .pyg files might actually work well for you if you really want to decrease your render time. You could easily put a couple of half-decent NVIDIA cards in a couple of machines and run half of the .pygs on one and half on the other.
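
Splitting them is easy to script, something along these lines (a sketch only; the frame naming and the even/odd split across exactly two boxes are assumptions):

    # Deal the exported .pyg frames out to two render boxes by parity.
    import glob

    pygs = sorted(glob.glob("frame*.pyg"))
    with open("machineA.txt", "w") as f:
        f.write("\n".join(pygs[0::2]))  # even-numbered frames for box A
    with open("machineB.txt", "w") as f:
        f.write("\n".join(pygs[1::2]))  # odd-numbered frames for box B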

I'm really curious about how you end up solving this, so if you don't mind, I hope you keep posting updates.

There's a li'l problem with that… I live in Argentina… and a decent machine would be, what, a Pentium 4 3.0GHz / 1GB DDR2 / GeForce 6600 GS? That costs like US$700 here in Argentina… or more!

I still have to think a lot about how I'm gonna solve it, and make sure the solution I pick is the one that best fits my problem… it's a lot of money to spend just to give something a try…

Actually, I'm thinking of migrating to Maya and using RenderMan to render on loads of machines (cheap ones… like US$100 each), or even using mental ray distributed rendering… dunno yet :/

Talk soon,
ArkangelFX

Max is capable of net rendering as well, I think, with Backburner. Don't know if that is maybe a solution; it comes for "free" with Max 8.

Yeah… sure, Max has net render… and Max 8 also comes with mental ray, so mental ray's distributed rendering also comes for free… but the thing is that if I render a 5-minute clip in 30 hours… with 2 machines it'll take 15 hours… and I need it to take less than 10 hours…
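(Doing the math: assuming near-linear scaling, N machines take roughly 30/N hours, so I'd need at least four machines this fast to get under the 10-hour mark.)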

Talk soon,
ArkangelFX

Well, finally I've got something going on here!
I've tested GPU rendering on ATI and NVIDIA cards using Gelato and RTSquare, and you know what… they suck! It's really time-consuming to batch render 2000 frames that take 1 minute each using those, since you need extra time to script the batch for those 2000 frames and the render time isn't really that different… (that's for NVIDIA). As for ATI, I tested RTSquare and the quality sucks… it's really awful…

So I started a little render farm with some comps at work:
Workstation: Pentium D 2.8GHz / 1GB DDR2 / ATI X550 / 160GB ×2 (fast RAID)
Slave 1: Pentium D 2.8GHz / 1GB DDR / 80GB ×2 (fast RAID)
Slave 2: Duron 800MHz / 256MB DIMM / 40GB IDE

And the times really decreased!! I'm so happy… maybe in the near future I can get my hands on some dual Xeon or dual Opteron… but they're toooo expensive here in Argentina!

Talk soon,
ArkangelFX

Watch your frames… If you render on different CPUs you will get slightly different results due to ever-so-slight differences in floating-point rounding. I had a massive Intel/AMD problem lately… so stick to one type when you build a render farm. The worst result I had was with a bucket renderer where every second bucket looked different; the frames looked slightly tiled after that. I spent quite a while looking for the root of the problem, as every renderer maker swears this does not happen. Of course it does happen!
I would go so far as to bet that the same scene rendered with ZB on a Mac WILL differ from the same pic rendered on a PC. Hmmmmm, would be worth trying… just for 'laughs'.
Lemo

It's always recommended, as I have read, to use at least the same sub-architecture… like AMD or Intel… it doesn't really matter which AMD or which Intel, as long as they are the same brand…

The render is supposed to be exactly the same, and I'll tell you why…
It's just math… 1 + 1 = 2… Intel and AMD give exactly the same results for those computations… and since the rendering engine tells the CPU which calculations to make… the results should be the same on any CPU you choose…

Talk soon,
ArkangelFX

Not true when you're dealing with floating-point numbers. There are small differences that begin to add up, and they can affect your calculations.

And the rounding mechanics… ALUs are notoriously complex. And actually, all the results are "correct". But if you have a rounding error way behind the decimal point, and you run through the calculations many thousands of times, a certain value starts to add up slightly differently, resulting in a different value for the color or whatever the render engine was trying to figure out. The binary representation of floating-point numbers and their natural habitat are an entertaining art form in themselves. Having programmed in assembler in my former life, I can only recommend appreciating the render results and otherwise staying far, far away from the matter for the sake of mental health.
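
If you want to see the effect without touching assembler, here's the classic two-liner (Python, but nothing render-specific about it): the same three numbers summed in a different order round to different IEEE-754 doubles, and a different CPU code path effectively changes the order and intermediate precision of millions of such operations per frame.

    # Same values, different association order, different rounded result.
    a = (0.1 + 0.2) + 0.3
    b = 0.1 + (0.2 + 0.3)
    print(a)       # 0.6000000000000001
    print(b)       # 0.6
    print(a == b)  # False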
Lemo

I've tried out mental ray's renderer and I ended up using Backburner :/
It's really cool, since it can run as a service and users don't even notice their computer is rendering… so I installed 3ds Max on every single machine at my office… and guess what? IT'S AWESOME!!! It runs great! Really low rendering times… less than 3 hours!!! :o And the difference between frames is unnoticeable! I'm really happy with the results!

Talk soon,
ArkangelFX

Well… I need to buy 2 new comps for net rendering now… which would be the best buy? Btw, Opterons here in Argentina are extremely difficult to get.
Hope to hear news soon…

ArkangelFX