In the latest we-can-render-bigger-special-effects-movies-than-you vendor story, HP is going for the Oscar nomination with DreamWorks' Monsters vs. Aliens. Yet networked HP storage was rejected because it slowed rendering down. IBRIX Fusion software was used instead. Monsters vs. Aliens made tremendous demands on the movie- …
Double?! I doubt it...
Regarding the doubling of rendering requirements for stereo: if that's true, they're dumb -- and I doubt it. In reality, you take advantage of coherence. The geometry is basically the same; only the camera position changes. Doubling the computation is dumb and a waste, but of course it's not free either, so the real cost is somewhere between 1x and 2x. Separate the yolk from the white and don't compute on both.
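A back-of-the-envelope cost model shows why the total should land somewhere between 1x and 2x: scene setup and geometry are shared across both eyes, while view-dependent work has to be repeated per camera. All the numbers here are illustrative, not real render figures.

```python
# Hypothetical cost model for stereo rendering: shared work is paid once,
# view-dependent work once per eye. All figures are made up for illustration.

def render_cost(shared_hours, per_eye_hours, eyes=1):
    """Total render cost when geometry/scene prep is reused across eyes."""
    return shared_hours + eyes * per_eye_hours

mono = render_cost(shared_hours=30, per_eye_hours=70, eyes=1)    # 100
stereo = render_cost(shared_hours=30, per_eye_hours=70, eyes=2)  # 170

# Stereo costs more than 1x but less than 2x the mono render.
print(stereo / mono)  # 1.7
```

The larger the shared fraction, the closer stereo gets to 1x; if nothing is shared, it degenerates to the full 2x.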
It will be news when...
Dreamworks do what many other post-production houses already do with a clustered file system like SGI's CXFS.
"audiences will experience monsters that move like liquid, whose arms and mouths disappear, and whose bodies are transparent...[It] required more than 40 million computing hours to make – more than eight times as many as the original Shrek and nearly double what it took to create Kung Fu Panda."
CGI/3D/whatever may make wonderfully fluid blobs and impressive battle cruisers, but when faced with animating non-blobby, non-cruisery things they still fail miserably.
Ed Leonard, chief technology officer at DreamWorks Animation, said: "HP’s unique ability ..." blah blah blah.
There's nothing unique about it. What probably IS unique, is the kickback from HP for vomiting such turgid nonsense in a canned statement.
SAN doesn't scale
For these data-intensive apps -- "DISC", to use the current acronym, with Google MapReduce and Apache Hadoop as implementations -- SAN isn't what you want. It just creates bottlenecks (the maximum shared bandwidth is the SAN network speed), and adds to complexity and datacentre cost/bootstrap times.
By moving to separate disks you gain a lot of IO bandwidth, at a price: disks fail, and when they do your stuff is lost. You need a distributed filesystem to store copies on other machines, and you need to expect and handle failures. You also need to design your scheduler to do work near the data (same server best, same rack second best), and design your application so that if the work gets restarted it doesn't do any harm.
This doesn't mean that SAN doesn't have a place -- it is great for enterprise systems where you don't have that scale, where you want to move VMware images around to give the illusion of stable web sites. It is just that at the scale of high-end datacentres today, new applications are being written that need fewer of the datacentre hardware details hidden by the hardware and the OS. If you code for a massive datacentre with unreliable hardware, your app works without extra features in the datacentre.
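The "do work near the data" rule above can be sketched as a simple placement preference, in the spirit of Hadoop-style schedulers: node-local beats rack-local beats remote. The node and rack names here are made up for illustration.

```python
# Minimal sketch of rack-aware task placement. Cluster layout and names
# are hypothetical; real schedulers (e.g. Hadoop's) are far more involved.

def locality_rank(task_replicas, worker, rack_of):
    """Return 0 if the worker holds a replica (node-local),
    1 if a replica sits on the same rack (rack-local),
    2 otherwise (remote read over the core network)."""
    if worker in task_replicas:
        return 0
    if any(rack_of[r] == rack_of[worker] for r in task_replicas):
        return 1
    return 2

rack_of = {"node1": "rackA", "node2": "rackA",
           "node3": "rackB", "node4": "rackC"}
replicas = {"node1", "node3"}   # nodes holding this task's input blocks

print(locality_rank(replicas, "node1", rack_of))  # 0: node-local
print(locality_rank(replicas, "node2", rack_of))  # 1: rack-local
print(locality_rank(replicas, "node4", rack_of))  # 2: remote
```

A scheduler would simply prefer the idle worker with the lowest rank, which is exactly why replication plus local disks avoids the shared-pipe bottleneck of a SAN.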
3D Raytracing basics
I'm afraid they are quite correct on the matter of doubling render time for stereoscopics.
Yes, the geometry of the objects has not changed, but the camera position has, and no matter which way you cut it, that will require a full frame render, independent of the other eye's angle.
Where you may be getting understandably confused is with the common 'tricks' and 'shortcuts' used in software and hardware for realtime 3D effects, as commonly used in DirectX and OpenGL. While they can work pretty well, they will still leave telltale glitches.
The best example I can give is anti-aliasing (the feathering of an object's outline to remove "jaggies").
In realtime 3D, the code in hardware/software to do this, while light years beyond my rudimentary programming abilities, is crude, to put it mildly. This is because these algorithms rely a great deal on guesstimating what the final output should look like.
Most raytracing software will re-render huge portions of the image to achieve this. In its simplest mode, the ray tracer will 'jostle' the virtual camera about and re-render the entire frame several times, then merge these multiple images together. Most software is now optimised to re-render only the aspects of the image that will significantly change, or to detect and render only the elements of the image with noticeably high-contrast changes between pixels.
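The camera-jostling approach described above is stochastic supersampling: evaluate the same pixel several times at jittered sub-pixel positions and average the results. A toy version, using a hypothetical one-line "scene" (a hard edge) standing in for a full ray tracer:

```python
import random

# Toy stochastic supersampling. The scene function below is a stand-in
# for a real ray tracer: a hard black/white edge at x = 0.5, which would
# produce jaggies if each pixel were sampled only once.

def scene(x, y):
    return 1.0 if x < 0.5 else 0.0

def supersample(px, py, samples=16, rng=random.random):
    """Average the scene over jittered sample points inside pixel (px, py)."""
    total = 0.0
    for _ in range(samples):
        # jitter the sample point within the pixel's footprint
        total += scene(px + rng(), py + rng())
    return total / samples

# A pixel straddling the edge averages out to a soft grey instead of
# snapping to pure black or white.
print(supersample(0.0, 0.0))  # somewhere between 0.0 and 1.0
```

Each extra sample is a full evaluation of the scene, which is exactly why quality anti-aliasing multiplies the rendering cost rather than coming for free.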
The problem is, there are limits to this predictive ability, because ray-tracing a photo-realistic image is highly unpredictable on a pixel-by-pixel basis. Reflection, diffusion, specularity, refraction, luminosity, radiosity, lens distortion, motion blur, volumetrics and even lens flares are impossible to predict ACCURATELY without going through all the donkey work of actually calculating them.
Moving the camera by a few inches throws every single one of the prior calculations out the window. Hence: two eyes, two angles, two renders. Yes, you can try to cheat, but the difference in quality shows.
25 years on, and this is the singular lesson that has been well learnt by the ray tracing industry. You want quality? Don't take shortcuts, just throw more horsepower at it.
I often smile when I hear this. I'm afraid to say that you have been tricked many times by CGI; you've only spotted the shoddy workmanship. Just as every single image in a magazine has been photoshopped, every major movie in the last ten years contains a very large number of CGI elements that you don't even notice.
An extra car here, the filling out of a crowd there, a few extra helicopters, explosions, and the replacement of stuntmen. Everybody likes to think they can instantly spot every bit of CGI in a movie, and everybody is wrong. Yes, your eye picks up on the CGI human bodies when they perform physically impossible feats, but what you don't spot is where the human begins and the CGI ends.
Here's a quick game for you. Can you tell me which explosions in 24 are real, and which ones are digital? Go on, just for fun, see if you can draw up a list. I played this one on a friend while we watched a season back-to-back.
He was gutted when I threw his carefully drawn up list in the bin and explained they ALL were.
Yeh yeh, but does it have enough juice to boot Vista?
Bet you're great fun at dinner parties.