Now where did I put my copy of Unreal Tournament?
Why not build a cluster out of WORKSTATIONS?
Australia's Monash University has just opened an amazing visualisation facility called Cave 2. The facility offers an eight-metre-long, 320-degree wall comprising 80 3D monitors with a combined resolution of 27320 x 3072 pixels. “We spend millions of dollars building supercomputers and then look at the results they produce …
-
Tuesday 19th November 2013 06:33 GMT Anonymous Coward
Timely.
Had IT on my back the other day asking why I couldn't use a generic server for the many-screens system I am looking to install in our art gallery over the summer break. Just sent the relevant manager (who used to be my boss several years ago, so we get on fine, really) a link to this.
(I am not as ambitious as the story in this article - I am looking at driving three full-HD signage screens in our foyer and up to several HD-to-full-HD projectors/monitors in the gallery space proper - the latter will vary depending on which artworks are installed at the time.)
-
Tuesday 19th November 2013 08:15 GMT poopypants
Re: Timely.
Looks like you need the Large Pixel Collider.
-
This post has been deleted by its author
-
Tuesday 19th November 2013 12:11 GMT SecretBatcave
If you're using HP workstations it's probably cheaper to use the DL380s than a beefy Z820.
I know you can get at least one K5000 into the DL380, and I can't see why you won't be able to get two. Supermicro can definitely do it.
As for "servers aren't optimised for graphics" - that's patently bollocks. What do you think most workstations are? Server motherboards in a fancy box.
-
Tuesday 19th November 2013 20:36 GMT Tsunamijuan
Depends on the board setup
Some of the single-processor workstations in the last few years have had a significant advantage over server boards and multi-proc setups.
For example, up until Intel's Ivy Bridge E chips came out, you were better off doing a multi-video-card setup on a single socket 2011 board, since it had a much higher PCIe bus speed: the Sandy Bridge E X79 chipset had PCIe 3.0, versus 2.0 on the servers.
The other big advantage in some situations is how the bus is laid out on the board. Are you sacrificing PCIe density to run management cards and drive controllers, versus a series of big x16 PCIe cards?
To go a step further, depending on the setup and just how much they're saturating the bus, it might be far cheaper to use a high-end desktop board with PCIe bridging and switching capabilities for a dual-card setup than to spend the extra cash on a socket 2011 box.
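For a rough sense of the gap being described, here's the back-of-an-envelope arithmetic (per-direction figures; real-world throughput is lower once protocol overhead is paid):

    # Rough usable PCIe bandwidth per direction.
    # generation: (gigatransfers/sec per lane, encoding efficiency)
    GENS = {
        "PCIe 2.0": (5.0, 8 / 10),     # 8b/10b encoding
        "PCIe 3.0": (8.0, 128 / 130),  # 128b/130b encoding
    }

    def lane_gb_per_s(gt_per_s, efficiency):
        """Usable gigabytes/sec for one lane, one direction."""
        return gt_per_s * efficiency / 8  # 8 bits per byte

    for gen, (gt, eff) in GENS.items():
        print(f"{gen}: x16 slot ~ {16 * lane_gb_per_s(gt, eff):.1f} GB/s per direction")

    # PCIe 2.0: x16 slot ~ 8.0 GB/s per direction
    # PCIe 3.0: x16 slot ~ 15.8 GB/s per direction

So a PCIe 3.0 x16 slot carries nearly twice what a 2.0 slot does, which is why the chipset generation could matter more than the socket count for a multi-GPU box.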
-
Tuesday 19th November 2013 12:28 GMT Grahame 2
Office render farm
A friend at a games company many moons back told me they had an idea: why not give everyone in the office a high(ish)-powered workstation and distribute the render farm through the office using the unused cycles? That way everyone gets a decent workstation and they build the render farm on the cheap.
They deployed said solution. One problem, though: heat. Lots of heat, more than the feeble office aircon could handle, resulting in a lot of sweaty meat bags.
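A minimal sketch of the idle-cycle idea, assuming a shared job queue every workstation can reach (the queue, the threshold and the renderer here are all stand-ins, not their actual system):

    import os
    import queue
    import time

    # Toy stand-in for the office-wide render queue; in practice this
    # would be a network service, not a local object (assumption).
    jobs = queue.Queue()
    for frame in range(10):
        jobs.put(frame)

    IDLE_THRESHOLD = 0.5  # grab work only when 1-min load average is low

    def render(frame):
        time.sleep(0.1)  # stand-in for the real renderer
        print(f"rendered frame {frame}")

    while not jobs.empty():
        load, _, _ = os.getloadavg()  # Unix-only
        if load < IDLE_THRESHOLD:
            render(jobs.get())
        else:
            time.sleep(5)  # the human is using the box; back off

The catch is right there in the loop: "unused cycles" means every box runs flat out whenever its user steps away, so the heat output is much the same as a dedicated farm, just spread across the office.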
-
Tuesday 19th November 2013 13:32 GMT Steve Todd
SETI@Home
Clustering general-purpose/home machines isn't a new idea. For something that needs a lot of GPU work it's not a bad one, or, in the case of SETI, if you've got lots of spare CPU cycles going begging. What they do lack is the resiliency of full server-class machines (no ECC memory, for example).
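On the resiliency point: the SETI@Home/BOINC answer to flaky, non-ECC hardware is redundancy rather than better parts - each work unit goes out to several machines and a result is only accepted once enough copies agree. A toy version of that quorum check (numbers invented for illustration):

    from collections import Counter

    QUORUM = 2  # matching results needed before we trust an answer

    def validate(results):
        """results: answers for one work unit from different hosts.
        Returns the agreed answer, or None if there's no quorum yet."""
        answer, count = Counter(results).most_common(1)[0]
        return answer if count >= QUORUM else None

    # A flipped bit on a non-ECC box yields one stray answer; the
    # matching results from the other hosts outvote it.
    print(validate([42, 42, 41]))  # -> 42
    print(validate([42, 41]))      # -> None (need another copy)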