There are a number of different annoying things about the fanless GPU co-processors that Nvidia and Advanced Micro Devices are peddling as adjunct computational engines for servers. First, these Tesla and FireStream co-processors, as their lines are respectively known, cost more than regular GPUs. And they also have a different …
Close but no cigar
1. If it is to really do "multiple jobs per lifetime" it needs a DisplayPort or DVI. Both can be converted to HDMI if need be while driving "proper" displays too.
2. If it is to use peripheral airflow properly it needs the plate on top of that fin array extended to the end, to ensure that airflow goes where needed and goes out of the back. The way it is done at the moment, airflow will skim across its top and not cool it properly, so welcome to thermal throttling. There are probably even more clever options, but this one is bleeding obvious.
3. You get 30cfm at a PCIe slot in a "fan-always-on-full-blast" compute unit like the MSI 2U jobs people use for rendering. You do not necessarily get 30cfm in a "proper" multi-purpose server, because the fans are PWM controlled and will throttle down to a fraction of their usual speed when they can. So a more clever heatsink, plus an optional clip-on controlled fan, is actually not such a bad idea.
Close, but no cigar.
No need for obsolete DVI connectors....
You can get a really cheap HDMI to DVI cable or adapter if you have an old monitor without HDMI inputs; HDMI has more capabilities than DVI (it includes audio), so it makes no sense to include the more limited interface when cheap adapters are available. In addition, pretty much all current LCD monitors have HDMI inputs, at least here in the US, and since HDMI also carries audio you do not need any additional cables if you can live with the speakers built into your monitor.
DisplayPort is Apple only - I have yet to see a normally priced monitor or large-screen TV which supports it, so the question is: why bother? Yes, it is nice, but HDMI is by far the volume leader and supports the higher resolutions as well.
I have a hard time believing that the inability to use a server GPU as a video card is what is hindering adoption. More likely, the difficulty in porting legacy applications to make full use of the hardware is to blame.
Or flat out fear of the unknown?
I'm currently trying to encourage various researchers to look at GPGPU for various applications which should benefit from the technology, but most of them are scared stiff if it ain't x86.
This is _despite_ gpgpu modules being available for the software packages they use.