Who the hell uses PCI anymore?
288 GB/s of bandwidth, but only connecting to the host at 133 MB/s!
If you think you can build the next YouTube, Facebook or Google, you'll need to invest heavily in artificial intelligence and GPU-accelerated engineering to get any kind of edge over rivals. And Nvidia just so happens to have some hardware for you, or so it says. The graphics chip giant has revealed a Tesla M40 GPU …
The quoted transfer speed is for the GPU's internal memory bus. That bus is 384 bits wide, and the GDDR5 memory runs at an effective 6 Gbit/s per pin, so - yes - it will fly. Host-to-device transfers (that is, CPU-to-GPU memory copies) have to go over the PCI Express bus, and so will be much slower - though on this card that's PCIe 3.0 x16 at roughly 16 GB/s, not the 133 MB/s of legacy PCI.
You all might like to know that gcc 5 supports offloading to the GPU for general C/Fortran code using OpenACC directives.
This would permit ffmpeg (from Debian, Red Hat etc...) to be accelerated *without* dependence on one type of GPU...
And yes, for wobbling molecules...
P.