One of the most interesting things I saw at SC11 was a joint Mellanox and University of Valencia demonstration of rCUDA over InfiniBand. With rCUDA, applications can access a GPU (or multiple GPUs) on any other node in the cluster. It makes GPUs a shareable resource and is a big step towards making them as virtualisable (I don’t …
this is big news
Some may say "meh" since remote access might imply long latencies, but this is InfiniBand: its latencies are only a little more than 1 microsecond. I'm still curious about the other overheads in such a system employing rCUDA, though.
For comparison, a single context switch in the Linux kernel on a modern Intel x64 chip is in the range of 2-3 microseconds, so we are indeed talking about low-latency access to a remote computing resource here. This makes CUDA practical for a whole range of new purposes.
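The latency figures in the comments above invite a quick sanity check. Here's a minimal back-of-envelope sketch; the 40 Gb/s QDR link rate is my assumption (typical InfiniBand hardware around SC11), not a number from the thread, and it ignores rCUDA's own protocol overhead, which the article doesn't quantify.

```python
IB_LATENCY_US = 1.0        # one-way InfiniBand latency, ~1 microsecond (per the comment above)
QDR_BANDWIDTH_GBPS = 40.0  # assumed QDR InfiniBand link rate, in Gb/s

def remote_copy_us(nbytes: int) -> float:
    """Rough time to move nbytes to a remote GPU: wire latency plus
    serialization time at the assumed link rate."""
    wire_us = nbytes * 8 / (QDR_BANDWIDTH_GBPS * 1e3)  # Gb/s -> bits per microsecond
    return IB_LATENCY_US + wire_us

print(f"{remote_copy_us(1 << 20):.1f} us for 1 MiB")  # ~210.7 us
print(f"{remote_copy_us(64):.3f} us for 64 B")        # ~1.013 us
```

The takeaway matches the commenters' intuition: for bulk copies the ~1 microsecond wire latency is negligible next to serialization time, and it only dominates for tiny messages, where it is still comparable to a local context switch.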
Esteemed Author» To me, the future of computing will be much more heterogeneous and hybrid than homogeneous and, well, some other word that means ‘common’ and begins with ‘H’.
Point 1: 'Heterogeneous' doesn't mean 'common'. It's posh for 'different' and, generally, things that are different have much in common, or they wouldn't be different. Except in Orwell.
Point 2: It seems to me that, with the Clouds looming darkly on the horizon, computing is becoming much more homogeneous and much less heterogeneous. Modern-day mainframes are back.
It's probably all cyclical, and we'll be back to something akin to the diversity of computing that existed in the early 1980s once the mainframe mentality comes to be realistically challenged.
I'm wondering how long it will be before the entire virtual-memory mapping of a modern desktop CPU is completely subvertible by code loaded into its GPU. Or indeed whether that's happened already, and whether, soon, such programs will be loadable by any entity with network access to the system.
Hoping that's a black-helicopter speculation. Fearing otherwise.
I know what this is about
Bitcoin app in 3...2...1...