RNA Networks - a startup based in server development hotbed Portland, Oregon - has launched a stack of systems software that provides memory virtualization and pooling for servers that are connected by a network. While most server virtualization tools aim to carve up a single box into multiple virtual machines with their own …
"This overcommits memory by 800 to 1,000 per cent, and then along comes a peak workload - what now?"
Whilst I can see the advantages when you have an app with peaky but infrequent memory needs, what happens when more than one server needs that peak memory at the same time? Whilst they slag off virtualisation engines that overcommit memory, sharing memory has the same problem when more than one server hits its peak - then you're back to waiting on swap, etc. What's needed is, as was pointed out earlier in the article, faster memory technologies and wider buses to on-chip memory controllers. Or maybe plug-in chips made entirely of L3 cache that could populate every second CPU socket - you'd lose half the CPU capacity but get a massively increased amount of speedy L3 cache.
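The contention worry above can be made concrete with a toy model: a shared pool sized to absorb one server's peak cannot absorb two simultaneous peaks. All figures here are made-up assumptions for illustration, not anything from the product:

```python
# Toy model of the shared-pool contention problem: a pool sized for
# one server's peak cannot satisfy two simultaneous peaks.
# All sizes below are illustrative assumptions.
POOL_GB = 40        # shared memory pool
BASELINE_GB = 8     # each server's steady-state need
PEAK_GB = 32        # each server's peak need

def pool_can_satisfy(n_peaking: int, n_idle: int) -> bool:
    """True if the pool covers n_peaking peaked servers plus n_idle idle ones."""
    demand = n_peaking * PEAK_GB + n_idle * BASELINE_GB
    return demand <= POOL_GB

print(pool_can_satisfy(1, 1))  # True  - one peak fits alongside one idle server
print(pool_can_satisfy(2, 0))  # False - two peaks, and you're back to swapping
```

The moment demand exceeds the pool, someone is paging - exactly the overcommit failure mode the quote complains about.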
Sounds like an NFS-mounted swap partition
Jeez, back in 1987 I had a Sun 3 with no local disk, so it had to page over (10 Mb/s) Ethernet. Isn't this substantially the same thing (but with faster networking)? Try telling the server owner that he has to trade nanosecond-scale RAM latency for millisecond-scale LAN latency.
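The scale of that trade-off is easy to put in numbers. As a back-of-the-envelope sketch (the ~100 ns DRAM access and ~1 ms LAN round trip are illustrative assumptions, not measurements of this product):

```python
# Order-of-magnitude comparison: local DRAM access vs. a LAN round trip.
# Both latency figures are rough illustrative assumptions.
DRAM_LATENCY_NS = 100        # ~100 ns for a local memory access
LAN_RTT_NS = 1_000_000       # ~1 ms round trip over the network

slowdown = LAN_RTT_NS / DRAM_LATENCY_NS
print(f"remote page-in is ~{slowdown:,.0f}x slower than local DRAM")
```

Even with modern low-latency interconnects shaving that round trip down, remote memory stays orders of magnitude behind local DRAM.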
Cache isn't the answer
I may be a bit out of date on my CPU theory, but I seem to remember that caches can only get so big before it takes too much time and/or electronics to be effective. The real answer is faster RAM.
Where do I start
1) Wot's a transaction, then? What exactly are you measuring here, and do you mean (as is often meant when transactions come up) a commit to persistent backing store? Difficult at microsecond timescales.
2) "...costs between $7,500 and $10,000 per machine...". And I just bought 2 gig of server memory for £25. So, 20 * £25 = £500 on the nail for 40 gig of memory - inc. tax and delivery, retail not bulk. About $710.
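The arithmetic in point 2 checks out. A minimal sanity check, assuming the quoted £25-per-2-gig price; the exchange rate is back-derived from the poster's own "$710" figure rather than sourced anywhere:

```python
# Sanity check of the commodity-memory cost comparison above.
# Assumes the quoted £25 per 2 GB module; the ~1.42 $/£ rate is
# back-derived from the poster's "$710" figure.
MODULE_PRICE_GBP = 25
MODULE_SIZE_GB = 2
TARGET_GB = 40

modules = TARGET_GB // MODULE_SIZE_GB        # 20 modules
total_gbp = modules * MODULE_PRICE_GBP       # £500
total_usd = total_gbp * 1.42                 # ~$710

print(modules, total_gbp, round(total_usd))  # 20 500 710
```

Roughly a tenth of the low end of the quoted per-machine price, which is the poster's point.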
3) Native mem is going to be a touch quicker than sucking it off a remote machine.
5) Or you could consider having machines with different sized memories and moving the *application* around instead to follow the workload.
6) I give up.
No more of these silly articles for me.
RE: Sounds like an NFS-mounted swap partition
More like DEC's old Memory Channel technology from the Alpha clusters. With NFS-mounted swap, the lag would largely have been down to the disk at the far end. I suppose the idea of sharing memory over the LAN has merit for groups of servers with intermittent memory peaks and very fast LAN links (10GbE? - not cheap), but it seems expensive compared to just boosting the main memory in the first place.
I suppose one way to implement it would be to have a shelf of app blades loaded with memory, plus a single-CPU rack server - one which can take more memory but does little else - sitting to one side as the memory bank the blades dip into.
It's great to see someone resurrecting and productising distributed memory, a concept that's been around for donkey's years.
@Matt B: DEC Memory Channel would be more akin to, say, RDMA over Infiniband, which if I read correctly, is just one piece of this stack.
At that price point, the use cases for this will remain slim and are basically identical to those of main-memory databases, i.e. transactional systems with large working data sets. Although I can also see possible uses from the star-join OLAP crowd.
Ultimately this stuff just forms part of a hypervisor and permits what amounts to main-memory lending between VM hosts. The cloud computing people will love being able to have a memory rack next to their CPU rack.
And thus we reinvent the mainframe ...