Hazelcast signs Java speed king to its in-memory data-grid crew

In-memory data-grid specialist Hazelcast has landed the guru behind Java caching framework Ehcache as its chief technology officer. Greg Luck is joining Hazelcast to refine its in-memory data-grid product for enterprises and to develop paid-for packages. Luck is famous for leading Ehcache, the most widely used Java cache …

COMMENTS

This topic is closed for new posts.
Roo
Silver badge

"Hazelcast you can lash together hundreds of nodes to build pools of hundreds of gigabytes of memory with nano-second access."

Are you sure you have got that right? Nano-second access across hundreds of GBs of memory isn't doable in hardware, let alone through a JVM sat on the end of a network connection.

Bronze badge

I'm guessing that should read "with nano-seconds access". In the same way that a Renault 4 can get from 0 to 60mph in seconds. 24 seconds, to be exact (which surprised me when I just Googled it, as it felt longer than that even on a downward slope).


"Are you sure you have got that right? Nano-second access across hundreds of GBs of memory isn't doable in hardware, let alone through a JVM sat on the end of a network connection."

Why isn't it doable? Can you explain a bit more?

Roo
Silver badge

""Are you sure you have got that right? Nano-second access across hundreds of GBs of memory isn't doable in hardware, let alone through a JVM sat on the end of a network connection."

Why isn't it doable? Can you explain a bit more?"

The answer lies in just about any memory chip datasheet you can get your hands on... The round trip of issuing a read request and having it complete is > 1 ns in every part I have looked at. Writes are faster, but again, for dense commodity parts I very much doubt any of them are in the sub-1 ns access range at the pins.

For the sake of argument, even if you did have a huge pile of DIMMs that offered < 1 ns read/write latency at the external pins, you would still have the speed of light to contend with. IIRC electrical signals travel across circuit boards at ~0.3 m per nanosecond... So for such a machine using 12 magic 0.3 ns-latency 16 GB DIMMs (192 GB; the guy did say 'hundreds'), you would need to find a way of packaging and laying out the memory such that every single PCB trace was < 0.2 m end to end.

The good news is that I think the PCB side of things is doable if you can find 16GB DIMMs with guaranteed <0.3ns read-modify-write latency. The bad news is you won't find any DIMMs that can deliver that kind of latency to a piece of code running under a JVM. ;(
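The arithmetic in the comment above can be sketched as a quick back-of-the-envelope calculation (a minimal sketch in Java; the 0.3 m/ns propagation speed and 0.3 ns DIMM latency are the figures assumed in the thread, not measured values):

```java
// Back-of-the-envelope check of the latency argument above.
// All figures are the thread's illustrative assumptions, not measurements.
public class LatencyBudget {
    // Assumed signal propagation speed on a PCB, metres per nanosecond
    // (roughly the speed of light; real traces are somewhat slower).
    static final double METRES_PER_NS = 0.3;

    // Maximum round-trip trace length that fits in a given time budget.
    static double maxTraceLength(double budgetNs) {
        return budgetNs * METRES_PER_NS;
    }

    public static void main(String[] args) {
        // With a 1 ns total budget and a hypothetical 0.3 ns DIMM latency,
        // only 0.7 ns remains for the signal's round trip across the board.
        double remainingNs = 1.0 - 0.3;
        System.out.printf("Max round-trip trace length: %.2f m%n",
                maxTraceLength(remainingNs)); // prints 0.21 m
        // By contrast, a JVM heap access is tens of nanoseconds and a
        // network hop is tens of microseconds - orders of magnitude more
        // than the "nano-second access" claim allows.
    }
}
```

Even under these generous assumptions, the entire memory array would have to fit within a ~0.2 m trace radius, which is the point the comment is making.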
