Sometimes you have to give to get. And Terracotta, which specializes in caching programs for Java applications, has decided to give away a freebie version of its BigMemory in-memory caching appliance to try to expand its base of customers who are willing to pay for the full-on and much more scalable version of the product. …
"A typical Java heap size is on the order of 2GB, with sometimes 4GB or 6GB sizes; beyond that and Java's memory management starts to thrash."
Is this a sponsored story? This is pure weapons-grade balonium. The first post I could find here is old, and they happily settled on a 100GB heap on a large server ->
On very old JVMs, where heap fragmentation was an issue, this would be true (untuned). On pretty much any modern JVM it's simply untrue.
I wonder if they mean GC pauses. In certain circumstances with a very large heap (say, the 30GB in the article), you can see a GC pause of a minute or two every so often when it needs to be GC'd. Only in certain circumstances, though. At that level, you need to start understanding/profiling your heap usage and tuning the garbage collection.
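As a rough illustration of what "tuning the garbage collection" means in practice: these are standard HotSpot flags, but the heap size and pause-time goal are assumptions you would adjust against your own GC logs, not recommendations.

```shell
# Illustrative starting point for taming pauses on a large heap (values are guesses).
java -Xms30g -Xmx30g \                 # fixed-size heap avoids resize pauses
     -XX:+UseG1GC \                    # region-based collector designed for big heaps
     -XX:MaxGCPauseMillis=200 \        # pause-time goal (a hint, not a guarantee)
     -verbose:gc -XX:+PrintGCDetails \ # log what the collector actually does
     -jar app.jar
```

The logging flags are the important part: you profile first, then tune.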
The big selling point when I used the Terracotta product before was the ability to have more memory in use than was available on any one server. So you could keep huge object graphs in memory, and they would be split across the cluster and serialised to disk as a backup. It also propagated thread messaging across the cluster (wait()/notify()). Very pretty. That was 3/4 years ago; no idea what this BigMemory thing is or if it's related.
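For anyone who hasn't used it: the point was that you wrote plain single-JVM wait()/notify() code like the sketch below, and classic Terracotta DSO would (as I recall) cluster the lock transparently, so a notify could wake waiters on other JVMs. The class and names here are just an illustration, not Terracotta API.

```java
// Minimal sketch of standard Java wait()/notify() coordination.
// In one process this is ordinary monitor handoff; under old-style
// Terracotta DSO the same code could be clustered transparently.
public class HandoffDemo {
    private final Object lock = new Object();
    private boolean ready = false;
    private String message;

    public String await() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {   // loop guards against spurious wakeups
                lock.wait();
            }
            return message;
        }
    }

    public void publish(String msg) {
        synchronized (lock) {
            message = msg;
            ready = true;
            lock.notifyAll(); // wake every thread blocked in await()
        }
    }

    public static void main(String[] args) throws Exception {
        HandoffDemo demo = new HandoffDemo();
        Thread producer = new Thread(() -> demo.publish("cache warmed"));
        producer.start();
        System.out.println(demo.await()); // prints "cache warmed"
        producer.join();
    }
}
```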
"upgraded in 2010 to be Level 2 cache for Hibernate" ?
Huh? It originated in 2003 and has been an L2 cache for Hibernate since its very early days.
"Patent pending" though
I am not sure if the community can be won over as easily, especially in the context of http://patents.stackexchange.com/questions/311/to-what-extent-is-a-java-off-heap-management-patent-applicable