HP has revealed a little more about its "Project Kraken" in-memory system, which it is cooking up in conjunction with the engineers at SAP. It's talking about a future in which there are lots of scale-out servers like its Project Moonshot systems at one end of the spectrum and big-memory systems like Kraken at the other – with not as …
Database numpty question time
Why does the entire DB need to be in memory, as opposed to just the hottest parts or indexes or memory mapping it or any of the other locality & cache & 80/20 notions that we all teethed on? Is the data being used in such a chaotic way that true random access is the only hope?
<=== I feel such a blonde
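For the curious, the "memory mapping" alternative the question alludes to can be sketched in a few lines of Python. This is a toy illustration of the traditional locality bet, not anything HANA actually does; the file name and sizes are made up:

```python
import mmap
import os
import tempfile

# Create a dummy "database file" -- standing in for a large table file.
path = os.path.join(tempfile.mkdtemp(), "table.dat")
with open(path, "wb") as f:
    f.write(b"x" * 4096 * 16)  # 16 pages of dummy records

with open(path, "rb") as f:
    # Map the whole file; the OS only faults in the pages we actually touch,
    # so cold data never occupies RAM -- the classic 80/20 locality bet.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    hot_record = mm[4096:4160]  # touching one slice pulls in ~4 KB, not the file
    mm.close()
```

The in-memory pitch, by contrast, is that analytic queries touch so much of the data so unpredictably that there is no stable "hot" subset to cache.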
Re: Database numpty question time
Shush!! The salesgrunts will hunt you down and cry on your shoulder if you carry on! They need to sell all that lovely memory, it's so much more expensive than just selling the customer half that much memory and the rest as disk.
Re: Database numpty question time
Salesdroid here, but trying to be both sensible and honest:
The idea is to let users run more searches with more variables and get faster response times than ever before. More advanced searches can give you immediate insight into data gathered seconds ago. Having older data in memory as well allows you to incorporate trends etc. into your reports.
Some get 1,000x, or even more than 100,000x, faster response times from their reports compared to traditional methods.
For some it is useful in decision making, for others it's just a feature. If you are a utility provider, being able to monitor end-customers' usage in real time or near-real time could help you make the most profitable decisions regarding your production.
When everybody is talking about the possibilities of big data, think of SAP HANA as a "small big data" solution, allowing you near-instant reports on the data you already have.
Re: Database numpty question time
Because it uses all the information and it doesn't want cache roll?
Because it has a massive number of cores and they're doing vector-processing-type parallel jobs where the time to complete is bound by the worst-case scenario?
in mem vs ssd
HP really thinks there will be a lot of folks out there doing large-scale stuff in memory in single images? Seems strange. I'd think most expect that to be done with fast SSDs, and that systems using large amounts of memory would instead be some sort of distributed system.
Does anyone know if SAP has, or is building, an alternative to this in-memory system that runs on SSD instead? I'm not familiar with the platform myself, but it strikes me as an older product line, developed long before SSD.
Also, the DL980 is an 8-way system, not 4-way. Perhaps SAP only supports 4 sockets on the 980, though I doubt it: the DL980 tops out at 4TB, which, given that the memory controllers are on the CPUs, would mean all 8 CPUs are required.
Flash is the way to go
Our SAP image is plenty small enough to put into all-flash. The really big DB is in our data warehouse... er, sorry, it's called a "Big Data System" now.
The DL980 is two four-socket boxes tied together with two glue chips. The problem is that the I/O is tied to the processors, so it is unbearably slow to get from one processor through the glue chip to the other processor and finally out to the network.
It will be interesting to see when HP actually gets this system out the door. Ivy Bridge won't arrive until Q4 at the earliest, and HP never ships anything before October 31st, so maybe 1Q '14. Or will this be like the Integrity schedules, which never had any integrity: after being three months late, HP would finally admit it would be another year before they came out (i.e. Tukwila: it was not DDR3 that held up that product for 18 months).
Cheers... off to Cancun again this weekend. I need summer after moving to NY.
Re: Allison Park: Flash is the way to go
".....The DL980 is two four socket boxes tied together with two glue chips. The problem is the I/O is tied to the processors so it is unbearable to get from one processor thru the glue chip to the other processor then finally out to the network....." The IBM FUD on the DL980 has had years to ferment, given that IBM can't make x64 above four sockets. And why is it that modular systems, glue chips and crossbar switches are only a problem in non-IBM systems, and NEVER in IBM systems like the Power servers? LOL!
".....Will be interesting to see when HP actually gets this system out the door....." Don't worry, Alli, by then IBM will have sold their failing x64 server biz to Lenovo and retreated to the mainframe market, and then all you'll have to worry about is FUDding NonStop, ClearPath and BS2000.
"Survival of the thickest"
SAP: Make our system perform better, give us more memory
Unix: Tune your f**king bloated application better
SAP: But it's easier to add more memory
Unix: And it's expensive and a short-term fix
SAP: We don't understand the principles of tuning
SAP: Not enough time
Unix: Delay the implementation then
SAP: We already promised this as a solution without checking with you.
Unix: That will be 1 jillion dollars for new hardware
SAP: Our ego is worth 10 x that, sold!
Unix: Those who can, do; those who can't, SAP
(Variations of this conversation used to occur monthly at my last employer - I'm the Unix guy BTW)
If you want to name it after dark things whose awakening brings despair, why not go the whole way and call it "Project Cthulhu"?
Copyright expired in 2008, so that's not what keeps them back.
"...adding that it is difficult to find a single database with 6TB, 9TB, or 12TB of data."
Really? We have several. And it doesn't stop at 12TB either. I guess that means HP has lost any reasonably sized customers.
big working databases
" .... that it is difficult to find a single database with 6TB, 9TB, or 12TB of data."
Urrr -- I have a cluster or six around here supporting at least 10 DBs of that size, and several of those are copied out to "QA and Testing" environments.
And putting the entire DB in memory STILL won't help if the app is crap.
It's Friday of a long weekend, after a long night beating on a problem.
Re: big working databases
The new Oracle M5 SPARC server has 32TB of RAM and 32 CPUs. With HANA compressing the data, the M5 would be capable of handling 64TB in RAM. Would that suffice for your needs? (Fujitsu also has a 32TB RAM server, the M10-4S, but it has 64 CPUs.)
"This is a much larger memory footprint than most relational databases have today, says Miller, adding that it is difficult to find a single database with 6TB, 9TB, or 12TB of data."
No. No it isn't....
Big SAP Databases
In this context he may well be talking about big SAP databases.
There aren't that many SAP DBs over 12TB.
The beauty of having your DB in memory is fast reporting without having to set up a dedicated data warehouse, reducing server count and storage requirements.
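To illustrate the point, here's a toy in-memory database doing an ad-hoc report directly, with no separate warehouse load step. SQLite's `:memory:` mode stands in for something like HANA here, and the table and figures are invented:

```python
import sqlite3

# An entirely in-memory database: create, load, and report in one place,
# with no ETL into a dedicated warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage (customer TEXT, kwh REAL)")
conn.executemany(
    "INSERT INTO usage VALUES (?, ?)",
    [("a", 1.5), ("a", 2.5), ("b", 3.0)],  # made-up meter readings
)
# Ad-hoc aggregation straight off the operational data.
report = conn.execute(
    "SELECT customer, SUM(kwh) FROM usage GROUP BY customer ORDER BY customer"
).fetchall()
conn.close()
# report == [('a', 4.0), ('b', 3.0)]
```

The trade-off, of course, is that everything must fit in RAM, which is exactly what the multi-terabyte boxes discussed above are for.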