7450 can have 240 SSDs / 96TB on the data sheet (for a 4-node system)
Though I think the data sheet may be obsolete given they have 600GB SSDs now, which would make it 144TB raw.
HP has warned El Reg not to get its hopes up too high after the tech titan's CTO Martin Fink suggested StoreServ arrays could be packed with 100TB Memristor drives come 2018. In five years, according to Fink, DRAM and NAND scaling will hit a wall, limiting the maximum capacity of the technologies: process shrinks will come to …
"7450 can have 240 SSDs / 96TB on the data sheet"
You're right [PDF] so I've adjusted the article. Thanks - but please, next time, use email@example.com - we get those emails immediately to our desks, home machines and handhelds whereas there's no guarantee we can read every single comment :-)
"confidently of HP popping 100TB Memristor drives into StoreServ arrays in five years"
Talk is cheap, and confidently predicting the outcome of five more years of bleeding-edge technological development seems fanciful.
Good luck to them - I hope it works out. At least they're not a startup trying to con money out of investors with their confident predictions.
predicting the outcome of 5 more years of bleeding edge technological development seems fanciful.
Ever heard of Moore's law? Not quite up there with the Higgs boson, but still one of the most startlingly correct predictions of future progress.
Once CMOS and VLSI fabrication were working, Moore's law became inevitable. It was based purely on the physical scaling laws. Prior to CMOS, engineers talked about the "smoking hairy golfball" problem: a CPU with the sort of performance we take for granted today would have to be the shape of a golf ball because of the speed of light, hairy with wires connecting to it, and smoking because there would be no way to get many kilowatts of heat out of it.
But CMOS scales at constant power per area of chip, so Moore's law worked until the minimum size of a channel became limited by the discrete size of atoms. We're pretty much at that limit now, with flash and Intel's FinFETs.
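As a rough sanity check of how far that scaling has taken us, assume the Intel 4004's roughly 2,300 transistors in 1971 and a two-year doubling period (the endpoint year here is just an illustration, not a claim about any particular chip):

```python
# Back-of-envelope Moore's-law check: transistor count doubling every ~2 years,
# starting from the Intel 4004 (~2,300 transistors, 1971).
start_year, start_count = 1971, 2300
target_year = 2013
doublings = (target_year - start_year) // 2       # 21 doublings
count = start_count * 2 ** doublings
print(f"~{count / 1e9:.1f} billion transistors")  # ~4.8 billion
```

which lands in the right ballpark for the largest chips of the day.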
Going back even further, you might want to search out a lecture given by Richard Feynman, "There's Plenty of Room at the Bottom".
Some of us have long memories. HP's predictions have no credibility for us. Memristors will completely supersede DRAM and NAND flash? The Itanium was touted as the technology that would completely supersede the x86. Itanium was pushed so hard that it seriously distorted the entire industry from about 1997 until about 2002. This time I'll wait until I can buy one, thanks.
Do some reading - it's a fundamental idea that has been around for years. It's a building block just like the transistor: a component that has long been dreamed about, and the fact that it's now possible means a big rethink at many technology companies.
The logical conclusion of the claimed performance and characteristics of memristor is startling. The only storage anywhere in a computer will be directly attached to its CPUs, just like RAM is nowadays. No SATA, SAS, PCIe SSDs, Fibre Channel, nothing. Just DDRx (whatever the 'x' has become by then).
That memristor storage would likely be partitioned into an area taking on the role of long-term storage, and another being the equivalent of DDR RAM. Except that the DDR-RAM part won't forget what it stores when power is removed.
And we'll have to get used to the fact that switching off your PC won't necessarily mean that its memory goes blank. Everything in its terabyte-sized memory will be retained between power cycles. Sounds like a security nightmare... To be safe, the shutdown process will have to consist primarily of securely deleting the short-term storage, so that decryption keys, important data, etc. are wiped out. A power cut could be a real security nightmare for some people. And losing a laptop - eeek!
If you think about how software these days deals with important things like crypto keys, so much of it assumes that memory is volatile and will be forgotten when power is lost. With memristor-based RAM, things like BitLocker will be extremely vulnerable to power cuts.
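To make that concrete, here's a minimal sketch (mine, not anything BitLocker actually does) of the discipline software would need once "RAM" stops being volatile: keys have to live in wipeable buffers and be explicitly zeroed, because power-off no longer destroys them for you:

```python
# Sketch: with non-volatile RAM, software can no longer rely on power-off
# to destroy secrets; key material must be wiped explicitly.
# (Illustrative only -- real implementations also need locked,
# non-swappable memory and constant-time primitives.)
import secrets

def make_key(nbytes: int = 32) -> bytearray:
    """Generate a key in a mutable buffer so it can be zeroed later."""
    return bytearray(secrets.token_bytes(nbytes))

def wipe(buf: bytearray) -> None:
    """Overwrite the key material in place before releasing it."""
    for i in range(len(buf)):
        buf[i] = 0

key = make_key()
# ... use key ...
wipe(key)   # must happen on shutdown/suspend, not just "at power off"
assert all(b == 0 for b in key)
```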
> The logical conclusion of the claimed performance and characteristics of memristor is startling. The only storage anywhere in a computer will be directly attached to its CPUs, just like RAM is nowadays. No SATA, SAS, PCIe SSDs, Fibre Channel, nothing. Just DDRx (whatever the 'x' has become by then).
One problem with this sort of flat design is that the more devices you try to connect to the CPU, the longer the access latencies will be. We've seen this already in designs with memory controllers built into the CPU (and previous generations): where memory expanders need to be deployed to increase the DIMM socket count, memory access times degrade significantly.
The part of the memory performing the role of longer-term storage is still likely to need some form of longer-range interconnect. You'll still need a second copy of all your valuable data 100+ km away from the primary copy to cope with major disasters.
I suspect it's possible to store enough power in a supercapacitor to allow for the wiping of (say) the 16GB of MRAM that's playing the role of DRAM and wants to be wiped on power failure for security reasons. Certainly enough to secure-wipe a much smaller area dedicated to storing crypto keys. OTOH I also wonder whether such a wipe can ever be proof against theft of the hardware and application of serious data-recovery hardware. (Really secure sites never let used hard disks off site. They smash them to small pieces, then dissolve the pieces in ferric chloride!)
Incidentally, if you worry about someone with physical access to the hardware: it's possible to recover a surprisingly large amount of information from DRAM minutes after power is removed. A data thief can usefully remove DIMMs from a system and plug them into a data copier (and then test a mere few billion possible bit-strings as crypto keys for stuff on the hard drive, with a significant percentage chance of success).
An entertaining alternative to dissolving things in Ferric Chloride, is to employ an angry person and supply them with a lump hammer and gas torch.
That's stage 1: break into small pieces.
Domestic and probably commercial data security is quite adequately served by just drilling a couple of holes in the top of the HDA and pouring a cola drink through one hole until it comes out of the other. (Just think what it does to your teeth!)
But a really secure site worries that even a small piece of a disk platter might be placed under an electron microscope and decoded bit by bit. Therefore, do not let a disk off site until its very atoms have been dissociated by dissolution in a vat of ferric chloride.
"Except that the DDR-RAM part won't forget what it stores when power is removed."
RAM doesn't forget when the power is removed for quite a while. The contents are recoverable for minutes afterwards using equipment that is cheaply available. This almost certainly extends to hours with specialized equipment.
There is an easy technical fix here. The CPU can generate a random encryption key and use it for the "volatile" portion of the storage. The CPU will need some serious hardware-assisted encryption/decryption to avoid a performance penalty. The randomly-generated key will reside only in a register inside the CPU. When power is cut, the register loses the value.
The same hardware can also use other keys for other memory ranges to support memory-speed access to encrypted non-volatile data. As with today's encrypted disk, you obviously cannot store these keys on the non-volatile store.
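A toy sketch of that scheme, with an XOR keystream standing in for the hardware AES engine (an illustration, not real cryptography): the key lives only in a "register" variable, so once it's lost the non-volatile contents are unrecoverable noise:

```python
# Sketch: encrypt the "volatile" region of non-volatile memory with a key
# that exists only in a CPU register. XOR-with-SHA256-keystream is a
# stand-in here for hardware-assisted AES.
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream of the requested length from key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xcrypt(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

cpu_register_key = secrets.token_bytes(32)    # never leaves the "CPU"
plaintext = b"session secrets, page tables, etc."
stored = xcrypt(cpu_register_key, plaintext)  # what actually persists in MRAM
# After power loss the register key is gone; the MRAM contents are noise.
assert xcrypt(cpu_register_key, stored) == plaintext
```

As with encrypted disks today, the per-boot key is random, so contents from a previous power cycle are meaningless to the next one.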
> that's 24,000TB
And how, exactly will you do an off-site backup of that?
Even if you just do a device-to-device copy and you have a 100Gbit/s link to your beta site, that's 24 million GBytes - and a frightening number of I/O channels needed to keep the connection running at 100%. So, running at 10 GByte/sec, that backup will take nearly a month.
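The arithmetic checks out (a 100Gbit/s link tops out at 12.5 GByte/s, so 10 GByte/s sustained is generous):

```python
# Transfer time for a full device-to-device copy of 24,000TB.
total_gbytes = 24_000 * 1_000       # 24,000 TB in GBytes
rate = 10                           # GByte/s sustained
days = total_gbytes / rate / 86_400
print(f"{days:.1f} days")           # 27.8 days -- "nearly a month"
```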
Looks like the world will have to re-embrace a 1980's adage: never under-estimate the bandwidth of a van filled with CDs.
"Real time replication so 24,000TB off-site grows at the same rate as the on-site, together with snapshots on the off-site to allow some form of point in time backup."
I'm glad you said snapshots, or the whole damn thing goes "phut" whenever anyone does "rm -rf /"
In the decade+ that I've been running multi-terabyte-scale backup systems, the number of "disaster recoveries" remains "1" (and that was because someone formatted the wrong FS).
OTOH the number of "I need to restore XYZ file deleted 1 (minute/hour/day/week/month) ago" runs to several thousand.
Firstly, I misread the article, reading 100TB as 100GB (it's early), which for a new technology would have been a start but not awe-inspiring.
While 100TB is a mean amount of storage in one array, what's the individual unit capacity going to come out as for more normal storage "devices"? It's those that I'm more likely to see personally; even moderately lunatic amounts of high-speed storage will make a big difference in mobile (hand-held) devices and laptops/desktops. Power use will be important as well - does anybody know what the power consumption is shaping up to be for these kinds of storage systems?
I confidently predict that applications will become even less efficient to make up for the increase in speed. :)
So, once again - we are reassured that our flexible (roll-up; not curved...) mobile displays and OLED TV screens (along with personal jetpacks, flying cars, cheap fusion energy, and the Year Of The Linux Desktop) are right around the corner, only a few years away, as they have been last year, and the one before that, and the one before that, and the one...
The difference is that this one is working in the labs and there aren't any obvious reasons why it can't go on to development and production. If we didn't already have flash and DRAM it would already be on the market. Because it's got to compete with established technologies, it'll have to stay in the labs until it's a demonstrably BETTER product for at least one value of better. During which time we'll all gain confidence that it doesn't suffer from any unexpected premature ageing processes.
Things could still get delayed. It's even possible that there's an iceberg out there. But my expectation is that this will be the biggest breakthrough since CMOS.
You mean other than 3TB USB3 hard disks?
I doubt they'll be devices in the first instance. I think with Memristor tech, the primary MRAM will be tightly integrated with the CPU and NIC, built onto the motherboard if not into the CPU assembly for bandwidth reasons.
(And we're going to need better than gigabit networking to the desktop! )
We techies understand this better than most. If all goes according to plan, HP is a must-own investment.
Except, Memristor tech is about the only bright spot for what's otherwise an uninspiring behemoth of a company. What are the chances of HP coming to grief before it can capitalize on its Memristor know-how, or some eejit in HP management selling the immature technology for about a thousandth of what it will be worth long-term? Six years is a long time on the stock market.
(In my dreams, HP never bought Compaq, never divested its instruments division, maybe did divest its printers division, and it's the old HP now developing Memristor tech).
" What are the chances of HP coming to grief before it can capitalize on its Memristor know-how, or some eejit in HP management selling the immature technology for about a thousandth of what it will be worth long-term?"
The best shareholders can hope for is a demerger into two companies, so that they can choose what to do with the EDS disaster bit (including PC making) versus the genuinely techy bits with potential and high risk. In fact, that's part of HP's problem: they claim to be a tech company when in fact they're a conglomerate, doing all manner of non-complementary activities and hoping all the while that some new acquisition will change their misfortunes.
But worth considering that for all its potential, memristors could turn out to be a billion dollar blind alley, in which case EDS could seem like a good business.
Which is best:
- One 1000TB mram, one 1000TB hot mirror mram, one storage controller appliance.
- One thousand 1TB 2.5" drives, another hundred in hot spares and redundant devices, a much more sophisticated storage controller appliance with half a terabyte of DDR or flash caching?
If HP get the tech working, they'd have to either price it insanely high or limit device capacities to avoid outcompeting themselves. The only good point for them is that HP don't manufacture the physical drives themselves, just resell them.
"If HP get the tech working, they'd have to either price it insanely high or limit device capacities to avoid outcompeting themselves. "
If HP get the tech working, they can dump the "stuff it competes with" and have the world plus dogs beating a path to their door for several years. They can even have profit margins to dream of - as long as it's cheap enough to put everyone else out of business the world is their oyster.
And all the DSL speeds will be turned down to dial-up modem speeds, just so that extremely deep and ultra-deep packet inspection can be done on every bit of information both sent and received.
Anything not approved by our News Corp/NSA Overlords will be banned.
Oh and a Mr R Murdoch will be President.
IMHO, the chances are that a hybrid DRAM+PCM will be the dominant memory architecture in that timeframe. PCM is already being used in mobile devices, and the economies of scale from that alone will make it almost impossible to compete with from a cost point of view; and with Intel buying Numonyx a year or two back, it looks like that's the tech they'll be betting on too...
Biting the hand that feeds IT © 1998–2019