Whoa IDC needs a math checker!
If you look at the total shipments in the first table, a drop from 79 million units to 68.9 million is called a "5.7% reduction".
Someone needs basic math lessons. That should read 13%.
13% is a whole lot worse reduction folks!
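The arithmetic, for anyone who wants to check it, using the table's own figures:

```python
# Sanity-check IDC's shipment numbers (figures from the article's first table).
old_units = 79.0    # million units
new_units = 68.9    # million units

drop_pct = (old_units - new_units) / old_units * 100
print(f"{drop_pct:.1f}% reduction")   # 12.8% reduction -- about 13%, not 5.7%
```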
The ODMs only had 9% market share, but this translates into perhaps 20 to 25% of units shipped, since ODM boxes are really inexpensive compared with EMC and HPE. Another factor here is that the customers of the ODMs typically buy drives direct from their makers, while the others include expensive drives in their revenue numbers.
In other words, ODMs are eating up the market faster than we think. IDC needs to get up to date on how it reports all of this.
Saving the cost of several power stations is very laudable, but this may be a case of uninformed bureaucracy at its best. Let's face it: the IT industry is changing fast. We are just getting announcements of terabyte SSDs with 4 W operating power and 0.4 W on standby. HMC brings 30 to 40 percent savings in CPU/memory power, and 3D XPoint will bring more. Hyper-converged systems mean no need for storage boxes.
More importantly, desktops and workstations are going away. They no longer fill the use cases when compared to tablets. Even CAD and advanced video editing have moved to the cloud, using tablets just to display results. There are power savings galore in the future of computing! We don't need an out-of-date and out-of-touch standard to get there. (I bet prices in California go up a lot!)
Anyway, the technical options are much better than the standard. We can get power supply chips that are better than 99 percent efficient...mandate those! We have storage compression that cuts capacity use by 80 percent. Mandate that!
Microsoft advertised free upgrades...no small print!
When I tried to upgrade a 7-year-old desktop, the online upgrader stuck at "99 percent complete" for 3 days. The clean upgrade options using USB and DVD both failed. These reported they couldn't see my new SSD. Likely all the failures are due to missing drivers or BIOS support.
Drivers for this hardware existed for Win 7 a year ago...I did a clean build from DVD and upgraded to the current level. We must therefore ask whether Microsoft messed up or deliberately removed old elements to force some obsolescence. Either way, they fail a truth-in-advertising test, and I suspect there are many other Windows 10 upgraders who've had similar problems. Perhaps Microsoft should extend the upgrade window (no pun!) and fix the problem.
I thought this useful right until I saw the Toshiba price per drive data in the first chart. Their SAS enterprise drive clocks in at around $30,000, based on $30/GB. That may be Toshiba's system price, but it beats even EMC's massive markups.
SATA SSDs good enough for enterprise work are around $300 (forget all the enterprise and near-line hoopla). That's Dell's price for such a drive! The differential sort of makes mincemeat of Toshiba's whole presentation.
Head in the sand!
Dell sells a 1TB SSD online for $300. Try getting an enterprise hard drive for that price (Hint $500 for 500GB)
Large-size drives won't work. Tolerances won't allow the bit densities we get with 3.5in or 2.5in disks. Even if we had them, there are no servers that can accommodate them. Server density would be half what it is today, too, while drive power would be more like 30 to 40 watts.
Tall stacks are more feasible, but tolerances are again an issue. Those actuator head arms can't get much thinner, and that's a real limitation on how many platters are possible. We might see one more platter in a standard full-height drive. Bit densities are at their limit, while going to HAMR will probably mean one fewer platter because the head structure is bigger.
With 16TB SSDs already announced, with SSDs cheaper than enterprise hard drives (at least in distribution and from Dell!) and with Google stating that MLC drives wear as well as SLC drives in real life, spinning rust is in a battle for relevance. Prices are dropping fast and even bulk storage will be challenged next year by 4-bit-per-cell technology and 3D NAND. Oh, and I forgot to mention that SSDs are 100x faster!
Currently, SSDs are cheaper than the enterprise hard drives they would replace...just don't buy from EMC...go to distribution or the Internet!
The reason we went to 3.5in drives was that tolerances are far tougher to achieve with bigger disks. Also, spin power goes up roughly as the square of the diameter, there are no systems with space for these drive sizes, and the cost will be a whopper.
Samsung just announced a 16TB SSD. I bet we'll reach 30 TB in 2018....rust is already behind the curve
How can you spend $1.2 billion on a "Web portal"?
It probably uses Z-series to talk to the internet and other money-wasting solutions.
Sounds like a visit from the state auditor is needed!
PS Ask any commercial firm how much they spent for their whole website....I bet it was just a bit cheaper!
IBM has apparently changed their layoff compensation policy so that many employees get 1 month's pay rather than the 6 months average traditionally given. $70,000 sounds more like the 6-month number, so if anything IBM will be hitting as many as 84,000 staff, based on the article's premise of the total amount to be spent in layoff pay.
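The arithmetic behind that 84,000 figure, sketched out; the 14,000 baseline headcount is my illustrative assumption, derived from the article's premise of a fixed total layoff budget at the $70,000 six-month average:

```python
# Same total layoff budget, but 1 month's pay per head instead of 6 months.
six_month_payout = 70_000                  # $ per head, the article's figure
heads_at_6_months = 14_000                 # assumed headcount at the 6-month rate
budget = six_month_payout * heads_at_6_months    # ~$980M total

one_month_payout = six_month_payout / 6    # ~$11,667 per head
print(round(budget / one_month_payout))    # 84000 -- six times as many staff
```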
Whoever wrote this at Google seems a bit naive. Basic HDD technology has hit a wall. Getting more density on a platter will mean HAMR, which looks very hard to fabricate commercially. So what are the alternatives? More disks? That means helium-filled drives, but that's OK. It also means stabilizing the spindle runout, and we are already at the limit for that with current track densities.
HGST tried dual-actuator drives and dropped them. The actuators interact via vibration, and they also create turbulence that messes up smooth flying by the heads. Anyway, that only increases IOPS to around 200, which isn't in the same league as SSD.
Maybe we need to bring back the 5.25 in form factor. But wait! That has disk stability problems on the outer tracks, so it's a non-starter at current densities.
Any way you look at it, HDD speed and capacity growth are effectively at a standstill, which makes me wonder if Google's engineer knows what he's talking about. Perhaps the idea is to bluff competitors into staying with HDD!
Chris is pretty well on-point with his predictions about the cloud winning, but he missed a couple of other things. First, the Chinese ODMs are well positioned to attack the US incumbent traditional vendors...that's because they supply AWS etc with huge quantities of gear already and also because they make most of the gear for those US vendors! If they roll into the market with distribution and end-user sales, they'll undercut the US "vendors" by 60 percent or more on price...a classic "cut out the middleman" scenario.
Second, the poor state of WAN connections in the US and EU makes the hybrid cloud model awkward to implement. We need fiber connections, but it takes bullying by Google to get Verizon and ATT to budge.
If hybrid cloud isn't attractive and ODMs are cheap, the traditional vendors are going to hurt a lot.
I had just started my BA senior-year radio-astronomy project and remember walking in on an impromptu demonstration of the pulsar, with Jocelyn Bell and Tony Hewish playing a bleeping sound over a loudspeaker, interspersed with excited explanations....heady stuff!
I spent several months going back and forth to Lord's Bridge, gathering data using some dipoles spread over two metal frames. Two of us created a sort of equivalent of the railroad antennae in the field next to the One-Mile setup, carrying the frames back and forth manually and using a theodolite to position them.
Having learnt FORTRAN and written a huge program to process the results, we had a bit of spare time and sat down with Stephen Hawking to figure out all sorts of corrections, including Einstein's relativistic adjustments.
Looking back, it was one heck of a great time!
For $6.8 billion, the gov could have bought Symantec, gotten decent systems, provided the NSA with backdoors galore and still have change
It looks like we are still on course for price parity between SSD and HDD in 2017. At $300 list for the Transcend 1TB drive, will we see $200 by July as 3D NAND and TLC or even QLC hit the market? We'll certainly be there by the end of 2016, and 2017 promises to see big drops too.
Many states have ridiculous laws (paid for by the ISP lobbies?) to limit cities from offering infrastructure. All of these should be struck down as impinging on Federal rights to regulate communications. Then the cities could solve the fiber problem and offer a la carte services across the wire...now that's real competition!
The SSD market is, despite the best efforts of drive makers, NOT slavishly following the "enterprise" classification beloved of HDD makers. Enterprises are moving to appliance-level redundancy, and that implies cheaper SSDs are good enough. The figures from Gartner don't include this large SSD segment and so understate the impact of SSD on "enterprise" HDD sales by a considerable factor.
What is surprising is that there are still good sales levels for those very expensive "enterprise" drives. Prices are 2 to 3x those of comparably sized mid-range SSD, which makes any rationale for staying with these HDD unfathomable!
Can't afford a backup system?
Go to the cloud!
Amazon would have saved them, if G-Cloud couldn't!
With the now-apparent problems of moving data between private and public clouds in the hybrid cloud model, is this the future of the private cloud...rented long-term gear in AWS hosting space?
This likely puts the remaining hosting companies into a death spiral, but smart CIOs will see that this assured renting is no different from using hosted gear to make a private cloud.
Will storage follow down this path too....I'd bet on it. We do live in interesting times.
HP's Ryan hasn't quite got it. The whiteboxes account for 7 percent unit share because they are so cheap. If they are only 2 percent of revenue, that means they are roughly 1/3rd of the price.
If sales continue to grow at the current pace, or even turn up, whiteboxes will break the back of HP's pricing scheme when they reach roughly 40 percent share. At that point, the price per unit will drop drastically for all vendors and whitebox revenue share will move up.
What Ryan is dissing is in fact the ultimate demise of HP's server business!
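The share numbers above pin down the relative pricing directly; a quick sketch:

```python
# Whitebox average selling price implied by the unit vs revenue shares.
unit_share = 0.07      # whiteboxes: 7% of units shipped
revenue_share = 0.02   # but only 2% of revenue

whitebox_asp = revenue_share / unit_share          # relative revenue per unit
branded_asp = (1 - revenue_share) / (1 - unit_share)
ratio = whitebox_asp / branded_asp
print(f"whitebox ASP is {ratio:.0%} of the branded ASP")   # ~27%, roughly 1/3
```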
So everyone will rush out to get a laptop with 3D so that it will recognize them instead of typing a password??? No way! It's about as likely as us all embracing Windows 10.
As for 2-in-1s, it's far cheaper to get a $150 tablet, add a $30 keyboard and enjoy Android! Google Docs is as good as Office, and collaboration is much easier, which is increasingly important.
Tablets are displays...they don't really store programs or data much.
Because of that, a tablet lasts until the display breaks, which is turning out to be a long time!
Desktops and notebooks needed to be upgraded to keep up with Windows and other applications...every 2 to 3 years
Apple isn't immune! As a model of what is happening, the hype-driven upgrade cycle drove PCs for a while, then crashed as the models tended to blur into each other and no longer offered incremental value.
Tablet sales decline follows the same path, with the added pain of tablets being a much more reliable product and being somewhat ousted by smartphones.
Pressure on the phone space is coming from cheaper phones with identical features, while iPhone innovation has essentially stalled (gold colored cases don't count!). Prediction is that Apple will struggle a bit next year.
Try this new Nescafe Pure - Everything is removed except the caffeine!
With today's servers there's no reason NOT to raise the maximum inlet temperature to 40C (104F). Airflow is typically well-designed, so this means that for most of the year, most places DON'T need chillers!
Disk drives used to restrict temperatures a bit, but recently drives running up to 65C have been the mainstream, so they are able to handle the inevitable temperature increases inside a server or storage box OK.
I've delivered COTS servers with specs up to 50C inlet air, without excessive cooling support, so 40C is safe in the general commercial space. The lack of chilling is a huge saving in power costs. The only issue is filtering ambient air to keep out the dust of the prairies!
I complained to Oracle's ethics exec about the ASK toolbar that Java updating added automatically...it took me two hours to purge its pieces from my system. Oracle cleaned that one up quickly.
This Microsoft thing is far more egregious. It's probably a bit like that Volkswagen fiasco...instituted by low-level guys to get ahead. Will complaining to Satya Nadella get this monster cleaned up before we all are driven insane? After all, he said, "We set high ethical standards at Microsoft and we expect every employee to live up to those standards."
Can Microsoft do as well as Oracle?
Except regular hard drives do seem to show wear-out. After 4 years or so, failure rates tend to climb, sometimes steeply. So much for HDDs wearing better than flash!
Flash isn't more expensive than 15K hard drives. It's most definitely the other way round. A 1TB flash drive is around $360 today. Sure, it isn't the fastest "enterprise" flash drive, but it's still 1000x faster on random IOPS and 5x on sequential.
A 500GB 15K HDD is around $650.
Puts things in a different perspective!
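Putting the comment's two price points on a per-GB basis:

```python
# Cost per gigabyte: 1TB flash drive vs 500GB 15K enterprise HDD.
ssd_price, ssd_gb = 360, 1000
hdd_price, hdd_gb = 650, 500

ssd_per_gb = ssd_price / ssd_gb    # $0.36/GB
hdd_per_gb = hdd_price / hdd_gb    # $1.30/GB
print(f"15K HDD costs {hdd_per_gb / ssd_per_gb:.1f}x more per GB")   # 3.6x
```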
Chris must be feeling a need to defend the drive vendors after the huge drop in enterprise disk demand this last quarter. 15K RPM drives are dead already and 10K seems destined for the same fate perhaps by the end of this year, though we'll always find people with nostalgic desires to use spinning rust. Violin is essentially right for primary tier storage!
That leaves the thorny question of what happens to the bulk storage tier. The "media evolution" comment above is close to the mark...economics will decide, and if there is a sure profit, the market will fund the foundries. With prices closing fast, the economics start to become compelling even before price parity is reached.
But will we need Chris' trillions? Data compression and deduplication will reduce space demand by as much as 5x. And once 3D NAND gets out of the lab properly, the incremental cost to add 2x or 4x the layer count will be small. New error correction will move the sweet spot to TLC.
$1 trillion now looks like $40 billion and possibly less! That's not much given market size.
4x the sequential performance isn't much?
Micron's new crosspoint memory may have a place in the sun here, at 1000x faster than flash. Package it in Hybrid Memory Cube modules close to the DRAM/CPU/GPU complex and that is a screamingly fast solution - the timeframe to be able to do that is about 3 years, which is when real money starts getting spent on the new super.
Note that HMC could be touching 1 TB/s bandwidth for DRAM in that timeframe, and that's per CPU/GPU module.
NVMe may form the second tier memory, but it really isn't enough on its own. The data has to go to networked storage, so compression accelerators are needed to get the data rate down, assuming there is reasonable compressibility in the data. Then we'll need some really fast networks to move data out.
Let's get real! The traditional large storage vendors get their gear built by ODMs in China. There is NO secret sauce in the hardware, though quality varies depending on supplier. The same is true of drives. The large vendors do quality control (which costs maybe 1%) and then mark up the drive price 10X.
Appliance based redundancy has made much of the storage mantra redundant. We don't need enterprise drives with 2 interfaces. We don't need RAID.
The mega CSPs figured this out 5 years ago and they buy storage platforms direct from China. They roll their own code, but that's beyond most companies.
Traditional vendors have a lot of code, but it was designed in a different era with proprietary architectures and RAID as the focus. By starting from a blank sheet, Nexenta and others are offering a modern code base with today's design focus, and ultimately that should be better than the traditional code sets.
If you look at the numbers, you'd question that reliability statement. The facts support SSD as being reliable, while HDD show high early life failures and also batch-related or model-related failure.
Now SSD do wear out, but a bit of care in getting the right class SSD to match your write rates will give 8 years useful life, if you need that much.
Any hard drive over 8 years old is wearing out fast, too. In fact, failure rates seem to increase after 4 to 6 years of operation and then rise rapidly after a couple more years. It's a wash on wearout, and late this year we'll see improvements in error correction in SSD that will increase wear life by as much as 100x. That will put SSD well in front of HDD on reliability!
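Matching SSD class to write rate is simple endurance arithmetic. A sketch with illustrative numbers (the DWPD rating, warranty period and workload below are my assumptions, not any specific model's spec):

```python
# Rough SSD wear-life estimate from a drive's endurance rating.
drive_tb = 1.0
dwpd_rating = 1.0       # drive-writes-per-day the class is rated for
rated_years = 5         # warranty period the DWPD rating assumes
endurance_tb = drive_tb * dwpd_rating * 365 * rated_years   # ~1,825 TB written

actual_writes_tb_per_day = 0.6      # what the workload really writes
life_years = endurance_tb / (actual_writes_tb_per_day * 365)
print(f"{life_years:.1f} years")    # 8.3 years at this write rate
```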
I think the workstations argument is wrong. Even with the pitifully slow Internet that US telcos provide, it's a wash to use a cloud-based compute cluster for a job and just have a display on a tablet...there are many articles on that issue - just google Adobe!
With faster Internet, the balance swings in favor of using cloud clusters instead of workstations, especially as that solves collaboration and parallel working on a job.
Even the gamers are going to the cloud for that reason!
There's no room to remove cost from a drive. We are not going to see any major component disappear from the parts list. The only play for the drive makers is density, but here they are slowing down as the technical barriers for each step get greater.
Hard drives can't get faster, and they are already way slow compared with SSD. Add to that, SSD are lower power, silent and robust.
The demise of the hard drive is inevitable, just as PCs will be replaced by mobiles and tablets. It's happening faster than WD or Seagate would like.
There must be some real dinosaurs out there. Who in their right mind would buy 15K drives? They are 2x the price of top-end SSD and 1000x slower in random IO, which is their only reason for existence.
There are still people who would opt for a KIA even if a Ferrari were the same price!
Barnum was right!
Hillary may have decided to keep her email on her own server so she could "edit" her legacy, or perhaps it was just laziness or typical arrogance, but the real question is whether it was secure.
If she applied all the safeguards big government secure operations use it was probably safe, but if, as is more likely, firstname.lastname@example.org had a password like "Chelsea" or "I8Monica" that never got changed, the Chinese probably read her email before she did.
That's the real extent of what she did - not just being a bit naughty, but exposing America's privates to the world!
I've worked extensively with NVRAM architectures. Any of the byte-addressable types open Pandora's box on software changes (all for the better, I might add).
Just using it as RAMDisk doesn't cut it. The OS virtual file system and SCSI stack are too clunky to even try, and they don't provide granularity or atomicity. Moreover, apps need to understand the NV nature of NVRAM memory to take advantage of it. That means compiler and link editor changes galore.
Then there's the issue of data integrity and a minimum of RAID.
There's a lot more, but you get the point!
This is my hypothesis.
Google cycles hardware at a much faster rate than most corporations. Hardware has a typical 4-year life. What do you do with the old stuff...it's starting to increase failure rates, right? Meanwhile the market for old computers is slow, so selling them isn't too good an idea.
The answer is to use older gear for bulk cold storage where it is rarely powered on. This extends drive life and reduces power drastically. And the nice thing is there are no installation costs!
The economics are compelling. There is no acquisition cost since the gear is depreciated.
The big question is what happens when they run out of old gear!
Is it true the Portuguese have a low-methane donkey now, and have stopped using cars?
"A few times" - that sums up the renewables problem. They are unreliable and that means traditional generation is always running at idle to save the day(light). This is why Europe is paying as much as 5x the price for electricity.
The concept of wind/solar renewables is stupid. The technology just isn't ready for mainstream.
If we want to get CO2 out of the air, we need to accept clean nuclear as the alternative
Violin is calling the changes in the market correctly...and they are pretty unique in doing it. Disk is dying...two or so more years and it's over.
They can deliver what they boast about, too, so as a way of getting the story out this campaign is fine...and a bit of good fun, too.
Why there are so many admins who deny the benefits of flash eludes me. I suppose they are the people who would still buy a KIA if it were the same price as a Ferrari!
With 3D NAND hitting the market and bringing flash and SSD pricing down fast, we can expect capacity parity early next year and price parity by the end of 2016...and that's parity with bulk SSD. In fact, SanDisk projects having 16 TB SSD next year
There are already SSD cheaper than "enterprise" HDD at perhaps 60 percent of the price!
As the spinning drive makers say: "Winter is coming!"
I'm pretty sure it was motion sickness. I felt disoriented on the rollover (which was the standard game entry video). The image quality emphasized the motion, and the headset reacted to head tilt etc. Typically, viewers looked sideways to watch the tunnel walls, which definitely contributed to the problem, since they were caught by surprise at the rollover.
As a bit of relief from all the porn experts:
I led a team developing a high-performance headset like Oculus some years ago. Our test bed was a copy of Descent, with very high-res graphics and really fast GPU processing. We found that 1 user out of 3 couldn't take the "roll-over" at the start of Descent, where the user dives down a mine shaft.
We kept a bucket by the test setup!
When you buy a new car, you balance your budget against features and performance, but if I offered you a BMW for the price of a budget family car, which would you buy? There wouldn't be many people saying it's too fast!
That's where we are going with SSD. Next year, at some point, we'll see SSD get as cheap as the cheapest hard drive. With lifetimes well beyond any realistic wear-out, and better reliability, would you buy another hard drive in 2017? You'd be nuts...or you've swallowed all that FUD hook, line and sinker.
Reality is that we are already making a dumb comparison. SSD are cheaper today than "enterprise" hard drives, and come in much larger capacities. The enterprise HDD is already dead!
I think the math was a bit off.
There are two sorts of problem with loss of data. One is random corruption, and there the UBE rate is near irrelevant, since losing data requires the SAME block to be corrupted on two drives. A 1TB drive has roughly 2.5E+08 4KB blocks, so the chance that a second error lands on the very same block divides the already-tiny probability by another 2.5E+08, which means it's really unlikely this would happen.
The other problem is a drive failure. Now the question is whether a UBE will occur during the rebuild, when there is no parity left. 1E14 is an awfully large number of bits, and no consumer operation gets even close to reading that per day (1E9 is more likely!). (The probability of a killer UBE is actually lower than this on real-world consumer drives, since only a failure in used space should be counted.)
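The standard rebuild-risk arithmetic can be sketched as follows; the 1TB drive size is illustrative, and as noted, counting only used space would lower the result:

```python
# Chance of hitting an unrecoverable bit error (UBE) while reading a whole
# drive during a rebuild, with no parity left to cover it.
ube_rate = 1e-14          # consumer-drive spec: 1 error per 1E14 bits read
drive_bits = 1e12 * 8     # 1 TB drive = 8E12 bits to read

p_clean = (1 - ube_rate) ** drive_bits    # probability the whole read is clean
p_error = 1 - p_clean
print(f"{p_error:.1%} chance of a UBE during a full 1TB read")   # 7.7%
```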
The rules are different on servers