FCoE is the future. . .
And always will be. . .
In fairness, I think the transfer of info must have been rushed, without a lot of follow-up questions. I'll bet the 720-drive figure is accurate, so working back from there is how they got 20 drives per tray (which is incorrect). It appears obvious that there will be 360 disks out front and 360 disks out back in those 1U trays, each of which holds 12 disks. Regarding the 5 or 7 controller heads... with the drives taking up 30U, the UPSes taking 3U, the controllers taking 3U, and a 1U service drawer, they have 37U accounted for. The current box is 3U for UPS, 6U controllers, 32U disk, 1U service drawer: 42U in total. The new box appears to have 5U for additional controllers (and maybe other things). I'd imagine one of the goals is N+2 on controller heads. The POC is an even better story: "Go ahead and pull cables to two controller shelves." Customer: "We lost a controller." ... "Okay, we'll be out next month to fix it."
Those trays show 3 lights for each row. Each tray looks like it holds 12 disks, so where do you get 20? Are they doing front and back? Those trays would be fairly short. That would be 24 disks x 30U to reach 720, which makes more sense; the UPS, management and interface(?) nodes plus controllers would bring them to a standard rack height or less.
"re-architect with HPE?"
They state they are COTS. Do you understand what that allows? This, for example:
"New technologies, such as Intel® 3D XPoint™ and Supermicro NVMe* connectivity solutions, provide new advancements and opportunities. "
They are already kicking, or about to kick, Dell out the door. Wouldn't anyone else in the storage space with a lick of sense who embedded OEMed Dell kit be looking to Supermicro or another low-cost tier-1 whitebox vendor?
"but at what cost?"
Why not do some digging and find out? You can read it's about a dollar a GByte:
If their effective capacity goes from 2.8 to 5 PB, that comes close to halving that $1/Gbyte.
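A quick back-of-envelope (my numbers, assuming the system price holds roughly constant):

```python
# If the system price holds while effective capacity grows from
# 2.8 PB to 5 PB, cost per GB falls proportionally.
old_pb, new_pb = 2.8, 5.0
old_cost_per_gb = 1.00                       # the ~$1/GB figure above
new_cost_per_gb = old_cost_per_gb * old_pb / new_pb
print(f"${new_cost_per_gb:.2f}/GB")          # ~$0.56 -- close to halving
```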
"Don't doubt that for a minute for many workloads especially ones that fit in the cache."
They have DRAM for the hot data and up to 200 TB for warm data. Listen here if interested, it's mentioned: https://www.youtube.com/watch?v=nTYFF56qdvA
How they promote and demote that is pretty involved and patents all around it. The 200 TB is just a copy of what lives on disk. And you can bet it isn't any coincidence that random talk about XPoint keeps showing up. It would be pretty slick to speed those warm cache hits up to much faster than 180 microseconds B^)
Chris - I can see you either listened to the youtube or paid attention. There's another article out there today that isn't even close on some of the details.
Apologies to Meg Ryan
"Eventually we think these three will come out with revised systems that have Infinidat-class seven nines reliability and comparable pricing. Until then, Infinidat CEO Moshe Yanai's forward progress could continue for a good few quarters yet"
Microsoft can have a number of loss leaders, as can Google. There are several other examples about. I've discussed exactly what's written in the above section with several folks. It's not that they can't. The development effort is huge. Think about EMC/Dell re-writing Enginuity again to do what? Re-create the same patented methods Infinidat is using? Do N+2 on components? Maybe... okay. So they hit the 7 9s. What about the nasty cost curves Infinidat introduced? You've seen IDC and how VMAX is doing: not good. So how do they make it up by selling cheaper systems? I'm not getting THAT idea. The other big-iron players are in the same boat. These guys aren't Google/MS and will eventually whack or sell off severely under-performing divisions. It appears to me that Enterprise big iron is in a very nasty spot because of Infinidat. Maybe the hope is that, just as they hoped Pure would go away, Infinidat goes away and the others can return to the good old days of fatter margins. Maybe I'm missing the obvious.
"Every month that a new XtremIO upgrade isn’t announced will add more grist to the rumour mill – though EMC may announce a major upgrade in the next 30 days and we'll all have to eat some humble pie. Which way would you bet?"
It's mid-July; does anything get announced in August? I don't think so. The EMC/Dell merger is set to close in October. Do you or anyone see an EMC product announcement in September? I don't think so. As commenters on that "XtremIO heading for the bin?" piece point out, the silence is deafening. But okay... an XtremIO refresh announced in the next 30 days? Very curious timing, as all of Europe is on holiday, and most of the USA.
You can only control so much that goes on outside your fence. There's probably more than one case where DC access went titsup even with multiple telecom providers - which just so happened to be running fibre through the same trench - when the backhoe cut through. I'm thinking of Northwest Airlines a number of years ago as one example where just that happened. Google is your friend...
> Downvoted for your use of "leverage"
Whatever. Grow up and separate the wheat from the chaff. If I only kept or referred to sources that were spot-on in everything they say, I wouldn't be able to leverage them for much at all.
Not sure how far you've come in your reading. But to communicate differences, I've had to do an initial "sell" to help folks understand why they should be doing CM and what it gains them. There are many better explanations of "why" than I could pen (nor care to waste time penning). Here is what I leveraged - I'm including just a small portion, but you get the drift:
• Ease of Dependency Management (versioning)
• Standardized Organization (accepted at an industry level)
• Abstraction to separate server configuration tasks from system level details
• Ability to leverage community knowledge (that is guaranteed to embrace all the above principles)
More on Convergence and Idempotence:
Convergence and idempotence are not Chef-specific. They're generally attributed to configuration management theory, though they have uses in other fields, notably mathematics.
But tools like this have to exist if you are managing hundreds or thousands of machines/instances.
Scripts don't cut it (beyond a certain point, of course), and we see folks at that tipping point: scripts that are no longer sustainable because the scaling fell down.
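To make the convergence/idempotence idea concrete, here's a minimal sketch in Python - my own illustration of the test-then-repair pattern, not Chef's actual implementation. Run it once or a hundred times; the end state is the same, and no work is done once the system has converged:

```python
# Minimal sketch of an idempotent, convergent "resource": test the
# current state, act only on divergence. Repeated runs are harmless.
import os

def ensure_file(path: str, content: str) -> bool:
    """Converge a file to the desired content; return True if changed."""
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == content:
                return False          # already converged: do nothing
    with open(path, "w") as f:
        f.write(content)
    return True                       # diverged: repaired

if __name__ == "__main__":
    for run in range(3):
        changed = ensure_file("/tmp/motd", "managed by CM\n")
        print(f"run {run}: {'changed' if changed else 'converged'}")
```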
I'm missing something, or quite confused, or both. How or why doesn't this totally mess up XtremIO uptake, or where does XtremIO positioning end up? I see how it would be the product they need to compete against Pure and Nimble; this is good. Over-under on XtremIO run-rate pre- and post-Unity, any takers? Percent growth/shrink, thanks. But the EMC marketing for all these products that somewhat or mostly overlap must require holding more than a dozen contradictory and competing lines of thought at the same time. Very painful.
Managing RAID is a challenge for many. In fact, I saw an Enterprise storage vendor that sells a 2 or 3 day service to configure the RAID levels on that large whiz-bang frame you just bought. What are you suggesting? Instead of creating RAID at the frame, ZFS RAID? All sorts of options, including triple parity. But you are still "managing RAID." I'll do you one better. Now and going forward, it's no more managing of RAID or you have a handicapped storage offering. A number of solutions just work; you don't have to fiddle with RAID arrays. The aforementioned XtremIO and Pure; Infinidat comes to mind. Does that work for you? I guess a ZFS admin would be rather bummed, because they would want JBODs so they could do the RAIDing. There's no such thing as a JBOD with those 3. But back to your post-RAID point. Sure... two or three years from now RAID discussions will be fewer and fewer as most vendors move on from it, except for certain Enterprise storage vendors that face the daunting task of a re-write or an all-new code base to move in this same direction.
Maybe Nimble, who moved (or is moving) to triple parity? But several top AFAs have their own schemes; RAID5/6 aren't even part of the discussion. Pure has RAID-3D, "better than dual-parity," and XtremIO has XDP, a ppt you can google and toggle through 'cause it ain't simple to grasp, able to handle "up to 5 failed SSDs per brick." A much better direction on SSD rebuild might have been comparing and contrasting all the major AFAs and how they line up.
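For anyone fuzzy on why the plain parity schemes keep losing this argument: single (XOR) parity can reconstruct exactly one missing stripe member, which is why a second hit during a rebuild is fatal and why the vendors above all run something stronger. A toy sketch:

```python
# Toy RAID5-style illustration: XOR parity reconstructs exactly ONE
# missing member of a stripe; lose two members and the data is gone.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]        # three data members of a stripe
parity = xor_blocks(data)                 # the parity member

# The drive holding data[1] dies: rebuild from survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```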
"The reported latency of 7us is likely due to PCI overhead and the current controller and might be avoidable in DIMM form factor."
See the SFD9 Intel presentation referenced below. The XPoint media contributes about 1us; the controller and PCI contribute the rest of the 7 or 8.
"Re IOPS: note that the reported IOPS of 78,500 is for queue depth of 1 "
Hard to tell how that came about. But if you peruse the SFD9 presentation, you can see where the presenter shows 96,800 IO/sec at a queue depth of 1: https://vimeo.com/159589810
"It is plausible that XPoint has the most advantage over flash for low queue depth applications and in DIMM form factor, and that that advantage dimishes at high queue depth."
That's right. Elsewhere in that presentation he speaks to that: it doesn't pay to go beyond 8, or something like that. You can see in the demo a reference where they are doing nearly 160K 70/30 random read/write IOPS (IIRC). They must have cranked the queues all the way to 8... So I'm not sure what your point about diminishing advantage is. Are you envisioning architectural design issues?
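One handy back-of-envelope relation here (mine, not the presenter's): at queue depth 1 the device services one I/O at a time, so IOPS is roughly the reciprocal of average latency:

```python
# At queue depth 1, IOPS ~= 1 / average latency (one I/O in flight).
def qd1_latency_us(iops: float) -> float:
    return 1e6 / iops

for iops in (78_500, 96_800):
    print(f"{iops:,} IOPS at QD1 -> ~{qd1_latency_us(iops):.1f} us/IO")
# 78,500 -> ~12.7 us; 96,800 -> ~10.3 us
```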
I had a long-winded reply, but realized: why do all the digging. What Intel did was a bit disingenuous. If you go back to the link Chris mentions, they are comparing XPoint to flash - no doubt comparing XPoint DIMMs to "flash" SSDs. It should have been comparing 250 ns latency to 80 microsecond latency, which is a 300x or so speed-up. But if you look hither and yon, you see SSDs that deliver read streams at 250 microseconds; there is your 1000x speed-up. Marketing (in my opinion) should have spoken about a 300x speed-up and explicitly mentioned they are comparing XPoint DIMMs to SSD flash.
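The arithmetic, using the round numbers above:

```python
# Speed-up ratios from the round numbers in the post above.
xpoint_ns   = 250     # claimed XPoint DIMM read latency
fast_ssd_us = 80      # a fast NVMe flash SSD
slow_ssd_us = 250     # a slower flash SSD read stream

print(f"vs 80 us flash:  ~{fast_ssd_us * 1000 / xpoint_ns:.0f}x")  # ~320x
print(f"vs 250 us flash: ~{slow_ssd_us * 1000 / xpoint_ns:.0f}x")  # 1000x
```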
They have superior ingest rates. The problem (from my view) is disobedient upstream customers or TAs who think "that's your problem." You've got large backup servers and DB warehouse servers that can and will over-run ports, etc. I would *think* the only reason they want to intro QoS is for very large customers with disobedient internal clients, and very large customers asking for this feature. Most folks will look at the QoS radial and be like: "okay, whatever... don't need this."
We know about these things. Let's define the delays at each point and how much each contributes. Want to play along with real numbers, or just do hand-waves like we read in this article? My point is that "FC delays" is a canard and no one puts actual numbers to it. It's rather silly... really. FC-NVMe has been demoed, is coming, and apparently brings a 30-40% reduction in transport delays: http://www.theregister.co.uk/2015/12/08/old_school_fibre_channel_gets_new_school_nvme_treatment/. It's good old FC that the "FC delay" chatter targets - but it isn't as if the FC folks are ignoring NVMe.
I get a giggle when I see this, quite often lately:
"Fibre Channel/iSCSI-type network transit delays and provide very much faster access to data by servers."
Riddle me this, Batman: what is the delay in 16 gbit FC (more common now and going forward), and what will the delay in 32 gbit FC be? Google is your friend. The point here is that somebody is passing around a really bad batch of koolaid, in that folks are so concerned with the big bad FC delays. Yes... you can cheat for this quiz and look at each other's answers; open book, zero-point quiz.
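For those who want to cheat, here's my crib sheet, using the nominal effective data rates (16GFC moves roughly 14 Gbit/s of payload after line encoding, 32GFC about double) and a full-size 2,112-byte frame; switch hops add a microsecond or two on top:

```python
# Quiz crib sheet: serialization delay for one full-size FC frame.
# Assumed effective data rates: ~14.025 Gbit/s for 16GFC, ~28.05 for
# 32GFC (after line encoding). Switch hops add ~1-2 us each on top.
FRAME_BYTES = 2112

for name, gbit_s in (("16GFC", 14.025), ("32GFC", 28.05)):
    us = FRAME_BYTES * 8 / (gbit_s * 1e9) * 1e6
    print(f"{name}: ~{us:.2f} us per frame")
# ~1.2 us at 16GFC, ~0.6 us at 32GFC -- hardly "big bad delays"
```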
"There are days in which I think the next 20 years will see a massive consolidation of power in a very few corporations (the afore-mentioned four being right up there), while everyone else becomes a small supplier, trying to win favor from the goliaths."
Yep... we'll get to the Standard Oil / AT&T stage. Every now and then the gov stepped in and busted things up. With corporate lobbying, not so much. But all is not lost: every now and then a Google comes along to change the landscape. Not everything has been invented; VC gamblers lay down their bets, some are winners, and boy, when they hit, do they hit big.
And collisions are a non-starter with SHA-256/512 - and aren't most newer implementations using something other than SHA-1 anyway? The article you link to is a circa-2007 SHA-1 walk-through.
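For context, a sketch (mine, not any vendor's code) of why collisions are a non-starter: dedup engines key blocks by digest, and with SHA-256 the odds of two different blocks sharing a digest are astronomically below the drive's own error rates:

```python
# Sketch of content-addressed dedup keyed on SHA-256: identical
# blocks collapse to one stored copy under the same digest.
import hashlib

store = {}                                  # digest -> block

def write_block(block: bytes) -> str:
    digest = hashlib.sha256(block).hexdigest()
    store.setdefault(digest, block)         # store once per unique content
    return digest

a = write_block(b"x" * 4096)
b = write_block(b"x" * 4096)                # duplicate: same digest
assert a == b and len(store) == 1
```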
I've been having internal discussions similar to what is in this post... prior to the big news.
Storage Ed in this other thread says this:
"You dismiss EMC dropping their pants. Their long datacenter dingdong will steamroll Pure and all these Flash startups. They can drop their trousers all day cus Xtremio like Pure is cheap commoditty hardware. The materials cost for a 50TB atray from either vendor is around $80k.. Tops. And EMC can operate with a dingleberry of margin while Pure starves."
What I mention internally is that the same hammer is aimed at EMC. I'm sure EMC just loves the fact that XtremIO wins the VMAX takeout rather than "the dreaded competition." A pyrrhic victory. Not sure what it means, other than that being dependent on hardware margins (and the software and maintenance thereof) is not a good place to be, now and going forward.
Read the article and elsewhere: Infinidat is doing log-structured writes also.
And, as they describe it, virtual RAID groups, with each disk participating in numerous RAID groups. More here:
And no, I wasn't implying that mainframe support is rocket science. It's just that the two have different target audiences. Without mainframe support, is it Enterprise? (Yes, Enterprises run a whole bunch of kit; maybe a fairer description would be that it isn't targeted at the high-end Enterprise.) But yeah, Nimble is interesting. Do you think they'll ever turn that standby controller into an active controller so it too can participate in serving IO?
Nimble went with a traditional RAID underpinning, and even went to triple parity in 2.1. The RAID arrays in Infinibox are 14+2, with 64K chunks dispersed as described. Caching appears similar, but there are a number of similar SSD caching schemes at this point. Nimble has standby controllers. Seriously? That's kind of lame. Infinibox is geared towards the Enterprise, with mainframe support coming soon. Yeah, many Fortune 500s still support mainframes, and some have quite a few of them. Regarding fragmentation, I don't think they fear it but embrace it. Writes hit "idle" disks (idle being relative, but with 480 disks some must be more idle than others) and reads are mostly cache hits.
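A rough sketch of the dispersal idea (a generic illustration of my own, not Infinidat's patented placement): each 14+2 stripe draws its 16 members from the whole pool, so every disk participates in many RAID groups and a rebuild engages all the spindles at once:

```python
# Generic sketch of "virtual RAID groups": each 14+2 stripe picks its
# 16 members from the whole pool, so every disk serves many groups.
# (Illustration only -- not Infinidat's actual placement algorithm.)
import random
from collections import Counter

POOL_DISKS, STRIPE_WIDTH = 480, 16          # 14 data + 2 parity
chunks_per_disk = Counter()

for _ in range(100_000):                    # place 100k stripes
    for disk in random.sample(range(POOL_DISKS), STRIPE_WIDTH):
        chunks_per_disk[disk] += 1

counts = chunks_per_disk.values()
print(f"min {min(counts)}, max {max(counts)} chunks per disk")
# roughly even spread -> a rebuild reads from all 480 spindles
```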
Wait a second...
Some of us are paying attention out here, by the way.
What about this: doesn't HD growth still outpace SSD going forward (see the chart in the link below):
Is it an accident the Wikibon study doesn't touch on total PB on the floor and the SSD<->HD ratios?
If one of these guys is using tape, we could hope for a price war. At price parity, and as a consumer end-user here, I'm not leaving Glacier. Photos and work docs for when I leave are what I have in the big ice cube. If someone writes a nice interface like FastGlacier and Google cuts their prices in half, I might consider switching. As for taking it out, small users like me can trickle it out, so I wouldn't pay to do that.
> IBM was the first to introduce sub LUN automated tiering,
Soran and crew created it and introduced it at Compellent in 2005. I'm not seeing anything that talks about Easy Tier existing in 2005 or 2006. It's hard to tell when Easy Tier was introduced via dated Google searches, but it looks later than 2006 - is that right?
As far as I can tell, the ceiling on the service life is mechanical - everything I read, everything I've been able to google. If you have some hidden knowledge that shows otherwise, please share; I'm much interested, and it would have me re-working my presentation B^).. heh. If there were a 7-year-warrantied hard drive, the big players would use it in the 1000+ drive frames and perhaps extend their bundled warranties out further, making their long-term costs less, ROI better, etc. There are so many reasons a much-longer-warrantied drive makes sense, but it isn't here yet - if ever. I believe it is a hard physics/mechanical problem. Again, Charles 9, if you can show us otherwise, much appreciated.
> But note your own words: "other factors being equal".
The only thing I was trying to do there is "apples to apples" - in other words, a 7200 RPM 2 TB versus the same, a 15K 2.5" 900 GB SAS versus the same, etc. So some twit wouldn't run off in a "what about this and that" direction, which invariably happens.
> IOW, they didn't have a good reason to build a seven-year drive.
Well... this is like the 100-miles-per-gallon carburetor: it doesn't exist, but it's fun to speculate nonetheless. I'd speculate that if vendor A were to deliver a 7-year-warrantied hard drive, they would capture a large segment of the marketplace (other factors being equal). The reason to go beyond 5 years is there and always has been.
Well, actually, the service life (or warranty - how long it is under warranty, take your pick) is 5 years. The reason manufacturers don't go beyond that is they can't: spinning parts, and the failure rate greatly increases. There's plenty in the GooglePlex that speaks to this. These guys think that 50% of their drives will still be running in year 6:
"Going forward, we'll want our 3/4/5/6TB drives to last longer. My current server box is running 2TB drives, and most of the drives have 4-5 years of spin time already, with no urgency to replace them any time soon."
Maybe because you don't understand the risk? Is everything you have RAID6? Because at some point a UBE/URE may bite you in the ass and a RAID5 rebuild will go belly-up.
I don't think extending the service life of hard drives will happen. It's 5 years for a reason. If someone could do 7 years, they would have, and they would have cornered a nice chunk of the market. The problem, of course, is spinning parts; they only last so long.
You like to trot that article out don't you? It's from 2003. I think I read an article in 1980 that said IBM was killing off mainframes.
"POWER8 is the last POWER generation."
"IBM was showing off a part, has systems of all sizes up and running in its labs using the Power8 chips, and has been designing the Power9 processor for quite a while already, according to Starke."
Say... you aren't perchance a lib, are you? Libs tend to make up facts, not let facts get in the way of a good story, etc.
Haven't googled it, but that wasn't a five-year plan, perchance? Five-year plans don't work out so well. Heh. The problem with some of these larger companies is they lack elasticity. They can't change fast enough, and/or they misinterpret the change around them, head off in a direction, and land some place the industry isn't going. High-end monolithic storage arrays on the decline... "hmmm... what does that mean? Where should we be heading?" etc. As Potts quotes the much-venerated Steve Jobs: "If you don't cannibalize yourself, someone else will." I wouldn't suggest for a second this is easy; some will get it right, others will shore up the walls for the time being and speak good Wall Street babble on earnings calls until it becomes quite clear they have been hollowed out and there is considerable erosion behind the walls. The "two for one" spends that Amazon, Google, FB, and MS are employing are having a devastating effect on IBM, HP and other traditional players. Borrowed that last bit from Campbell, here: http://storagemojo.com/2014/10/17/shadow-it-pt-2/
"So you choose, 32-socket POWER7 or 16-socket POWER8 - you will not get better performance by choosing POWER8. So what is the point of POWER8"
In this case, cheaper licensing costs. One reason Power eroded Oracle-on-Sun hardware over the years is that the Power perf was so much better, even with the core "factor", that Sun made a lot less sense for running Oracle. I've been part of a migration - a huge migration - from Sun to Power, moving to a lot fewer cores and saving a bunch on Oracle licensing costs.
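A hypothetical back-of-envelope of how that saving falls out (Oracle licenses per core times a processor core factor; the core counts, factors, and list price below are made up for illustration):

```python
# Hypothetical per-core licensing math: fewer, faster cores cut the
# license count even when the per-core factor is higher. All numbers
# below are illustrative assumptions, not any real deal.
import math

def licenses(cores: int, core_factor: float) -> int:
    return math.ceil(cores * core_factor)

LIST_PRICE = 47_500                              # assumed $/processor license
old = licenses(cores=128, core_factor=0.5)       # sprawling older estate
new = licenses(cores=48,  core_factor=1.0)       # fewer, faster cores

print(f"{old} -> {new} licenses; ~${(old - new) * LIST_PRICE:,} delta at list")
```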
"A table of hardware costs Wikibon has prepared shows that, looking at 10-year cumulative hardware costs, a disk-only archive costs $5.5m while tape-only is far lower at $0.8m."
There's a Dilbert cartoon moment out there somewhere.
"Tape isn't sexy, let's kill tape"
"What's sexy got to do with it, it is a lot cheaper"
"I read in an airline magazine that facebook has a cool archive, we need to do something similar or we won't be cool"
"We don't have budget"
"I'll find the money, I think there are several projects that are a bit fat as it is"
"What will it gain us in the long run?"
"I'll call up me buds and we too will be in an airline magazine, free adverts, plus we will be cool!"
Actually... Comcast's greatest fear is when Verizon FiOS hits a neighborhood. I've watched our neighbors switch to Verizon one by one. I can tell collectively we are still with it by the wireless routers available. I got tired of the nickel-and-diming. I'd call them up to get the discount after my "special" expired, get sent to "retentions," quite politely mention the great Verizon deal I was staring at, and Comcast would do the right thing and give me the "special" pricing. After the last go-round, I tired of it and switched to Verizon with lifetime DVR, triple-play, etc. They don't play the game; after the special, my price went up all of 10-15 bucks, no biggie. The in-laws went with Comcast... I spent 3 months and a number of phone calls on their behalf just to get that $200 Visa debit card. You have to fight tooth and nail for those things.
Where will they come from? Overseas I would bet.
It won't be me. I've been courted on several occasions to do storage. Not a chance. The nights-and-weekends work is enough to kill you. I've watched a few tired wretches do the wrong thing; one unfortunately blew up several hundred VMs ("hmmm... SRDF in this direction... wait, no, this direction... oops"). The stress and demands are not worth it, and the smart ones avoid it like the plague.
I wouldn't pretend everything belongs in the cloud. Certainly not with some of the workloads and access you describe. For others, they felt they had no choice. Netflix being the poster child.
http://bit.ly/1gVnxqX "The overriding reason for the Netflix move to a public cloud generally, and to AWS in particular, was that the company could not build data centers fast enough to meet the often-spiky demand of its users and that AWS was the only game in town in terms of the scale that Netflix needed." More details about that there and elsewhere - Google is your friend.
But yes, there will be corner cases and situations where it will not be a good fit. But the cloud folks are bending the cost curves down; the bean counters will count the numbers and the dash will begin. I'm not saying the dash is underway - but it will happen. "Then tell me, with a straight face, that the future is to have all workloads in the cloud." Maybe the reasons won't be good, but the numbers folks rule the day. In the long run there will be a tipping point - I'll bet you on that - and it will become quite apparent.
"The cost in lost profits from downtime" Yes. I often use this example of a cable cut and the havoc that resulted: http://bit.ly/1eCCFJT I expect there will be SMBs that totally moved to the cloud, cables are cut and folks have to work from home for a few days. Yes high profile burps in Amazon, but those are becoming less frequent as they tighten their processes.
I agree with many of these points. Because of latency and bandwidth, all the pieces would reside in the datacenter you are accessing remotely. Trying to do the hybrid thing is very expensive. Regarding DR, I would hope there will be next-gen solutions for that - SRM on steroids or something.
Elsewise, no Stuxnet to slow down a uranium enrichment program.
I recall, back in the day at a conference, sitting next to folks at lunch and asking what they were all up to. I was surprised at the number of consultants who worked at nuke plants supporting the VMS infrastructure. Of course, I'm sure a lot of that has been ripped and replaced with Windows, with multiple layers of firewalls and VPNs - one would hope (if not air gaps; I have no knowledge of, nor care to know, the actual setup). If the majority of SCADA were still VMS-based, we would have been so screwed... no way to stuff a virus on it and slow down uranium enrichment programs - that's for sure. Bombs away (a lot sooner than planned - heh)!
Oh for those not good at reading between the lines or interpreting intent, there is quite a bit of snark in this post.
... and makes the nightly news as Brian Williams intones about the cesium plume approaching California. Narrative? Nuclear = Bad. Bald-eagle-killing wind power generation = Good. Oh, the ecological Gordian knots the greenies twist themselves into.
"This could go a long way to explain the c. 2°C temperature difference between urban and rural areas." That and two other factors. Cities are concrete and asphalt heat islands , secondly temperature recordng stations are often very poorly placed. On roofs or too close to heat sources like parking lots.
Long term trends? Sure, how's this work for you?
"And maybe a bit a scientific analysis might be welcome, as well: what part of the physics underlying the concept of radiative forcing do you find issue with?"
What are you talking about? Trot something out. Be specific.
The problem, of course, is that warming is a very hard sell when you are freezing your ass off and haven't seen this much snow in decades (a large portion of the US of A). Likewise, Europeans probably remember the winter of 2012: http://en.wikipedia.org/wiki/Early_2012_European_cold_wave
Couple that with a pause or plateau in temperature rise, 17+ years now:
http://www.forbes.com/sites/jamestaylor/2013/09/26/as-its-global-warming-narrative-unravels-the-ipcc-is-in-damage-control-mode/ and toss in the lowest-ever recorded Antarctic ice shelf melt: http://www.theregister.co.uk/2014/01/03/antarctic_ice_shelf_melt_lowest_ever_recorded_just_not_much_affected_by_global_warming/
And the warmists are suddenly feeling like a politician with numerous scandals and without a kiss-up press in their pocket. Oh, that's right, they have a compliant press; still a tough sell, ain't it? Barry was doing a full-court warmist press conference from a golf course in Cali, handing out a billion-plus for California to fight warming or some such. The USA ain't buying it at all, as warming polls very poorly here. Press on, warmists; it'll be a tough slog.
Yeah... and I suppose it gets worse. Imagine best-laid plans. Google isn't coughing it up, but I recall an airline in Minnesota that did the right thing - two separate carriers - and then a construction company cut through the fibre bundle that was carrying both carriers. Easy prediction: we'll read about a company that went out of business because they were cloudified, and during the day and a half they were down, the customer abandonment rate was so severe they never recovered. And it will be a cut cable that put them out of business.
"On-premise data centres will also need bulk, online disk storage, with RAID rebuild time a continuing problem"
I keep seeing this... time isn't the issue. A failed rebuild is (obviously). The straw man is an additional drive failure while the rebuild is taking place; that's extremely rare. The real problem in a RAID5 rebuild is a bad block when rebuilding - tits-up at that point. To get around this, RAID6 is the answer for most. But now we veer off into the more painful write penalty of RAID6; throw a lot of drives at it and the pain lessens (simplification). MTDL for RAID6 is 110 years (google: Intel RAID6 paper), which has one scratching one's head when reading about ZFS triple-parity RAID - still trying to figure out why triple parity. Finally, SMART tech has most drives undergoing pro-active replacement, making RAID5 less of a risk. But RAID5 still scares me. I've seen or heard of too many RAID5 failures on rebuild. I perseverate - sorry.
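To put a number on that bad-block risk (my back-of-envelope, assuming the common consumer spec of one URE per 10^14 bits read and a rebuild that must read every surviving drive end to end):

```python
# Back-of-envelope: odds of hitting an unrecoverable read error (URE)
# during a RAID5 rebuild. Assumes the common consumer-drive spec of
# 1 URE per 1e14 bits read; enterprise drives are usually 1e15.
URE_PER_BIT = 1e-14

def p_ure_during_rebuild(surviving_drives: int, drive_tb: float) -> float:
    bits_read = surviving_drives * drive_tb * 1e12 * 8
    return 1 - (1 - URE_PER_BIT) ** bits_read

for drives, tb in ((4, 2.0), (7, 4.0)):
    print(f"{drives} survivors x {tb} TB: {p_ure_during_rebuild(drives, tb):.0%}")
# ~47% and ~89% respectively -- hence "RAID6 is the answer for most"
```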
I stumbled upon this and thought it rather interesting. Perhaps a case of the left hand not knowing what the right hand is doing. Or a bit of misdirection? Either way, one lifted quote I found interesting:
"Additionally, the economics of Glacier are not competitive with tape. Glacier's pricing is being promoted at as low as $0.12 per GB per year. The comparative costs for a Spectra T-Finity tape library are $0.0008 (cost per GB per month amortized over 5 years). Add in power, floor space and personnel costs for all 5 years and the total cost should still be well below $0.01 per GB per year for the period."
I'd think that if it is close to $0.01 per GB per year, there is plenty of headroom for Amazon (assuming Glacier is tape) to come down in price.
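The arithmetic from the quoted figures:

```python
# Tape vs Glacier, per the quoted figures.
tape_per_gb_year = 0.0008 * 12      # $0.0008/GB/month -> ~$0.0096/GB/yr
glacier_per_gb_year = 0.12          # promoted Glacier price

print(f"tape:    ${tape_per_gb_year:.4f}/GB/yr")
print(f"Glacier: ${glacier_per_gb_year:.2f}/GB/yr")
print(f"headroom: ~{glacier_per_gb_year / tape_per_gb_year:.0f}x")  # ~12x
```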