46 posts • joined Friday 1st October 2010 06:31 GMT
Re: The HP way..
Good point. Though many of the new hires are being appointed by the same lame ducks that have been at HP in their positions for eons. Never hire anyone better than you for job security :)
Re: The HP way..
Nothing wrong with career progression; however, in a tech company change is constant, and the ability to see the forest for the trees only comes from broad experience, which requires moving around the industry a bit.
Anyone who spends 25 years in a single big corporate rarely has the personal drive to be an individual and mix things up and is happy being a number in the corporate machine.
In HP's case this is especially true. Their veterans have been around too long, are stuck in the heyday, and their determination to cling to the past will not transform a company that is struggling to work out what it should be doing. I've yet to meet a long-time HP management type with any value to add.
The HP way..
What HP needs is new blood with fresh thinking.
90% of HP's issues stem from long-time HP staff, disconnected from the real world, who resist change and insist on the HP way.
Re: yeah right
The base spec for both the 7450 and 7400 is the same; the 7200 has lower-spec CPUs. There is a single code stream for the whole product lineup, so every device gets the same features and capabilities. This has always been the case, from the old S-class to the current 10000 series.
Don't get me wrong, the 3par will be a good array that's trying to do the SSD-thing - way better than the rest of the mainstream vendors. The new kids on the block are just better, faster and often more cost-effective.
Go and test them. Few vendors will put their money where their mouth is. Let the tech talk for itself, not the product managers.
Re: yeah right
There's only so much you can coalesce by holding off writes in cache. It was a minor tweak to the cache code that applies to all systems running the current InForm OS; large disk-based systems would also hit the cache bottleneck under the right workloads. The 3PAR is a great disk array, but it's still largely treating SSD as disk. Its architecture is vastly superior to EMC and the rest of the rats in dealing with SSD, but it will not be as efficient as the purpose-built next-gen devices.
Don't forget that when 3PAR came to be, they defined what a next-gen array was, especially with thin provisioning.
Technology moves on and 3par is now a mainstream technology trying to adapt to new concepts but it's still bound to an architecture that was made for disk. Put the systems side by side, you'll see what I mean :)
If you look at the SPC numbers for the all-flash 3PAR, it's nothing special. They are making a lot of noise about very little. The array doesn't treat flash specially; the cache algorithms were just slightly changed, mainly to stop them becoming a bottleneck.
You can build this system today on a 7400 - same hardware, it's just been given a new model number so HP can have a dedicated flash product on its books for analysts.
It's all very "me too" and frankly quite late..
Pure is an array that does inline dedupe and compression. At the average ratio of 6:1 22TB gives you over 100TB of actual user data.
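A back-of-envelope check of that arithmetic (the 22TB and 6:1 figures are the post's own claims, not measured values):

```python
raw_tb = 22          # usable flash capacity cited in the post
dedupe_ratio = 6     # claimed average inline dedupe + compression ratio
effective_tb = raw_tb * dedupe_ratio

print(effective_tb)  # 132 -> comfortably over 100TB of user data
```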
Dedupe is something Nimbus claims to do as an option but never uses, as it affects performance too badly..
to stupidity and beyond!
We should send Gordon Ramsay in to sort them out.
Their management GPS is so off course they've driven into the sea and kept on going.
I've done the maths often enough on large VDI deployments. No one "saves" £9M without having some seriously wrong numbers.
ROI is difficult to get right unless corners have been cut.
just use the webapp
I realise that most of the people commenting are clearly so directionally challenged that they are incapable of getting anywhere without their noses stuck in a smartphone.
Go to google maps, when prompted to install a webapp on the device say yes. Enable safari location services and hey presto you have a google maps app.
I'm not an apple fan but it is sad just how much time people that don't have an apple device spend taking pot shots. Clearly Android and every other non-apple device is so flawless that it's not worth writing about but seriously - get a life. It's JUST a phone.
Considering how long commercial jet aircraft have been around, there have been rather modest changes in aircraft design and materials.
40 years isn't a long time, so it's highly unlikely that we will see a quantum shift in what constitutes an aircraft.
It takes a manufacturer almost a decade to design, test and deliver a new aircraft.
With safety restrictions and processes it is next to impossible to deliver on this vision.
Getting airports to support a double-decker aircraft took forever; what are the chances of getting catapults working in a short timeframe?
Expect evolution not revolution.
Re: give me ssd write caching on 3par!
By default any write is committed to array cache.
With adaptive caching a 90% write workload would use 90% of the 3par cache which is already better than any other systems with fixed read/write cache sizes.
You could also use AO with the "default" tier being on SSD - this would give you the performance you want and would push cold data down to lesser FC or NL (or both) tiers.
Array cache is always faster than SSD anyway, just need enough disk to destage fast enough.
Cheap and nasty car (and butt ugly) - worst review.
Not sure what this has to do with IT. No self-respecting IT person would be seen dead in one of those.
Takes more than calling stuff "kit" to write an interesting motoring piece.
More top-fail than top-gear.
that's midrange storage for you
What do you expect when you ask a midrange storage device to deliver enterprise-class reliability?
You get what you pay for..
Aside from the fact that IBM have one of the weakest storage portfolios on the planet, how can the SVC be a good thing?
Most vendors are striving to deliver virtualisation capabilities in the array.
Adding another layer to your storage environment that still requires the actual storage to be managed is a very bad thing - how can additional points of failure and channelling all your data through another potential bottleneck help?
IBM developed SVC to try to compete with capabilities that other vendors already had in their hardware and that are missing in IBM's own tired storage lineup.
The only people that love their SVC are those that have no clue about what enterprise storage is.
Flash appliances and enterprise disk systems are in separate markets and fulfil different requirements. Customers need large amounts of capacity that delivers high-levels of performance to a LOT of attached hosts. You can't do this today with a flash appliance.
These benchmarks show efficiency and performance - the cost per IOP was excellent for a high-end device and didn't require additional external virtualisation appliances to stripe across multiple backend arrays.
A single virtualised array that can scale this far - not easy to do or EMC would have benchmarked the VMAX ages ago.
Pull your head out of Netapp's backside long enough and you may understand what storage is really about.
After many attempts the upgrade on my iPhone went through. The app restore failed, losing a third of my apps and all my music. It took re-syncing and manually putting some settings back in to get it sorted.
Hopefully the ipad will go better..
Computer says no..
The problem with HP today is that it's full of people who have been there for 20 years or more and have no clue about the world outside HP's walls. They need new blood to get this company going again. Unfortunately any newcomers will find it hard to change a business where a massive bureaucracy has been built on "the HP way".
Anyone who has experienced the inner workings of the company will know that massive change is required in every aspect of the business.
Ironic, considering HP has always preached agility but has no clue how to achieve it in its own business.
round and round we go
Let's not forget it's the board that hires the CEO. Since they keep getting it wrong, perhaps it's time for new board members.
It is largely irrelevant who is at the helm, the massive amorphous mess that is HP will never change. It's too full of long time employees that have lost touch with what the modern IT world is and are happy to keep their heads down and disappear in the corporate machine.
Latency is down to the array's ability to service requests from disk or cache and has nothing to do with the wet piece of string in between (assuming you have a fabric made in the last decade).
Running apps on an array is a waste of money and just plain stoooopid!
Who thinks of this cr*p?
Only dumbass end-users and journalists would think it's a great idea.
The T will run the same code that the P10000 (formerly known as the V series) will.
So it will support the same features and will be fully 64-bit.
3PAR has a common hardware architecture so all systems run exactly the same software.
As long as the hardware supports it, all older systems inherit new features and abilities.
give it a rest
SAS is an interface, not a tier. Enterprise flash disks are usually based on SAS anyway, with a handful on FC-AL.
You can have different tiers of disk with the same interface, larger slower ones that are cheaper than smaller faster ones.
Tiering is only as effective as the mechanism that moves the data, and EMC have yet to prove that FAST actually is fast. It certainly isn't easy to use, which defeats the purpose of a technology that is meant to be all-encompassing.
Maintaining the suitable ratio of flash to required non-flash capacity is also not cheap nor practical (yet). Much like EMC's cache to disk ratio - they are happy to charge you a truckload for it but the benefits are limited.
I look forward to an EMC SPC result on the VMAX. It is impossible to hide the complexity (and cost) it takes to produce a big number. I am sure they will achieve a reasonable number; it's just that the cost per IOP and cost per usable TB under SPC testing will be massive - and that is something they don't want the world to see.
Try writing about things you understand: more fact, less "guesswork". If readers wanted guesswork they'd go to a psychic!
Also, yet another NetApp reference, so at least it's consistent. Thanks for taking the time to vacate NetApp's back passage to attend an EMC jolly..
The latest delivery from the back passage of EMC's product management straight to your desktop..
Don't suppose EMC happened to pay for El Reg's trip over?
Spend enough time with the EMC brainwash machine and soon enough you will believe the nonsense it churns out.
Just goes to show that EMC doesn't just invite any old tool to these vegas knees-up sessions.
At least the author is consistent in the BS that gets churned out.
One day, maybe one day we will get some decent reporting on things relevant to the real world of storage instead of this drivel.
Put it under load for a period of time, throw in a disk failure and see what happens.
I've forgotten how many EVAs I've replaced that have grown beyond their usefulness.
There is a reason that many customers have switched storage vendors after starting with an EVA - often to 3PAR, which HP incidentally acquired.
This is the EVA's swansong.
Don't hold your breath for any developments after this (long overdue) model.
The same software and reliability issues have plagued the EVA since it first came into being.
Bolting more features onto the device is unlikely to make it any faster or more reliable.
Such a pity that years of underinvestment relegated what was a revolutionary device to complete irrelevance.
Considering the IOPs rating of the nearline disks used in an XIV, you would need to drive them beyond the spec where latency is optimal and get a seriously good cache hit rate to achieve that result.
The ATA disks have just over a third of the IO capability of a similar FC disk, so maintaining 30K IOPS is unlikely (and won't be pretty).
The result from the previous Netcr*p (3170) scored 60K IOPS. The latest and greatest 3270 (loaded with PAM cards and all that other whizzy stuff) got it up to a maximum of 68K IOPS.
In 2 years with a major upgrade all it could muster was a feeble 13% more?
Wow, that really demonstrates how well it scales.
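For what it's worth, the 13% figure checks out against the two SPC results quoted above:

```python
old_iops = 60_000   # SPC result quoted for the 3170
new_iops = 68_000   # SPC result quoted for the 3270

gain = (new_iops - old_iops) / old_iops
print(f"{gain:.1%}")  # 13.3% - a feeble uplift for two years and a major upgrade
```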
Must be super efficient when you end up buying 10 of them to do the job of a single enterprise array.
But then NAS is the future - so why have they just bought a block storage company?
A few mixed messages from the home of Notwork appliance.
Zzzzz so what?
Seriously. A whopping 70k IOPs?
Of 92GB configured only 32GB was used.
With the latest and greatest tech they have managed a mediocre result.
Seriously dude, if it's a bank holiday weekend, rather write nothing than turn in the complete drivel you have just released onto the interweb.
Shame on you!
I think you're missing the point.
Tiering is about efficiency and cost - making sure that data is on the right type of disk at the right time. Access time is not a relevant metric for tiering, it is generally based on frequency of access so that idle/stale data can sit on cheaper disk until it becomes relevant again.
Irrespective of the tier used, all disk should be resilient so data integrity is never compromised.
What you should be focusing on is complexity. As much as most vendors have some form of tiering technology, very few have an implementation that is easy to use and sufficiently granular to not compromise performance.
SSD (for many vendors) is a great marketing tool but does little to make tiering relevant or to improve performance. In a well-configured system very few applications will benefit from the reduced latency SSD offers. In fact, in high-end arrays, many cache algorithms prevent getting the full SSD benefit anyway. The real benefit of SSD is being able to get many IOPS from relatively few disks. Due to cost and size constraints, this only works if you can tier data on a sub-LUN basis.
In my experience, very, very few products out there can deliver this in a meaningful and sustainable manner.
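To make the frequency-based point concrete, here's a toy sketch of sub-LUN tiering. All names and thresholds are hypothetical illustrations, not any vendor's actual algorithm: blocks are placed by how often they were touched during a sampling window, not by when they were last accessed.

```python
# Toy frequency-based sub-LUN tiering (hypothetical thresholds, for
# illustration only). Hot blocks are promoted to flash, stale blocks
# are demoted to cheap nearline disk, everything else stays on FC.
def place_blocks(access_counts, hot_threshold=100, cold_threshold=5):
    """Map block_id -> tier name based on access frequency in the window."""
    placement = {}
    for block, hits in access_counts.items():
        if hits >= hot_threshold:
            placement[block] = "ssd"        # frequently hit: promote to flash
        elif hits <= cold_threshold:
            placement[block] = "nearline"   # idle/stale: demote to cheap disk
        else:
            placement[block] = "fc"         # warm: keep on mainstream disk
    return placement

counts = {"blk0": 500, "blk1": 50, "blk2": 2}
print(place_blocks(counts))
# {'blk0': 'ssd', 'blk1': 'fc', 'blk2': 'nearline'}
```

The granularity point from the post falls out of the sketch: the smaller the block the placement decision applies to, the less cold data gets dragged onto flash alongside the hot data.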
Slag the Alpha as much as you will, it was still a performance leader until poor management killed it.
Much of the architecture used in today's x86-64 CPUs has DEC engineers to thank for getting it where it is.
Let's also not forget DEC's contribution to storage. They did pioneering work in SCSI systems and created what is today HP's StorageWorks division.
Had the company had better leadership and marketing the current lineup of vendors may have been somewhat different..
steaming pile of...
I suggest you read up on some basic laws of physics..
With most organisations clamouring for low-latency disk solutions using SSD, there will always be a massive market for disk vendors to provide online tier 1 and tier 2 disk. It does not matter how creative you get with network optimisation; the technology will never allow for anything more than archival or backup data to be sent cloudwards.
Most organisations are focused on the private cloud which, guess what, exists on their own infrastructure.
Stop beating the cloud horse to death and recognise it for what it is...
(and please get some adult supervision for your next ramblings)
More stolen ideas
By definition a stack of technology adds complexity; it does not simplify it. Clever tools and updated GUIs may bring it together, but they do not remove the inefficiencies that exist when multiple layers have to work together.
Modern devices have built these capabilities natively into their product and don't, like EMC, have to rehash existing technology and create a massive storage strap-on.
Every recent announcement from EMC has stolen concepts from other, more progressive tech and presented them as EMC's own. Those same vendors are the ones EMC refuse to acknowledge when they discuss their competition.
Mr author, how about a frank and reg-style analysis of EMC? The constant rim-jobs must be wearing you out!
The largest change in VMAX was the move to industry standard controllers (which incidentally EMC used to say was a bad idea and gave the vendors that did it a tough time). This approach has been adopted by most of the legacy "big name" vendors now.
Outside of that, the logical design of the VMAX (how data is addressed and written) is pretty much similar to its predecessors'. The core OS (along with the hypers and metas concept) is largely unchanged. A lot has been bolted on top of this and wrapped in management tools to bring the whole lot together.
This limits the effectiveness of any new capabilities.
Modern-day computing requirements are somewhat different from those of the mainframe era. As long as the fundamentals of the VMAX are based on what is now a fairly archaic approach to writing data, it will never be able to compete with emerging midrange technologies and new-generation vendors, and it will become as niche as high-end UNIX systems and mainframes are increasingly becoming.
EMC up to their usual marketing rubbish:
It's FAST - because we say so.
How about some public benchmarks or better yet letting people freely post theirs without being taken to court.
VMAX is based on the same tired architecture, the only thing that has improved is the quality of EMC's powerpoint slides.
Bin this relic and move onto something that doesn't trace its origin back to the dawn of mainframe time.
Yet another overhyped EMC press campaign "revolutionising" the storage industry. Record breaking Zzzzzzzz blah blah.
Doesn't matter how much lipstick you put on a pig it's still a pig.
Brace yourself for more powerpoint waffle, "leaks" and gullible journos that will buy into it.
In the end all that is left is a Clariion in drag, you can't polish a turd.
it's the end of the world as we know it..
So if LSI are killing the product and HP pulled the plug on the EVA implementation why would the SVSP still exist?
I think if you look closely you'll find it has shuffled off its mortal coil..
enterprise is more than a spaceship in star trek..
That may be a cool-sounding idea, and for really, really small organisations that think data is dispensable it might even be a good one.
What differentiates an enterprise hardware solution from something cheap and cheerful is the integration and error checking of all underlying components. When good tech goes bad, it is the ability to manage the error conditions in the underlying disk that allows you to preserve data and service delivery. Ignoring the underlying hardware and managing everything from software puts this responsibility elsewhere, and with JBOD the resilience simply isn't there to begin with.
High-end disk isn't going to disappear anytime soon, and trendy software/virtual appliances aren't going to change that.
If your current disk is complex maybe it is time to find a newer more relevant vendor.
64GB to 1TB in 5 years is not impossible, but fitting that into a small form factor without some major changes to memory density will be fun.
Unless of course you want an iSlab.
May as well wait for wireless FCOE and hook your iFad to your home storage device :)
HP haven't added anything to the XP in the last few years, apart from cost.
Considering the acquisition of 3PAR, this messaging from HDS doesn't come as a surprise.
The ship has sailed and HP are trying to find the life jacket.
It is amazing how many ill informed people think that flash is the future.
The underlying architecture is no less flawed than rotational disk.
In the disk array world there is a whole lot of technology that is required to make sure your data is safe and sound and this, for the foreseeable future, will not go away and will in most cases be the bottleneck. As much as memory technology improves, its ability to meet growing customer data requirements will always lag behind. The cost per gig will also make broad adoption difficult.
Flash based storage has a place within a tiered environment but is a long, long way from ruling the datacenter.
The point solutions from TMS and others simply lack the pedigree and many other features that people need in the real world.