That depends on the EMC system
Disclosure: NetApp employee
Some of the EMC kit that isn't designed for five nines of uptime (e.g. Isilon, Data Domain) is based on hardware like Supermicro servers, which is mostly pretty good, but there's a reason that hardware is cheap (and it's not massive economies of scale). Unlike servers designed for virtualisation, which like cattle should be shot when they get sick, a device designed to maintain the health of critical data needs to be built out of stronger and more resilient stuff.
The kit with decent hardware engineering does have Intel bits in it, but it is designed to much higher standards than your usual commodity stuff. You'd be surprised how much engineering goes into making something you can really rely on. When you expect to ship tens of thousands of units, even very low annual failure rates can turn into lots of very unhappy campers. Good companies with a decent sales volume take a lot of care to make sure they don't lose their customers' data. Many startups, on the other hand, take advantage of that risk aversion and push products out the door with impressive stats but less impressive resiliency, as fast as they can, before they run out of VC funding. If a startup with a small installed base has a 1% annual failure rate, that might translate to maybe 10 unhappy customers, which is a manageable risk.
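The back-of-envelope arithmetic here is simple expected-value math. A quick sketch (the 1% rate and the unit counts are illustrative assumptions, not real vendor figures):

```python
# Expected annual hardware failures at different ship volumes.
# Figures below are illustrative assumptions, not vendor data.

def expected_failures(units_shipped: int, annual_failure_rate: float) -> float:
    """Expected number of units that fail in a year."""
    return units_shipped * annual_failure_rate

# A startup shipping ~1,000 units at a hypothetical 1% AFR:
startup = expected_failures(1_000, 0.01)      # ~10 unhappy customers: manageable
# An established vendor shipping ~50,000 units at the same 1% AFR:
big_vendor = expected_failures(50_000, 0.01)  # ~500 unhappy customers: a reputation problem

print(startup, big_vendor)  # 10.0 500.0
```

Same failure rate, very different business consequences, which is why volume vendors invest so heavily in QA.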
In short, don't fall for the old "it's all the same hardware" schtick that gets pushed by a number of relatively uninformed folks, and keep in mind that there are a bunch of very good reasons why vendors like EMC, NetApp, and HDS have QA departments that dwarf the entire development teams of those startups.