I don't think direct attached virtualization requires high end hardware for HA.
What you need is fault-tolerant software. Something that scales out rather than up is ideal for direct attached virtualization: if a physical server running 20 VMs dies, you don't care.
A couple of companies ago a decent portion of our web servers were deployed this way. Some of the core web apps could not scale beyond one physical CPU. Rather than try to rewrite the code, it was simpler to put a hypervisor on a dual-proc quad-core box and run 8 copies of the app (4 servers per site, or roughly 32 copies of the app per active-active site; multiple sites for reduced customer latency). Another app was higher performance and could leverage the underlying hardware to its full extent, so that web application ran on bare metal. The cost of a good shared storage array would have outweighed the cost of all the rest of the equipment at each site, so it didn't make a lot of sense (or cents) from a cost standpoint, as much as I would have jumped at the opportunity to have such a system from an ease-of-use standpoint.
You forgot to mention the hybrid approach, a la the vSphere 5 VSA or things like the HP LeftHand VSA, which turn direct attached storage into fault-tolerant network storage. Good for SMBs with low I/O needs; you can get the best of both worlds.
I don't think your storage numbers are realistic either. Measuring storage by throughput is meaningless when virtualization workloads are largely random I/O; it's all about IOPS, and the network is not the bottleneck. Sure, some workloads are throughput-bound, but in the vast majority of cases you're going to run out of IOPS long, long before you get anywhere close to running out of bandwidth (consider: a 15k RPM disk peaks at maybe 4-8 MB/second under random I/O if you're lucky). Latency is just as important as IOPS, too. If you're truly throughput-bound then you may be better off running on bare metal. Hypervisors don't make sense for everything.
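To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The 180 IOPS, 32 KB average I/O size, and ~10GbE usable bandwidth figures are my own illustrative assumptions, not vendor specs; plug in your own.

```python
# Why random I/O exhausts IOPS long before it exhausts bandwidth.
# All three constants below are illustrative assumptions.

DISK_IOPS_15K = 180     # rough random IOPS for one 15k RPM disk (assumption)
AVG_IO_SIZE_KB = 32     # typical virtualized random I/O size (assumption)
LINK_MB_PER_SEC = 1000  # roughly a 10GbE link's usable bandwidth (assumption)

def random_throughput_mb(iops, io_size_kb):
    """Throughput a disk actually delivers when it is IOPS-bound."""
    return iops * io_size_kb / 1024

per_disk_mb = random_throughput_mb(DISK_IOPS_15K, AVG_IO_SIZE_KB)
print(f"one 15k disk under random I/O: ~{per_disk_mb:.1f} MB/s")

# How many IOPS-bound disks before the network link even saturates?
disks_to_fill_link = LINK_MB_PER_SEC / per_disk_mb
print(f"disks needed to saturate the link: ~{disks_to_fill_link:.0f}")
```

With these assumptions a disk delivers only ~5-6 MB/s of random I/O (consistent with the 4-8 MB/s range above), so it would take well over a hundred spindles to saturate one 10GbE link; the spindles run out of IOPS first.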
Depending on the organization, you can design your network up front so that once your hypervisors are deployed and your virtual switches are configured you rarely have to touch them again. Across my last VM deployments (~60 VM hosts, several different clusters) I can count on one hand the number of times I had to configure a vSwitch after installing the hypervisors. My new VM infrastructure, which goes in early next month, is planned so that I don't expect to touch the vSwitches for the lifetime of the product, at least 2-3 years. Not that it's a big deal if I have to, but if I don't need to, so much the better. Anything can happen, but my experience tells me vSwitch configuration changes are few and far between.
Memory (capacity, not performance) is the driving force behind virtualization, which is why VMware added the vTAX in v5. Memory availability is just as important in these big boxes, making technologies such as HP Advanced ECC and IBM Chipkill absolutely critical to any VM deployment. ECC by itself is not enough, and has not been for years.
If mainframes were so good, then why is IBM using KVM + Red Hat for their developer cloud instead of running Linux on mainframes? (IBM used to advertise heavily about leveraging the multi-tenant abilities of mainframes; they don't seem to advertise that nearly as much anymore, and I haven't seen such an ad in years.) IBM, after all, unlike anyone else, has got to have the lowest cost of operating their very own gear, and I'm sure licensing their own software comes at no cost as well. I wrote about this a while back; the IBM developer cloud was focused around Java apps, so from a technical perspective it wouldn't matter what platform they ran it on.
What do people use when milliseconds can cost millions of dollars? More often than not these days it seems to be overclocked x86-64 systems (El Reg has many articles on this). Mainframes are what you use when you can't tolerate downtime.