There is so much fail here
I hardly know where to begin:
1) FC does NOT run SCSI over a high-level networking protocol. The parts of FC that actually carry traffic are quite low-level. Other than the simple (and woefully lame) BB-Credit (buffer-to-buffer credit) flow control, it's a low-level, fully switched transport with no more overhead than plain Ethernet.
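To illustrate the point that BB-Credits are simple, here's a minimal sketch (not any vendor's implementation) of how credit-based flow control behaves: the sender transmits only while it holds credits, each frame consumes one, and the receiver's R_RDY primitive returns one. The `BBCreditLink` class and its method names are my own invention for illustration.

```python
# Hypothetical sketch of Fibre Channel buffer-to-buffer credit flow control.
# A sender may transmit only while it holds credits; running out means
# back-pressure (waiting), never a dropped frame.
from collections import deque

class BBCreditLink:
    def __init__(self, credits):
        self.credits = credits     # negotiated BB_Credit count
        self.rx_buffer = deque()

    def send(self, frame):
        if self.credits == 0:
            return False           # no credit: sender must wait, frame not dropped
        self.credits -= 1
        self.rx_buffer.append(frame)
        return True

    def receiver_processes_frame(self):
        frame = self.rx_buffer.popleft()
        self.credits += 1          # R_RDY returns one credit to the sender
        return frame

link = BBCreditLink(credits=2)
assert link.send("f1") and link.send("f2")
assert not link.send("f3")         # out of credits: back-pressure, not loss
link.receiver_processes_frame()
assert link.send("f3")             # credit returned, sending resumes
```

The whole mechanism is a counter and an acknowledgement primitive; that's why it adds essentially no per-frame overhead.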
2) They magically ensure lossless delivery. Great. How does it pull that feat off without adding overhead? Or does it merely assume lossless delivery, like Fibre Channel, and then fall apart when the network isn't as lossless as hoped? (Getting Ethernet congestion control working and deployed is exactly what is holding up FCoE deployment.)
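For context on what "Ethernet congestion control" means here: FCoE leans on Priority-based Flow Control (IEEE 802.1Qbb), where a receiver emits a PAUSE for a traffic class before its buffer overflows, so frames are held at the sender rather than dropped. A rough sketch of that behavior, with invented class and method names:

```python
# Hedged sketch of per-priority PAUSE behavior (in the spirit of 802.1Qbb):
# when queue depth crosses a threshold, the receiver pauses the sender
# instead of dropping frames.
class PFCQueue:
    def __init__(self, pause_threshold):
        self.pause_threshold = pause_threshold
        self.depth = 0
        self.paused = False            # state the sender sees via PAUSE frames

    def enqueue(self):
        if self.paused:
            return "held-at-sender"    # back-pressure instead of a drop
        self.depth += 1
        if self.depth >= self.pause_threshold:
            self.paused = True         # emit PAUSE before the buffer overflows
        return "queued"

    def dequeue(self):
        if self.depth:
            self.depth -= 1
        if self.depth < self.pause_threshold:
            self.paused = False        # resume the sender

q = PFCQueue(pause_threshold=2)
assert q.enqueue() == "queued"
assert q.enqueue() == "queued"         # threshold hit: sender now paused
assert q.enqueue() == "held-at-sender"
q.dequeue()
assert q.enqueue() == "queued"         # un-paused after draining
```

The catch, as the comment says, is that this only works end to end if every hop implements and correctly configures it, which is precisely the hard deployment problem.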
3) "It found that it could take LUNs offline while virtual machines are running and not have a server crash. You can't do this with most if not all Fibre Channel and iSCSI SANs." False. Back-end LUN mirroring is not a new feature. IBM's SVC does this (vDisk mirroring), and I'm sure it's not a unique feature.
4) "Rock-solid reliability"? They tested this how? Of COURSE a few live part pulls didn't cause it to fail; those are the easiest failover cases to test.
5) Even with existing, reasonably simple SAN arrays you don't need a "storage administrator" to provision storage; with minimal training you're good to go. But once you have a large, reasonably complicated environment, it isn't the complexity of the CLI that holds a server admin back.