Re: Who says UCS are using FCoE, all our templates are iSCSI vNICs
A lot of customers have FC infrastructure already, and a lot of customers don't refresh servers and storage at the same time. It's not uncommon for us to have to spec out this year's new server solution to work with last year's new SAN solution, and also make it future-proof for next year's LAN refresh. Outside of schools and small businesses it's very rare to see a complete refresh done all at once as a single project.
If the infrastructure is 4Gb FC we'd normally rip it out, at which point it's fair game what protocol we put in (in agreement with the customer, of course). But 8Gb FC has been around for so long now (7+ years?) that on most FC SAN refreshes we end up leaving the switching in place and effectively just swapping the storage array.
Frankly, it's a myth that iSCSI is cheaper than FC. Sure, 1Gb iSCSI is cheaper than FC, but when you look at it properly 10Gb iSCSI can actually be bleeding expensive to implement. At small scale (i.e. before you start having to buy port licences) FC is quite often cheaper. There's also the point that the encoding and protocol overheads on Ethernet and iSCSI mean the real-world gap between 8Gb FC and 10Gb iSCSI is much smaller than the headline numbers suggest, and in practice 8Gb FC often comes out ahead.
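For what it's worth, here's the back-of-envelope wire maths (a rough sketch using nominal encoding and header overheads only - it deliberately ignores the TCP/IP processing cost on the host, which is where FC tends to claw back ground in practice):

```python
# Nominal payload bandwidth: 8Gb FC vs 10Gb iSCSI.
# Encoding and header overheads only; real-world results also depend on
# TCP/IP stack cost, offload support, queue depths, latency, etc.

def fc_8g_payload_mbps():
    # 8GFC signals at 8.5 GBaud with 8b/10b encoding -> 6.8 Gbit/s of data
    data_gbps = 8.5 * 8 / 10
    return data_gbps * 1000 / 8  # MB/s

def iscsi_10g_payload_mbps(mtu=1500):
    # 10GbE signals at 10.3125 GBaud with 64b/66b encoding -> 10 Gbit/s
    data_gbps = 10.3125 * 64 / 66
    # Per frame: preamble 8 + Ethernet header 14 + FCS 4 + interframe gap 12
    # on the wire, plus IP (20) and TCP (20) headers inside the frame
    payload = mtu - 40   # TCP payload bytes per frame
    wire = mtu + 38      # bytes consumed on the wire per frame
    return data_gbps * 1000 / 8 * payload / wire

print(f"8Gb FC:     ~{fc_8g_payload_mbps():.0f} MB/s")
print(f"10Gb iSCSI: ~{iscsi_10g_payload_mbps():.0f} MB/s (MTU 1500)")
print(f"10Gb iSCSI: ~{iscsi_10g_payload_mbps(9000):.0f} MB/s (jumbo frames)")
```

So on raw bandwidth the 10GbE pipe is actually wider; the "FC is faster" experience people report comes from the latency and CPU overhead of running storage through a TCP/IP stack, not from the wire rate itself.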
In terms of the NAS you're using, I don't object to your premise, but I question whether two of those units would really cost £160k, and whether playing EMC (not known for being the cheapest) against QNAP is a remotely fair comparison. I don't know EMC pricing, but you'd easily be able to get a comparable storage array for around £25k. If you played the vendors against each other, entertained quotes from some of the cheaper vendors (e.g. Dell and Lenovo) and weren't too fussy about what you ended up with, you'd likely get it for around, or possibly even under, £20k.
Sure, that's still more expensive, but with most storage arrays you get things like dual controllers (rather than a single motherboard), which will likely offer more cache and more potential bandwidth. You've also got less overhead (SAS drives rather than SATA, and native block rather than file) and generally higher-grade hardware.
Also, a word of advice - please be careful with your choice of drive. By using NAS drives (typically meant for machines with 8-12 disks, though that varies by vendor) in a larger unit you might hit issues.
The drive firmware knows what it's being used for, and the manufacturers can - and will - invalidate your warranty if a drive has been used in a larger system than it was designed for; the cheap drives aren't built to cope with the heat and vibration of a bigger chassis.
Using WD as an example, note how WD Red drives are advertised for NAS systems with 1-8 bays and WD Red Pro drives for NAS systems with 8-16 bays. Beyond that you're expected to use their Re/Re+/Se drives - but from your comments about cheap NAS-grade disk I suspect you're using the cheapest you could find? In which case you've likely voided the warranty - albeit on the drives rather than the unit itself.
A lot of people like to knock the Tier 1 vendors (HP, EMC, Dell, IBM etc.) on pricing, and on the face of it that's an easy argument to make. But a lot of the time the way to make a self-built solution cheaper is to use worse parts (WD Reds, for example). As long as you 'play the game' and don't get ripped off, you'll often find that a Tier 1 solution works out cheaper in the long term.
Good luck anyway!