The issue of storage QoS is even more complex than the article suggests. Moving an entire host onto a specific tier of storage is not the correct solution, because most servers have a relatively small amount of active data and a large amount of inactive data, obviously with very different I/O requirements. So there needs to be the ability to identify and migrate the hot data onto flash whilst retaining the cold data on HDD. And of course the definition of hot data changes frequently over the working day, the business month, the calendar year, etc.
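To make the hot/cold distinction concrete, here is a minimal sketch of frequency-based classification over a window of recent I/Os. The block IDs, window, and `hot_fraction` threshold are all illustrative assumptions, not anything from the article; a real tiering engine would use decayed counters and much larger windows.

```python
from collections import Counter

def classify_blocks(access_log, hot_fraction=0.1):
    """Split block IDs into 'hot' and 'cold' by access frequency.

    access_log: iterable of block IDs, one entry per I/O in the window.
    hot_fraction: fraction of distinct blocks to treat as hot (assumed).
    """
    counts = Counter(access_log)
    n_hot = max(1, int(len(counts) * hot_fraction))
    hot = {blk for blk, _ in counts.most_common(n_hot)}
    cold = set(counts) - hot
    return hot, cold

# A skewed workload: block 7 dominates, the others are touched once each.
log = [7] * 50 + list(range(10))
hot, cold = classify_blocks(log)
print(hot)   # block 7 is the hot set; everything else stays cold
```

Because the counts are recomputed per window, rerunning this as the workload shifts naturally captures the point that "hot" is a moving target.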
And defining requirements in terms of IOPS is not a good solution either. Even a slow disk can deliver thousands of IOPS if the application's access is sequential streaming, whereas the fastest HDD can be defeated by a pathological access pattern. Flash has a major benefit here for reads, but you can craft write access patterns that bring flash drives to their knees as well. Response time is a better way to think about things, but that would require storage vendors to work with OS/database/application vendors, and they're notorious for having no desire to do so.
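The response-time point can be illustrated with a toy percentile check: two workloads with identical IOPS can look wildly different at the tail. The sample values and the p99 target are invented for the sketch.

```python
def p99_latency_ms(samples):
    """99th-percentile latency from a list of per-I/O latencies (ms)."""
    s = sorted(samples)
    idx = min(len(s) - 1, int(0.99 * len(s)))
    return s[idx]

# Same I/O count (same IOPS over the interval), very different tails:
steady = [0.5] * 100            # uniform sub-millisecond service times
bursty = [0.5] * 99 + [40.0]    # one pathological outlier

print(p99_latency_ms(steady))   # 0.5
print(p99_latency_ms(bursty))   # 40.0
```

An IOPS counter reports both workloads as identical; a response-time SLO immediately flags the second one, which is why latency is the more honest contract between the storage and the application.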
Finally, the idea of having lots of policies around to define the relative priority of servers is too time-consuming and error-prone to be worthwhile. It needs to be a totally automated system, dynamically moving data from one place to another to ensure that the right data is on the right tier at the right time, without operator-created constraints. This is not the type of functionality that someone will purchase as an add-on or third-party feature to existing arrays; it's going to need a new product (or company) which uses this as the basis for its sales pitch. Kind of like Compellent, but much more dynamic and adaptable, and without the manual configuration.
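The automation argument boils down to a control loop: rank blocks by recent heat, promote the hottest into whatever flash capacity exists, demote the rest, repeat. A minimal sketch, with all names and the capacity model (flash holds a fixed number of equal-size blocks) assumed for illustration:

```python
def rebalance(placement, heat, flash_capacity):
    """One pass of an automated tiering loop, no operator policies.

    placement: dict of block_id -> 'flash' or 'hdd' (current tier)
    heat: dict of block_id -> recent access count
    flash_capacity: number of blocks flash can hold (assumed fixed-size)
    Returns the migrations performed as (block_id, from_tier, to_tier).
    """
    ranked = sorted(heat, key=heat.get, reverse=True)
    want_flash = set(ranked[:flash_capacity])
    moves = []
    for blk, tier in placement.items():
        target = 'flash' if blk in want_flash else 'hdd'
        if tier != target:
            moves.append((blk, tier, target))
            placement[blk] = target
    return moves

placement = {1: 'hdd', 2: 'flash', 3: 'hdd'}
heat = {1: 100, 2: 1, 3: 50}
print(rebalance(placement, heat, flash_capacity=2))
# blocks 1 and 3 are promoted, the now-cold block 2 is demoted
```

Run on a schedule with fresh heat data, the loop keeps the right data on the right tier with no per-server priority policies to create or maintain, which is exactly the operational model argued for above.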