I think you're missing the point.
Tiering is about efficiency and cost - making sure that data is on the right type of disk at the right time. Access time is not a relevant metric for tiering; it is generally based on frequency of access, so that idle/stale data can sit on cheaper disk until it becomes relevant again.
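To make the frequency-versus-recency distinction concrete, here is a toy sketch of a frequency-based placement policy. The tier names and thresholds are invented for illustration; no vendor's actual implementation works exactly like this:

```python
# Toy frequency-based tiering policy (illustrative only; the tier
# names and thresholds are invented, not any vendor's implementation).
from collections import Counter

access_counts = Counter()  # accesses per extent over the sampling window

def record_access(extent_id):
    """Count an access; note we track HOW OFTEN, not WHEN."""
    access_counts[extent_id] += 1

def choose_tier(extent_id, hot_threshold=100, warm_threshold=10):
    """Place an extent by access frequency, not last access time."""
    hits = access_counts[extent_id]
    if hits >= hot_threshold:
        return "ssd"      # frequently accessed: fast tier
    elif hits >= warm_threshold:
        return "sas"      # moderately accessed: mid tier
    return "nl-sas"       # idle/stale: cheap capacity tier
```

The point of the sketch is that an extent touched once five minutes ago still lands on cheap disk, while one hit hundreds of times over the window earns the fast tier.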
Irrespective of the tier used, all disk should be resilient so that data integrity is never compromised.
What you should be focusing on is complexity. While most vendors have some form of tiering technology, very few have implementations that are easy to use and sufficiently granular not to compromise performance.
SSD (for many vendors) is a great marketing tool but does little to make tiering relevant or to improve performance. In a well-configured system, very few applications will benefit from the reduced latency SSD offers. In fact, in high-end arrays, many cache algorithms prevent you from getting the full SSD benefit anyway. The real benefit of SSD is being able to get many IOPS from relatively few disks. Due to cost and size constraints, this only works if you can tier data on a sub-LUN basis.
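Some back-of-envelope arithmetic shows the "many IOPS from few disks" point. The per-device figures below are common rules of thumb, not benchmarks of any particular product:

```python
# Rough IOPS math (rule-of-thumb figures, not measured benchmarks):
# a 15K RPM spindle delivers on the order of 180 random IOPS, while
# a single enterprise SSD can deliver tens of thousands.
hdd_iops = 180        # rough figure for one 15K RPM spindle
ssd_iops = 30_000     # conservative figure for one enterprise SSD

target = 20_000       # IOPS a hypothetical workload needs

hdds_needed = -(-target // hdd_iops)   # ceiling division
ssds_needed = -(-target // ssd_iops)

print(hdds_needed)    # over a hundred spindles
print(ssds_needed)    # a single SSD
```

Meeting the same IOPS target takes over a hundred spindles but only one SSD - which is exactly why the economics fail unless only the genuinely hot sub-LUN extents land on the flash tier.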
In my experience, very, very few products out there can deliver this in a meaningful and sustainable manner.