Hardware independence and other myths
A good survey.
The instant these systems target mixed workloads, they sacrifice a lot of capability. In particular, by taking on latency-sensitive, high-transaction-consistency workloads through a block interface and POSIX-compliant distributed lock management through a file interface, designers cross a road they can't come back from. Ceph is a good example: by targeting generalized cloud workloads, it has had to invest in optimizations for both transaction latency and throughput.
The cost is that these systems become bound by rigid assumptions about the underlying hardware and network topology. Maybe you'd be asked to choose between the HP 2U Xeon 12-HDD server and the Dell 2U Xeon 12-HDD server, but true hardware heterogeneity is mostly a myth. Dependencies that inhibit flexibility include drive-failure detection that relies on specific BIOS versions, specific local file systems, assumptions about how the namespace is balanced, intolerance for performance variation, inconsistent SMART APIs, and so on. Even the promised 'multi-generational' platform support usually ends in bulk migrations akin to a forklift upgrade.
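To make the SMART-inconsistency point concrete, here is a minimal sketch of what a drive-health check has to contend with. The JSON samples are hypothetical but follow the general shape of `smartctl --json` output: SATA drives report a table of numbered attributes (and vendors disagree on which IDs they populate), while NVMe devices expose a completely different health-log structure, so any "is this drive failing?" helper ends up special-casing device types.

```python
import json

# Hypothetical, abridged smartctl --json output for a SATA HDD.
# Vendors disagree on which attribute IDs/names they populate; this
# sample only illustrates the shape, not any specific drive.
sata_report = json.loads("""
{
  "device": {"name": "/dev/sda", "type": "sat"},
  "ata_smart_attributes": {
    "table": [
      {"id": 5,   "name": "Reallocated_Sector_Ct",  "raw": {"value": 12}},
      {"id": 197, "name": "Current_Pending_Sector", "raw": {"value": 3}}
    ]
  }
}
""")

# NVMe drives report health through a different structure entirely.
nvme_report = json.loads("""
{
  "device": {"name": "/dev/nvme0", "type": "nvme"},
  "nvme_smart_health_information_log": {
    "media_errors": 0, "percentage_used": 7
  }
}
""")

def failing_media(report):
    """Normalize 'is the media degrading?' across device types."""
    if "ata_smart_attributes" in report:
        table = report["ata_smart_attributes"]["table"]
        # IDs 5 and 197 are the conventional reallocated/pending-sector
        # counters, but not every vendor reports them the same way.
        return any(a["id"] in (5, 197) and a["raw"]["value"] > 0
                   for a in table)
    if "nvme_smart_health_information_log" in report:
        log = report["nvme_smart_health_information_log"]
        return log["media_errors"] > 0
    raise ValueError("unrecognized SMART report shape")

print(failing_media(sata_report))  # True  (pending/reallocated sectors)
print(failing_media(nvme_report))  # False (no media errors)
```

Multiply this by every vendor quirk and firmware generation in a fleet, and the "hardware independence" claim starts to look expensive to maintain.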
Lots of pundits are predicting continued polarization of the storage landscape. Where transaction performance matters, enterprises continue adopting all-flash arrays (AFA) and, eventually, NVMe. At the other end, unstructured data goes to object stores optimized for RESTful semantics and WAN/hybrid topology flexibility, which can still deliver high throughput (not to be confused with low latency) where needed, for workloads like batch analytics.
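The throughput-vs-latency distinction falls out of Little's law: sustained throughput is outstanding requests divided by per-request latency, so a high-latency object store can still saturate a pipe if a batch job keeps enough GETs in flight. A back-of-the-envelope sketch, with all numbers hypothetical:

```python
# Little's law: throughput = concurrency / latency.
# Hypothetical figures: 8 MB average object, 80 ms time-to-last-byte
# per GET. Latency per request never improves, but aggregate
# throughput scales with the number of requests kept in flight.
object_size_mb = 8     # assumed average object size
latency_s = 0.080      # assumed per-GET latency

for in_flight in (1, 64, 512):
    gets_per_s = in_flight / latency_s
    mb_per_s = gets_per_s * object_size_mb
    print(f"{in_flight:4d} in flight -> {mb_per_s:8.0f} MB/s")
# A single-threaded client sees ~100 MB/s; 512 concurrent GETs
# push ~51,200 MB/s in aggregate, all at the same 80 ms latency.
```

This is why batch analytics tolerates object-store semantics while transactional workloads, which can't hide latency behind concurrency, migrate toward AFA/NVMe.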
And to answer your primary question: scale-out has proven to deliver improved TCO at scale, thanks to the simplified administrative experience and improved platform lifecycle management. But is scale-out a commodity? Of course... IMO there never was a time when scale-out commanded a price premium. Even in the extreme-performance HPTC world where Mr. King plays, storage is mostly purchased by the pound.