Storage is a multi-dimensional issue
I'm deeply sceptical about the merits of allowing applications to write to persistent storage in servers without going through properly mediated O/S interfaces. There are very good reasons for well-defined storage APIs (security, data sharing, validation, integrity and so on) to be implemented by operating systems and databases. Throw that away and chaos threatens.
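As a minimal sketch of what "properly mediated" buys you, the ordinary POSIX write path below goes through the kernel: the open() call enforces permissions, the write is mediated through the page cache so concurrent readers see consistent data, and fsync() provides an explicit durability point. A raw write to a persistent-memory region bypasses all three. (The path name here is just an illustration.)

```python
import os

# Mediated write: permission check at open, sharing via the page
# cache, and an explicit durability barrier via fsync.
path = "/tmp/example.dat"  # illustrative path, not from the article
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)  # O/S checks access rights here
try:
    os.write(fd, b"record-1\n")  # data enters the kernel page cache
    os.fsync(fd)                 # force the data (and metadata) to stable storage
finally:
    os.close(fd)
```

Every one of those guarantees is something an application scribbling directly into memory-mapped persistent storage has to reinvent for itself.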
It's certainly true that latency can be optimised by placing storage very close to system buses. Realistically, though, the vast majority of real-world applications will gain very little benefit once I/O latency drops below the 100-microsecond mark, which is well within the limits of what storage technology can do. That is already a 50-fold improvement on what 15K spinning disks achieve. Older arrays admittedly lack the processing speed to sustain such low latencies at high I/O rates (although they can just about get there from cache), but that's an argument for improving storage appliances, not for throwing away decades of well-founded application and storage architectures.
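The 50-fold figure is easy to check with back-of-the-envelope arithmetic: a 15K RPM disk averages half a rotation of rotational latency plus a seek, giving roughly 5 ms per access against the 100-microsecond target. The 3 ms average seek time below is a typical figure for an enterprise 15K drive, assumed for illustration.

```python
# Rough latency arithmetic behind the 50-fold claim.
RPM = 15_000
rotational_latency_ms = (60_000 / RPM) / 2  # average half rotation: 2.0 ms
avg_seek_ms = 3.0                           # typical 15K enterprise seek (assumption)
disk_access_ms = rotational_latency_ms + avg_seek_ms  # ~5 ms per random access

target_us = 100  # the 100-microsecond mark from the text
improvement = (disk_access_ms * 1_000) / target_us
print(improvement)  # → 50.0
```

In other words, once an appliance can serve I/O in the low hundreds of microseconds, the rotating-media bottleneck is already gone, without abandoning the storage network.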
Once you discard the idea of a separate networked storage pool (whether physical or virtual), it plays havoc with application and data centre architectures, data sharing, resilience, legacy applications, manageability and much else.
No, it is much better to retain established I/O APIs (block, file and database) and their attendant network protocols. For each of these there are centralised storage appliances available (SAN arrays, NAS arrays and DB farms/clusters/appliances) in which the detail of the flash implementation can be hidden.
VM farms may well have their own integrated storage solutions, but the majority of storage in major data centre applications will continue to be shared, and that will require appropriate storage networks.
The article has all the hallmarks of theory built on just one dimension of a multi-dimensional problem.