There certainly is a reason to put in-memory storage on a network link: flexibility.
In-memory data, even over a network link, is still much faster (and probably "fast enough" for the vast majority of workloads out there - same goes for SSDs over a network link). The main drawbacks for SSDs in traditional enterprise arrays are that the cost is really high and the controllers aren't prepared to scale that well with them. Even the biggest, baddest enterprise arrays can only squeeze out 200-500k IOPS with tons of CPUs, cache, etc. Compared to what SSDs can do on their own, that's a large mismatch of scale.
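A quick back-of-the-envelope sketch of that mismatch. The 450k IOPS figure comes from the P10000 spec mentioned below; the per-SSD figure is my own ballpark assumption for an enterprise SSD of that era, not a vendor number:

```python
# Back-of-the-envelope: how many standalone SSDs match a big array's
# IOPS ceiling? The per-SSD figure is an assumption, not a vendor spec.

ARRAY_IOPS = 450_000   # high-end array, fully loaded (SPC-1 rating from the text)
SSD_IOPS = 50_000      # single enterprise SSD, assumed ballpark

ssds_to_match = ARRAY_IOPS / SSD_IOPS
print(f"~{ssds_to_match:.0f} SSDs match the whole array's IOPS ceiling")
```

Under those assumptions, fewer than a dozen bare SSDs keep pace with an entire fully-loaded array - which is the scale mismatch in a nutshell.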
Not only that, but take 3PAR, since I know that technology very well: the P10000 specs (which is rated for 450k SPC-1 IOPS) say a max of 8 SSDs per disk enclosure. So it's not as if you can take a disk enclosure, slam it full of 40 SSDs, and off you go - if you want those 40 SSDs you're talking about 5 x 4U disk enclosures - 20U of space, mind you - half a rack. And at least in the past 3PAR has been anal about their power configuration; if they still are, in this scenario you're looking at 2x208V 30A circuits to drive those 40 SSDs. Of course they won't draw that - I'm quite certain, in fact, that the disk chassis themselves will draw significantly more power than the SSDs they house (especially given there are only 8 per chassis). But most customers pay for power by the circuit rather than by utilization. And guess what - you can "only" get 12 disk shelves on a pair of P10000 controllers (it is, after all, 24 FC links to connect 12 chassis). Of course you can put SATA or FC drives in those chassis alongside the SSDs; I'm just using this as an example of an SSD-only type of solution to show how crazy it can be.
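The enclosure math above can be sketched in a few lines. The 8-SSDs-per-enclosure, 4U, and 24-FC-link figures are the ones from the paragraph; nothing else is assumed:

```python
# Sketch of the P10000 enclosure arithmetic described above.

ssds_wanted = 40
ssds_per_enclosure = 8        # max SSDs per disk enclosure (per the spec cited)
enclosure_height_u = 4        # each disk enclosure is 4U

enclosures = -(-ssds_wanted // ssds_per_enclosure)   # ceiling division
rack_units = enclosures * enclosure_height_u
print(f"{enclosures} enclosures, {rack_units}U of rack space")

# Controller-pair ceiling: 24 FC links at 2 links per shelf = 12 shelves max
fc_links = 24
links_per_shelf = 2
print(f"max shelves per controller pair: {fc_links // links_per_shelf}")
```

Which lands you at 5 enclosures and 20U - half a rack - just to house 40 drives, with a hard stop at 12 shelves per controller pair.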
So it's pretty easy to see why large-scale SSD use in a typical enterprise storage array is not nearly as cost effective as something designed from the ground up for SSDs, like say a Violin box. Though from what I have heard, Violin lacks a lot of software - it's mostly just a dumb, fast storage system - which I'm sure works well for certain use cases too.
But don't think it's a stupid idea to network things like in-memory storage and SSDs just because of latency. Networks add very little latency - especially when you're comparing the IOPS of memory and SSDs to traditional spinning rust (which is often traveling over a very similar network anyway).
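To put rough numbers on that claim, here's a small sketch. Every figure in the table is an order-of-magnitude assumption on my part (typical published ballparks, not measurements from any specific array or network):

```python
# Rough latency budget: how much does a network hop add relative to the
# storage medium itself? All numbers are order-of-magnitude assumptions.

LATENCY_US = {
    "datacenter network RTT":  50.0,     # one round trip, assumed
    "SSD read":               100.0,     # random read, assumed
    "spinning disk access":  8000.0,     # seek + rotate, assumed
}

net = LATENCY_US["datacenter network RTT"]
for medium in ("SSD read", "spinning disk access"):
    local = LATENCY_US[medium]
    overhead_pct = net / local * 100
    print(f"{medium}: network adds ~{overhead_pct:.0f}% on top of {local:.0f}us")
```

Under those assumptions the hop is a meaningful but survivable tax on an SSD read, and pure noise next to a spinning-disk access - a network-attached SSD at ~150us is still dozens of times faster than local spinning rust.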