Mountains out of molehills
I was dealing with the 3Gb/s video problem three years ago, before Final Cut Studio went HD. What I learned is that the video business likes to make headaches for itself.
3Gb/s video is extremely stupid since there is no transport format for it. HDCAM SR, the tape format for high-end broadcast HD, stores video at a peak of 880Mbit/s, which is 110 megabytes per second. The decks cost far too much and suffer generational loss left and right: you can't read or write the 880Mbit/s stream directly, so for full quality you have to move the material as a 3Gb/s signal of uncompressed RGB frames. To copy 1:1 from one deck to another, the video is first decompressed and then recompressed for storage.
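To put those numbers side by side, here is a quick sketch of the bit-rate arithmetic. The 880Mbit/s tape rate and the 3Gb/s link rate are from the paragraph above; the ratio is just derived from them:

```python
# Bit-rate arithmetic for the deck-to-deck copy scenario.
# 880 Mbit/s (HDCAM SR peak) and 3 Gb/s (uncompressed SDI link)
# are the figures quoted above; nothing else is assumed.

TAPE_RATE_MBIT = 880      # peak compressed stream on tape, Mbit/s
SDI_LINK_MBIT = 3000      # uncompressed 3Gb/s link, Mbit/s

# 8 bits per byte -> megabytes per second on tape
tape_rate_mbyte = TAPE_RATE_MBIT / 8
print(f"tape stream: {tape_rate_mbyte:.0f} MB/s")

# A dub over the SDI link moves ~3.4x the bits the tape actually stores
blowup = SDI_LINK_MBIT / TAPE_RATE_MBIT
print(f"SDI link vs tape stream: {blowup:.1f}x")
```

Which is the whole absurdity in two lines: every generation round-trips through a link carrying several times the data the tape holds.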
In a professional video network, you would instead benefit from using 600Mbit/s, which fits comfortably within a gigabit Ethernet link (use two bonded channels to guarantee quality). NAS storage then becomes trivial. I used an HP xw8400 workstation loaded with SAS controllers, connected to large numbers of drives through SCSI expanders, and talking to the network over 10GbE. This configuration gave me more than enough bandwidth to handle 5 high-definition workstations in full 4:4:4 1080p. Off the top of my head, I believe I could have expanded to an additional 20 workstations and up to 5 petabytes by adding another 10GbE channel or two to the workstation.
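A back-of-the-envelope check of that capacity claim. The 600Mbit/s per-seat rate and the 10GbE uplink come from the setup above; the 80% utilization ceiling is my own assumption for safe headroom on a loaded link:

```python
# Rough capacity math for 600 Mbit/s editing streams over Ethernet.
# Per-seat rate and 10GbE uplink are from the text; the 0.8 utilization
# ceiling is an assumed safety margin, not a measured figure.

PER_SEAT_MBIT = 600       # one 4:4:4 1080p stream
GBE_MBIT = 1_000          # single gigabit Ethernet channel
TEN_GBE_MBIT = 10_000     # one 10GbE uplink

def seats_supported(uplink_mbit, per_seat=PER_SEAT_MBIT, utilization=0.8):
    """How many per-seat streams fit on an uplink at a safe utilization?"""
    return int(uplink_mbit * utilization // per_seat)

print(seats_supported(TEN_GBE_MBIT))        # seats on one 10GbE link
print(seats_supported(3 * TEN_GBE_MBIT))    # seats after adding two more links
```

One stream fits on a single GbE channel with room to spare, a lone 10GbE uplink covers the 5-seat case several times over, and a couple of extra 10GbE channels get you into the 25-seat range the text estimates.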
On new technology... I've been experimenting at home with a small 10-terabyte storage system to see how an OpenFiler SAN scales with Final Cut Studio. So far, my impression is that a single 8-core server with 32GB of RAM should be able to handle 20-40 machines on its own. Big iron from Sun or HP would stretch much further.