EMC federates Symmetrix controllers in virtual matrix

EMC is announcing a V-Max top-end Symmetrix architecture and product that federates potentially hundreds of controllers across a RapidIO fabric, forming a virtual matrix and scaling to support hundreds of petabytes, thousands of virtual servers, and millions of IOPS. The Symmetrix V-Max product will co-exist with the current …

COMMENTS

This topic is closed for new posts.
  1. James

    Day Late and £0.674435 Short

    These have been around for years:

    IBM System Storage SAN Volume Controller

    http://www-03.ibm.com/systems/storage/software/virtualization/svc/

    NetApp V Series

    http://www.netapp.com/us/products/storage-systems/v3100/

    http://www.netapp.com/us/products/storage-systems/v6000/

    Both provide a unified storage environment for heterogeneous, multi-protocol, multi-vendor storage systems.

  2. Macka

    Copying Linux

    Sounds like GlusterFS on Linux (over InfiniBand) but with a proprietary interconnect. Probably costs a damn sight more too.

  3. Simon Casey

    Broadcast

    Anyone trying to watch the broadcast on EMC's Virtual Data Centre?

    Mmmm... jerky.

    10% of what you say.. etc

  4. Nate Amsden

    Not so revolutionary?

    Seems they are just building on some stuff that is already out there. Compellent already has automatic tiered storage (which certainly sounds like a killer application for SSD).

    As for the base architecture, it seems very similar to 3PAR's. I have a 2-node T400 (scales to 4 nodes) in house, and an E200 at my last company. The T800 (with 8 nodes) has:

    - 8 ASICs, totaling 44.8 Gbytes per second of peak interconnect bandwidth

    - 24 I/O buses, totaling 19.2 Gbytes per second of peak I/O bandwidth

    - 24 DDR SDRAM buses for Data Cache and 16 FBDIMM buses for Control Cache, totaling 123 Gbytes per second of peak memory bandwidth

    Sure, there is a backplane and the controllers have to be connected to it, but try pricing out 45 Gbytes per second worth of interconnect connectivity and see how much that costs; I'll take the backplane. If I want to separate the systems I'll just do synchronous replication to another system 100 meters or 10km away.
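
    For a rough sense of scale, here's the back-of-the-envelope arithmetic. The ~0.8 Gbytes/sec usable per 8Gbit FC link is my own assumption for what you'd use externally, not a number from anyone's spec sheet:

        # Back-of-the-envelope: external links needed to match the T800
        # backplane's 44.8 Gbytes/sec of peak interconnect bandwidth.
        # The ~0.8 GB/s usable per 8Gbit FC link is an assumed figure.

        backplane_gb_s = 44.8               # GBytes/sec across 8 ASICs
        per_asic_gb_s = backplane_gb_s / 8  # ~5.6 GBytes/sec per ASIC
        fc_link_gb_s = 0.8                  # assumed usable per 8Gbit FC link
        links_needed = backplane_gb_s / fc_link_gb_s

        print(f"per ASIC: {per_asic_gb_s:.1f} GB/s")         # 5.6 GB/s
        print(f"8Gb FC links to match: {links_needed:.0f}")  # 56 links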

    That same T800 goes up to 96GB of global memory, which is a far cry from 1TB, though the SPC-1 numbers show a 96GB system beating a 256GB USP. It has 32 CPU cores instead of 128, but much of the data processing is done by the ASICs; even when I push 1.5 GBytes/sec through my array (just 200x SATA disks), CPU is about 10%. Perhaps the EMC architecture is based entirely on Intel CPUs rather than custom ASICs; I'm not sure.

    Also, the T800 goes up to 128 FC ports and 1280 disks, though I'm confident that number will go up dramatically fairly soon. Their earlier arrays went up to 2560 disks, though that was back when disks were small; I'm quite sure they will be back up to the 2560 number in the not too distant future, and supporting 2PB as well.

    Being able to expand to many more controllers sounds nice, though; I wonder when that will happen. To take true advantage of SSD performance, controllers will need to be orders of magnitude faster than they are today, or there will need to be a lot more of them. One shelf of high-density SSD (80 disks) could go upwards of 3 million IOPS (on paper at least).
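
    That 3 million is just straight multiplication, assuming roughly 37,500 IOPS per SSD, which is my guess at the kind of paper number vendors quote:

        # On-paper IOPS for one shelf of high-density SSD.
        # The per-drive figure is an assumed spec-sheet number.

        drives_per_shelf = 80
        iops_per_ssd = 37_500  # assumed paper figure per drive
        print(f"{drives_per_shelf * iops_per_ssd:,} IOPS")  # 3,000,000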

    So while the system certainly sounds very impressive, I can't see how they can claim it's the biggest leap in 20 years. Maybe for them it is, but not for the industry as a whole.

    I'd like to see EMC scale this architecture down to the mid-range so that normal folk like me could take advantage of it, without a price point (including 3-year support) that quite possibly starts north of a million dollars.

    Rip out that old Clariion stuff and replace it with smaller versions of this, and simplify everything: the only thing distinguishing a high-end system from a low-end one would be the number and type of controllers. A low-end box might have 2-4 controllers; a high-end one might have 32. I'm sure they will eventually, though it'll be a few years. They could probably cut massive costs out of the system by standardizing on it and just shipping more volume.

    Being able to start small and grow massive without having to replace anything, just adding more, is very appealing to a lot of people like myself. 3PAR does this well enough for me today: I believe our 150TB array will scale linearly up to 1.2PB (on SATA disks; no plans to use FC disks), from 200 to 1280 disks, and our front-end NAS cluster from Exanet, currently 2 nodes, can scale linearly to 8 today, and more if we wanted. But 2 is more than enough for now! Most of our stuff is NAS, though we have all our databases and VMware boxes on SAN.

    I wouldn't mind if 3PAR introduced automatic data tiering with SSD; that would be handy for sure. I can already tier data really easily, but only at the LUN level. I'm thinking more of the block level: store frequently used blocks on SSD. I think that is how Compellent does things, though I'm not sure. Something like the sketch below is what I have in mind.
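
    This is a minimal illustration of the general idea only: count accesses per block and promote the hottest blocks to SSD. The block size, threshold, and every name in it are made up, and I have no idea whether it resembles what Compellent actually does.

        from collections import Counter

        # Minimal sketch of block-level tiering: track reads per block
        # and promote hot blocks to the SSD tier. Block size, threshold,
        # and all names here are illustrative assumptions.

        BLOCK_SIZE = 256 * 1024   # assumed 256KB tiering granularity
        HOT_THRESHOLD = 100       # assumed promotion threshold (accesses)

        access_counts = Counter() # block number -> access count
        ssd_tier = set()          # block numbers currently on SSD

        def record_read(offset: int) -> None:
            """Note a read; promote the block once it turns hot."""
            block = offset // BLOCK_SIZE
            access_counts[block] += 1
            if block not in ssd_tier and access_counts[block] >= HOT_THRESHOLD:
                ssd_tier.add(block)  # a real array would migrate the data too

        def tier_of(offset: int) -> str:
            """Report which tier a given byte offset currently lives on."""
            return "ssd" if offset // BLOCK_SIZE in ssd_tier else "sata"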

  5. Tom Cox

    RapidIO is an International Standard

    Note that RapidIO is not a proprietary interconnect, as posted by a reader.

    It is an international interconnect standard which has ISO and IEC certification.

    see www.rapidio.org

    The RapidIO Trade Association, a non-profit corporation controlled by its members, directs the future development and drives the adoption of the RapidIO architecture. For networking products, the RapidIO architecture promises increased bandwidth, lower costs, and a faster time-to-market than other more computer-centric bus standards. Interested companies are invited to join the RapidIO Trade Association. Members of the association have access to the RapidIO interconnect architecture specifications. Membership also allows attendance at member meetings.

    AMCC, EMC Corporation, Ericsson, Freescale Semiconductor, Alcatel-Lucent, Texas Instruments, and Tundra comprise the Steering Committee of the RapidIO Trade Association.
