
3PAR revs CTO office for major changes

It starts innocuously enough, but follow this thread and the implications start building up. 3PAR has a new engineering VP. He is Peter Slocum, and he's got an impressive background with ViVOtech and Brocade in his CV, plus earlier stints at Silicon Graphics, MIPS Computer Systems, and Hewlett Packard. So far, so ho hum. But …

COMMENTS

This topic is closed for new posts.

keep in mind

Keep in mind that the current highest-end V-Max is 8 nodes, the same as the T800. There's nothing that I know of in the core 3PAR architecture that would prevent them from scaling it out much further; an interconnect is an interconnect. 3PAR uses a backplane because it's cheaper and faster: the T800 has almost double the V-Max's bandwidth on the interconnect side.

V-Max on paper certainly sounds like it has some impressive pieces, but I'm still waiting for more technical details; not much has been released on their architecture other than high-level stuff. One of the EMC bloggers agrees and is checking to see if/when they'll have more information available.

So we'll see when EMC in fact starts shipping and supporting V-Max arrays with more than 8 controllers.

I think there's going to be a need to fundamentally rethink how array controllers function in order to scale to SSD levels of performance. The current T class from 3PAR doesn't do it, and the V-Max doesn't do it. You don't want to have to invest $700k worth of disk controllers and cache to drive $100k worth of SSD, IOPS-wise (pulling $$ out of my ass just to illustrate the point), with SSDs being 80-100+ times faster in IOPS than current 15k RPM disks.
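The mismatch above can be put in rough numbers. A minimal back-of-envelope sketch (all figures are my own illustrative assumptions, not vendor specs): take a 15k RPM disk at roughly 180 random IOPS and apply the 80-100x multiplier from the paragraph above.

```python
# Back-of-envelope illustration of the controller/SSD mismatch.
# All numbers are assumptions for illustration, not vendor specs.
DISK_IOPS = 180              # assumed random IOPS for one 15k RPM disk
SSD_MULTIPLIER = 90          # midpoint of the 80-100x range quoted above
ssd_iops = DISK_IOPS * SSD_MULTIPLIER  # ~16,200 IOPS per SSD

# Suppose a controller complex was sized to front ~200 spinning disks.
disks_per_controller = 200
controller_iops_budget = DISK_IOPS * disks_per_controller  # 36,000 IOPS

# How many SSDs saturate that same controller budget?
ssds_to_saturate = controller_iops_budget / ssd_iops
print(f"One SSD ~ {ssd_iops} IOPS")
print(f"A controller sized for {disks_per_controller} disks saturates at "
      f"~{ssds_to_saturate:.1f} SSDs")
```

Under these assumptions a controller complex sized for a couple hundred spindles is saturated by a handful of SSDs, which is the point being made: the controller, not the media, becomes the expensive bottleneck.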

I'm thinking the architecture might change to be more like network equipment, where you have a big, fast backplane and you can hot-plug "fabric" cards (ASIC-based chips for data movement) and "line" cards (fiber/iSCSI/CNA) into it. A 16U enclosure (what an 8-node T800 consumes; not sure how big 8 nodes of V-Max are) should be able to drive at least 10 million IOPS and tens of gigabytes per second of throughput. I think there might be 30 slots on such a system, and you could buy the chassis pretty cheap since it's just power + backplane, then add components on demand as you scale.
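The pay-as-you-grow chassis idea above can be sketched as a toy model. Everything here is hypothetical for illustration: the slot count, card types, and per-card IOPS/power figures are my own assumptions, not a real product.

```python
# Toy model of the hypothetical modular storage chassis described above.
# Slot count and per-card figures are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Card:
    kind: str    # "fabric" (ASIC data movers) or "line" (FC/iSCSI/CNA ports)
    iops: int    # IOPS the card can drive (assumed)
    watts: int   # power draw (assumed)

CHASSIS_SLOTS = 30  # hypothetical slot count from the paragraph above

def chassis_totals(cards):
    """Aggregate IOPS and power for the cards currently plugged in."""
    if len(cards) > CHASSIS_SLOTS:
        raise ValueError("chassis is full")
    return sum(c.iops for c in cards), sum(c.watts for c in cards)

# Populate on demand: start half-full, add cards as the workload grows.
cards = ([Card("fabric", 500_000, 120) for _ in range(10)] +
         [Card("line", 250_000, 80) for _ in range(10)])
total_iops, total_watts = chassis_totals(cards)
print(f"{len(cards)} cards: {total_iops:,} IOPS, {total_watts} W")
```

The design point this captures is that capacity and cost scale with the cards, not the enclosure: the chassis itself is just power and backplane, exactly as with a modular network switch.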

Like the networking world, I don't think you're going to get there fast enough using Intel CPUs; you're going to need those high-speed ASICs for line-rate processing and lower power consumption.
