We have all come across the traditional corporate network with three distinct layers: the core layer dealing with heavy-duty switching and routing, which runs on socking big switches and routers; the distribution layer dealing with lighter (but still intelligent) tasks such as packet filtering and some routing; and the access …
"In the three-layer model the only place you can do this is in the core (you can't do LACP trunks if you are hanging the servers off separate access switches), so you want to hook your chassis directly into the core switches."
Or you could use Juniper EX switches and cluster up to ten in a Virtual Chassis, at which point you could run LACP trunks to separate physical switches, since they're logically a single switch.
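For illustration, a minimal Junos sketch of what that looks like on an EX Virtual Chassis (the `ae0` bundle name and interface numbers are hypothetical): `xe-0/0/0` and `xe-1/0/0` sit on different physical members of the stack, yet join the same LACP bundle because the Virtual Chassis presents itself as one switch.

```
set chassis aggregated-devices ethernet device-count 1
set interfaces ae0 aggregated-ether-options lacp active
set interfaces xe-0/0/0 ether-options 802.3ad ae0
set interfaces xe-1/0/0 ether-options 802.3ad ae0
```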
Anonymous because I work for a VAR.
Multi-chassis link aggregation (MLAG, or Cisco's vPC - not 802.1Qbg, which covers edge virtual bridging) allows link aggregation across separate switches too...
Cat6k at 40Gbit a slot? Have you not heard of the Sup-2T, which bumps it to 80Gbit/slot?
Also, there are many, MANY other datacenter-class switches with higher per-slot densities, which raise port counts and cut the number of layers in your network.
And lastly, no mention of the Broadcom Trident ASICs, which, IMO, have had a far bigger impact on the DC space than chassis-based solutions.
Cisco 6500 with Sup720 is today's kit?
Wasn't that completely and totally obsolete about seven years ago? I think even Cisco has been trying to get people off this platform for the past four or five years.
We have switches that can do a Tb of fabric PER SLOT now
We have 1U switches with more switching capacity (1Tbps+) than a decked-out Cat6500 could possibly hope to achieve in - what was it, 14U? 10U? - and 64 line-rate 10GbE ports in 1U are available from a dozen different vendors (Cisco likely included).
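A quick back-of-the-envelope check on those numbers (a sketch; the port count and speed are the figures quoted above, and the full-duplex convention for counting fabric capacity is an assumption, since vendors quote these numbers differently):

```python
# Fabric capacity needed for a non-blocking 1U top-of-rack switch.
PORTS = 64            # 64 x 10GbE, as quoted above
PORT_SPEED_GBPS = 10

def required_fabric_gbps(ports: int, speed_gbps: int) -> int:
    """Full-duplex, non-blocking capacity: every port can send and
    receive at line rate simultaneously, so count both directions."""
    return ports * speed_gbps * 2

capacity = required_fabric_gbps(PORTS, PORT_SPEED_GBPS)
print(capacity)  # 1280 Gbps, i.e. 1.28 Tbps of fabric in 1U
```

On that duplex-counted convention, 64x10GbE line rate lands comfortably in the "1Tbps+ in 1U" range the comment describes.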
Our core switches are now our Nexus 5596 10Gb switches; the 6500s are going in the trash. At 1.9Tbps, why bother with that old gear?
What Gartner says is quite right - no vendor is really ready for this.
Sup720 today? Line rate with 10G cards? Performance mode gives you line rate at the cost of sacrificing usable ports (a silly line-card option if you use it, IMO).
If you like Cisco (I do) then you wouldn't be looking at the 6500 any more. As was said earlier, it's on the way out. Nexus switches are doing the bizz these days, complete with virtualised options (the xx1000v etc).
The 6500 still has its place - mainly enterprise/campus LAN switching, particularly the high-density access layer.
Also, plugging servers into the core? How do they talk to hosts in other locations? What about data centre redundancy? It stops being the core then, doesn't it? WTF?
More than numbers
While discussing layers, it's worth saying that the C-D-A model isn't the only model that needs to be reviewed when designing a modern DC - the Networks Team vs Firewall Team vs Wintel Team vs UNIX team lines and boundaries need to be blurred too. Especially with the likes of Flex10 and the in-enclosure Nexus kit.
Nexus isn't without its faults, though. Some of the platform's high-availability functions are still a little rough, and the use of VSS probably extended the Cat6500's lifetime by five years. Having a fair bit of experience with both, I still miss the familiarity and mature feel of IOS over NX-OS, but I'd take the scalability of the Nexus hardware over Catalyst.
Also, don't overlook the fact that for moderately sized data centres you can achieve virtually the same result with the 6500 as with Nexus, for about two-thirds of the cost. N7K tin especially... ouch.