It really doesn’t matter how you configure your servers, how many processor cores they have or how much memory: if the network doesn’t have the bandwidth to service their needs, they will seem slow. Users will complain, dissatisfaction will soar and customers will click off to the competition. Likewise, if the network …
It is a simple fact...
...that users will consume whatever bandwidth you give them and still moan it is slow.
1 size fits all?
It still makes me cringe (and cry a little) when I encounter customers running hundreds of machines in one big flat VLAN who then complain about poor performance. Flattening a network only works in certain circumstances. Flattening an enterprise-sized network under the guise of 'improving latency' will have the opposite effect, as multicasts and broadcasts take their toll. A small network, maybe, but large ones? I think not.
If you want better control over your tiered network then use Flex Links. Think of it as manual STP, with the option of load-balancing VLANs upstream over the redundant interconnects.
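For anyone who hasn't played with them, a Flex Links pairing is only a couple of lines of IOS. This is a hedged sketch, not a tested config: the interface names and VLAN range below are made up, and exact syntax varies by platform/IOS version.

```
! Illustrative Cisco IOS Flex Links sketch - names and VLANs are invented
interface GigabitEthernet0/1
 description Uplink A (active)
 ! Gi0/2 becomes the backup; it stays down until Gi0/1 fails
 switchport backup interface GigabitEthernet0/2
 ! Optional VLAN load balancing: carry these VLANs on the backup link
 ! while it is up, so both uplinks pass traffic
 switchport backup interface GigabitEthernet0/2 prefer vlan 60-100
```

The `prefer vlan` line is what gives you the "manual STP with load balancing" behaviour: each uplink forwards a subset of VLANs, with failover to the surviving link.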
Cisco has been pushing us towards their idea of a tiered network for years now, so I find it very odd that they now want us to flatten them...
Meh, each to their own I guess?
It uses MAC-in-MAC, which kinda needs to be in layer 2.
Use of IS-IS (the underlying protocol for TRILL) makes sense for this, but the movement away from the traditional three-layer enterprise model is probably going to happen at some point, irrespective of whether it is TRILL, SPB or QFabric. Maybe we will all move towards controller-based networking for guided wave as well as wireless in the next five years.
I do worry about massive fault domains though.
An interesting point about 40Gb/100Gb links is that they will have to use MPO connectors and custom leads (OM4, IIRC), so if you are designing a DC now then don't over-specify your fibre requirements just in case, as none of it will work with the coming standards.
Maybe it's just the networks I deal with but...
... I have never yet been to a place where the LAN traffic even tickles the network capacity. "Network" admins (which IME usually means server and PC admins) nearly always believe the network is slow, when in fact the servers can't get data onto their own wire at anywhere near wire speed, for whatever reason. Then of course the vendors pop in and convince the customer they need some new 10Gbps kit. Rule #1 is definitely to have a good period of proper granular monitoring to see how full the links actually are before pointing the finger. Of course, what I deal with may be tin-pot, but I'm getting more and more surprised that I've not yet come across an over-taxed network.
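The "see how full the links are" check is just arithmetic on interface octet counters sampled over an interval. A minimal sketch (the function name and figures are illustrative, not from any real SNMP library):

```python
# Estimate link utilisation from two octet-counter samples, the sort of
# sanity check worth doing before blaming bandwidth. Illustrative only.

def utilisation_pct(octets_t0, octets_t1, interval_s, link_bps):
    """Percent utilisation of a link over a sampling interval."""
    bits = (octets_t1 - octets_t0) * 8          # octets -> bits
    return 100.0 * bits / (interval_s * link_bps)

# Example: a GigE port that moved 1.5 GB in 5 minutes -> 4% utilised
print(round(utilisation_pct(0, 1_500_000_000, 300, 1_000_000_000), 1))
```

Numbers like that, gathered per-link over a decent period, usually settle the "is it the network?" argument before anyone reaches for a purchase order.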
I've been doing application profiling for more years than I care to admit to (decades rather than years, now, actually), and whenever we get called in, the message is "we know the network is slow, come and prove it". In all those many years, network bandwidth has been the culprit twice.
Other times, quite often, latency is the issue, due to badly designed client/server software. Mostly, it's just crap software.
Is this a Cisco press release?
This reads like a marketing speech from the Cisco PR dept: "you all need our latest and greatest switches".
- You need data to really identify the bottlenecks: sFlow or NetFlow.
- Latency will still suck on 10Gbps.
- SharePoint will always suck.
- Your DNS server is probably slowing everything down too.
- BIOSes and the like are only just catching up with 10Gbps Ethernet.
- When Cisco (or anyone else) quotes performance on a switch, those are the "we think it will do this on paper" numbers. You won't get more.
You left off one
I may not....
be a grand network master, but wouldn't this be a case for link aggregation?
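Link aggregation does help with aggregate capacity, and on Cisco kit it's a one-liner per member port. A hedged sketch (interface names and channel number invented, syntax varies by platform); worth remembering that hash-based load distribution means a single flow still tops out at one member link's speed:

```
! Illustrative Cisco IOS LACP EtherChannel sketch - names are invented
interface range GigabitEthernet0/1 - 2
 ! 'mode active' negotiates the bundle via LACP (802.1AX)
 channel-group 1 mode active
!
interface Port-channel1
 description 2x1G aggregated uplink
 switchport mode trunk
```

So it buys you headroom across many flows and redundancy if a member fails, but it's not a substitute for a faster link when one elephant flow is the problem.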