Andy Bechtolsheim knows a thing or two about servers, storage, and networking. He co-founded workstation and server maker Sun Microsystems as well as two networking companies: one that he sold to Cisco, where it became the basis of its Gigabit Ethernet biz, and another that he recently started and runs while working one day a week at …
Servers can be the network
Any modern host can; if you were being really pedantic, the network is just a cable or a wave.
There will always be room for dedicated networking gear, but at the consumer level storage will be added and the operating system services will be extended - it would be rude not to.
H3G3 Rules ..... for Genetically AIMODified DNA Systems.
"This, of course, begs the question of why Cisco would be interested in jumping into the server business itself, .."
Control/Own/Build/Lease the Server, Control the Message and Content.
The Server is just a collection of Junk selling Future Bonds in Old Systems and Derivatives for Capital Failure if IT does not deliver the Radically New and Imaginatively Different ...... Viably Constructive ..... ergo are Businesses Selling Server Space for anything Else and Less, Pimping and Pumping Junk and Sub Prime Investment Opportunities/Sucker Deals?
"If you ask a server vendor, they will say servers are servers and networks are networks, let's not get confused," Bechtolsheim says with a laugh.
Err, Networks are a Server Vendor's Best Friend and Most Intimate of Accomplishing Experienced Lovers.
No Man is an Island and all that jazz/hocus pocus.
"There is, of course, a distinct possibility that Cisco is, in fact, correct about the convergence of servers, storage, and networks, that companies will want to have one vendor selling them an integrated system."
That is a no brainer surely, for what company wouldn't want the added convenience and maximised profit advantage from one private vendor selling them an integrated system. No hassle, no fuss, just a Constant Dynamic/VPN Stream ....... which is Virtually Organic in Growth/Creative Potential.
External versus internal network
Converging the external network with the internal (storage/compute interconnect) one is something any risk-averse manager would regard with horror. They serve very different needs. Just about everything above the need to move packets of bytes around is different - especially the security and reliability issues, but the patterns of data and the provisioning needs differ too. So not only would you want an air gap between the networks, almost everything inside the switches is going to be focused on different issues.
One suspects that the reason FCoE has any future is because it provides FC, and everything that entails. It is something people can have the warm fuzzies about, and thus satisfy their risk-averse nature. InfiniBand makes a great deal more sense if you were starting system design from scratch; or, more pertinently, building atop RDMA is a good place to start, especially with SSDs coming into play. But since the vast majority of customers will want stories about seamless upgrade paths from where they are, with understandable risk, FC is not going away any time soon.
"The server is the network"
I seem to remember it was Novell that coined the expression "The server is the network" - causing some confusion amongst the less informed.
Given that network and systems tend to run in different teams with separate budgets and different preferred suppliers, Cisco's bid to muscle into the server world looks a bit ambitious to me.
10GE: I had to come up with the networking for some rack systems recently and my boss was pushing 10GE. When I showed him that just the 10GE upgrades for our Cisco kit cost more than all the server kit put together, he was less keen. We went with multiple 1G instead.
How many servers could actually fill a 10G link?
How many servers could fill a 10Gb link? I suppose not all servers could (or you would even want them to).
However, I can see that some architectures may need the enhanced speed - where, for example, you have servers doing front-end work and load balancing, caching and other things. I suspect that where you have massive databases or video feeds with high-speed storage (RAM drives, for example) you will start to see these sorts of volumes of traffic being thrown about.
Anyway, in general you never want to use 100% of your bandwidth, so you have some room for errors and issues with cable quality and so on.
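The headroom point above can be put into a quick back-of-envelope sketch. The utilisation target and per-server traffic figure below are illustrative assumptions, not measurements from any real kit:

```python
# Back-of-envelope sketch: how many busy servers fill a 10GbE uplink?
# All figures below are assumptions for illustration only.

LINK_GBPS = 10.0          # raw 10GbE capacity
UTILISATION_TARGET = 0.7  # leave ~30% headroom for bursts, errors, cabling issues
PER_SERVER_GBPS = 0.8     # assume a busy streaming/database server pushes ~0.8Gb/s

usable = LINK_GBPS * UTILISATION_TARGET        # capacity you actually plan to use
servers = int(usable // PER_SERVER_GBPS)       # whole servers that fit in that budget

print(f"usable capacity: {usable:.1f} Gb/s, ~{servers} busy servers per uplink")
```

On those (made-up) numbers only a handful of genuinely busy servers saturate a 10G uplink, which matches the point that most individual servers can't fill one - but an aggregation switch easily can.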
I know when I was last lurking around in a server room, they were already shotgunning 1Gb links to get higher throughput, so I presume 10Gb is just the next step.
I also noted the trend towards lots of smaller servers doing the same task rather than a single large one, for cost-effectiveness and redundancy reasons. Not to mention the separation of storage from servers. All these devices need to connect and communicate lots of different data to lots of different servers (and externally).
In a situation where you have three networks per server (one to link all like servers to each other to maintain cohesion, one that links servers to clients, and one that links servers to data storage or similar) it would be much easier to have a single 10Gb link and a few virtual networks, rather than 3x1Gb networks. Anyway, 10Gb switches become really handy when joining multiple 1Gb networks together.
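The three-networks-per-server point can be sketched as a simple comparison: three fixed 1Gb links cap each network at 1Gb/s, while three virtual networks sharing one 10Gb pipe let bursts on one network borrow headroom from the others. The per-network traffic figures here are made up for illustration:

```python
# Sketch: 3x1Gb dedicated links versus one 10Gb link carved into virtual networks.
# The demand figures are hypothetical assumptions, not measured traffic.

demand_gbps = {"cluster": 2.5, "client": 0.5, "storage": 4.0}

# Three separate 1Gb links: each network is hard-capped at 1Gb/s,
# no matter how idle the other two links are.
capped = {net: min(d, 1.0) for net, d in demand_gbps.items()}

# One 10Gb link with virtual networks: the three share the pipe,
# so the question is whether the combined demand fits.
total = sum(demand_gbps.values())
fits_in_10g = total <= 10.0

print(capped)       # cluster and storage both throttled to 1Gb/s
print(total, fits_in_10g)
```

On these numbers the dedicated-link setup throttles the cluster and storage networks to 1Gb/s each, while the same aggregate demand rides comfortably on a single shared 10Gb link - which is the convenience the comment above is getting at.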
Not to mention that chap's argument for 10Gb uptake: even servers that may not need 10Gb will receive the upgrade, because the cost difference between 1Gb and 10Gb will shrink to near nowt. In the short term it will be for connecting clusters, specific specialist installs such as supercomputers, and storage arrays.