This is not surprising given the level of engagement from telcos and service providers with open source platforms such as OpenDaylight, ONOS, CORD, etc. The big telcos have been clear that they will not pay licensing fees and be trapped into a platform play by the networking vendors.
It took Ethernet to make FibreChannel cheap enough to use
It was just a few years ago that Fibre Channel ports cost upwards of £3,000 per 2Gbit port.
FCoE came along and threatened the market, and now FC ports cost the same as Ethernet because, physically, they are Ethernet ports. The FC encoding is different, but the switch silicon and interface hardware are identical to Ethernet, and many switches can do Ethernet, FC or FCoE on the same device.
FC customers should say thanks to Ethernet for keeping FC cheap enough to use in 2015.
firewalls/proxy/IDS - has to work with them
Won't work through firewalls or other network security devices, therefore the technology is a bust.
Not sure who looks more stupid in this spat. Nutanix acted like over-emotional teenagers in producing the campaign, and VCE are notorious for their highly bigoted stance on their own technology.
In the meantime, HP and Cisco sell the same technology and aren't posturing like children in kindergarten.
"And when it comes to VCE's Fibre Channel, it could easily point out that Nutanix does not have any."
Nutanix doesn't require Fibre Channel; the design is different. Not that it matters in converged infrastructure: who gives a toss about how the storage works so long as it works? And they both do.
VCE uses whatever EMC says they are allowed to use. And that's legacy Fibre Channel kit in the VNX. Which is fine.
Nutanix uses newer technology in IP/Ethernet and that's fine too.
Re: Whoever wrote this article has no idea what he is talking about....
So much self-serving egotism in this comment I don't even know where to start.
Its all about blame
As an IT executive, I love cloud because when something goes wrong I can blame someone else.
Of course our own data centres have the same types of outages and problems, but I'm responsible for those. Putting it in the cloud means far fewer things are my fault.
SDN needs cloud orchestration. He is showing a serious lack of competence here by not understanding how multi-tenancy works in hypervisor systems.
How much will the software cost
Cisco claims to have cheap hardware (actually, it's not that cheap, and NOT cheaper once cables and 'certified' SFPs are added to the bill), but it needs the APIC controllers to make the magic happen.
What price will Cisco charge for APIC? Will Cisco look to recover lost profits and lost revenue by going for a high price? History says they will certainly try, but I'm doubtful that enough customers will pay.
Yeah, that is old school. Lots of money wasted on resources that sit unused when you run active/standby.
Replicating the VM doesn't solve how the VM is accessed over the WAN. Once you move a VM between DCs, what is the path from the user to the operating system? You missed that.
A business executive spends more on a single fancy lunch/dinner than he does on his laptop.
Tells me a lot about focus.
no link to source
Not even a single link to the IBM announcement?
Difference between a Lada and Jaguar
Clearly, most of the people here would be unable to tell the difference between a 20-year-old Lada and a brand new Jaguar.
While I absolutely agree that Cisco is overpriced and overblown, there are many critical features in a Cisco switch that solve a lot of problems in well-run and well-designed networks. However, those features are unknown to most people since they don't understand the technology.
My experience of D-Link is poor. That is, they do connect a bunch of wires together and pass Ethernet frames, but they have no security capabilities, limited control over QoS, and no handling of external authentication. Multicast hasn't been used much in the last ten years, but if you are deploying VMware or Hyper-V then IP multicast is now critical to your network design.
These are things that most people just don't know about. And you all showed your ignorance.
You can't fix stupid, indeed.
Maximum number of MAC entries?
Can it handle 1,000 new MAC addresses per second?
Can it handle 10,000 /32 IP routes?
Performance under route flap: does it crash when injecting 500 routes per second for 60 minutes?
Multicast PIM-SM and PIM BiDir are needed for VXLAN support.
How many (*,G) routes can it handle?
Does it support all OSPF area types? Can you inject routes between OSPF areas at a suitable rate?
I'm just warming up here.
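The first two limits are easy to make concrete with a toy model. This Python sketch (the table capacity and LRU eviction policy are illustrative assumptions, not any real switch's behaviour) shows what happens once a cheap switch's MAC table fills: old entries get evicted, and a lookup miss means the switch has to flood the frame out every port.

```python
from collections import OrderedDict

class MacTable:
    """Toy CAM table: fixed capacity, LRU eviction (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # mac -> port

    def learn(self, mac, port):
        if mac in self.entries:
            self.entries.move_to_end(mac)          # refresh existing entry
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)        # full: evict oldest entry
        self.entries[mac] = port

    def lookup(self, mac):
        # A miss here means the switch floods the frame out every port.
        return self.entries.get(mac)

table = MacTable(capacity=4)
for i in range(6):  # learn 6 MACs on a 4-entry table
    table.learn(f"00:00:00:00:00:{i:02x}", port=i % 2)

print(len(table.entries))                  # capped at 4
print(table.lookup("00:00:00:00:00:00"))   # earliest MAC was evicted -> None
```

Scale the capacity down to a few thousand entries on a budget switch and a busy access layer churns the table constantly, which is exactly why the learning-rate question matters.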
Re: "WTF is Software Defined Networking, anyway?"
best summary ever.
I'll take the £850 from anyone who wants to offer it to me.
I'm that networking guy, I live & freelance in the UK.
Re: wtf is SDN anyways
I'm the networking guy who was on the podcast. I can suggest watching this YouTube video from a year ago about the fundamentals of OpenFlow. It may provide some insight into how it works.
It's really hard to explain why SDN/OpenFlow changes everything without pictures.
My experience with BT as an outsourcing 'partner' is a long tale of misery and woe. Frankly, they couldn't organise a cost-effective drink in a bar without a team of five project managers, and they'd charge for every single one.
It's cheaper and more effective to do it yourself. At least the council would be in control of the service and could make changes to it. BT would freeze it into a contract and nothing would ever improve.
Scaling to peaks ?
The point of a Private Cloud is to take average DC utilisation from 5% to something like 40% of capacity. There is a lot of spare capacity in today's data centres and peaks are easily handled.
Public clouds have peak problems because of growth. Private clouds have different problems.
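The arithmetic behind that claim is simple (illustrative numbers only, assuming load scales linearly): the peak a DC can absorb, as a multiple of its average load, is just the inverse of average utilisation.

```python
def peak_headroom(avg_utilisation):
    """How large a peak (as a multiple of average load) fits
    before hitting 100% of capacity. Toy model: linear scaling."""
    return 1.0 / avg_utilisation

for util in (0.05, 0.40):
    print(f"{util:.0%} utilised: peaks up to {peak_headroom(util):.1f}x average still fit")
```

So even the 40% target leaves 2.5x headroom over average load, which is why a private cloud can shrug off most peaks.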
It's not so much Cisco blocking them out as the sheer marketing momentum behind Ethernet. There is no intelligent discussion of alternative networking protocols - that's certainly led by Cisco but supported by everyone else in the market.
If Intel comes back to market with InfiniBand, Xsigo might gain some new momentum.
Also, it's a clever product. It needs clever people to buy it, and there's a shortage of those right now.
Public Clouds are for poor people
Bah, public clouds are the poor man's computing. With limits, restrictions, and arbitrary implementation hacks that you have to like.
For that reason, 'blah blah cloud' will get some penetration but ultimately won't replace custom clouds that do real work.
Technically, he is referring to the use of the VJ (Van Jacobson) slow-start algorithm for TCP connections, which reduced the effectiveness of TCP in low-bandwidth networks.
However, TCP is well suited to lossy networks, which WANs are.
So I guess you are both right.
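For anyone who hasn't seen the VJ behaviour, it is easy to sketch. This is a toy model, not a real TCP stack (the MSS, threshold and RTT counts are made-up illustration values): the congestion window doubles each round trip until it hits the slow-start threshold, then grows by one segment per RTT.

```python
MSS = 1460  # bytes per segment (typical Ethernet-derived MSS)

def slow_start(cwnd=MSS, ssthresh=16 * MSS, rtts=10):
    """Return the congestion window after each RTT:
    exponential growth below ssthresh, linear above it."""
    history = []
    for _ in range(rtts):
        if cwnd < ssthresh:
            cwnd *= 2          # slow start: double per RTT
        else:
            cwnd += MSS        # congestion avoidance: +1 MSS per RTT
        history.append(cwnd)
    return history

print(slow_start(rtts=6))
# [2920, 5840, 11680, 23360, 24820, 26280]
```

On a low-bandwidth link that exponential ramp overshoots quickly, queues build, and packets drop, which is the complaint being made above.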
L2MP is here
The replacement technology for STP is here already. It's broadly known as Layer 2 Multi-Pathing (L2MP). There are two approaches, but only one looks serious: Transparent Interconnection of Lots of Links (TRILL), and you can find the details at the IETF using your favourite search engine.
Cisco has a proprietary, pre-standards implementation of TRILL that they call FabricPath, which is shipping today.
Although the OSI model is mostly a reference model, Ethernet is clearly L2 by definition, IP is L3, and TCP is L4, since this maps onto the DoD model used to develop TCP/IP.
Syntax is the very heart of computing, and my usage is fully correct.
That's the next set of articles in the series. Whereupon we expound further.
You missed the point
OpenFlow directly updates the FIB in the router. Routing protocols only update the RIB, which is subsequently copied into the FIB. This is a significant difference.
Instead of letting an autonomous system propagate data to its neighbours, a central controller has a complete view of the network, makes some sort of programmatic decision, and then downloads forwarding entries to the FIB on the switch/router.
In the same way that VMware allowed the effective management of hundreds of Windows servers, OpenFlow hopes to provide effective management of hundreds or thousands of network devices as a coherent whole.
Which is much more advanced than anything an MS server can do today.
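The RIB/FIB split is easy to sketch in code. This is a toy Python model, nothing like a real router's data structures (the class and method names are invented for illustration): a routing protocol adds candidates to the RIB and a best-path selection step populates the FIB, while an OpenFlow-style controller bypasses the RIB and writes the FIB directly.

```python
class Router:
    def __init__(self):
        self.rib = {}   # prefix -> list of (admin_distance, next_hop) candidates
        self.fib = {}   # prefix -> next hop actually used to forward packets

    def protocol_update(self, prefix, next_hop, distance):
        """Routing-protocol path: add a candidate to the RIB..."""
        self.rib.setdefault(prefix, []).append((distance, next_hop))
        # ...then best-path selection (lowest distance) installs the winner.
        self.fib[prefix] = min(self.rib[prefix])[1]

    def openflow_update(self, prefix, next_hop):
        """Controller path: write the FIB directly; the RIB never sees it."""
        self.fib[prefix] = next_hop

r = Router()
r.protocol_update("10.0.0.0/8", "192.168.1.1", distance=110)  # e.g. an IGP route
r.protocol_update("10.0.0.0/8", "192.168.2.1", distance=20)   # a preferred route
print(r.fib["10.0.0.0/8"])   # best RIB candidate wins: 192.168.2.1

r.openflow_update("10.0.0.0/8", "192.168.9.9")  # controller overrides
print(r.fib["10.0.0.0/8"])   # 192.168.9.9, with no matching RIB entry at all
```

The last line is the whole point: the forwarding state no longer has to be derivable from any distributed protocol exchange.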
Yes, but where?
Assuming that you can create a distributed coherent cache (which EMC and NetApp have been claiming is impossible for the last ten years), where would you put the SSD cache?
On the motherboard? How would the local cache software communicate back to the remote array, and how often would the cache update (EMC updates their flash cache once per day)? This would most likely need a kernel driver in the OS, e.g. VMware, to use the cache.
On the CNA/HBA? Making it part of the storage infrastructure would require support in the driver. And at what price for this highly custom piece of silicon, bathed in unicorn tears and individually blessed by a virginal tech priest as it left the factory? I'd expect it to be orders of magnitude more expensive than the Fusion-io product. Fusion-io is a goodish flash drive built to use the PCI-E bus in certain computers, but an entirely custom CNA with flash and a handy CPU and software is quite different.
More questions than answers here.
Telcos can't crawl, much less walk.
Until telcos can actually manage their core competency of bandwidth, they should be kept away from other activities. For example, I would like an accurate billing cycle, on-time delivery of connections and services, plus a more reasonable price.
After that I will believe they could deliver value add.
I wonder how much IBM Cloud Hosting insurance premiums increased as a result of this idiocy. This kind of thinking is pretty silly.
Tape Drive interface.
Should be an excellent interface for tape drives, in addition to external HDDs/SSDs etc, giving better tape drive throughput at lower cost.
In reality, Cisco has been conforming to standards because network engineers have continually and loudly demanded standards compliance. We remember multivendor networks and interoperability from the days before Cisco became dominant. Further, as an industry we recognise the power of open standards.
Standards will not be the cause of Cisco's shrinking this year; it's because the company is an unfocused behemoth that is ignoring its core customers while it plays with shiny toys such as videoconferencing and retail cameras.
Managers should recognise that their engineering staff saved them from second-rate technology. That's a lesson the storage industry, with its poor standards compliance, could learn.
I can only think that EMC feels it has nothing left to lose. Wall Street thinks that EMC has only two things of value, VMware and RSA, and that the company's valuation for storage is negative.
So I guess some childish pranks from the schoolyard don't really matter. Or are they hiding something else?
EVA death will be slow and boring - like storage.
Two things, I think. One is that any future product line based around Fibre Channel networking is clearly regarded as having no future at all. Since the EVA is HP's primary go-to-market for legacy storage, LSI felt it had no future with the software.
Second, HP's much-touted guarantees to continue with the EVA must look suspect now. It's hard to believe that the EVA has any future beyond its current feature set, and 3PAR is clearly the long-term future. Trying to make the point that supporting the existing EVA is a requirement is as useful as spitting into the wind.
Who cares about FCoE ?
After all, replacing all your Ethernet switches to support FCoE doesn't make sense. FCoE makes sense as a temporary technology to migrate your storage network onto IP, but it isn't the endpoint.
Bugger, now storage will create gold-plated racks
The storage industry has managed to convince their punters to overspend on _everything_.
Unnecessary OM3 patch leads for 10-metre cable runs, Fibre Channel HDDs instead of better caching, lossless Fibre Channel switches instead of well-designed protocols.
Now they are going to want unobtainium-cored tungsten steel racks as well.
"More money for storage!" goes up the cry.
And the battery bit
I would think that allowing Flash to use the GPU means higher battery consumption, and the higher memory use means more drain again.
I'd rather have a quality video codec in H.264 or Ogg Theora than a cheap, buggy, battery-burning Flash codec.
Why doesn't anyone talk about the backhaul?
Not only is radio spectrum a scarce resource, but the backhaul is, relatively, more expensive, since phone towers are not in good places to get high-speed, low-cost backhaul to the core.
Having slow backhaul saves far more money and makes more bottom-line profit. Shaping isn't about radio; it's about backhaul.
Bah, it's not new or even clever
Having worked for a UK service provider that used the same concept with a bunch of scripts and proxy servers, I can say there is nothing clever about this solution. In fact, the "NetBox so clean it whitens" thingie design looks as if they took the idea that everyone is using and turned it into a product.
My guess is that this company has a very high marketing spend and managed to get in front of the Reg hack who wrote this piece.
Remorseless, but only if you are pushing a shopping trolley with a drunkard named Fibre Channel inside it.