Network Neutrality, the public policy unicorn that's been the rallying cry for so many on the American left for the last three years, took a body blow on Sunday with the Wall Street Journal's disclosure that the movement's sugar daddy has been playing both sides of the fence. The Journal reports that Google "has approached …
I'm sure that Google...
would say that locating their data close to the users by means of caching servers is an entirely different thing from giving priority to their data across an ISP's links. And actually they've got a point. If my website is hosted at your ISP you get far better performance to it than if it's hosted at the far end of a dial-up link in outer Mongolia, but that's nothing to do with the traffic being prioritised, just with where it's located. So I don't think they are breaking their own rule definitions, so it's not the end of net neutrality in their terms; it's just that they have their data replicated locally for many users. Any company can do that without asking for network priority, and if XYZ pays the ISP to cache locally, that doesn't sound the same as surcharges...
Already using ask.com
After hearing that they have people customizing the search results and now this, it's just too annoying. They believe they're Microsoft and functionally irreplaceable.
Wish they didn't destroy altavista.
Net Neutrality is Like ...
virginity. Everybody starts with it, it stays longer for some than others, but sooner or later it's lost. Best to plan ahead now because the fat lady is warming up.
It may not be "wrong" but...
... deals like this make it even harder to compete with Google. And ultimately, "search" is a lot more important than YouTube. There's a public interest in having internet search be unbiased, which means un-commercialized.
I'd say this resembles, in some significant ways, the deals Microsoft made with PC OEMs to create the Windows monopoly. Pre-loading the OS on the PC is analogous to ISPs having direct pipes to Google.
Google is setting itself up to be the next abusive monopoly that needs to be taken down.
I'm guessing here, but it seems that Mr. Bennett doesn't know the definition of "colocation". If he did, and he were a rational thinker, he would see the massive hole in his argument. I prefer to consider my fellow human beings as rational, so I'll assume he just doesn't get colocation.
Colocation is when you locate some or all of your servers in the same physical location as some other organization. In this case, Google is looking to put some servers into the same location and onto the network of some major ISPs as (as Mr. Bennett observed) companies like Akamai already do. The net result is that customers of those ISPs get their requests answered by a server closer to them, and hence more efficiently. For example, let's say you (C) are a customer of ISP (A) who does not have a colocation agreement with Google (G). Your round-trip to Google would look something like this (massively simplified):
C -> A -> Internet (who knows how many links) -> G -> Internet -> A -> C
If A then signs a colocation agreement with Google, then after everything is set up, your round-trip would look like this:
C -> A -> G -> A -> C (No, that's not a nucleotide sequence)
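To put illustrative numbers on those two paths (the per-hop latencies below are invented for the sake of the example, not measurements):

```python
# Rough model of the two round trips above. Per-hop latencies are
# invented numbers for illustration only; real values vary widely.

def rtt_ms(hops):
    """Round-trip time: the per-hop latencies summed, out and back."""
    return 2 * sum(hops)

# C -> A -> Internet (several transit links) -> G
without_colo = [5, 10, 20, 25, 15]  # last mile, ISP core, transit hops...
# C -> A -> G (Google server colocated inside the ISP)
with_colo = [5, 10]

print(rtt_ms(without_colo))  # 150 ms round trip
print(rtt_ms(with_colo))     # 30 ms round trip
```

Same packets, same treatment at every router; only the distance changed.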
So let's apply that to the principles which Mr. Bennett believes Google are violating:
1. Levying surcharges on content providers that are not their retail customers;
Either a.) by renting colocation space, Google becomes a retail customer of the ISP; hence there is no non-customer surcharge, or b.) Google becomes a non-retail customer who pays for rented space and network access -- which is a common voluntary lease arrangement with no indication of surcharges.
2. Prioritizing data packet delivery based on the ownership or affiliation (the who) of the content, or the source or destination (the what) of the content;
The improved delivery times of information from local servers vs remote servers have nothing to do with any form of packet prioritization. Instead, it's simply the increased local bandwidth coupled with the lower latency of shorter trips. No packet prioritization is necessary.
3. Building a new "fast lane" online that consigns Internet content and applications to a relatively slow, bandwidth-starved portion of the broadband connection.
This one is best answered by analogy. Google is locating its data closer to the consumers, much like a retail chain will build stores near customers. What they are not doing is taking over a lane of the highway, allowing their trucks to travel faster while forcing everyone else to contend with increased traffic on fewer lanes.
In short, Google's colocation agreements allow them and ISPs to capitalize on what the ISPs have in excess: local bandwidth, while decreasing the overall demand for what is in short supply for the ISPs: upstream bandwidth. They do this through a voluntary arrangement, with no increased cost to consumers and no restrictions on or prioritization of traffic. The net result is increased efficiency with none of the throttling, surcharges, or packet prioritization that net neutrality advocates are fighting against.
Google wants it both ways
... and will answer accordingly, depending on who is doing the asking.
However, Microsoft does this too... They say you can record DVR content with Vista Ultimate, but they've made a deal with all broadcasters that enables a "Do Not Record" flag, to prevent it, if they choose.
Building a CDN is not against neutrality!
My crapometer just smashed its pin. Unbelievable. How is building a CDN now against net neutrality? Google buys that Internet service from ISPs. Not preferential treatment, just ordinary service - get it? So, good for ISPs. Users of that ISP (and possibly other regional ISPs) have fewer network hops to go through to get content - good for users.
There is NO other way to improve service (latency) for remote locations - it's limited by the speed of light, among other things. Had Google served all its content from the US, the latency in Europe would be much worse (at least 100 ms worse, to be precise).
There are NO special QoS arrangements for Google traffic.
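To back up the speed-of-light point with rough arithmetic (the fibre distance and signal speed below are both approximations):

```python
# Propagation-delay floor for a US <-> Europe round trip.
# Assumes ~6,000 km one-way fibre distance and light at ~2/3 of c
# in glass; both figures are rough, and real routes are longer.

C = 299_792                 # km/s, speed of light in vacuum
fibre_speed = C * 2 / 3     # ~200,000 km/s in fibre
one_way_km = 6_000

rtt_ms = 2 * one_way_km / fibre_speed * 1000
print(round(rtt_ms, 1))  # ~60 ms minimum, before any queuing or routing delay
```

No router configuration can get under that floor; only moving the server can.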
what a load of rubbish!
I'm sorry Richard, but the whole of the last paragraph is rubbish, which rather deflates the rest of your argument, especially the references to 'prioritising packet delivery'.
Using your 'logic', you are arguing that ANY website fewer AS hops and less latency from the customer has a 'raised delivery priority' over one further away. You confuse TCP backoff and windowing mechanisms with QoS. Yes, in principle throughput can increase when congestion is lower, and yes, fewer hops generally means a lower percentage of packet loss (reducing TCP back-off) and also less latency (less likelihood that you reach TCP window limits), but to suggest that affects delivery *priority* is wrong. Two packets arriving at the next-hop router to the customer's ISP link are NOT treated differently. It's just that there is a higher percentage chance that the packets in the queue are Google's, because the serving device is closer to the customer, so the end-to-end TCP connection's potential throughput is higher. Google fills a higher percentage of the queue on the interface to your last-mile link because there is less congestion and loss from their server to your device when compared to a TCP connection to a server further away. The actual packets Google sends you are given no priority over any other packets when they reach that router, so that doesn't 'break' net neutrality.
By your logic we'd have to make every web server an equal number of AS hops and equal latency from each customer to have a 'neutral' Internet! Congestion and latency are inherent to the structure of the Internet, and caching services like Akamai and Amazon S3 (and now Google peering) are designed to reduce those inherent (speed-of-light related) performance issues, not 'bypass' net neutrality.
You need to separate the concepts of individual packet 'priority' and the effect of things like congestion and latency effects on TCP throughput. Until you do that your argument isn't one.
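To make the throughput-versus-priority distinction concrete: steady-state TCP throughput is bounded by window size divided by round-trip time, a formula with no priority term in it anywhere. A quick sketch with illustrative numbers:

```python
# TCP throughput ceiling: window / RTT. No router is prioritising
# anything here; the nearer server simply completes more round trips
# per second. Numbers are illustrative, not measurements.

def max_throughput_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds / 1_000_000

window = 64 * 1024  # a common default receive window, 64 KiB

print(max_throughput_mbps(window, 0.010))  # cached server, 10 ms RTT -> ~52 Mbps
print(max_throughput_mbps(window, 0.150))  # distant server, 150 ms RTT -> ~3.5 Mbps
```

A fifteenfold difference, with every packet treated identically at every hop.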
Sorry, gotta say I think the Reg got it wrong on this one... Akamai, Amazon S3 and other co-location services are nothing to do with net neutrality and are just caching services... I'm not particularly a Google fanboy, but they are being treated unfairly here...
@rubbish / Erm...No
Before going to the length of calling what Richard Bennett writes "rubbish" I'd take a very long look at what he writes and prepare some very good arguments. I'm not saying professionals can't be wrong, but remember that Mr. Bennett isn't just anybody, so be extra careful when assuming he's wrong. He might know a thing or two that you've overlooked. :-)
Even though Google's plans (lowering the latency from their servers to the end user) can arguably be considered very different from "classic" QoS (i.e. DiffServ) it is as a matter of fact an element that _will_ increase the amount of bandwidth allocated for them when competing with services with a worse latency. (@Peter: You seem to understand that part and explained it very well btw.)
Whether this should be considered "breaking the rules" with regard to NN is an open question. Does NN prohibit only the use of DiffServ classification and prioritisation? Is the "special treatment" only against NN rules when you would clearly prioritise traffic *at the expense of other traffic*?
And specifically for Steven Knox:
1) Retail is "the sale of goods in relatively small quantities to the public" (COD) or "the sale of goods to ultimate consumers, usually in small quantities (opposed to wholesale)" (dictionary.com). You would call Google's business with ISPs "retail"? Pull the other one...
2) Prioritising can be many things. Introducing latency (e.g. with a Packeteer) can be achieved completely without using any CoS/ToS bits, and I would call that prioritising. Lowering latency on purpose, one way or the other, could be called prioritising. I'd still call it a gray area with regard to NN.
3) What if Google persuades a provider to provision a physically separate network connection for their traffic from their current location and closer to the end users? Would this not be at the overall expense of other types of traffic, since the resources consumed for this could be used to generally improve the network? If I argued that an IntServ tunnel (RSVP/whatever) wouldn't be all that different from a physically separate connection, where are we then? It's a blurred area. Assuming that Google's interaction with the ISP in no way makes the ISP change priorities is in my eyes a bit naive.
In the end, I don't think Google is evil. But they are naturally greedy, just as they should be, being a publicly traded company and all. Anyone thinking that Google has any other "ideal" than making money should "grow up" or "get a job and a haircut". (No offence meant to either vertically challenged or long-haired unemployed people!)
Another litany of errors
I'm confused. This Richard Bennett chap claims such long experience in networking but appears to be unaware of the basics.
"Internet delivery requires millions of distinct file transfers across crowded pipes to accomplish the same end: this is the vaunted end-to-end principle at work."
Except that it doesn't and it isn't. Multicasting was part of the original spec for IP, as evidenced by all those class D addresses that were set aside for it. Yes I know it hasn't actually been adopted very widely, but it exists and is the internet equivalent of TV or radio broadcasting if you want to compare apples to apples. There's nothing sacred (or even sensible) about using a unicast connection for each transfer. Even if there were, the end-to-end principle is another matter. It simply says that all the state required for the transfer resides in the end-points, which means you can lose the entire network part way through the transfer and still expect to complete the transfer when normal service is resumed later. The end-to-end principle is honoured by RTP and other multicasting technologies. It is violated by NAT, which is why Grandma needs a degree in runes to get her internet applications working through a standard domestic router.
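As a trivial demonstration that the class D multicast range is still baked into every modern IP stack, Python's standard library will happily classify it:

```python
# Class D (224.0.0.0/4) is the multicast range reserved in the original
# IP spec. Any address whose first octet is 224-239 qualifies.
import ipaddress

print(ipaddress.ip_address("224.0.0.1").is_multicast)        # True (all-hosts group)
print(ipaddress.ip_address("239.255.255.250").is_multicast)  # True (the SSDP group)
print(ipaddress.ip_address("8.8.8.8").is_multicast)          # False (ordinary unicast)
```

The addresses never went away; it's the inter-domain routing support that stalled.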
As for net neutrality...
"Prioritizing data packet delivery based on the ownership or affiliation (the who) of the content, or the source or destination (the what) of the content"
Filtering by source or destination address is a requirement whenever demand exceeds capacity. You can either do it implicitly and somewhat randomly by simply falling over or losing packets, or you can use something new-fangled like ICMP to ask selected sources to back off a bit. If they ignore ICMP, you may assume they are broken and cut them off entirely. I expect you'd call that a hideous violation of net neutrality, but somehow the internet has been doing it for three or four decades and no-one has noticed. Perhaps your definition of NN is a straw man.
I think this article relies rather heavily on some very dodgy arguments.
Putting your own servers into a large ISP so you don't have to pay for the upload bandwidth on your servers seems to be very good commercial practice as long as the cost to install and operate the servers is less than the bandwidth you'd otherwise pay running youtube et al.
In fact, rather than degrading performance I'd say that they are probably helping the public internet by removing goodness only knows how much bandwidth from carrying video on the public internet and releasing it for something useful like real data.
Yeah, so fuck off with the anti google bullshit
CDNs and free ISP facilities
The article is rubbish, but there's an important point in the dross. Building a CDN is a first-mover business. ISPs are happy to host one CDN (currently Akamai), and maybe a Google CDN. But a new third player? In short ISP-hosted CDNs are a natural monopoly with high barriers to entry for subsequent players, and as we all know at some stage monopolies start making monopoly profits.
Limelight -- a reasonably new CDN -- has encountered this problem. Fortunately, in many countries there are decent ISP peering sites with colocation facilities, and this allows competitors a way in. Although at a higher price, since they have to pay for colocation rack space whereas the ISP-hosted CDN does not.
And in this maybe Google are playing ISPs for suckers. Google could lease colocation space at peering exchanges and connect to ISPs there, but if ISPs are offering the colocation for free...
You gotta go all the way.
The thing is, you gotta be pro-Net Neutrality all the way, or against it all the way. You can't just be "in favor of NN in theory but willing to compromise". You can't be "in favor of NN and here are a bunch of reasons why this thing we're doing doesn't violate it." If you want the moral authority of supporting it, then you have to avoid even the APPEARANCE of being against it. You can't have your cake and eat it, too.
If you want to tell people that NN is critical to your business and get sympathy on your side, then it kind of undermines your position when you make moves that look exactly like the way the anti-NN side wants things to operate!
@prathlev [@rubbish / Erm...No ]
You say: "Does NN prohibit only the use of DiffServ classification and prioritisation? Is the 'special treatment' only against NN rules when you would clearly prioritise traffic *at the expense of other traffic*?"
Please! Diffserv is nothing to do with the NN debate and in no way infringes the (mythical) "NN rules" whatever they may be. Diffserv is used, for example, to ensure that VoIP traffic gets the delay and bandwidth it needs, but not so much bandwidth that it unfairly penalises elastic traffic such as HTTP or SMTP. That's not prioritisation, by the way; diffserv is significantly more subtle than a priority system, and that's the whole reason that it supersedes the old IP precedence mechanism. Go read the RFCs (2474 and 2475).
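For anyone wanting to check the RFCs' arithmetic: the DSCP value is just the top six bits of the old ToS byte (the bottom two are now ECN), which a one-liner can decode:

```python
# Decode a ToS/Traffic Class byte into its DSCP value (RFC 2474).
# DSCP occupies the top six bits; the bottom two bits are ECN.

def dscp(tos_byte):
    return tos_byte >> 2

EF = 46  # Expedited Forwarding, the code point commonly used for VoIP

tos = EF << 2           # 0xB8, the byte you'd actually see on the wire
print(hex(tos))         # 0xb8
print(dscp(tos) == EF)  # True
```

Note that nothing in this mechanism looks at who owns the content, only at what the traffic needs.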
That said, caching in content distribution networks is an excellent thing that benefits users, content providers AND even the content providers that aren't cached, because it frees up bandwidth for their content. There's nothing unfair about a content provider paying for caching services. Anyone can do that. The same is true for colocation services. Of course, a big provider will get economies of scale.
I think the best piece ever written on NN is Johna's piece at http://www.networkworld.com/columnists/2007/110707johnson.html
Erm...No, Richard Bennett isn't just anybody. But he does have a very public and well established point of view, and an obvious axe to grind. So... treat his blog as the editorial that it is.
@prathlev: so what should Google do?
prathlev> Introducing latency (e.g. with a Packeteer) can be achieved completely without using any CoS/ToS bits, and I would call that prioritising.
that wouldn't save anything to you, the isp, and is just pure evil. to pull that off and still deliver packets, you'd need to hold them in some sort of tarpit. this means more ram on routers, more expenses. you can do that, but only if you really want to make someone's life miserable, and are ready to pay extra for it. this doesn't make any economic sense.
prathlev> What if Google persuades a provider to provision a physically separate network connection for their traffic from their current location and closer to the end users?
separate compared to what? do you think ISPs are standing in line, holding optical fiber in hand, asking to peer with Google? you are wrong. most of the time it is Google who is paying for peering. again - this is exactly what ISPs do: sell connectivity.
look, if you are an ISP and i come to you and say "i want to buy a couple racks and a pipe from you", would that be unfair towards somebody else? if yes, then what, exactly, do you want me to do about it?
actually, i want everyone who thinks building CDNs is somehow against net neutrality to tell me how exactly one is supposed to improve the bandwidth and latency of one's services? forced to buy the service from akamai or some other CDN? is that what you see as "fair" and "neutral"?
No different than Comcast
Network neutrality isn't even a word at Comcast. They are doing away with the 'network' altogether by peering with all the major internet companies. Comcast currently has direct peering with SoftLayer and ThePlanet, two real big names in American web hosting. These aren't the only companies Comcast is 'peering with', and as this continues backbone providers will be faded out. I've yet to see any traffic across Comcast's network route through another provider when Comcast has a direct route. Perhaps Comcast has a better route; that's not really the issue.
Network neutrality can't exist when broadband users on America's largest broadband network have a direct connection to the web servers. Just call it what it is. Bell monopoly? Ha. Comcast makes the Bell monopolies look like schoolboys; Comcast is how a monopoly is really done.
How's that for network neutrality.
>1) Retail is "the sale of goods in relatively small quantities to the public" (COD) or "the sale of goods to ultimate consumers, usually in small quantities (opposed to wholesale)" (dictionary.com). You would call Google's business with ISPs "retail"? Pull the other one...
Note that your definition of "retail" (which I agree with) includes specifically qualitative words and phrases: "relatively small" or "usually in small quantities". Because people's connotations of "small" differ, I included the possibility that Google may be considered a retail customer in my comment. I _also_ covered the case where Google is NOT considered a retail customer:
"or b.) Google becomes a non-retail customer who pays for rented space and network access -- which is a common voluntary lease arrangement with no indication of surcharges."
>2) Prioritising can be many things. Introducing latency (e.g. with a Packeteer) can be achieved completely without using any CoS/ToS bits, and I would call that prioritising. Lowering latency on purpose, one way or the other, could be called prioritising. I'd still call it a gray area with regard to NN.
I wouldn't -- because the NN debate (as I understand it) has focused on active prioritisation efforts by ISPs, specifically increasing priority to content providers who pay them _for_that_priority_ and, more specifically, actively decreasing priority for those who don't. While I agree that decreased latency and improved traffic flow is a side effect of colocation, that is not because anyone else's traffic is deprioritised. In fact, by potentially reducing the amount of data which must cross the internet, this colocation has the effect of increasing the potential speed of other traffic.
>3) What if Google persuades a provider to provision a physically separate network connection for their traffic from their current location and closer to the end users? Would this not be at the overall expense of other types of traffic, since the ressources consumed for this could be used to generally improve the network? If I argued that an IntServ tunnel (RSVP/whatever) wouldn't be all that different from a physically separate connection, where are we then?
The real question behind your hypothetical is whether or not Google's administrative traffic to their colocated servers is less, equal or more data, transferred at times of less, equal or more demand, as the user search traffic which will be pulled off the internet. This article certainly does not give us the information needed to answer that question. My intuition says that the answer is less data, transmitted at times of less demand, but that is only a guess. But Mr. Bennett is asking us to make a conclusion without even addressing this, and I do not think that is logical or fair.
>Assuming that Google's interaction with the ISP in no way makes the ISP change priorities is in my eyes a bit naive.
I'm not assuming that. I'm simply stating that the evidence presented in this article does not support the argument that Google is violating the principles of NN.
>In the end, I don't think Google is evil. But they are naturally greedy, just as they should be, being a publically traded company and all. Anyone thinking that Google has any other "ideal" than making money should "grow up" or "get a job and a haircut". (No offenses meant to either vertically challenged or long haired unemployed people!)
1. I am 33.
2. I have a job.
3. I got a haircut about 5 weeks ago, so I _am_ about due. But I get my hair cut pretty short so I can wait a few weeks.
4. I don't think Google are NN saints in any way. But Mr. Bennett is asking us to believe a conclusion that is not supported by the facts he presents, and he does so with some fairly loose arguments. Because he's making the accusation, the burden of proof is on him, and he fails to meet that burden in my eyes.
Could someone define that for me please?
I wasn't previously aware that such a thing existed. I thought the internet was a conjoining of disparate private networks by wholesale network providers (Tier-1 ISPs) through peering agreements (i.e. exchange of cash or equivalent exchange of traffic). None of which are owned by the 'public' to my knowledge.
Some of the services on these networks are made available to others so you could describe these aspects of the network as 'public' I suppose - in the sense that there are no access restrictions to the services provided.
Is that what they mean?
Paris - coz she would be confused too.
It is always worth thinking of things from the opposite end. Imagine Google decided to market a box to ISPs - the box being a Google cache. Google promise that it will cache a very significant part of all of Google's offerings, and will take 25% (say) of the load off the ISP's very expensive uplinks. How much would an ISP be prepared to pay for the box? How much would be a fair price? Google could provide an appropriate SLA for the box, and it would just sit in the ISP's server area and do its thing. What ISP would not be interested in such a device? Would the marketing of this box breach the ideals of net neutrality? Clearly the ISP can get away with thinner pipes than before.
Does the effect on neutrality depend upon who is paying whom? The fair price for the box is a little hard to decide, since it clearly advantages Google too, so the ISP may well ask for the price to be discounted, or maybe ask for a cut of the ad revenue the box generates. So, how much is a reasonable discount (or cut of the revenue)? What happens when the discount/revenue gets very large? What happens when the discount goes over 100%? (Or the ISP's cut of the ad revenue generated exceeds the price of the box.)
Would a business model where Google offers such a cut of revenue offset against price be any less or more in breach of net neutrality?
No right answers. And as mentioned above, the arguments take on a different tenor when we add into the equation a market monopoly player. Monopolists make the simple reasoning much harder, and usually, different rules need to be applied.
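The break-even price is simple arithmetic once you pick numbers. The figures below are entirely invented (transit pricing, traffic volume, and offload fraction all vary wildly in the real world), but they show the shape of the calculation:

```python
# Break-even value of a hypothetical Google cache box to an ISP.
# Every figure here is invented for illustration only.

transit_cost_per_mbps_month = 10.0  # USD per Mbps per month, assumed
google_traffic_mbps = 2_000         # assumed peak Google-bound load
offload_fraction = 0.25             # the "25% off the uplinks" from above

monthly_saving = transit_cost_per_mbps_month * google_traffic_mbps * offload_fraction
print(monthly_saving)  # 5000.0 USD/month the ISP would otherwise pay for transit
# Any (amortised) box price below that leaves the ISP ahead, which is
# exactly why the "who pays whom" question has no obvious answer.
```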
The article has been edited now, which messes up my paragraph reference.
Anyway, I am commenting on the article and one paragraph specifically, not the author. I don't care who he is, if his written argument doesn't stack up, his experience, title, or reputation are irrelevant. I was not 'assuming' he was wrong in that paragraph, I was saying he *is* wrong.
re your paragraph:
"Whether this should be considered "breaking the rules" with regard to NN is an open question. Does NN prohibit only the use of DiffServ classification and prioritisation? Is the "special treatment" only against NN rules when you would clearly prioritise traffic *at the expense of other traffic*?"
You make the same mistake Richard did. There is no 'special treatment'; all the packets are 'treated' exactly the same *when they reach a router*. It just so happens that as the Google cache is close to the destination device, MORE Google packets get onto congested links. Look, if I download an HD video and a webpage at the same time, I flood my cable connection with video packets, but I don't accuse the video stream of being 'prioritised' over the webpage. If I download two videos, one hosted in my country and one hosted in another, I don't accuse the local video host of being prioritised over the out-of-country one. All Google/Akamai/Amazon S3 are doing is moving their websites closer to the user.
I don't follow the net neutrality debate very closely, but people shouldn't try and build their arguments on incorrect networking facts. Yes, Packeteer and other devices that specifically interfere with packet flow, traffic engineering, queuing, marking, policing, etc ARE non-network neutral. Location of caches is not.
Does Co-Location Break Net Neutrality?
The point is, access to colocated services will have lower latency, and so over time people will (and they do) gravitate to the faster-running sites. Ergo, selective colocation (or accelerated delivery over a separate pipe) can be viewed as breaking net neutrality. Delivery is no longer neutral (using the definition currently being bandied around) - the reason why is almost secondary.
If on the other hand an ISP builds additional infrastructure to deliver video on demand in a timely fashion and automatically routes all content over it, regardless of origin, this would be acceptable because all sources are treated equal.
Because of the topology of the internet you will never have a truly neutral net - sites in the same country as you will require fewer network hops and therefore, all other things being equal, respond sooner. Sites hosted by your ISP will respond even quicker than sites at the other end of the country. So before we can support or shoot down a commercial company for trying to get an edge on its competitors, we need to have a decent and workable definition for net neutrality.
For me, the starting point is the ISPs - they should ensure that all traffic over a given protocol has equal priority between the internet backbone (or co-located servers) and the end computer, and that there are QoS measures for each protocol in the customer contract. This would ensure equal treatment as far as is reasonably possible and stop (or at least make plain) any throttling measures being used - and the end user is receiving a neutral service from their ISP. They can change their package or ISP if they don't like the QoS terms, so customer demand will start to drive ISP packages again.
Do this and it is less critical if the likes of Google and Amazon co-locate with large ISPs because their traffic will be treated equally along the slowest link in the chain. Co-location may only exclude one or two hops, on the fastest part of the network, for these big companies.
I wonder if this is the start..
I'm left wondering if this is part of the plan to move away from reliance upon ad revenues and into the content distribution market. How long before they start offering "get your content delivered quicker" services?
At the end of the day, Google is out to make money. so "Google is making exactly the kind of deal with ISPs that it has consistently tried to ban in law and regulation." makes perfect sense: they think that it provides a massive advantage. One that they cannot ignore.
If they tried to say to the regulators that it was too big an advantage for any of their competitors to have, and told their shareholders that they didn't want to take advantage themselves, then they would probably get shafted by both groups.
Ethics say that I should cruise around the motorways at 30mph, so that I produce the least pollution possible, but common sense says that getting rear-ended by an artic is going to cancel out those benefits. Does that make me/Google hypocritical?
Sorry, but you lost me at:
"While broadcast TV can deliver a single copy of “Survivor” to millions of viewers at a time, Internet delivery requires millions of distinct file transfers across crowded pipes to accomplish the same end: this is the vaunted end-to-end principle at work."
And surprisingly, you didn't lose me by the reference to "Survivor". No, it was by the sheer ignorance regarding networking. I'm far from a network guru, but even I know about multicasting. I may not know how to set it up or use it, but at least I know about it. You know, Class D using the old class system of addressing. I'll grant you that I've never seen it used in practice. Regardless, this "broadcast" scenario you analogize is EXACTLY what multicast was designed for. In fact, unless I'm mistaken, old versions of Norton Ghost used multicasting for restoring images over a network.
Google being google... You have to wonder, what sinister ulterior motives there are.
I don't understand much about network protocols, but I understand that Google only does things if it increases the amount of money it makes, and that means ads.
Is there any conceivable way that hosting data closer to its target will affect how ads are targeted?
Would these regional caches allow google to make more refined regional targeting and data collection for ads than they do currently?
Aside from the pros of getting their data more readily and faster to the user at less infrastructure cost to themselves, there just HAS to be a motive that involves targeting users regionally.
What??? Is this article a troll? A joke? Seriously, I couldn't even follow the "logic" in this. *checks the date* It isn't April 1st, is it? Is this "Prankers to the Wankers" day over there in England or something? Who the **** at El Reg let this article in???
First... I am not a fan of Google, but I can't stand by and let ignorance be spread amok.
To the uninformed...
Google is talking about hosting cache servers close to your house, so your requests don't have to travel so far. While you could say that the end result is that Google is paying an ISP to make their traffic faster (which is the loosest definition possible), it has jack **** to do with Net Neutrality.
Net Neutrality is all about ISPs deciding to purposely make Internet traffic slower than they could actually carry it - slowing down you, the end user - and then charging companies a premium if they want their traffic given a speed boost. Analogy time: imagine the speed limit on the highway was reduced to 45, unless your company paid the state extra money so its employees could drive 65 on their way to work (but only on their way to work). Maybe your favourite store might pay this premium to the state so you can drive fast to get there. But most stores/homes/companies couldn't afford this fee - so you're hosed most of the time.
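The pay-for-priority scheme this comment objects to can be sketched as a strict priority queue: packets from senders who paid the ISP always jump ahead of everyone else's, regardless of arrival order. A toy model, with entirely made-up sender names:

```python
import heapq

# Hypothetical: which senders paid the ISP for priority.
PAYING = {"BigVideoCo"}

def drain(packets):
    """Serve packets strictly paid-first, then by arrival order."""
    q = []
    for order, (sender, payload) in enumerate(packets):
        priority = 0 if sender in PAYING else 1  # 0 = fast lane, 1 = everyone else
        heapq.heappush(q, (priority, order, sender, payload))
    while q:
        _, _, sender, payload = heapq.heappop(q)
        yield sender, payload

arrivals = [("JoesBaitShop", "pkt1"), ("BigVideoCo", "pkt2"), ("JoesBaitShop", "pkt3")]
for sender, payload in drain(arrivals):
    print(sender, payload)
# BigVideoCo's packet is served first even though it arrived second.
```

Note the contrast with the caching described elsewhere in the thread: here the scheduler inspects *who sent* the packet, which is exactly what neutrality rules forbid.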
What Google is doing is what many companies already do to some extent. Google doesn't just host servers in California. They have caching co-locations all over the world! They're already doing just this, as are most major companies! Hell, even small companies utilize geographically-dispersed co-los. They host huge expensive server caches all over the world in order to get closer to you, the end user. Most of these are strategically placed in major backbone areas. Now Google is suggesting hosting server cache farms right at your ISP... making content even closer to you. The costs for just the hardware are massive, let alone the co-location fees they must pay the ISP.
Analogy time: It's like if your bank opened up a branch office on your block, so you don't have to drive so far. What the **** does it have to do with Net Neutrality???
"No fair, Google has more bandwidth than me, because they paid for more. They're violating Net Neutrality or something..."
This article is taken WAAAAAAY out of context and is flatly, totally ridiculous.
Yo author... you're doing it wrong.
Hey, if Google want to go even further and "build their own lane on the motorway", what's wrong with that? It's a level playing field, in so far as Google's competitors, like Microsoft, can do the same thing. Nobody loses. Except other businesses. But that's the nature of business.
Surely the point of Net Neutrality is that the internet should enshrine fair business practices. The internet should be a fair, regulated market, as we strive for in the "real" world. When Net Neutrality hobbles reasonable competition, isn't that going too far?
Another misleading article from someone who should know better
I'm looking at the 3 NN points, none of which the Google CDN would violate.
1. Levying surcharges on content providers that are not their retail customers;
If Google is paying to co-locate their servers on the ISPs network, are they not retail customers at this point? What's the difference between Google co-locating their server or you and I? None.
2. Prioritizing data packet delivery based on the ownership or affiliation (the who) of the content, or the source or destination (the what) of the content; or
The fact that the system is closer does not give it "priority" over other packets unless the network provider does this. As someone who works in LAN management, you should be aware of this. The idea is that with NN you can't prioritize based on origin/destination but closer destinations on fatter pipes will always be faster. If I connect to my home system (cable @ 10/1) vs colocated server (100/100) I notice a HUGE speed difference, but the traffic is still being treated equally.
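This comment's observation, that a closer destination is faster even when every packet is treated equally, follows from basic TCP arithmetic: with a fixed window, throughput is capped at window / RTT. A back-of-envelope sketch, using a 64 KB window (a common pre-window-scaling ceiling; all numbers are illustrative):

```python
WINDOW_BYTES = 64 * 1024  # illustrative fixed TCP window

def max_throughput_mbps(rtt_ms):
    """Upper bound on TCP throughput: one window per round trip."""
    return WINDOW_BYTES * 8 / (rtt_ms / 1000.0) / 1e6

for name, rtt in [("co-located server, 5 ms RTT", 5.0),
                  ("distant server, 150 ms RTT", 150.0)]:
    print(f"{name}: {max_throughput_mbps(rtt):.1f} Mbit/s cap")
```

The nearby server's cap is roughly 30x higher with identical, unprioritized treatment of every packet, which is why co-location speeds things up without any neutrality question arising.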
3. Building a new "fast lane" online that consigns Internet content and applications to a relatively slow, bandwidth-starved portion of the broadband connection
How are "Internet content and applications" consigned to a "relatively slow, bandwidth-starved portion of the BROADBAND CONNECTION" (emphasis mine)? So you're saying that once (if) these servers go up, my access to the rest of the Internet will somehow be slow?
Nothing to see here?
Unless the ISPs are prioritizing Google's/Youtube's packets over packets coming from Joe Schmo's search and bait shop, it's not really a violation of net neutrality. I'm not a huge Google fan, but last I checked just about anyone could buy a content delivery service.
Across the world, we alertly examine the actions and declarations of neighbors, party members and industrial entities for any hint of a failure to uphold the hallowed principle of "Net Neutrality". No-one knows what it is, some can recognize it when they see it, but all players know that they have to avoid even the appearance of violating it. Say it out loud: "Net Neutrality" violations, whatever they are, even imaginary, will not be tolerated!

Examine in detail the writings of bloggers, industrial or proletarian, and find the hidden meaning beyond the words on your screens. Are you actually staring at the works of reactionary writers who secretly wish to violate "Net Neutrality"? Could an apparently innocent phrase be interpreted as a hidden slight against the Guiding Principle? Don't be fooled into complacency. Even proposals of obvious advantage may in reality be suspect. Improvements in infrastructure, bandwidth or throughput are right out if they can be unmasked as mere attempts at striking a blow against "Net Neutrality".

Holders of the Fiber! Do you know whether that hop-count reduction is a good idea? Would you dare to use those TOS fields in a packet? Do you wish to deliver faster content to your customers, leaving others at a relative disadvantage? Beware! Our well-informed politicized NetNeutNetizens (3N) will write withering articles against your pitiful imperialist attempts to become "Net A-Neutral", and swift retribution will follow.
"And surprisingly, you didn't lose me by the reference to "Survivor". No, it was by the sheer ignorance regarding networking. I'm far from a network guru, but even I know about multicasting. I may not know how to set it up or use it, but at least I know about it. You know, Class D using the old class system of addressing. I'll grant you that I've never seen it used in practice. Regardless, this "broadcast" scenario you analogize is EXACTLY what multicast was designed for. In fact, unless I'm mistaken, old versions of Norton Ghost used multicasting for restoring images over a network."
Umm... show me a *production* (as in open to the public) multicast delivery service that runs across different ASes on the "public" internet. HINT: There isn't one. And multicast wasn't invented to handle the type of video delivery that people want, which is *on demand* and not fixed schedule programming.
All HTTP delivery of video is currently Unicast IP, with a single stream per feed. In fact, all video on demand systems operate this way, as there is no point in delivering video via multicast if only two people want the feed at that time.
Richard is correct in this case, because Youtube doesn't do multicast. The rest of you can come back and make your point when they're ready to deliver via this method. ;-)
Some last points maybe?
There are lots of points to NN that I obviously haven't understood well. I think some of the examples used above (Phil) are out of this world. The NN camp is trying to solve a problem that is not here yet. It might come, but it's not here yet. I think I lack an understanding of what the NN camp is actually trying to accomplish.
@Q we: Introducing latency this way is not at all a waste of money. It's a way of slowing down traffic like well behaved TCP sessions without actually throwing traffic away. It can be used to put a lid on e.g. Bittorrent traffic.
@Chris C and others with no real knowledge of multicasting: No, that's not the way it works. And you're probably right: You've HEARD of multicasting, but probably never tried to implement cross domain multicasting services.
@Ken Hagan: Also the congestion handling part is wrong. You don't filter on source or destination addresses. In a backbone you would typically use RED or other similar technologies that work well with TCP backoff algorithms and don't care where you're from.
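The RED behaviour this comment describes can be illustrated with a toy drop decision: as the average queue length grows past a minimum threshold, the probability of dropping an incoming packet ramps up, with no inspection of source or destination. Thresholds and the maximum probability here are invented for illustration, not taken from any real router config:

```python
import random

# Hypothetical RED parameters (packets / probability).
MIN_TH, MAX_TH, MAX_P = 20, 60, 0.1

def red_drop(avg_queue_len, rng=random.random):
    """Toy Random Early Detection drop decision for one arriving packet."""
    if avg_queue_len < MIN_TH:
        return False                      # queue healthy: never drop
    if avg_queue_len >= MAX_TH:
        return True                       # queue saturated: always drop
    # In between, drop probability rises linearly toward MAX_P.
    p = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
    return rng() < p

print(red_drop(10))   # False: below the minimum threshold
print(red_drop(80))   # True: above the maximum threshold
```

The early, probabilistic drops nudge well-behaved TCP senders into backing off before the queue overflows, and the decision never looks at who sent the packet, which is the comment's point about neutrality-friendly congestion handling.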
Oh and Phil (2008-12-16 15:52): Richard Bennett is _not_ a proponent of NN. This article is about the alleged hypocrisy of Google, saying one thing and doing another. Are you trying to win a FoTW contest or what? Richard specifically says that Google's actions are sound networking practice. Stop eating those mushrooms...
Merry Christmas, everybody!