The head of the US Federal Communications Commission has proposed formal net neutrality rules that would prohibit internet service providers from discriminating against particular content or applications. "Because it is vital that the Internet continue to be an engine of innovation, economic growth, competition and democratic …
The key word here is "lawful"
... because it's all too easy to place a blanket label of "illegal" on a range of applications, services and data streams that the current regime (or service provider) dislikes, doesn't understand, or thinks is too much load for their outdated backhauls.
Secondly, how are they going to separate a "lawful" data stream from an illegal one over the same protocol? Did someone say "enforced deep packet inspection"?
I'll admit it's a nice (and nobly-intended!) try, but with a slightly too easy get-out clause attached.
This is a pleasant surprise. In the last days of Bush II, there were increasingly alarming reports of moves being made in the other direction. Let's hope they manage to get something that's solid enough that it can't simply be wiped off the slate when a different administration comes in.
Agreed! Slowing down by protocol or site is wrong; falsifying connection resets is wrong; slowing down the heaviest users at peak time so everyone else has good service at those times is fair. (Note, I say this AS a heavy user.. those who are light users, but probably use it just at peak, do deserve to have snappy service. Preferably the ISP would build out enough to cover peak, but if not.. *shrug*.. better to slow down some people for maybe 10% of the time they're using the connection than have the light users get slow service basically 100% of the time they're using it.)
I like the solution of a bucket throttle.. this is a bucket that can empty at your ISP's rated speed but fills at a slower rate... if you had a 1GB bucket, you'd get full speed for 1GB, then whatever speed the bucket fills at after that.. if you use the connection slower than the fill rate, you accumulate full-speed usage. Simple, no manual intervention (some ISPs manually throttle or unthrottle users, I think..), and effective. And it meets the transparency mandate -- it's a simple enough concept that the bucket size and fill rate could be placed in the terms of service (I've seen satellite ISPs do this in the past.)
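The "bucket throttle" described here is essentially the classic token-bucket shaper. A minimal sketch in Python, with illustrative numbers rather than any real ISP's policy (the class and parameter names are my own):

```python
import time

class TokenBucket:
    """Sketch of the 'bucket throttle' idea above.

    capacity:  bytes that may pass at full line rate (the bucket size)
    fill_rate: bytes per second the bucket refills at (the throttled speed)
    """
    def __init__(self, capacity, fill_rate):
        self.capacity = capacity
        self.fill_rate = fill_rate
        self.tokens = capacity          # start with a full bucket
        self.last = time.monotonic()

    def _refill(self):
        # Credit tokens for the time elapsed, capped at the bucket size,
        # so a light user accumulates full-speed allowance.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.fill_rate)
        self.last = now

    def consume(self, nbytes):
        """Return True if nbytes may pass at full speed right now."""
        self._refill()
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False                    # caller falls back to fill_rate
```

An empty bucket never blocks traffic outright; it just signals that the remainder should go at the (published) fill rate, which is what makes the scheme transparent.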
Is this the tide turning ...
... or just the sea backing off before the tsunami?
If they manage to make this stick then it marks a significant shift in the balance of power - in consumers' favour. If it fails, of course, then the big players will feel there's nothing to stop them doing anything at all. And if they do make it stick, can we borrow the FCC for a bit to teach our excuse for a regulator, OfCon, how to do it properly?
I have to admit that made me a little nervous too. Maybe I'm just being paranoid, but it's not too hard to imagine the FCC going "Oh, by the way, here's a list of UNlawful content, which you MUST block."
The idea that ISPs are going to start filtering stuff like web traffic is retarded. If they start blocking access to Google or YouTube they'd get no end of shit from average customers. And they don't even care about that traffic anyway, because the majority of it is either cached locally or directly peered. The stuff they do care about is the top 10% who sit there with their BitTorrent clients up 24/7, soaking the network for all it's worth.
One common freetard argument is "HURRRR DUH HHUU THEY SHOULD JUST ADD MORE BANDWIDTH!!!!". This is clearly retarded, as BitTorrent expands to fill whatever pipe you throw at it. BitTorrent will saturate a 5 meg pipe the same as it saturates a 50 meg pipe. That's kind of the point of the protocol. The only way to stop it is traffic shaping. It's a completely fair solution too. BitTorrent is not real time: it doesn't matter if the traffic gets there 5 seconds late, whereas with something like RTP or gaming traffic you need it to get there on time, every time.

A few scenarios. Scenario 1: at peak times most users are doing simple web surfing, maybe making a few phone calls via RTP. Most of that traffic is so small that the QoS impact on BitTorrent would probably be unnoticeable to the user. Once peak hours subside, BitTorrent goes back to full speed since no one else is using their connections.

Scenario 2: people are doing heavy web traffic (let's say lots of direct streaming) and making lots of phone calls. RTP calls get priority over all other traffic, and BitTorrent speeds drop until peak hours end. This is fair because calls are more important than everything else due to their nature; streaming is second in importance because bandwidth is required to keep it seamless; BitTorrent, not being time sensitive, is the least important. Once peak hours subside, BitTorrent goes back to full speed.

Scenario 3: everyone is torrenting, so everyone must fight through the slow shitfest. In the current world adding more bandwidth doesn't help. If you had packet shaping, increasing bandwidth would allow you to increase low-priority traffic speed at peak hours.
The other argument is "hhhhhuuuuu duuhh huuud duuuu the internet is freeeeee". It's not free. It's a group of private companies who peer their networks for their own benefit. The idea that an ISP should allow whatever traffic you want across their privately owned lines (or that they don't have the right to prioritize traffic for the health of their network) is ridiculous.
Clearly stated, generic, protocol-based packet shaping is the first step in solving network overuse problems. After you have good QoS you can go about adding bandwidth to improve low-priority performance during peak hours. The idea that ISPs are going to start blocking popular websites is a stupid red herring and everyone should just shut the hell up about it, because it's never going to happen. All you're doing is preventing real fixes.
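For what it's worth, the ordering in the scenarios above (RTP first, web/streaming second, BitTorrent last) amounts to strict-priority queueing. A toy sketch, with a made-up traffic-class map and class name of my own invention:

```python
import heapq

# Priority classes from the scenarios above: lower number drains first.
# The mapping is illustrative, not any real shaper's configuration.
PRIORITY = {"rtp": 0, "streaming": 1, "web": 1, "torrent": 2}

class StrictPriorityQueue:
    """Serve packets strictly by class: torrent traffic only moves when
    every higher class is empty, which is why it slows at peak and
    recovers to full speed off-peak."""
    def __init__(self):
        self._heap = []
        self._seq = 0   # tie-breaker keeps FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._heap,
                       (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]
```

With one RTP packet, one web packet and one torrent packet queued, they drain in exactly that order regardless of arrival order.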
To be fair...
...more and more of the Internet's traffic is being encrypted and its routes increasingly disguised, so it would become increasingly difficult anyway to distinguish one stream's purpose from another, and DPI can't inspect an encrypted stream without exploiting a weakness in the encryption, which would likely be fixed soon after. So rather than allow for traffic prioritization that wouldn't work later on anyway, just apply generalized utility usage regulation like everything else.
As for the bucket throttle, it sounds good in theory, but then I think of the "halftime at the Super Bowl" or "seventh-inning stretch" problem: people who accumulate their buckets and then unload them all at once.
Canada needs to get to this too
I was thinking of getting a Palm Pre; it's exclusive to Bell and they limit data to web browsing and "personal" email. No voice over IP, no streaming, no "corporate" email like Exchange sync, no VPN...
Rogers is guilty too. If you buy an internet stick for your laptop they block VPN; you have to pay extra on top of the normal outrageous data rate if you want to use VPN.
Both need a good swift kick in the ass. If I pay for x MB of data a month I should be able to use it for whatever I want. And don't give me that Unlimited* crap.
The cheque from the ISPs is in the post...
"Mr. Watson -- come here -- I want you."
All the players will cry that this stifles competition and will reduce investment in the networks - and therefore any regulation is A BAD THING.
In fact - regulation for a level playing field - regardless of whether it's wired or not - would be a great thing for innovation ... and many applications that are currently "wired" would move to unwired (like the PC that I'm working on now) - eventually changing virtually everything in the house and office that either uses, or could use, any form of network communication.
Cell phones would quickly become obsolete - a cell phone is really just a network access point that limits itself to talking anyway - but in their place would be a rich variety of devices that would change telecommunications more in the next ten years than anything since Mr Bell uttered the famous words, "Mr. Watson -- come here -- I want to see you."
I'm not going to say much, but your argument looked valid until I read further. At that point I realized you sounded just like your "hhhhhuuuuu duuhh huuud duuuu the internet is freeeeee" tards. Keep believing what you spew. We won't miss you.
/Yes this is your coat now go.
I'd for the most part agree.
However, if I'm being sold an unlimited 10Meg connection (i.e. I do what everyone else does first time and ignore the small print) then... well, I'd rather like to get my 10Meg connection. So yeah, they should throw more bandwidth at the users.
Also, the ISPs get a degree of legal protection from what flows over their networks simply because they don't know what it is.
QoS on the network should ensure that, as people above have said, the real-time apps get priority, then there's a normal-user-HTTPing level of priority, and the not-time-critical apps (torrents, YouTube, etc.) are free to suck up whatever bandwidth is left. That way your normal users notice a slight increase in speed, voice calls are delivered flawlessly and a movie still takes a while to download.
Good reasons for...what?
It's not often that an issue comes up that pits two personal passions of mine against each other. Here's one.
On the one side is a reverence for the freedom of those who own and operate an enterprise to run their own house as they choose, within the bounds of basic business ethics. In the case of connectivity providers, one example of this I'd hate to see lost is the right to use, at their discretion, such things as DNSBLs to fight spam, or to block outbound port 25 from users of things like DHCP pools to frustrate direct-to-MX bots. These things inherently chip something away from a neutral net, though in these particular cases, nobody worth listening to is likely to object.
On the other is exactly this matter, preserving to the greatest extent possible the old concept of the end-to-end model, or the spirit anyway, and, in general, keeping an eye to the ideal of making the net blind to the nature of traffic, where any two endpoints of a connection anywhere on the net behave exactly the same as any two others.
Paris, who has lots of conflicting passions.
Pipes is Pipes
All data is shit. Imagine for a moment that all toilets are used for their intended purpose. An end user is free to pan for gold in the septic tank because a commercial interest has told her that they used the toilet to put the gold there. The end user is not, however, allowed to blame the pipes or the misused toilet.
This explanation of the FCC rules may need a little cleaning up for Prime Time.
OK, his speech actually says "The fifth principle is ... that broadband providers cannot discriminate against particular Internet content or applications. This means they cannot block or degrade lawful traffic over their networks, or pick winners by favoring some content or applications over others ..."
But there's a problem here, and it isn't clear that the FCC understands it. It is highly desirable for everybody that providers *do* discriminate in the sense of giving (e.g.) VoIP traffic and HTTP traffic slightly different treatments: VoIP needs limited bandwidth but minimal delay and loss; HTTP needs best-effort bandwidth but can tolerate some delay and loss. Etc. We just have to hope that the actual rules don't accidentally make this illegal.
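One standard, non-discriminatory way applications themselves can request this kind of differential treatment is DSCP marking in the IP header: the app labels its own packets, and networks that honor the mark queue them accordingly. A small sketch (the helper name is hypothetical; whether any given router respects the mark is up to that network):

```python
import socket

# Expedited Forwarding (EF), the DSCP class conventionally used for VoIP.
# DSCP occupies the top six bits of the TOS byte, hence the shift.
EF_TOS = 46 << 2   # 0xB8

def make_voip_socket():
    """Hypothetical helper: a UDP socket whose packets are marked EF,
    so DSCP-aware routers give them low-delay treatment. The marking
    call itself is the standard setsockopt on Linux/BSD."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
    return s
```

The point is that prioritizing *classes of service* the sender declares is quite different from an ISP unilaterally degrading particular content, and rules drafted carelessly could catch both.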
I think there's a misunderstanding in your post.
It's not for the ISPs to decide how customers use their bandwidth. Whether it's BitTorrent, VoIP, VPN, IMAP, HTTP, etc. is none of their business. It is the network's business to make sure customers get a fair slice of the bandwidth, though.
A counter-example to your post: if a customer uses tons of bandwidth over RTP (either legitimately, or via another protocol being masqueraded as RTP), why should the person downloading files get less than his fair share?
Henry Wertz 1 has the right idea, network management should be protocol agnostic such that the infrequent user gets priority over the frequent user until utilization catches up, at which point they would share the available bandwidth. This would be quite fair to everyone.
If the high-bandwidth user finds that his VoIP/realtime apps have insufficient bandwidth, it ought to be his responsibility to configure his router/application/OS to limit the high utilization. Alternatively, the ISP could offer to do this on his behalf as an optional service. No single port/protocol should be forcibly punished or rewarded; that is unfair and presumptuous about the nature of the packets being transmitted.
Above all else, the ISPs should be forced to be transparent with regards to what they're actually selling.
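The protocol-agnostic fairness described above is essentially max-min fair sharing: satisfy the light users fully, then split what's left evenly among those who want more. A sketch (function name and numbers are illustrative):

```python
def max_min_fair(capacity, demands):
    """Split capacity across users regardless of protocol: anyone asking
    for less than an equal share gets all of it, and the freed-up
    capacity is redivided among the remaining heavy users."""
    alloc = {user: 0.0 for user in demands}
    unsatisfied = set(demands)
    remaining = float(capacity)
    while unsatisfied and remaining > 1e-9:
        share = remaining / len(unsatisfied)
        for user in list(unsatisfied):
            grant = min(share, demands[user] - alloc[user])
            alloc[user] += grant
            remaining -= grant
            if alloc[user] >= demands[user] - 1e-9:
                unsatisfied.discard(user)   # fully satisfied
    return alloc
```

For example, on a 100-unit link where one user wants 10 and two others want 200 each, the light user gets the full 10 and the heavy users split the rest at 45 apiece - no one is punished for *which* protocol they run, only for how much they pull.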
Re: red herrings
Lots of fair points eden, just expressed in a very inflammatory manner.
For all net neuter-ality fans:
1. There is _NOTHING_ wrong with a non-neutral net. Never was, never will be - as long as what has priority and what doesn't is determined by the consumer. There are ways to do it, and it has been widely practised by the same ISPs (AT&T, Comcast, etc.) on their business products. Time to grow up and allow this to Joe Average Consumer instead of the "mummy knows best" attitude.
2. BitTorrent and family will expand to use any link until it is congested; throwing more bandwidth at it does not help. So as long as protocols like it are around, the net will have to be QoS-ed, prioritised and managed. It may be per user, it may be per protocol. However, as long as the rules are clear there is nothing wrong with that.
Thanks for the principle - now the practice!
I think establishing the principle first and then working on the practice is much better than allowing each operator to invent 'fair use' policies to contain what they mis-sold in the first place.
As it stands, the EU in its Telecoms Package, urged on by UK civil servants and AT&T lobbyists, has traded neutrality for a belief in the ability of a competitive market to deliver affordable services. The customer protection will be found in the small print. This is no basis on which to plan key service delivery using IP-based connectivity.
Establishing the fifth principle will force more service transparency and thus more innovation can emerge because the rules are clearer.
The practice that needs to emerge is for operators to clearly explain the resources they have put in place for each package. Planning rules might state: for $20 a month you share a 34 Mbps backhaul and are nominally allocated 30 kbps at peak. How would you like to use that? We can load up to 95% of capacity before the system starts behaving oddly; which packets would you like us to drop?
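Taking the illustrative figures in that paragraph at face value, the contention works out roughly like this (a back-of-envelope sketch, not real operator numbers):

```python
# Illustrative numbers from the comment above - not real operator figures.
backhaul_bps = 34_000_000          # the shared 34 Mbps backhaul
per_user_bps = 30_000              # nominal 30 kbps peak allocation each

subscribers = backhaul_bps // per_user_bps
print(subscribers)                 # 1133 subscribers sharing one backhaul

# "load up to 95% capacity before the system starts behaving oddly"
safe_load_bps = round(0.95 * backhaul_bps)
print(safe_load_bps)               # 32300000 -> about 32.3 Mbps usable
```

Spelling the contention ratio out like this (over a thousand users on one backhaul) is exactly the kind of transparency the comment is asking operators to provide.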
Stating the principle was the easy bit, and much appreciated. The practice demands we understand the limits of our services and work out some basic rules for the commons, which will include slowing bit torrents during peak and encouraging the scheduling of downloads for off-peak times. We were sold the potential of the system, rather than what the service was engineered to deliver.
Three cheers for the principle; now let's work out the practices for establishing end-user control of the available bandwidth and quality.
Do I trust them?
Comcast is only one of many ISPs who have been less than open and honest about the service they provide. This story is about the USA rather than the UK, but why should we believe what an ISP says, when we all know what "unlimited" really means?
Yes, I appreciate the arguments for controlling the activities of the bandwidth-hogging protocols. But if we are to have limits, let's make them clear and apply them honestly.
Though, since we have Ofcom rather than the FCC, I don't plan on holding my breath in anticipation.
Disingenuous ISP marketing tactics
The problem is that ISPs are addicted to marketing services as "UNLIMITED" which are not, and never have been, "unlimited".
I've never had a problem with any ISP managing their bandwidth in a reasonable manner. If people don't like the service and bandwidth they are providing at the price they are charging, they are free to find another provider more to their liking.
But because the marketing arm of most ISPs just can't wean themselves from flogging "UNLIMITED" service packages, the bozos resort to sneaky measures to try to throttle users' traffic in various ways, instead of BEING HONEST ABOUT WHAT THEY'RE SELLING.
That's the main problem in a nutshell. If all customers knew exactly what they were paying for (X amount of data transfer per month, not to exceed X amount of data in any 24-hour period, for example), there would be no need to "secretly" throttle this-and-that.
The other issue is the ISPs that are trying to choke off services they think compete with something they're trying to sell (e.g. AT&T and Google Voice). I think it should be obvious to anyone with a few brain cells that that is unacceptable - especially if it's done surreptitiously.
Apparently the large cellular providers are trying to claim that they need some sort of special dispensation because their available bandwidth is lower. I call BS: all they have to do is abide by the same principles above, and they're no different than any other ISP - just like Genachowski said.
Once again - they just can't bring themselves to admit publicly that 3G cellular data customers can't download as much porn via their mobile as they can at home over cable.
In other words: DUH.