On March 14, Bell Canada began throttling peer-to-peer traffic on pipes it rents to third-party ISPs. And it neglected to tell the third-party ISPs. The mega-Canadian telco has been throttling P2P traffic on its own network since October, but this is a different matter. One of those third-party ISPs is TekSavvy, a small family- …
This is news?
OK, maybe it's news that Bell Sympatico is, apparently, lying to and shafting their wholesale customers, but they've been treating retail customers in this manner for some time. TekSavvy was, definitely, the way to go. Hopefully Rocky will be able to make alternative arrangements. Hopefully, too, more consumer advocates will give Bell Sympatico the publicity they deserve.
Yes, this looks bad, but...
It appears that they are only "throttling" the traffic, not stopping it dead in its tracks and slapping it in the face like Comcast!
Then again I'm not connected to them, so I could be wrong!
But it is still BAD! I suspect that the BitTorrent people will come up with a workaround, and the Bell Canada people will have a harder time.
pulls up chair
This can get very interesting. If TekSavvy promise their customers something, and Bell Canada stops them, I smell a lawsuit. Nice, or should I say sad, to know that they are screwing everyone, including resellers. If they can't provide for their own, what business does Bell Canada have reselling?
P2P is bad.. mmkay?
Has anyone noticed that P2P is slowly killing the net? It is just like leaving your car running all night in the garage. No one would do it, because the resources needed to run it cost money. People who leave P2P going 24x7 are using resources that cost someone money. Pretty soon bandwidth will be a scarce resource, just like fuel, but with bandwidth we can't go to war with anyone to keep the price down.
I use AEI, a DSL wholesaler that gets its lines from Bell. This really does feel like a slap in the face.
This isn't the only case where Bell has ignored the best interests of their customers, either. They haven't spent enough on their infrastructure, and rather than fix things they've dragged their feet on speed upgrades. We're still stuck at 5Mbit/s* connections in metropolitan areas. I yearn for ADSL2.
*Up to, rather. Mine peaks at 3Mbit.
Bell should provide what they've been contracted to provide.
Like most here, I dislike that Bell is telling users how they can or cannot use the service that they pay to use (without restriction according to the adverts).
What I really dislike, and find downright sketchy, is that Bell bills ISPs on a data rate, or peak bandwidth rate. That means that for TekSavvy, paying for the data that their users put through the network, regardless of time of day, is part and parcel of the contract. They've honored that (so far as I can tell). That contract doesn't include any limits imposed by Bell other than some basic network maintenance terms. TekSavvy is quite right in suggesting that making this permanent moves it beyond a simple maintenance issue. If Bell is going to act this way, they probably should compensate TekSavvy, and their customers, for not providing the service they agreed to, and were paid for.
Telus is effectively a natural monopoly. They're regulated in Canada by the CRTC, but even that has limits. Telus really hasn't been able or willing to keep up with the infrastructure needs, and that's a real problem. But it is now a problem that they're passing down to consumers, because they're a monopoly of sorts. This is why we have the CRTC and gov't regulation, and this is where they are supposed to step in.
P2P is just more effective use of bandwidth
It's not at all like leaving your car running in the garage - more like strangers giving each other rides to work instead of taking the bus, or maybe like running a generator in your garage to help power a commuter train. Eh, why do all shoddy Internet-related arguments have to involve shoddy car analogies?
The point is, 100 people sharing a large file amongst each other doesn't involve significantly more data transfer than 100 people downloading a large file from a single server. As for the specific impact on the bandwidth used by your average home user, it increases only by a factor of two (since every bit downloaded corresponds to a bit uploaded by another user). I'd be more inclined to make a wild statement and say that distributed bandwidth usage is preferable to a single node being hammered by everyone.
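To make that factor-of-two point concrete, here's a back-of-envelope sketch (the file size and user count are invented, purely illustrative):

```python
# Back-of-envelope comparison: 100 users fetching a 700 MB file.
FILE_MB = 700
USERS = 100

# Client-server: each user downloads the file once from one server.
server_upload = FILE_MB * USERS    # 70,000 MB pushed by a single node
client_download = FILE_MB * USERS  # 70,000 MB arrives at clients

# P2P swarm: every byte downloaded is uploaded by some other peer,
# so total transfer roughly doubles per user, but the upload burden
# is spread across the swarm instead of hammering one server.
p2p_download = FILE_MB * USERS
p2p_upload = FILE_MB * USERS

print(f"Server must push {server_upload:,} MB from a single node")
print(f"Swarm moves {p2p_download + p2p_upload:,} MB total, "
      f"~{(p2p_download + p2p_upload) // USERS:,} MB per peer")
```

Same data delivered either way; the swarm just doubles each home user's traffic while removing the single hot spot.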
Perhaps you could say what's happened is that instead of, say, people paying someone to host a server from which they can download, we instead pay for bigger pipes to p2p instead, and because this is more convenient, there is more large content available to transfer, which in turn puts strain on the net. But, if we are to return to the car metaphor, I would liken p2p-hating to complaining that we're running out of fuel because there are too many places people want to go. Therefore, we should clearly decrease the quality of the world, so that fewer people will want to leave their homes, so that the roadways will be clearer.
P2P isn't "killing" the net. It IS the net, and for many people it's the PURPOSE of the net: to obtain data. Paris, something to do with hammering a single node.
Bandwidth is not finite
You can keep adding to it forever. It costs money, which presumably you charge for. P2P isn't going away, and what about other high-bandwidth uses? Playing stupid and cheap is just annoying. The internet has to keep pace with utilization; why is that so hard to accomplish? Open your tiny little minds and scale this fucker up.
So thus kicks in the old law: for every action there is a reaction!
So instead of supplying the actual bandwidth purchased, they have oversold their own capacity to smaller ISPs; the contract lawyers will have a field day!
Thus they are falling way short of the minimum capital needed to operate the communications networks, running them on cheap, limited bandwidth to maximise dividends and profits; thus the customer is always last in the business equation!
But then again, this kind of stupidity will only accelerate the next generation of end-to-end encrypted P2P data packets, indistinguishable from every other data stream floating around the intertubes, with encrypted start/stop bits and longer headers to bypass this madness and silliness. Almost like a darknet within a darknet; and is that not one of those pet new P2P projects from those evil pirates sailing in Swedish territorial waters, just out of US cannon-shot range?
Some people have no brains unlike PH
@james: surely you jest, for P2P is the backbone of Linux distribution for all netizens. Obviously you have not seen the data flow rates of distributions like Knoppix, and many other distributions as well!
I can only hope you're kidding. Killing the net? Seriously? While bandwidth is finite in the sense that there is a limited amount of it at any given time, it is still not going to 'run out' and become a "scarce resource".
I think the problem you refer to is caused by the fact that while the capabilities of home PCs have risen dramatically, particularly in the form of storage, the wires that connect you to the rest of the world are still the same tiny, noisy, unreliable things that Bell put up in the eighties.
The problem here in Canada is that Bell (much like your British Telecom) still owns and operates (barely) the actual pipes that give us the internet here (Nexxia). Even though Bell is not my ISP - a fact for which I am very grateful - my traffic still ends up going through their network.
And now we've come to the issue of this article, Bell has entered into a contract with a business to supply them with a connection of certain specs, and now Bell has decided to change the service. This is the problem here; since all roads lead to Rome, Bell still has ultimate control over the internet in Canada, no matter who your ISP is, you're still at the mercy of Bell and their "optimisations".
Welcome to Oz
Where we have monthly data transfer limits (which on some ISPs measure downloads only, and on others measure data in either direction). It means you have to manage your own traffic or you'll end up with a completely throttled connection for the rest of the month. Seems a little bit more sustainable than OMGUNLIMITEDDATA...
Exactly. Too many people want a free lunch. Ignoring the legalities/politics/morality of P2P for one moment, the reality is P2P ain't sustainable for $15 a month broadband.
P2P is not killing the net.
I live in Canada, and all the ISPs have in the fine print that they will charge extra if you abuse the network (exceed an upper traffic limit). Well, if P2P traffic is killing the net, then charge us for it. People say that we are not paying for the resources; how come I get a bill each month? If what Bell is charging is not enough to keep the network up, they should charge more.
No, the problem isn't a lack of resources; it's the fact that people expect to use it all, constantly, without having to pay anywhere near what the sort of traffic they use costs.
I'm sure if you started billing users at the 95th percentile on peak-time traffic, they would soon stop P2P'ing when they find out how much it really costs.
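For anyone who hasn't met it, 95th-percentile (burstable) billing works roughly like this; the sample data below is made up, and real billing uses 5-minute samples over a whole month:

```python
import math

def ninety_fifth_percentile(samples_mbps):
    """Burstable-billing calculation: sort the usage samples,
    throw away the top 5%, and bill on the highest remaining sample."""
    ordered = sorted(samples_mbps)
    # Index of the 95th-percentile sample (the top 5% are forgiven).
    idx = math.ceil(0.95 * len(ordered)) - 1
    return ordered[idx]

# 20 five-minute samples in Mbps: mostly idle, with two P2P-driven bursts.
samples = [2, 2, 3, 2, 2, 4, 3, 2, 2, 3, 2, 2, 3, 2, 50, 48, 2, 3, 2, 2]
print(ninety_fifth_percentile(samples))
```

With only one sample in 20 forgiven, a sustained burst still sets the bill: the swarm that ran flat out for two sample periods gets charged at 48 Mbps, not at the 2-3 Mbps it idles at the rest of the time.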
Throttling/shaping/caps are needed so the providers can provide the service level customers demand (and then stamp their feet if they don't get 100% uptime and 100% speed 24/7/365), so for us providers it's a lose-lose situation.
What's waiting a few more minutes for a bloody download? If you *need* to get it fast constantly (and while I know there is legal P2P, the majority of P2P traffic is illegal), then pay the proper cost for the connectivity.
Oh and there are pricing wars which drive down the cost of connectivity, just ask Cogent.
If providers are up front about what they do - and it pains me to say this, but Virgin Media have the right idea - then what's the issue? Just schedule your (il)legal P2P downloads to run in off-peak hours....
"We give you XGB download between XPM and XPM, if you reach this we will throttle your traffic for X hours"
Perfectly reasonable. (I don't however agree with stealth shaping/throttling)
Contention at the wholesale level ?
So are Bell Canada saying that they have over-sold their wholesale pipes, so that if an ISP actually tries to call on all the bandwidth it has rented they will throttle it down?
You sir are a moron.
Bandwidth is directly proportional to the investment in infrastructure that the provider has made. By your analogy you compare the resource of fossil fuels to bandwidth. All you need to do to get more bandwidth is install more data cables, routers etc.
To get more fossil fuels, we have to have the world suffer a mass extinction and have the bodies of all that life decompose for millions of years at great pressure.
I am not trying to say that bandwidth is completely unlimited, because the resources to create routers, and cable etc are limited. But the scales are completely different.
Add to this that we are able to pump more data down physically similar-sized equipment following regular advances in networking tech, and you will find that your argument becomes more and more foolish and shortsighted.
The real argument here is that the ISP (Bell) should install enough infrastructure to provide the bandwidth that they sell, not choke off the bandwidth of those who have bought it to accommodate more suckers willing to pay for the misrepresented packages.
To those complaining about 24/7/365 internet usage
That's what we're paying for!
If it's not sustainable at $15/month, they shouldn't bloody sell it at $15/month!
And I pay somewhere in the region of £50- roughly $100/month for my connection (for the home business) and get the service all these P2Pers want. I understand that quality costs more.
I think that there's a great market out there for a no-monthly-cap, no-FUP ISP (or option from existing ISPs) especially for P2Pers that limits you to non-peak hours. $15/month for your internet connection plus another $15/month for non-peak non-capped, non-shaped internet. Increased cashflow for the ISP, partly normalised network utilisation, lower peak loads (so it could hold off required investment in infrastructure etc), and customers flocking to you to get a taste of that downloady goodness.
Just a thought
If P2P is killing the net, then wouldn't it be a good idea for the ISPs to stop marketing their connections under the guise of unlimited free music and video downloads? Also, one has to wonder what the cumulative effect of those billions of Flash adverts plaguing the net is.
Found out the hard way on my GSM bill: the main one used on Yahoo Finance weighs in at over 700KB :(
It comes down to what the ISP promises to customers
Wholesale or retail, if an ISP promises to supply an individual or company a certain amount of bandwidth without constraint, then that is precisely what they should supply. On the other hand, if they promise to supply a line with a specified maximum bandwidth on a flat-rate basis, but with a warning about fair usage, not exceeding a monthly limit, and so on, then they can throttle or otherwise constrain the traffic.
I guess TekSavvy are miffed because they thought they were buying a service which would not be throttled. As Bell Canada seemed to change their story with TekSavvy, maybe they also believed this to be the case. I guess a bunch of lawyers will decide...
I don't think it's greedy downloaders to blame - ISPs advertise how simple it all is and how fast the connections are. The tech is sold on that - not continuous connection.
However, the current tech is not the same as before. The kit used to try to maintain speed rather than connection, but these days the promise is for the kit to attempt to maintain the connection all the time. The boxes constantly monitor line conditions and are more likely to drop speeds as conditions vary, but don't always raise the speeds if line conditions improve. You can see line conditions change from hour to hour, and they frequently vary with the weather; humidity seems to play a large part, as it can affect whether a joint goes partially short-circuit or high-resistance. A lot of it is down to the wires.
What the ISPs don't usually tell people is to drop the router/modem connection for a few minutes and power up again. The kit at the exchange is reset to the highest possible speed at that point in time.
Even here, 1.5km from the local exchange with recent main cables and joint boxes, it's not unknown for the speeds to drop when the weather gets very wet, but the speeds don't always ramp up again. Power off the router, make a cuppa, power back on: speeds restored.
But, as others have said, there is also the problem of insufficient investment: as the customer base grows, the kit isn't always upgraded to match the number of customers, so they throttle the speeds and blame it on P2P.
You've still got a connection, so the ISP is still fulfilling their small print.
If the customers are encouraged by content providers to download all the telly programmes they missed, download films for a few days' rental, and share holiday movies and the next music sensation, it's hardly the fault of the customers. We're only trying to use a service that's heavily promoted by content providers and ISPs.
Doesn't seem much different from trying to drive anywhere: cars sold on a promise of speed and convenience, roads advertised as good routes to places, and shopping centres and theme parks built for the punters to visit.
But somehow it's always the fault of the motorist for attempting to use these services in the way they are told they can as surprise, surprise - it all fills up with traffic.
@Brian Miller et al
Do I smell a deliberate attempt by the P2P advocates to obfuscate the facts here?
What I read from James is that P2P is being used by people to consume an unreasonable amount of bandwidth (see: tragedy of the commons).
True, ISPs should provide what they say they are providing, and they are "misspeaking" in the current situation, but be careful what you wish for. The result of these discussions, and particularly of complaints about unfair ISP provisions, is not that ISPs will roll over and give us all a gigabit pipe for £1.34 per month with unlimited bandwidth; rather, it is that they will charge more for less, with stringently enforced (or charged) download limits.
One thing that is almost always overlooked by those vehemently opposed to throttling/limiting P2P is that people do not just download what they want and leave it at that; they download what they *can*. This takes bandwidth from the rest of us.
To take the bad car analogy, this is more like people driving their cars 24*7 to visit places they don't really want to visit but see it as both their right to do so and that they may be missing out if they didn't.
Oh, and to the P2P Linux downloaders: if Linux downloads comprised even a barely noticeable proportion of P2P traffic, and of those downloads even a barely noticeable proportion were installed and used, then rather than the current 1-3% market share for Linux we would be seeing more like 200-250% market share.
I'm using another third-party ISP that piggybacks on Bell's DSL lines, called Acanac, but I don't seem to have any throttling going on. Currently pulling Match of the Day down at 503kb/s.
I feel the need to point out that you are being rude over the matter of a poor analogy thereby making yourself look a bit of a <insert term here>
Yes, the fossil fuel bit was a poor analogy; a better analogy would be to the electrical supply system. For the most part (in the Western world) there is enough capacity to cope with normal demand. However, if everybody switched on all their electrical equipment, the system would be overloaded. The infrastructure could not cope.
Customers in the US regularly suffer brownouts and power outages during heatwaves and cold snaps.
Data networks are similar, in that they are spec'd for average usage (with suitable additional capacity for peak periods included). This is calculated on typical users doing typical things: email, web browsing, some file transfers, online gaming... and, even these days, a reasonable amount of P2P traffic.
So where are the problems?
1: ISPs overselling their bandwidth (yeah, it happens too often)
2: Users doing the equivalent of running every electrical appliance at once (and wanting it for free)
I notice that very few people are complaining that their upload rates are too slow; perhaps that says something?
So how many Linux distributions is a single home user going to download in the course of a week? (No, distributions don't come out that often.)
Let's be honest, it's "I want my free films, music and apps for 15 quid a month AND I WANT THEM NOW!" I'm sorry, that's a selfish attitude and an unreasonable expectation into the bargain.
If you want a guaranteed 8 meg throughput on all protocols, all the time, then pay for it. Many people commenting here don't have a clue how much it costs to provide connectivity. Try phoning your local telco and asking how much a dedicated 2 meg circuit is, then have a think about the compromises that may have to be made to provide your 8 meg, 15-quid-a-month home connection.
I have to work with everything from modems to multi-gig DWDM installations, and can assure you that you don't get a new Bugatti for the price of a used Hyundai.
What is the actual problem with setting your client to limit the upload/download rates to something reasonable and letting the file trickle in over a few extra hours?
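Most BitTorrent clients already expose exactly this setting. Under the hood it's usually a token-bucket limiter; here's a toy sketch of the idea (the class and its numbers are mine, not any particular client's):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allow bursts up to `capacity` bytes,
    refilled at an average of `rate` bytes per second."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start with a full burst budget
        self.last = time.monotonic()

    def try_send(self, nbytes):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True                 # within budget: send now
        return False                    # over budget: caller should wait

# Cap uploads at ~50 KB/s, with headroom for a 100 KB burst.
bucket = TokenBucket(rate=50_000, capacity=100_000)
print(bucket.try_send(80_000))   # within the initial burst
print(bucket.try_send(80_000))   # budget exhausted for now
```

The swarm still gets your upload over the evening; it just stops saturating the line while you're trying to use it.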
@ P2P is just more effective use of bandwidth
I second that.
I just switched to TekSavvy, to get away from the throttles and caps of the local cable provider, and now this happens.
Paris because only she can save me now
Bell must be getting a check from the MAFIAA
Guess the checks have cleared from the MPAA and the RIAA to do the deep packet inspection (spying) that AT&T said it's not doing.
It's been said already - but you're an idiot! :)
I pay £24 a month for an unlimited connection of up to 24Mb (depending on distance and line quality) - not cheap £8 or free (TalkTalk - <spits />) connections.
Even though I've moved house and city a few times - they always provide that connection and never throttle me. The second they do, I will pay another company the going rate for the standard of connection I require for my internet habits.
The fact is, if they started throttling security updates for Windows to 10k (what PlusNet pulled on me once with Usenet traffic), there would be an outcry; it would take days for you to get your monthly update, and all the while you'd be open to attack. Here lies the problem: people equate P2P with piracy and 24/7 usage, and see it as a lower priority than other data (like the Flash-based porn young Johnny is downloading next door). Fact is, that's not always the case - and it's not their right to make that assumption in the first place.
I download about 250GB a month via usenet (HD TV mainly - since I don't have a box to record HD off my tv but still pay my license fee).
My connection is usually inactive during the day - the only thing I use BitTorrent for is Linux distros, because it's cheaper for them than me hammering their servers for the latest versions.
Not everybody is out to leech 24/7 - the ones that do should pay what it costs to sustain that habit - but I would say that if ISPs advertise unlimited, and try to undercut competitors' prices to get more customers like some greedy whore, they should be forced to give that service, no questions asked.
Apart from all that, ISPs are turning this into a minefield for non-techies, and it's getting to be a ball-ache for the rest of us.
Missed the point?
Is this about P2P, or competitive practices?
The "big guys" do a little math. We have X likely customers on an exchange. On average, they'll sign up for Y speed. That exchange can support N bandwidth. That gives us a couple of numbers to fiddle: the highest speed we can advertise, 'optimized' by capping monthly transfers so that folks can't/won't max their bandwidth all month. Add calculus, marketing crap, peak times, etc, etc.
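That X/Y/N exercise looks roughly like this when you plug numbers in (all of the figures below are invented for illustration, not Bell's actual provisioning):

```python
# Hypothetical exchange-sizing exercise -- the numbers are made up.
subscribers = 500        # lines on one exchange
sold_speed_mbps = 5      # headline "up to" speed sold to each line
backhaul_mbps = 155      # uplink actually provisioned (e.g. one STM-1)

# Contention ratio: how many times over the uplink has been sold.
contention = subscribers * sold_speed_mbps / backhaul_mbps
print(f"Contention ratio ~{contention:.0f}:1")

# The plan holds if average concurrent usage stays low.
# At 100 kbps average per user (web, email), the uplink copes easily:
avg_usage_mbps = 0.1
print(f"Typical load: {subscribers * avg_usage_mbps:.0f} Mbps "
      f"of {backhaul_mbps}")

# If 20% of users run P2P flat out at 1 Mbps each, it's near saturation:
p2p_load = 0.2 * subscribers * 1.0
print(f"P2P-heavy load: {subscribers * avg_usage_mbps + p2p_load:.0f} Mbps")
```

The whole business model lives in the gap between the contention ratio and actual average usage; always-on P2P eats that gap, which is exactly why the caps and "optimizations" appear.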
What's important is that it's a last-mile bottleneck; it's not a problem on the fiber. Along comes the government, which tells the big guys that they have to lease their toys to the competition. Suddenly the playing field doesn't look as friendly as it used to. Add P2P to this, where folks are using a large chunk of the bandwidth they've been sold. To keep peak traffic down on the last mile, throttle it. It's natural to them, as they've been doing it for years on their Usenet servers (hence folks paying for third-party ones). Also, start charging $2/GB overage charges.
Smelling blood, the smaller ISPs go after a market of users who don't just send email, surf, and whatnot. That's a competitive edge, albeit a slim one since, let's face it, most folks don't know or care about the finer points of their connection.
Eventually, the smaller ISPs create problems for the last-mile maths. The big guys see an opportunity to remove a competitive edge under the guise of optimizing saturated networks. Done deal.
World of Warcraft updates, experimental releases of DRM-free TV by the CBC, or copyrighted material? Legal file transfers, illegal? I'm not sure that it matters. They aren't enforcing copyrights or anything so noble, they're using it as cover when limiting congestion of outdated last-mile tech, and now to remove a competitive disadvantage. That's it, that's all.
Substitute VPN traffic for P2P. How would that change the reaction to this story?
When a paragraph is riddled with spelling/grammar errors, one [sic] at the end will do - and even that is unnecessary since if three blatant errors occur in a quote and zero in the rest of the article, it's pretty obvious the errors are as quoted.
It's mean-spirited for journalists to jump on every single niggling mistake - that's our job as commenters.
Paris because she knows all about the proper use of [sic], particularly after a paparazzi-pleasing bender in the VIP.
(No comment on the real issue because I agree on being careful what you wish for. I'm not one of those people who download over 25 hours of videos per 24 hour day so theoretically I'd be happy with paying for what I actually used, but if it happened it would probably be at a ridiculously extortionate rate, ripping us off for the benefit of those who only check their emails and watch cats falling down stairs on YouTube. And even though the status quo is a stupid, messy compromise I can live with having to wait a day for most BT downloads to complete. So I'm going to stay in my shell and not talk about it.)
"Exactly. Too many people want a free lunch. Ignoring the legalities/politics/morality or P2P for one moment, the reality is P2P ain't sustainable for $15 a month broadband."
Then maybe the ISPs should stop selling £9.99-a-month "unlimited" packages. They tell us we have no limits, and then complain when we act like it.
I have an ISP that does charge £34.99 for a 50GB allowance, and I have no complaints, 'cos I know that I am not going to be throttled.
Well said, Anonymous Coward
As a network engineer within the telecoms industry, can I just try to shed some light on some current issues that underpin customer dissatisfaction.
Most posts here show a lack of understanding of both 1) the current business climate within which the telecoms industry has to survive and turn a profit and 2) traffic engineering principles in a multi-protocol, multi-service next-generation network.
Presumably, most people here consider themselves at least tech-savvy, if not actually net-heads. With that in mind, the first point is simple. Why do you believe the crap told to you by "ISPs"? Let me clarify that: why do you believe statements written by marketing/sales types who couldn't SPELL IP? The idea that you can get 8Mbps guaranteed throughput "on the 'net" - which actually means "across continents" - for £9.99 a month? Come on! No engineer who works for a telco would promise you that. But the service definitions are written by marketing types, NOT engineers. And the statements they make are not based upon technical realities, but on "what are the competitors saying, we need to offer more", and "what is the competition charging, we need to charge less". How naive do you have to be to think that such services can be technically realised?
Our core network only spans the UK, and I can tell you that that amount of dedicated high-class b/w would COST us approx. £800 per month to provide. That's cost to us, not price to you. And we own our own fibre. Would you be willing to pay £900 a month for an 8 Mbps connection? No? Thought not! Bandwidth is a scarce, expensive resource, and the bean-counters FORCE us engineers to design the network to ensure a good Return On Investment. That means upgrading ONLY when we HAVE to. And engineers do NOT make that call; senior managers do, and they do it based on financial return - which often means DELIBERATELY postponing upgrades that are absolutely essential, in other words accepting bad service. In today's tight financial climate, if an upgrade doesn't bring in good margin, it isn't going to happen. Ergo, we have to design around the problem, because despite what you think, most engineers in this industry want to do a good job, and take pride in technically solving the well-nigh impossible problems given to us by the marketing dead-heads.
So we deploy techniques, like statistical multiplexing and traffic engineering in the core, to put off the expensive day when we have to light up more glass, or worse, dig in more cable. These techniques RELY on users sending intermittent traffic streams. When users don't (and P2P apps turn users from intermittent senders into continuous senders), congestion occurs. When congestion occurs, everyone's traffic is hit. In our case, that means hitting customers like banks and power utilities that pay a LOT more dosh than you to get their traffic delivered.
So, to stop that from happening, we groom the traffic! We cannot do anything else. We cannot deploy enough bandwidth to cope, even if the bean-counters opened the purse strings. There are so many users (millions of 'em!) out there that the aggregated traffic streams, if they were ALL pumping out 8-20 Mbps, would blow our (or any other) backbone, let alone an Internet peering connection, and just deploying b/w would NOT cure it. We would need a major redesign of the core network. That means an investment of tens of millions of pounds and years of possibly service-interrupting work. It is NOT going to happen! End of story.
Actually, on a technical point, no matter how much b/w we deploy, if we are going to GUARANTEE that no congestion is going to occur in our core network (as our corporate customers insist), we *have* to deploy traffic engineering and grooming techniques, as failures in the core, or sudden events, can still potentially create congestion points (unless we have typical core n/w utilisation of less than 10%).
To illustrate: to cope with BBC coverage of an Olympic event, we increased our b/w on a link running at 8% utilization (i.e. the pipe was damn near empty) by a factor of 5 (yup, a 500% increase), and we STILL got traffic discards at peak. No commercial concern could conceivably do this on an ongoing, regular basis and stay in business.
And, for the truly technical among us, the problem is compounded by the fact that Internet traffic arrival rates follow a power-law distribution rather than a Poisson distribution. For the non-technical, that means the aggregated traffic doesn't smooth out, but stays bursty.
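That burstiness claim is easy to illustrate with a quick toy simulation (this is a sketch, not a real traffic model; the source counts and distributions are arbitrary): sum many light-tailed sources and the peak-to-mean ratio of the aggregate shrinks; sum heavy-tailed (Pareto) sources and it stays large.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def peak_to_mean(samples):
    """Peak-to-mean ratio: 1.0 means perfectly smooth traffic."""
    return max(samples) / (sum(samples) / len(samples))

def aggregate(draw, sources, slots):
    """Sum `sources` independent per-slot traffic draws over `slots` slots."""
    return [sum(draw() for _ in range(sources)) for _ in range(slots)]

# Light-tailed sources: exponential per-slot demand, mean 1 unit.
light = aggregate(lambda: random.expovariate(1.0), 100, 200)

# Heavy-tailed sources: Pareto demand (alpha=1.5), infinite variance --
# the kind of distribution long file transfers tend to produce.
heavy = aggregate(lambda: random.paretovariate(1.5), 100, 200)

print(f"Exponential sources, peak/mean: {peak_to_mean(light):.2f}")
print(f"Pareto sources,      peak/mean: {peak_to_mean(heavy):.2f}")
```

The light-tailed aggregate hovers near its mean, so modest headroom absorbs the peaks; the heavy-tailed aggregate keeps throwing spikes far above the mean no matter how many sources you multiplex, which is why statistical multiplexing alone can't save the day.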
So do the sensible thing, and plan your downloads to use off-peak hours. That way you have a chance of decent service. Just railing about it and blasting a 24/7 data stream to/from your supplier, is plain stupid, not tech-savvy.
Sorry for the rant!
Why can't the telcos figure out a transparent pricing scheme that makes them money? Maybe they should buy a computer, fire up Excel, and try to get a grip on their cost structure to figure out how to price their services.
On the other hand, if the customers are happy with their Unlimited Internet Experience (throttled, capped and emasculated) then it may be easier to make it up on the go... gotta say, the Reg loads OK, so what.
Because packets should run free!
What John said.
Lusers who think that they can get 8Mbit/sec out of whatever site on the Internet simply because their link to the nearest router will support it should (for preference) educate themselves, or shut up. I have absolutely NOTHING against traffic shaping, network optimisation, QoS and the like. Pipes are not infinitely wide, so there will be queues. If throttling P2P packets means that my latency sensitive WoW packets and my interactive web sessions go faster, then it's a GOOD thing.
I remember when net performance would basically come to a complete stop when the schools let out and the script kiddies came out of class and started downloading whatever tickled their fancy. Traffic shaping means my webmail session has a fighting chance out there. Prime time would be around 9am, when the little darlings were either still in bed or in class.
For crying out loud, what are you using P2P for? This is a download of a movie or a Linux distro, Windows fixes or something else that is Large. It's going to take a while anyway, so you may as well stop staring at that torrent screen, click it away and do something interesting. It's not as if your life grinds to a halt until you can watch that last episode of Crotchwoot, innit? Oh, it is? You sad fuck.
Maybe you should ask Teksavvy...
how they've managed to run a discount ISP that caters to high-traffic users for the past ten years. They've been consistently at least $10 below the competition, even while paying what I'm sure is an inflated fee to use Bell's network.
"We would need a major redesign on the core network. That means an investment of 10's of millions of pounds and years of, possibly service interrupting, work. It is NOT going to happen! End of story."
Let's be realistic: bandwidth needs are not going to drop just because you wish them to.
You are saying we should stop progress because it will hit your bottom line.
Someone had to lay the first copper and fiber, and I bet they weren't thinking about how to maximize the shareholders' pennies.
ISPs are not losing money. They've been riding the gravy train since the mid-90s, they don't want it to dry up, and because they are so short-sighted they don't want to invest in more infrastructure.
It has been done before in Japan and Korea, and the big telcos have not gone extinct there.
What you folks need is real competition.
Where I live, AT&T and Comcast are in a race to see who can lay the most fiber. Thanks to real competition, both companies are upgrading their networks.
Funny thing is, when AT&T came back together, the government told them they had to lay fiber to all new homes. AT&T said, well, since we're laying fiber, let's offer cable TV. In response, Comcast said, let's do phone service. That's real competition.
Before AT&T decided to do cable, Comcast only upgraded stuff when it was too old to be supported or when it burned up.
Maybe you're missing the big picture...
Today, it's P2P protocols. Tomorrow it's your VPN session. Then it's your TV episode downloads from iTunes that competes with their satellite television or VDSL offerings. Then it's your VOIP traffic from another provider. Then it's your SSH session.
It's anti-competitive, anti-consumer behavior. Bell specifically advertises a service that doesn't slow down during busy times... And now they're CAUSING the network slowdown.
If they're not fought on this front, we're all fucked in the end.
"Substitute VPN traffic for P2P. How would that change the reaction to this story?"
In this area, Comcast completely blocks VPN. Our company has roughly 100 employees. Around 30 of them use VPN. I have advised them to dump Comcast and get DSL instead. They have done so.
That's $1500/month (from our employees alone) that Comcast is not getting any more because they chose to block a totally legal and legitimate service.
My reaction to this story is "Well, Bellnexxia has a long and sordid history of ignoring spam reports, so the fact that they ignore their obligations to supply what they have sold is no surprise, either."
"No engineer that works for a telco would promise you that. But the service definitions are written by Marketing types, NOT engineers. And the statements they make are not based upon technical realities, but on "what are the competitors saying, we need to offer more" and "what is the competition charging, we need to charge less". How naive do you have to be to think that such services can be technically realised?"
That isn't really our problem is it? It's a problem for the lawyers retained by your ISP.
It's very simple: When you (the ISP) promise a service and you can't deliver it, then you are guilty of fraud. *Why* you can't deliver it (Marketing lied, market conditions have changed, the ISP hasn't invested in adequate infrastructure maintenance, the ISP oversold bandwidth, etc.) doesn't matter.
I have a new radical idea that could change the face of capitalism as we know it
It's simple: if you can't provide it, don't freaking try to sell it!!!! Geez, it's not rocket science!
(maybe I should write a book on how to succeed in business, I clearly already know more than the folks in business now)
"Pretty soon bandwidth will be a scarce resource, just like fuel"
Actually, IIRC there's a crapload of "dark fiber" lying around the world from the dot-com heyday. Plenty of backbone capacity.
If fiber-to-the-premises had actually been rolled out by the Western-world telcos, this wouldn't be a problem, since it seems to be a quasi-last-mile congestion issue: the local loop can't handle it. Silly telcos somehow figured the olden dial-up infrastructure would cope with stupidly high bandwidth requirements.
At least our local "BT equivalent" (Telmex) had enough common sense to cap ADSL links at 4Mbps. They've even responded when traffic surges screw up some areas by upgrading the links serving those areas.
Cable companies, however, took the Comcast approach since mid-July, and guess what: tons of users have since flocked to ADSL service. Oops!
Network engineers are actually on YOUR side...
RE: Boris H. I never implied that b/w needs will drop, nor that I wished it would. And I am certainly NOT saying that "we should stop progress", nor do I wish it. B/w requirements have increased consistently year-on-year since time immemorial (or in my case the late 70s when I first started networking). My job as a network engineer is to make sure that those needs are catered for. To that end, traffic engineering techniques have been used for decades and will continue to be developed and used for many more, because it is a truism of networks that demand will always exceed capacity.
Capacity *is* constrained. The issue of congestion is NOT just a last-mile question. On the contrary, the problem of large-scale aggregation of customer traffic in the core is far more serious. While it is true that in certain areas *some* dark fibre exists, this does not mean that b/w is therefore available. A network needs much more than optical fibre to pass IP packets. The major capex costs are the optical interface cards required to light up the fibre, and the switches/routers that hold those cards. The major opex cost is the people required to design, run and maintain the kit. Both these considerable costs have to be met somehow, and passed on to the customer, because that's how capitalism works. I was not attempting to excuse the sometimes dubious business behaviour of ISPs, or any other organisations.
I *was* trying to give you an appreciation of the current state of play in the industry. As a long-time socialist I can also assure you that shareholder profit is very far from my concerns! ;-) However, I must point out that ISPs most certainly ARE losing money. The gravy train ground to a very firm stop in 2000, and since then cost models for next-gen services over converged IP architectures have proven MOST problematical. Have a look at some of the research papers on the subject at the IEEE, if you doubt me. Most Service Providers are looking to move up the food chain to higher-margin corporate services; frankly, residential services, once seen as desirable, are primarily revenue-generating, not high-margin. Our bean-counters would LOVE us to drop residential support entirely. As someone who believes in the benefit of communications to society as a whole, I find this worrying. As for the situation in the Far East, the business model is significantly different there, due to the large number of very high-density apartment blocks. The cost of running one fibre into such a large building (containing dozens of customers) is NOT commensurate with the cost of running multiple fibres into a widely-dispersed housing scheme in Western Europe, or the even more widely-dispersed residential estates in the US.
Finally, re: MorelyDotes et al., on the assertion that failing to deliver services is fraud: once again, I'm surprised at the naivety. One area where Marketing types are very good is the creation of very firm, legally binding documents that exonerate them totally from any such allegations. If you're lucky, you MAY get some service credits. If it really is a problem for lawyers, then why don't you sue? Let me guess: because you can't afford it. And if you did, I can assure you they would tie you up in knots.
In conclusion, knowledge is power. If you wish to "win" over ISPs you need to understand how they work and how they think, and, as this is a technical forum, how the technologies work. My aim was to impart some of that, for YOUR benefit, not to mount a defence of ISPs. Traffic shaping does not cause a slow network. Large traffic volumes cause a slow network. Traffic shaping allows a network to continue to function despite the congestion; in its absence, the network would STOP. Wild caricature and emotive posturing won't get you better service, but understanding the industry *might*.
RE: Chad H.
I am delighted to read about your literary pretensions. May I recommend you ask your dad for some scrap paper and crayons.
While it is very interesting to hear more about the realities of trying to run the physical Internet, I think Morely Dotes brought the focus back to the actual issue.
I think the problem is that many seem to cling to the utopian ideal of unmetered Internet access, and the service providers are guilty of pushing this ideal. The evidence clearly shows that more and more metering (caps, throttling, etc.) is being put in place.
What is needed now is 'clear labelling' so that a customer can make an educated choice of Internet services. That means no more "fair use policy", no more "up to" and no stealth caps/throttling. Customers care about 3 things; latency, sustained peak link speed and bandwidth limits/penalties. Set your pricing via those metrics and everyone is happy, though many will be initially shocked to learn the true cost of bandwidth.
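The 'clear labelling' idea above can be sketched as a back-of-the-envelope tariff built on exactly those three metrics. Every rate and figure here is invented purely for illustration; no real ISP's pricing is implied:

```python
# Hypothetical 'clear labelling' tariff: price derived only from the
# metrics customers actually care about, with no "up to" weasel words.

def monthly_price(peak_mbps, cap_gb, low_latency=False):
    """Price = line rental + per-Mbps of sustained peak link speed
    + per-GB of monthly transfer allowance, plus an optional premium
    for a low-latency (gamer/VoIP) tier. All rates are made up."""
    price = 10.00                 # base line rental
    price += peak_mbps * 1.50     # sustained peak link speed
    price += cap_gb * 0.05        # bandwidth limit (transfer cap)
    if low_latency:
        price += 5.00             # prioritised, low-latency queueing
    return round(price, 2)

# A casual browser and a heavy 24/7 downloader pay very different,
# but completely transparent, prices:
casual = monthly_price(peak_mbps=5, cap_gb=20)
heavy = monthly_price(peak_mbps=20, cap_gb=500, low_latency=True)
print(casual)  # 18.5
print(heavy)   # 70.0
```

The point of the sketch is the shock the poster predicts: once bandwidth is priced per unit, the heavy user's bill reflects the true cost instead of being subsidised by everyone else.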
It never ceases to amaze me how pricing never seems to get the blame...
they aren't losing money because the bandwidth is being used the way it was advertised and sold...
they're losing money because they aren't charging appropriately for the service.
I have no problem paying a higher rate for a 24/7/365 always on connection, with the only limit being the speed plan I signed up for.
I don't even have a problem with reasonable, responsive, traffic shaping during peak times, even if it takes away from the plan I'm paying for, so long as it is temporary, and responsive to changing traffic conditions.
Even in the early '80s, before Windows, traffic shaping was common. Different types of connections and data had different priorities, something Windows completely ignored when it came on the scene.
Bulk data would get a lower priority. All perfectly reasonable.
But... traffic shaping != throttling... and throttling is the issue now... (well, that and their over-sold networks based on faulty assumptions about people using what they're selling)
And ISPs aren't losing money because of the traffic, or how much people use the connections they've paid for. They're losing money because of those faulty assumptions; rather than seeing the world as it actually is and adapting to market conditions, they're sticking their heads in the sand and trying to cut off anyone who doesn't fit those assumptions.
...and those ISPs need to either evolve, or die out as they deserve.