Re: The real solution - amended
Turn both keys, at the same time!?!?!?
15 posts • joined 23 Apr 2014
Pretty much the same experience as mine. A couple of years ago I shifted VoIP provider, picked up a set of local numbers to use for both personal and business calls, and at the same time thought I'd try to port my old London BT number.
No problem at all; it completed exactly on the day they said.
VoIP is nowhere near the mess of dodgy-quality connections (if the call even managed to go through) that it seemed to be 8-10 years ago.
"People who live out in the sticks and cant get FTTC or Cable and have piss poor speeds over ADSL and in over subscribed area bitch that they cant stream stuff in HD, shocker, sure blame it on the p2p bods..."
I know it's moving off-topic a bit, but it's not always that way. I moved from zone 3 London (where I had 5Mbit ADSL2+) to a large village or small town... I would say village, though. Here I have full 80/20 FTTC. I've been here 4 months or so and not seen a hint of contention.
"Quite simply I don't give a sit if you are throttled downloading illegle content.
Oh yeh I know I know you are downloading hundreds of gigs of linux distros and game demos etc. Do you think I was born yesterday ?"
The main problem is that you can't distinguish genuine use of P2P from non-genuine use; likewise with Usenet throttling. Even if only 5% is legitimate use, you're still throttling someone's idea of a good service.
Not only that, but some games use P2P for client updates. They've often been caught in the crossfire, sometimes even having their gaming ports throttled (destroying in-game latency).
P2P, used correctly, is quite an efficient use of the internet. So throttling it, and thus discouraging its use, goes against those principles in my opinion.
On a related note: anyone silly enough to use P2P to download illegal content deserves what they get. It's one of the easiest places to spot illegal downloaders, and once you share some of the data you downloaded with someone else, you also become a publisher of illegal content. I'm surprised there haven't been more P2P convictions on this basis alone. Yes, I know there are ways to mitigate that risk, but I wonder how many people take those measures.
As mentioned above, if there's any shaping or throttling going on, so long as it's advertised clearly, it's not really a problem; Joe Consumer can shop around. It's when there's an advert of "unlimited unfiltered internet access" and, buried somewhere in the small print you only see after you sign up, is the information about shaping or throttling. That's just not fair.
"If the bandwidth rates were properly enforced, there would be no issues, you pay for 2Mbs you get that, and if you have it switched on all of the time who cares, its not like water."
The thing is, DSL and other consumer offerings have always been contended. When ADSL was first released commercially, I remember the numbers were clear: 50:1 contention for home service and 20:1 for business service.
Nowadays the contention is never mentioned in such open terms, but it's there. Check the prices for fully uncontended data pipes, then prepare to put away your credit card and be happy with the contention.
So when you pay for a 2Mbit pipe, you pay for 2Mbit to the exchange, where you contend with all the other 2Mbit (or faster) pipes for a bigger pipe that doesn't have 2Mbit × users of capacity. Then, once the data reaches your ISP's POP, it contends again with all the other packets heading to your ISP, down whatever size pipe they pay for.
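The contention arithmetic above can be sketched as a back-of-the-envelope calculation. The 50:1 and 20:1 ratios are the classic ADSL figures mentioned earlier; everything else here is illustrative, not any real ISP's numbers:

```python
def worst_case_rate(line_rate_mbit: float, contention_ratio: int) -> float:
    """Worst-case per-user throughput if every user sharing the
    contended pipe saturates their line at the same moment."""
    return line_rate_mbit / contention_ratio

# A 2Mbit home line at the classic 50:1 home contention ratio:
print(worst_case_rate(2.0, 50))  # 0.04 Mbit/s if all 50 users max out at once

# The same line on a 20:1 business product:
print(worst_case_rate(2.0, 20))  # 0.1 Mbit/s worst case
```

In practice you rarely see the worst case, of course; the whole model only works because most users are idle most of the time.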
Contention is a fact of life for consumer internet and that's not what's being argued here. It's more about businesses paying extra to have their data prioritized whenever there is a point of contention. That is NOT what I am paying for.
"I see your point but doesn't it come down to how MUCH you pay? You mention that it would be okay to prioritise traffic if the service were free. But if you choose the lowest priced ISP and they happen to make up the difference by selling priorities to (e.g.) Netflix then isn't that really just the same thing but on a sliding scale?"
I understand your point. I don't use the lowest-priced ISP by a long shot (Zen Internet is my ISP). I think that so long as an ISP makes it clear you're buying a subsidized package, and that it includes shaping or prioritized data, then it's fine.
Now, there's a danger that ALL ISPs would then slap this notice on all their packages to get the best of both worlds: prices don't go down, they take consumer money and big-business money, and laugh all the way to the bank. So there would need to be some regulation setting a reasonable maximum price for un-metered, unsubsidised and un-shaped internet access.
I don't think it's a fair comparison. ISPs throttling a user's overall connection rate because of heavy use over an extended period, and hosting companies shutting down hosts when there's a chance of legal action against them, are logical, maybe even totally fair, things to do.
The comparison of the above against prioritizing certain kinds of traffic in return for monies paid to the ISP is another thing entirely. I pay a fixed monthly payment for a pipe to the internet. For that money I expect the ISP to keep their upstream pipes big enough to sustain at least moderate use by the majority of their users. I'm not paying them to prioritize packets because a third party in turn pays them to do so. IF I had a free service from an ISP I could understand it, they must pay for themselves somehow. But, I don't. *I* am the customer and, if any traffic is to be prioritized, I should be the one choosing it.
So my personal position is that the internet should be free of such practices. If companies want a prioritized network, then TCP/IP is a nice open protocol and anyone is free to use it. They can build their own private network, deliver it to users on their own wiring (or take advantage of LLU, or even use BT's network to deliver it onto their own) and prioritize things any way they want. Until then, I don't see why paying users should even have to put up with the notion of this happening to the internet at large.
I looked at A&A, but the cap can be a problem. I watch quite a lot of Netflix, use Sky On Demand quite a bit, and do a lot of work from home over VPN. So some months I could easily go over any cap, while other months I'd use very little; it depends on too many factors. So I had to find something else.
In the end, I moved my existing Zen connection from my old address, which is around £30 per month for unlimited 80/20 FTTC. They also have excellent technical support (I had an issue with impulse noise on standard ADSL before I moved to fibre, and they not only understood the problem I was describing but were able to enable the interleaving and FEC needed to combat it). So, all in all, they're a good choice in my opinion.
I used to be on BE, and they were also excellent value for money. Until Sky got their hands on them of course.
Zen having a usage cap AND a higher price on ADSL, but not on FTTC, is an odd situation for sure though.
I've found it really depends on the router, and surrounding use of the channels.
When I was recently in Romania, the connection available was a 100Mbit FTTB connection: 100Mbit in both directions (for a ridiculously low price, with 1Gbit available for not much more). The wifi on the Linksys router (dual band 2.4/5GHz, max 300Mbit/s) seemed to hold speeds back to around 35Mbit/s in both directions. I spent some time working with the settings to try and improve the speeds, with no luck. When I connected by wire, while I never actually got 100Mbit, it did go up to around 55 down, 45 up. So wifi was certainly limiting matters.
Here in the UK I have FTTC 80/20. I'm using a Fritzbox 3390 (also dual band), which many reviews said had bad wifi. This wifi manages the speeds fine (http://www.speedtest.net/result/3673784945.png taken just now).
So, as usual YMMV.
Yes, probably most people don't know the limitations of Wifi though.
Well, it could be worse. In November 2011 Openreach came out and installed fibre cables to all the local telephone poles (including mine), left hanging around the top, ready to be connected.
The broadband checker happily stated that soon I'd have FTTP installed. My ISP even contacted me to see if I would be interested in trialing FTTP.
Then, everything went quiet. All around me other streets were being hooked up to FTTC. But, the promise remained on the wholesale checker site and it was always promised "soon" with delays cited.
Then, around a year ago it all disappeared. Now the checker states only WBC connection of 3.5-5.5Mbit is possible with no suggestion there will ever be an upgrade (nor that it was ever the case that there would be an upgrade).
This is on the edge of the Urban/Suburban London boundary.
Meanwhile I go to visit Romania often, where in Bucharest you can get 100/100 for €7 or so per month and 1Gbit/1Gbit for only slightly more!
"You literately must have never heard of OpenBSD to say that.
Read "Audit Process" @ http://www.openbsd.org/security.html"
Not wanting to rain on your parade here. But since they've been following this process since 1996 (before OpenSSL 0.9.1), and since they say they perform a file-by-file analysis of every critical component (and I would call OpenSSL critical), surely OpenSSL would have been part of this process. Yet both 5.3 and 5.4 seem to have shipped with the bug in.
Also this is an audit process. My comment is about the way the code will evolve, in the same way presumably OpenSSL has since 1998. Over the years, styles change, active developers change, concepts change and a serious amount of obsolete code (which isn't easy to identify) is present. I'm not convinced the auditing will prevent this from happening.
So, I think my point stands.
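For anyone wondering what "the bug" actually was: Heartbleed was a missing bounds check, where the heartbeat handler echoed back an attacker-supplied number of bytes without comparing it to the payload actually received. A minimal sketch of that bug class follows; this is purely illustrative Python (OpenSSL is C, and every name here is invented):

```python
# Illustrative sketch of the Heartbleed bug class: trusting a
# peer-supplied length. This is NOT OpenSSL's actual code.

# Pretend process memory: a 12-byte payload followed by secrets.
MEMORY = bytearray(b"payload-here" + b"SECRET-KEY-MATERIAL-NEARBY")

def heartbeat_vulnerable(claimed_len: int) -> bytes:
    # BUG: echoes back claimed_len bytes without checking it against
    # the length of the data the peer actually sent (12 bytes here).
    return bytes(MEMORY[:claimed_len])

def heartbeat_fixed(claimed_len: int, actual_payload: bytes) -> bytes:
    # FIX: reject requests whose claimed length exceeds what was received.
    if claimed_len > len(actual_payload):
        raise ValueError("claimed length exceeds received payload")
    return actual_payload[:claimed_len]

# Asking for 30 bytes leaks 18 bytes of adjacent "memory":
print(heartbeat_vulnerable(30))
```

The point of the sketch is how small the missing check is: two years of audits by anyone who cared to look would have had a fair chance of catching it.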
"At the end of the day if these guys are demonstrably incompetent when it comes to writing a key component of security infrastructure, I think folks have a moral duty to report it - however uncomfortable it may be, because trusting untrustworthy code causes avoidable pain."
But that's pretty much the point I'm trying to make. The code was "out there" for two years, yet never properly audited; it was just happily used by small and large-scale users alike. So at least some of the burden lies on all of us, for the blind trust placed in a tool that could have been properly audited by anyone, at any point.
I think the more important point is that, up until now, OpenSSL was regarded not just as good or fine code, but simply as the de facto standard tool for the purposes it covers. So knocking the guys that wrote it now is kicking someone when they're down.
What will LibreSSL look like in 15 years, once an untold number of other developers have each had their hand at extending the functionality? What vulnerabilities might lie beneath the layers of functionality by then? Unless regular auditing takes place, whether of LibreSSL or an OpenSSL 2.0, we'll be revisiting this situation sooner or later.
But, for over 15 years it's been used by everyone. Small software writers and big business alike. It allowed many large companies to use cryptography without employing their own specialists. Everyone was happy.
But, at any time these companies with the resources could have reviewed the code. Anyone else could have reviewed the code. It seems it was never done, or at least not done regularly enough.
Also, I think the other problem is one that any medium-to-large suite of software reaches: a lot of old code that will probably never be used remains. People are too scared to remove it, or even revisit it, in case they break something. All the while, old development styles persist in the older code, but again no one wants to rewrite it, lest they introduce new issues with the rewrite.
What's left is a mashed-up mix of coding styles all linked together into quite a mess. So something like this was inevitable.
A complete rewrite would be a good thing. But I don't really think it should be a fork, or for a specific OS. I personally (with no real knowledge in the area, casting judgement!) think it should be OpenSSL 2.0. Some of the big businesses that have saved so much money over the years could provide useful resources for this endeavor. Let's face it: most routers run this, our phones run it, and most of the big websites were running behind it. If those companies could spare some resources, along with the OpenSSL development team and anyone else with the time and expertise, they could rewrite this behemoth from scratch, omitting any obsolete processes and aligning to a single design style. Maybe then we could get past this and move on.
Back to my point: calling the developers incompetent is easy to do now, but EVERY person that used OpenSSL and never reviewed it can receive the same label. Sadly, that includes me. Only once, mind you, and then just to sign a bank payment file. But all the same, we all (developers) use these libraries without ever really knowing (or sometimes even caring) how they work internally, until something like this happens.
Ignorance is not bliss!
Biting the hand that feeds IT © 1998–2019