Since when is trust a one way affair?
I suggest that AT&T provide me with their credentials in return for mine. They can trust me.
A draft put forward to the Internet Engineering Task Force has drawn the ire of prominent privacy activist Lauren Weinstein as “one of the most alarming Internet proposals” he's ever seen. The document that's upset Weinstein is this one, out of the HTTPBis Working Group and posted as an Internet Draft on 14 February 2014. …
Long gone is the saying "if it's free, you are the product".
Now, no matter how much money you pay, you are still a product.
All the advancement in technology in the last decade has been in the reverse direction.
- Caching was supposed to be a short-term fix in the era of dial-up modems.
- Secure transmission was the dream of society.
- Even end users' browsers were not trusted back then to allow local resource sharing.
See where we stand now :(
"Caching was supposed to be a short term fix in the era of dial up modems."
It was not.
Network efficiency has sod all to do with endpoint speed. There is absolutely no need to reload the same images for a website again and again, just because you can. Ultimately, doing so reduces overall efficiency, however fast and fat the pipes are.
This is a different issue than the one raised in the article, though.
".. same images.."
What I remember from my brief web-development experience is that caching is heavily keyed on an image's filename, and that caused big pains for many people trying to guarantee that dynamic images actually got updated. Maybe I was just not very good at it, but a technology is only as good as its drivers.
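One common workaround for the filename-keyed caching described above is "cache busting": embed a hash of the content in the filename, so any change to the image produces a new URL that no cache has seen before. A minimal sketch (the function name and hash length are my own choices, not anything from a particular framework):

```python
import hashlib

def versioned_name(filename, content):
    """Append a short content hash to a filename so that any change
    to the content yields a new URL, forcing caches to re-fetch."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, dot, ext = filename.rpartition(".")
    # keep the extension last so servers still infer the right MIME type
    return f"{stem}.{digest}.{ext}" if dot else f"{filename}.{digest}"

print(versioned_name("logo.png", b"image bytes, version 1"))
```

Because the URL itself changes whenever the bytes do, the old cached copy is simply never requested again, sidestepping the stale-image problem entirely.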
"What I remember from my short-term web development experience is that caching is heavily based on the name of the images and it has caused big pains for many figuring out a guaranteed update of dynamic images. "
That did happen, but shouldn't have. It was due to badly configured (either intentionally or accidentally) or buggered servers/proxies/clients.
Cacheable data objects are sent with headers that include the date the object was last modified. Caches are meant to record this information and re-request the object conditionally, basically saying "if this has changed since date xxx, then send me the new version".
The server replies either with the new version or with a message saying "you have the current version".
This could still incur a penalty (imagine forums etc. with loads of piddly little images). That problem could be fixed by the server sending the 'Expires:' header, which tells the proxy/client that the object will not change until at least the mentioned date.
Of course, too many incorrect headers were sent, or ISPs got greedy and started ignoring/overriding the information sent, forcing the situation you mention.
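The conditional-GET exchange described above can be sketched as the server-side decision: compare the client's If-Modified-Since validator against the object's last-modified date, and answer 304 Not Modified or a full 200. This is a simplified illustration of the mechanism, not any particular server's implementation:

```python
from email.utils import parsedate_to_datetime
from datetime import datetime, timezone

def respond(last_modified, if_modified_since):
    """Return (status, note) for a conditional GET: 304 if the cached
    copy is still current, otherwise 200 with a full body."""
    if if_modified_since:
        try:
            since = parsedate_to_datetime(if_modified_since)
        except (TypeError, ValueError):
            since = None  # malformed validator: fall through to a full response
        if since is not None and last_modified <= since:
            return 304, "Not Modified"
    return 200, "OK (full body follows)"

lm = datetime(2014, 2, 14, tzinfo=timezone.utc)
print(respond(lm, "Fri, 14 Feb 2014 00:00:00 GMT"))  # cached copy still valid
print(respond(lm, None))                             # no validator sent
```

The Expires header mentioned above goes one step further: it lets the cache skip even this cheap round trip until the stated date.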
The real data-transport savings are to be made with broadcast and on-demand entertainment media streaming. Most publishers of these use content delivery/distribution networks, who place their nodes with the ISPs. So, apart from providing data-scraping opportunities on sensitive data, how does this proposal help anyone?
This form of transparent MiTM on SSL already happens on a large scale, and you don't even give your consent. Providers like CloudFlare et al. act as a reverse proxy for some huge websites and actively act as a legitimate MiTM for SSL connections for the purposes of caching. I recently wrote a blog post covering this that explains it in a little more detail: http://scotthel.me/cfup
When you visit a site you have no idea where the SSL is terminated, and your traffic could be travelling halfway across the web without encryption. How do you know whether a site is hosted on a service like CloudFlare and whether there isn't already a MiTM of your SSL?
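One rough clue to where TLS terminates is the certificate the server actually presents: if it's issued to or by a CDN rather than the site's own organisation, a proxy is sitting in front of the origin. Below is a sketch that reads the issuer organisation out of the dict format Python's `ssl.SSLSocket.getpeercert()` returns; the "ExampleCDN" names are made up for illustration:

```python
def issuer_org(peercert):
    """Extract the issuer organisation from a getpeercert()-style dict.
    A CDN's name here suggests TLS terminates at the proxy, not the origin."""
    for rdn in peercert.get("issuer", ()):
        for key, value in rdn:
            if key == "organizationName":
                return value
    return "unknown"

# Shape of getpeercert() output, with a hypothetical CDN as issuer:
sample = {"issuer": ((("countryName", "US"),),
                     (("organizationName", "ExampleCDN Inc."),),
                     (("commonName", "ExampleCDN CA"),))}
print(issuer_org(sample))  # → ExampleCDN Inc.
```

Of course this only tells you who terminates your leg of the connection; it says nothing about whether the hop from the proxy to the origin is encrypted, which is exactly the commenter's point.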
Up to a point, Lord Copper.
All you're saying is that people can, and sometimes do, put a server cert in a gateway (reverse proxy). That often makes perfect sense, to take some of the load off an origin that might be concerned with something higher-level, like your shopping basket or portfolio. Indeed, it's the same picture as an HTTPS server using an unencrypted connection to an SQL backend: there the server is itself the MiTM. From there, it's arguably a small step to outsource the proxy function to a third party who specialises in implementing it securely and efficiently.
Now as to whether trusting your expert third-party and all is any worse than trusting your own non-specialist staff, that's a question above my pay grade. In principle it could be better: your own sysadmin perhaps doesn't have the time and expertise to stay on top of every technical and security issue your specialist contractor deals with.
From an end-user PoV, the question is simply whether you trust the organisation behind the cert. And that's two questions: their intentions, and their competence.
You're assuming that the backend connection for the reverse proxy isn't SSL. While I'll admit that the places with reverse proxies I've worked at have both the reverse proxy and the backend in the same site, I do know that everything is covered by SSL. Hell, a certain bank that shall not be named has SSL from Internet to Reverse Proxy, RP to yet-another-RP, to Application Server, to MQ, to Mainframe.
Usage of reverse proxies doesn't automatically mean "cleartext on the backend".
ISP Web page: I'd like your permission to terminate all your https connections on our proxy in order to provide you with a more streamlined service.
Mini-User: Mum! What's a Poxy?
Mum: Something you get when you're young, but you've already had it so you can't get it again.
Mini-User: OK. <click>
In order to do this, you would need to add a Trusted Root Certification Authority to your browser. That's probably not how your ISP would express it (if only because few would understand it), but a 'good' browser will at least prompt you along the lines of: Do you want to do this? It means that if the issuer can see your traffic, then it can also see all your secure traffic.
If you don't have control over your system (e.g. if it's a 'company' machine) this could all happen without any visible sign.
Well, Mozilla Firefox and Microsoft IE trust (directly or indirectly) about 650 organisations that function as Certificate Authorities, and none of these will prompt for permission. All of these organisations can sign a certificate for any domain and be fully trusted by all browsers. This is how insecure the CA system is by design: you have to trust that all 650 CAs are not evil, even though some are run by governments, some are private companies, and a few are surely front companies for three-letter-acronym agencies.
That little padlock really means "yeah, you are probably safe... what is it you are doing again?"
Chrome has 'only' 50 entries for Root Certificates (which are the same ones as IE's), all of which are effectively self-signed (which is, of course, what 'Root' means). This thread is about ISPs installing a root certificate so they can spoof certificates for Google, Microsoft etc. and act as a MiTM that can read all your encrypted traffic. AFAIK any additional Root Certificates will require authorisation from a sysadmin (who, as I pointed out, might not be the user).
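If you're curious which roots your own machine actually trusts, Python's `ssl` module can enumerate the platform's default trust store. A small sketch (the exact count and names vary by OS, and some platforms load roots lazily, so the list may be shorter than what the browser shows):

```python
import ssl

# create_default_context() loads the platform's default CA trust store
ctx = ssl.create_default_context()
roots = ctx.get_ca_certs()
print(f"{len(roots)} trusted CA certificates loaded")

for cert in roots[:5]:
    # each entry is a dict; the subject names the CA itself
    subject = dict(pair[0] for pair in cert["subject"])
    print(subject.get("organizationName", subject.get("commonName", "?")))
```

Adding an ISP's root to this store is exactly the step that would let it mint a "valid" certificate for any domain, which is why that authorisation prompt matters.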
Is there a draft in the room?
Watch his right hand ... waving flags, banging pots, thumping the drum. Are you distracted?
(No need to heed his left hand under the table ... )
Feel better now?
(A $1.7B NSA facility in the Utah desert to store a yottabyte of data. Oh, my ... !)
ISPs shouldn't use proxies for their service. Not. At. All. It just lets them engage in shady content-favouritism practices or snooping. It's one of the things ISPs should never do unless they have a really good reason (say, being a satellite ISP, and even then the proxy should live on your premises). The other horrible thing they shouldn't do is CGNAT.
Both practices, however, are done by the Cable ISPs here in Mexico, the main reason why I don't ever use them.