not sure what to say
I only have one word. Insecure.
Mozilla has proposed a new method for signing into websites that avoids both site-specific passwords and existing cross-site sign-in services from corporate behemoths such as Google and Facebook. Known as BrowserID, Mozilla's prototype is built atop a new "Verified Email Protocol", which uses public-key cryptography to prove …
"I only have one word. Insecure."
It's not. Or, rather, it doesn't have to be. The concept is perfectly sound once you wrap your head around it: it's pretty simple, really. You enter an email address. The browser generates an RSA key pair and a confirmation link is sent to that address. So only if you actually are the owner of that email address can you get the key certified and sign communications with it. It's a pretty smart way to use an email address as an ID token for signing on to multiple sites in a decentralized way.
The problem I saw when I tested it is pretty simple, though: it doesn't set a passphrase on the key. Once you confirm your email address by clicking on the link it emails to you, anyone who sits down in front of your system can log in as 'you' from then on - the RSA key is stored in the browser, and since it doesn't have a passphrase, anyone who can use your browser can be you. This is fine if you don't let other people use your computer, I guess. But I'd much rather the key had a passphrase I had to enter every time I logged in anywhere. It'd still save the inconvenience of going through a sign-up process and waiting for an activation email from every site separately.
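The flow being described (local key generation, gated by an emailed confirmation) can be sketched as a toy; every class and name below is invented for illustration and is not the real BrowserID code:

```python
import secrets

class ToyAuthority:
    """Stands in for browserid.org: it certifies a public key only
    after the email's owner clicks the emailed token."""
    def __init__(self):
        self.pending = {}    # token -> (email, public_key)
        self.certified = {}  # email -> public_key

    def register(self, email, public_key):
        token = secrets.token_hex(8)
        self.pending[token] = (email, public_key)
        return token  # in reality this is emailed, not handed back

    def confirm(self, token):
        email, public_key = self.pending.pop(token)
        self.certified[email] = public_key

authority = ToyAuthority()
# The browser generates the key pair locally; only the public half
# is ever sent to the authority.
public_key, private_key = "pub-bytes", "priv-bytes"
token = authority.register("alice@example.com", public_key)
# Clicking the link in the verification email:
authority.confirm(token)
assert authority.certified["alice@example.com"] == public_key
```

Note the absence of a passphrase anywhere in the sketch, which is exactly the complaint above: once `confirm` has run, nothing further gates use of the key.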
1. Key generation should be local and not at the "trusted" keykeeper corporation.
2. Why the necessity to store a history of users' registered sites remotely? At minimum this is a privacy concern, especially if such information is shared with third parties, including governments!
3. Lack of a private-key passphrase. An individual is not their computer, even if it's linked to an OS account.
4. No mention is made of how to revoke a key if either a computer (private key) or a mail account (public key) is stolen or lost.
5. If Mozilla is compromised, especially remotely, the cache of all private keys on a machine could also become available. Think of the Bitcoin attack where keys were left unencrypted on users' machines. How is this addressed?
"Key generation should be local and not at the "trusted" keykeeper corporation."
As I read https://wiki.mozilla.org/Labs/Identity/VerifiedEmailProtocol , the key generation is local: when I wrote that it's done by the browser I really meant it's done by *the browser*, not that the browser goes out and asks some remote server to generate a key. I haven't looked at the source, but that's my reading.
2: yeah, I'm not entirely sure why this needs to be _logged_. Though I may be missing a wrinkle somewhere.
4: You do set a password when you set up a BrowserID, and there's a 'my account' thingy at browserid.org that I guess would be used for this kind of operation.
5: Good point; another good reason it should be possible (and possibly required by default) to put a passphrase on the key.
I just thought of another, 6: every time I want to use BrowserID from another system, I guess, I'll have to wait for a confirmation email, given how the system works. That'll be kind of annoying when using public systems or just switching browsers.
I'm still kinda undecided on whether this can be a better system than OpenID.
No bias there then.
An article that has nothing to do with Google gets an anti-Google title?
"Outsourcing login and identity management to large providers like Facebook, Twitter, or Google is an option, but ..."
The issue at hand has everything to do with Google and any other "trust us" third-party verification setup, which is why they were mentioned along with others including OpenID.
Okay, I suppose it could have said De-faced or Twit-less just as well. Say, isn't a too-ready anti-bias a bias too?
..apart from the fact that the vast majority of Mozilla's funding comes from Google (for now)
"Behind the scenes, the website, your browser, and a separate verification service use crypto keys to verify your identity"
"Outsourcing login and identity management to large providers like Facebook, Twitter, or Google is an option, but these products also come with lock-in, reliability issues, and data privacy concerns"
BrowserID looks prone to failure as much as any other external service.
So who owns the verification service?
Er, that'll be Corporate Behemoth 2.0.
Now accepting VC funding......
(You'll have to imagine the badgers)
The whole idea of public key cryptography is that you create your own public/private key pair yourself and distribute the public key. The private key should never leave your computer. If the key pair is created on someone else's computer, then someone else can use 'your' private key.
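The principle is easy to demonstrate with textbook RSA (tiny primes, hopelessly insecure, illustration only): everything below runs on your own machine, and only the public half would ever be published.

```python
# Textbook RSA with the classic tiny-prime example.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent; (e, n) is the public key
d = pow(e, -1, phi)        # private exponent, computed and kept local

msg = 65
cipher = pow(msg, e, n)    # anyone holding the public key can do this
plain = pow(cipher, d, n)  # only the private-key holder can undo it
assert plain == msg
```

The private exponent `d` never has to leave the machine; that's the whole point the comment is making about who should do the generating.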
Herp, de derp, de do.
What JS _is_ very nice for, however, is adding "fluff" - for instance, opening an image as a popup. Without JS you're pretty limited to wrapping a small image inside an anchor that points to the large image and (again, as long as you're not using XHTML Strict) using target="_blank" - but the odds are that, on a modern browser, that'll just open in a new tab.
Again, as I read the detailed descriptions of the system, the key pair is generated on your computer.
This is what MS Passport should have been.
The next step is for public email SMTP relays to somehow use it to verify senders and cut spam. I have no idea how that can be done. It's not simple at all to avoid spammers gaming it.
What you need is a "distributed" BrowserID service where "trusted" ISPs provide the service for particular email domains. No one central system.
So presumably there's no way to ensure this works even over SSL. Simply intercept the mail, or get into someone's mailbox using stolen passwords from any number of the recent hacks (Sony PSN etc.), and you can then log in to any account they have.
Of course there are. I'm just unconvinced the answer is to introduce yet another one. In any case, until bazillions of websites implement it, its value is always going to be somewhere below zero. And typically webmasters won't implement it UNTIL bazillions of other sites have implemented it.
Incidentally, Register people, how come you don't let me sign in with Twitter? I kinda like that for commenting, and you guys get tweeted more. Any site that requires me to sign in with its own system is an instant turn-off these days, really.
"The service then creates a cryptographic key pair, keeping the public key and storing the private key with your browser."
That's all good and well, but apparently we should just take their word for it that the private key is not kept?
Better would be the other way around: you generate a key pair locally and upload the public key to the service. The site could even provide a thing that does it for you, as long as the code is in the open and transparent.
That's a good idea - though instead of a central service how about an easy-to-implement, open source API that could just be munged on the web-server - it would still need the browser to support it but it could sort of work like...
The user generates their public/private key pair on their machine (perhaps through a GPG implementation embedded in the browser) - which the browser can then access.
When a user needs to sign into a website (that has implemented the API), they just enter their email address and submit the form - this tells the browser to upload the user's public key and download the website's public key.
The website adds the user's public key to its keyring and generates a passphrase. It encrypts the passphrase using the public key for the email address and sends it to the browser which then decrypts it, re-encrypts it using the website's public key and sends it back to the website.
If the website can successfully decrypt the passphrase (and it matches of course) then everything has gone through successfully and the user is then logged in.
As the user has the public key for that site, the next time they visit it can (probably) be assumed that that site has THEIR public key and the key exchange needn't take place again - just bounce the encrypted passphrase from server to client and back again.
This way you're not relying on any central repository, and websites can generate cryptographically strong, single-use passphrases that are just bounced around, encrypted and decrypted behind the scenes... no more easily crackable passwords, or users having them written down on sticky notes plastered to their monitors.
The only issue to resolve then is one of user authentication on the machine - but that's no worse than as it stands now with web browsers remembering your logins, bookmarks and browsing history and is easily solved by NOT SHARING YOUR LOGIN! :)
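The exchange proposed above can be sketched as a toy, using textbook RSA with tiny primes (utterly insecure; every function and name here is invented for illustration, not a real API):

```python
import secrets

def toy_keypair(p, q, e=17):
    # Returns (public, private) as ((e, n), (d, n)).
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)

def encrypt(pub, m): e, n = pub; return pow(m, e, n)
def decrypt(priv, c): d, n = priv; return pow(c, d, n)

user_pub, user_priv = toy_keypair(61, 53)
site_pub, site_priv = toy_keypair(89, 97)

# 1. Site generates a one-time challenge and encrypts it to the user.
challenge = secrets.randbelow(1000) + 2
to_user = encrypt(user_pub, challenge)

# 2. Browser decrypts it and re-encrypts it to the site.
to_site = encrypt(site_pub, decrypt(user_priv, to_user))

# 3. Site decrypts; a match proves the user holds the private key.
assert decrypt(site_priv, to_site) == challenge
```

Only someone holding the user's private key can complete step 2, which is what makes the bounce a proof of identity, per the scheme described in the comment.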
Doesn't this rather depend on email addresses not being re-issued to other users? As far as I can tell, that may happen if your email account expires and a new user later chooses the same name.
Of course, the existing password system has this problem also, as the new user can request a password reminder.
well, of course you don't abandon your email address often, yet sometimes it does happen. Or, as sure as taxes, the email's owner will die someday.
What happens then with all these websites, subscriptions etc he/she used to use?
I think I will go and ponder on my mortality ...
If the private key is stored in the browser, then doesn't this limit sign-in to a single machine? Or are Mozilla going to issue additional copies of the private key to multiple browsers?
Either way, it seems to be a choice between impractical and insecure.
From https://wiki.mozilla.org/Labs/Identity/VerifiedEmailProtocol#Synchronization_of_keys :
"This protocol does not require the user's private key to be synchronized across user agents or devices; it is expected that authorities will present more than one public key when queried. It does not forbid synchronization, however, and the system should work fine in that case. User agents should be prepared to deal with expired keys at any time. "
So the keys can be synchronized, but the system is also supposed to work with multiple keys verifying the same identity.
My concern with regard to this scheme is that since a key pair is linked to a specific email address, miscreants and/or tyrants can sift through traffic data, access logs, and other information to mount correlation attacks and gather evidence establishing patterns of behaviour: Every time my BrowserID is used to login to a service, my public key is retrieved from a third-party server, and matched against the private key sent by my browser in response. By correlating the two events in time, an interested party can easily determine when and where my computer is used to access the service being monitored. It should be noted that this interested party can mount correlation attacks against existing "enter your password" systems as well.
Also, since my private key is "stored with the browser," the scheme only provides as much security as supplied by the physical environment surrounding my computer, and, if used, any folder/file, keychain database, or full-disk encryption that has been implemented within my computer itself.
Thus, on the whole, it looks like the BrowserID scheme doesn't really do all that much to enhance security and provide anonymity: The system is still just as vulnerable to various time-related attacks, and still depends on a well-protected physical environment to be secure as an authentication method. What it **does** do, however, is make it less cumbersome to **manage** my authentication info, which means that as Joe/Jane User, I may be more likely to use it in the first place.
The real problem with this solution, and all others like it before, is that you need a central trusted key authority. No one believes that any central key authority can be trusted. There is always the PGP-esque solution with the circle-of-trust model, but that hasn't really taken off now, has it?
Ding, fail, try again.
You don't, actually. The idea is that the email providers act as the trusted key authorities; there's only a 'central' authority for now (browserid.org) because the system is very new and it needs to be kickstarted. The idea of using the email provider as the key authority is quite smart because you're *already* trusting your email provider with your identity, so you don't lose anything with this scheme. The Grand Plan is that browserid takes off and everyone who provides email addresses also acts as a 'primary authority' - https://wiki.mozilla.org/Labs/Identity/VerifiedEmailProtocol#Primary_Authorities - for browserid, and there'd then be no need for 'secondary authorities', which is what browserid.org is currently acting as.
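The primary-authority idea described above amounts to a two-link certificate chain, which a toy example can illustrate (textbook RSA with tiny primes, completely insecure; all names are invented, not BrowserID's actual format):

```python
import hashlib

def keypair(p, q, e=17):
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)  # (public, private)

def sign(priv, data):
    d, n = priv
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)

def verify(pub, data, sig):
    e, n = pub
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(sig, e, n) == h

# The email provider (primary authority) certifies the user's key...
authority_pub, authority_priv = keypair(61, 53)
user_pub, user_priv = keypair(89, 97)
cert = b"alice@example.com:" + str(user_pub).encode()
cert_sig = sign(authority_priv, cert)

# ...and a site verifies the chain: the authority vouches for the
# key, and the key vouches for the login assertion.
assertion = b"login to site.example at t=12345"
assertion_sig = sign(user_priv, assertion)
ok = verify(authority_pub, cert, cert_sig) and \
     verify(user_pub, assertion, assertion_sig)
```

The site only needs to trust the email provider's public key, which is the sense in which no extra central authority is required once providers participate.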
"The idea of using the email provider as the key authority is quite smart because you're *already* trusting your email provider with your identity..."
No, I'm not. I'm trusting my e-mail provider to provide me with an address which I can use to send and receive communications. It's a convenient handle, but I don't hang my identity on it.
Given the prevalence of client hacks, phishing sites, and even simple SMTP spoofing, only a fool would trust their e-mail provider with their identity.
Once you wrap your head around that, you realize we're right back where we started -- we need a trusted authority. That's pretty difficult to achieve for a system which exists specifically because of a lack of trust.
"No, I'm not. I'm trusting my e-mail provider to provide me with an address which I can use to send and receive communications."
Er, really? What do you type in the 'email address' box of just about every site sign-up form, then? Because what I type into it is my email address. And that's where they send the password reset whenever I (or anyone else...) requests one. So yes, the email address is the identity, in the case of most accounts: if you control the email address you control the identity. You have no choice, in most cases; few sites allow any other way of doing a password reset, or allow you to disable password resets, or allow you not to enter an email address, or an invalid one. So, we're all fools, apparently.
"So, we're all fools, apparently."
Well, I won't argue with that one.
"What do you type in the 'email address' box of just about every site sign-up form, then?"
Well, in most cases where e-mail isn't actually needed for communication, I often use privacy@[sitedomain]. If it doesn't work, it means that the site manager is somewhat cognizant of security, and I may have gotten through to someone about privacy as well. If it does work, I've got an account to deal with the given organization without compromising my identity at all.
On the other hand, every website I've used where _actual_ identity (rather than just authorization to use the service) is important does allow for methods other than e-mail for control of the login*.
* A login is NOT identity. I don't agree with the conflation of identity and authorization. Your comments have been rife with that, and that's the main point I'm arguing against.
All I can think is we're using vastly different sites. I can't think of any account I've signed up for in the last year - and I sign up for a lot - that didn't have an email verification step.
I'm using FF5, and my registration seemingly went A-OK. Fail, indeed.
As for the concept itself, I don't trust my browser all that much with my data; I never save the passwords for my accounts for example. Thus for a user like me, I'm not sure this scheme is in any way enticing.
worked okay here. did you get the verification email and click on the link? you have to do that. also, if you're using noscript, you have to allow browserid.org.
Any identity-assurance service fails because of the very fact that you need to have somebody tell the site you're visiting that you're legit.
What I fail to understand is that people can ALREADY use public-key cryptography to sign into websites using certificate-based setups, by issuing your own public/private key pair and uploading the public key to any service provider/site directly.
You can see this in action on cacert.org. Look here: http://wiki.cacert.org/Technology/KnowledgeBase/ClientCerts
The ONLY problem with client certs at this moment in time is that browser vendors' implementations completely fail, particularly Mozilla's Firefox, in that you cannot use crypto USB sticks to store the certificates - they have to be within the browser's store. Which is insecure and not very portable. By contrast, this is something IE has implemented quite well.
Going along with the anonymity of the internet, this is the _only_ secure way of signing in I'll ever use or employ, as there is no requirement for me to fill in personal details in the certificate. What's more, I can issue a number of these certificates for different categories of sites and manage it all on my crypto key.
I'd say: Mozilla, stop wasting time and implement client certificates properly so they support external USB crypto keys - a lot of software already supports that functionality (within Windows, anyway). And people need to move security to the next level by buying and using crypto keys to fuel proper adoption in other OSes besides Windows.
a lot of discussion going on at http://lloyd.io/how-browserid-works
Aha, just in time for an issue I have a question about. I'm looking up a word at the Merriam-Webster site http://mw1.m-w.com/dictionary/anaximandrian. The page asks me, in a beta "Seen & Heard" feature, "What made you want to look up Anaximander? Please tell us where you read or heard it (including the quote, if possible)." So, in the spirit of crowdsourcing, I type into the textarea the URL and quote that generated my interest in the word. There's a "comment using" drop-down requesting a login. Reasonable enough as a first line of defense against arbitrary malicious spam/sabotage. Options are: Yahoo, FB, AOL, and Hotmail. Now I don't have FB. When I investigated the site initially I was put off by Zuck's (nominal) requirement for a real-life first and last name. Subsequent developments have confirmed this intuition. I do have Yahoo mail under my real name, which I use for transactions that aren't permanently archived, indexed, and sold to everybody and their cousin. So I select "comment using Yahoo" and log in with my Yahoo username and password. I get back a dialog that says, "Click 'Agree' to sign in to www.facebook.com using your Yahoo! ID and allow sharing of Yahoo! info." I do not want to sign in to FB. And I won't.
So what's going on here?
1. Why does Merriam-Webster force me to authenticate through FB to contribute to their site?
2. What information about me is Yahoo proposing to provide to FB?
3. What does Yahoo stand to gain by forcing me to collude with FB?
Merriam-Webster lost my contribution. I hate FB more than ever. And I'm suspicious of Yahoo in a way I wasn't before. Why can't they let me be a good guy without signing away the store? I'm more inclined to trust Mozilla than the aforementioned entities, but what good does that do if sites insist on locking in to FB?
How is there greater privacy? The TTP (trusted third party) still must know every site you visit; no privacy gained whatsoever.
if they really wanted privacy, then there would be an encrypted payload held by the client. the key is that only the TTP could create the payload, but it would distribute it to the client at an earlier date, and the payload would not be site-specific - it would just verify the client.
also, to the comment "anyone who sits down in front of your system can log in as 'you'": unless you anally lock your screen and sign out of every website after visiting, NO protection is lost.
there doesn't have to be a trusted third party. they only exist to bootstrap the system, given that it's hard to make every email provider in the world support it on day one.
"also to the comments "anyone who sits down in front of your system can log in as 'you'" , unless you anally lock your screen and and sign out over every website after visiting NO protection is lost."
why yes, I do. but I usually do it with my hands, not my anus.
isn't this what PKCS#12 client certs are for?
so we still have to give an email address to every crappy site we sign up with? wasn't that one of the things that openid potentially fixed?