Tatu Ylonen, author of the SSH protocol, isn't afraid of criticising his own work: he's calling for a new version of the Secure Shell to make it more manageable and get rid of the problem of undocumented rogue keys. In this IETF Draft, Ylonen proposes a regime for key management, including key discovery, to overcome the problem …
Missed Opportunity
While this is not exactly a case of presenting a problem without having a solution in mind, he could have made a bit of cash if he had offered to sell the solution in the form of some handy applications that did all of that. If he were especially greedy, er, insightful, he could have offered it as a service through a subscription plan.
Chirgwin has it right, though: all of Ylonen's recommendations look to be common sense security practices. I would add requirements for documentation and regular auditing.
Re: Missed Opportunity
"Chirgwin has it right, though: all of Ylonen's recommendations look to be common sense security practices"
It's draft-ylonen-sshkeybcp-01, so presumably it's on the BCP track, so it's kind of the point that it codifies good practice.
Why do I have a nagging suspicion that any proposed solutions are going to end up more hassle than the problem? There are more keys than users... and this is a problem how? It makes perfect sense to me to use a different key for each device you may wish to access from - e.g. one for your phone, one for your home network, etc etc. A single key per user is a much greater vulnerability - one device is compromised and that's the lot of them gone.
Key rotation sounds like a nice idea, but consider where key authentication is used - it tends to be in "off-grid" situations where e.g. Kerberos is inappropriate. No point in saying that keys should be changed periodically when in practice it isn't going to get done for that system belonging to an occasional client that you need to access once in a blue moon. On the other hand I'd welcome some form of automatic key update - e.g. if new public keys are generated on a system, keep the old ones around; if an old key is then used to log in, transparently update it to the new one as part of the authentication process.
Whatever's chosen, it needs to be as convenient as possible - the great strength of SSH is that it decentralises this kind of issue, making cross-network authentication a doddle. Lose that and security may actually suffer if less secure alternatives are chosen.
So he wants a Perl script?
Seriously, since most ssh implementations store their keys in simple text files, managing them is fairly simple. If you want more automation you can just write a Perl script or whatever.
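To illustrate the commenter's point that this is just text-file wrangling, here's a minimal sketch in Python rather than Perl. It assumes the simple `keytype key [comment]` entry form with no leading options field, and the function name is made up for the example:

```python
def remove_user_keys(text, user):
    """Return authorized_keys content with entries whose comment mentions `user` dropped.

    Assumes each entry is `keytype key [comment]` (no options field).
    """
    kept = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            parts = stripped.split(None, 2)
            comment = parts[2] if len(parts) > 2 else ""
            if user in comment:
                continue  # departed user or revoked device: omit this entry
        kept.append(line)
    return "\n".join(kept)
```

Anything fancier (expiry dates, per-host classes) is the same pattern with a richer comment convention.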
Re: So he wants a Perl script?
Don't speak common sense now, we can't be having that in IT.
Re: So he wants a Perl script?
We store user keys in a repo with a start/expiry timestamp and a class/range of machines to protect. Each server to be protected by SSH makes a request to our repo over HTTPS, which builds an authorized_keys file that is downloaded and shoved into its ~/.ssh directory every 60 minutes. You get key expiry and server class/range management all from one central repo: a simple shell script requesting from a simple web service. This works OK for 200+ VMs and physical servers with 50-odd simultaneous SSH users.
Getting into large deployments of thousands of users/servers, though, then yeah, nightmare. I'd be looking to auth against PAM/LDAP.
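The pull-from-a-central-repo scheme described above can be sketched in a few lines. The repo URL is hypothetical (the commenter doesn't name their service), and the atomic-replace step matters: a half-written authorized_keys file could lock everyone out mid-download.

```python
import os
import tempfile
import urllib.request

# Hypothetical endpoint standing in for the commenter's key repo web service.
REPO_URL = "https://keyrepo.example.com/authorized_keys?host={host}"

def fetch_keys(hostname):
    """Pull the generated authorized_keys content for this host over HTTPS."""
    with urllib.request.urlopen(REPO_URL.format(host=hostname), timeout=10) as resp:
        return resp.read().decode()

def install_keys(content, path):
    """Atomically replace the authorized_keys file at `path`."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    try:
        os.write(fd, content.encode())
        os.fchmod(fd, 0o600)  # sshd ignores key files with loose permissions
    finally:
        os.close(fd)
    os.replace(tmp, path)  # atomic rename on POSIX
```

Run from cron every 60 minutes, as in the setup described, and you get centralised expiry for free: a key past its timestamp simply stops appearing in the generated file.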
Enforce strict checking?
On the couple of occasions when I've inadvertently deleted a key from one of my devices and then SSH'd into my server, it's refused the connection, telling me an entry already exists in "authorized_keys" for that device and as I have "enforced strict checking"* the login is denied.
I have to SSH in [on another device] to remove the offending line from "authorized_keys" before the first device can SSH in and set up the new key pairing. As "strict checking" is not anything I remember consciously enabling in SSH config, I assumed this was a default setting.
"Millions" of keys in an organisation sounds to me like they have multiple [redundant?] key-pairings per device, unless they have "millions" of devices all needing access. So maybe a bit of good house-keeping is the answer, rather than SSH being inherently broken?
*[may not be the exact term, but words to that effect]
Key management server...
A simple system that sends a salted hash of the key back to the central server; if the key is no longer required, it gets removed. Of course, securing this process is a whole different matter ;)
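A minimal sketch of the salted-hash check the comment describes, assuming the central server holds a set of digests for still-active keys (the salt, function names, and set-membership protocol are all assumptions for illustration):

```python
import hashlib
import hmac

def key_digest(pubkey_line, salt):
    """Salted digest of a public-key line; the server sees only this, never the key itself."""
    return hmac.new(salt, pubkey_line.encode(), hashlib.sha256).hexdigest()

def should_remove(pubkey_line, salt, active_digests):
    """Client-side decision: remove the key if the server no longer lists its digest."""
    return key_digest(pubkey_line, salt) not in active_digests
```

Using HMAC rather than plain `sha256(salt + key)` sidesteps length-extension quibbles, though as the commenter says, securing the channel and the digest database is the real problem.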
already doing most of this stuff, albeit manually
We have addressed a number of these issues -- security scans verify public keys from the fingerprint against a database of registered keys. The users have to "register" the key fingerprint. Automation places the public keys for specific users/applications as needed based on account existence and environment. Account cleanup processes remove the keys. Automation enforces permissions on the files and folders (since users don't seem to get the point).
But it would be nice to have it all in some sort of centralized and automatically managed environment, based on a well known and documented set of publicly acknowledged standards.
As for SSH host keys, these are a different issue for us, but again -- "known_hosts" files exist at two levels (system-wide and per-user) -- it would be nice if we could have, again, a centralized and managed listing that ssh in the environment would use for host validation, instead of having users confused when a host is moved/migrated/rebuilt/virtualized/p2ved etc.
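The fingerprint-registration check described above (verify deployed public keys against a database of registered fingerprints) might look like the following. The display format matches OpenSSH's SHA256 fingerprints; the verification function is a made-up name for the sketch:

```python
import base64
import hashlib

def openssh_sha256_fingerprint(pubkey_line):
    """SHA256 fingerprint in OpenSSH's display format: 'SHA256:' + unpadded base64."""
    blob = base64.b64decode(pubkey_line.split()[1])  # second field is the key blob
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

def verify_key(pubkey_line, registered_fingerprints):
    """True only if this deployed key's fingerprint was registered by a user."""
    return openssh_sha256_fingerprint(pubkey_line) in registered_fingerprints
```

A periodic scan feeds every authorized_keys entry through `verify_key`; anything not in the registration database is a rogue key.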