Google has dismissed an engineer who had access to its back-end systems after he violated the company's internal privacy policies. On Tuesday, Gawker reported that the engineer, David Barksdale, was dismissed in July after he accessed at least four user accounts. And Google later released a statement confirming the dismissal. " …
What's the word I'm looking for?
I'm sure it had something to do with hips. Or maybe some bloke named Chris. As our Yorkshire readers might say, Eeee.
Therein lies the classic problem: the bigger the pie you have at the picnic, the harder it is to keep the pests away from it!
I think Google do a great job of indexing information; however, I wouldn't trust them to look after my kid's piggy-bank. Just too many systems and way too much info under their control.
Still rated higher than banks for security
Never heard a case yet of Google leaving people's details in bins or out in the street, so in that respect they take more care than banks with your privacy =]
"Never heard a case yet of Google leaving people's details in bins or out in the street"
That's because they're too busy out in the street collecting people's details, Wi-Fi info etc.
Fair play to Google, I say.
If you leave your front door open, expect to get robbed; if you leave your WiFi unsecured, expect people to use it.
All Google have done from this is show people they need to lock their WiFi down with a password.
So if anything, Google actually IMPROVED security for a lot of people.
You wouldn't leave your doors and windows open, so why leave your WiFi open to anyone...
Another Google non-story
Just wait - it will turn out to be some software they forgot to remove
"Google’s strict internal privacy policies".
What colour irony?
The title is required, and must contain letters and/or digits.
Their internal privacy policies *are* *very* strict.
They say that only Google corp itself is allowed to trawl user data to take advantage of the saps who trust it, not random employees.
Good to know Google runs a tight ship BUT ....
it would have created more confidence in its handling had the news come from Google itself rather than through a delayed report in the technical press.
"strict internal privacy policies"
Oh, so that's where they are.
This is news because Google want it to be news. They've had a lot of bad press recently so they're making sure people know how seriously they take privacy.
I'm not suggesting they are lying, I'm sure this really happened. I am, however, questioning their motives for going public on this.
Motive irrelevant - action relevant
The UK government regularly loses multi-million-item databases with no heads on a plate.
Who is the most evil?
Answers on a postcard....
Probably happens all the time. Expect to see a press release from FB shortly intimating that they have done the same recently..
Why go public?
1. This news will get out, sooner or later. After all, there must be some people who know he used to work at Google (a sought-after gig) and now does not.
2. If you (i.e. Google) don't announce it, world+dog will say you covered it up. Maybe even Gizmodo. Then you're on the defensive explaining that it wasn't really all that significant, blah blah.
In this case, I don't think that Google went public soon enough. They should have announced it about the time it happened, although probably without naming the person involved. This should have been out in July, not September.
"That said, a limited number of people will always need to access these systems if we are to operate them properly – which is why we take any breach so seriously"
Actually, where I work, no one has any back-end access by default. Access is granted with a change management record or an incident management record, by the sysops. The temporary access is supplied with a password which enables you to sudo (or runas) and gain access to what you need: DBAs get the database, storage guys get storage, etc. In the rare event that someone needs root/administrator access, the ID is handed out and the password is automatically changed a fixed amount of time after its issue.
...but who issues/hands out the passwords?
Are there Unicorns in this magical land you live in?
I suspect (very strongly, Fraser) that there are dozens of people where you work with permanent back-end access, some who have permanent root and administrator.
@AC1 - Our company sysops hand out the passwords, having verified an Incident Management Record or Change Management Record. The scripts that they use only give access to the intended users and can't give access to themselves.
@AC2 - You can suspect all you want, but the systems are regularly audited for local accounts which shouldn't be installed or are configured to have the wrong access levels. Also, root/admin passwords have to be checked in and out with time limits, and the reset of the passwords is handled automatically with no user interaction; there is no way of finding out a root/admin password without setting a load of alarm bells off. We have no unicorns, just well designed, heavily audited security.
I can still think of a few probable ways around that. Bent sysop, a user not caring about drawing attention to themselves because they can achieve their objective sooner than they will be discovered, and (of course) the granddaddy of them all: physical access. I'm confident that I could think of some more if I knew your site.
Usually, the more attention someone is focussing on the back door, the less attention they are paying to the front door, the windows and the roof .....
Internal consistency error?
"Actually where I work, noone has any back end access..."
"...access is granted... by the sysops."
Uh... so how can the sysops grant access if they don't have the access to grant the access??
Of course, if you emphasize the "...noone has any back end access BY DEFAULT...", that makes more sense, but the press release also states that Google employees don't have back end access by default...
So this was a "sysop" level person who muffed up, or was someone who was likewise granted access temporarily.
Of course, it might also be that this person (a) gained access improperly or (b) was inappropriately given access and he ended up where he shouldn't have been. If the first case, dismissal all well and good. If the second, I would be suing Google, or at least filing for unfair dismissal.
The scripts that they use only give access to the intended users
>>and can't give access to themselves.<<
Yeah? You still believe in the Easter Bunny pal? That's a little naive even for a kid to believe...
"..but who issues/hands out the passwords?"
An organisation in which I worked had this system. The passwords were under the control of a Security Manager. He had no role as a systems or network administrator - his role was solely to control and monitor access and changes to operational systems.
To the person who suggested many people would have backdoor access: not likely. Regular audits would uncover this and anyone found to be accessing systems in this way would be subject to instant dismissal and possibly, legal action.
Here is a rough overview of how it works (for Unix/Linux at least). The sysops can run a script which grants the right for sudoers; they have no rights to the sudoers file themselves. There is a series of sudo configurations for administering different aspects of the system, tailored to the different departments (storage, Oracle, Sybase, Unix, etc.); your normal logon ID is granted access for you to temporarily be a sudoer with these command sets. The right to sudo to these command sets is automatically revoked after a pre-configured amount of time.
As for proper root access, there is a piece of commercial software which stores all of the root passwords in a database. You can check root IDs in and out; when they're checked back in to the system, the particular root ID's password is reset and the database updated. It requires two people to check out an ID.
This really isn't advanced technology; it has been around for years, possibly even decades. I really don't see why you're all so suspicious.
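For the curious, the temporary-sudoer scheme described above can be sketched in a few lines of shell. This is purely a hypothetical illustration, not the commenter's actual tooling: the drop-in directory, the `grant_sudo` function and the `STORAGE_CMDS` command alias are all made up, and a real deployment would write through `visudo -c` validation with proper locking and audit logging.

```shell
#!/bin/sh
# Hypothetical sketch of a time-limited sudo grant: write a sudoers
# drop-in for one user and one pre-built command set, then schedule
# its removal. All names here are illustrative, not a real product.

# grant_sudo USER CMND_ALIAS TTL_MINUTES
grant_sudo() {
    user=$1; cmnds=$2; ttl=${3:-60}
    grant_file="$SUDOERS_DIR/tmp-$user"
    # The grant: the user may run the pre-built command set as root.
    printf '%s ALL=(root) %s\n' "$user" "$cmnds" > "$grant_file"
    # Automatic revocation: remove the drop-in when the TTL elapses
    # (ignored quietly if at(1) is unavailable on this box).
    echo "rm -f '$grant_file'" | at "now + $ttl minutes" 2>/dev/null || true
    echo "granted $cmnds to $user for $ttl minutes"
}

# Demo against a scratch directory, so no root access is needed:
SUDOERS_DIR=$(mktemp -d)
grant_sudo alice STORAGE_CMDS 30
cat "$SUDOERS_DIR/tmp-alice"
```

In real life the drop-in would live in /etc/sudoers.d and only the sysops' script would have write access to it, which is what keeps the grantors from granting to themselves.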
.. they're security wannabeez :-).
The correct security models have been around for years, including "four eyes" access to systems which meant you needed two people with defective morals collaborating before you had a risk.
The problem is that implementing good security requires effort and overhead, and costs money. I'm happy to hear of a company that takes its responsibilities seriously, and has external audits to prove it.
Now for the next step: protecting the executives. Practically nobody gets that right at all, which is why we started our business, and we're doing so well we had to merge with another company to have enough resources handy :-).
And you don't think
(1) Someone, once legitimately granted sudo access, could use the fact of having sudo access to make this status permanent? $ sudo visudo anyone? $ sudo bash ? $ sudo passwd ?
(2) Your "piece of commercial (therefore, presumably closed-source; therefore, most probably not audited by you) software which stores root passwords in a database" could be sending those passwords elsewhere?
(3) The script that grants sudo access must modify /etc/sudoers, which itself requires root access, and might itself be vulnerable?
(4) Someone could obtain sudo access by exploiting the oldest known vulnerability (human stupidity)?
Like I said, there's bound to be a weakness in that system *somewhere*.
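The escalation paths listed above do suggest one obvious (if partial) mitigation: audit the granted command sets for anything that hands back a shell or edits sudoers. A toy sketch, with entirely made-up names and file layout (`audit_grant` is not a real tool, and real sudoers auditing would have to parse aliases, Defaults, and included files, which this one-liner does not):

```shell
#!/bin/sh
# Toy audit: flag sudoers grants that permit trivial privilege
# persistence -- a bare ALL command, a shell, visudo, or passwd.
audit_grant() {
    # $1 = path to a sudoers drop-in file
    if grep -Eq '([[:space:]]ALL[[:space:]]*$|/(ba)?sh([[:space:]]|$)|visudo|passwd)' "$1"; then
        echo "UNSAFE: $(basename "$1") permits privilege persistence"
        return 1
    fi
    echo "ok: $(basename "$1")"
}

# Demo with two scratch grant files:
dir=$(mktemp -d)
printf 'alice ALL=(root) /usr/sbin/visudo\n' > "$dir/tmp-alice"
printf 'bob ALL=(root) STORAGE_CMDS\n'       > "$dir/tmp-bob"
audit_grant "$dir/tmp-alice" || true   # flagged: visudo lets alice stay root
audit_grant "$dir/tmp-bob"             # passes: restricted command alias
```

The point stands, though: a check like this only catches the obvious holes, not a bent sysop or a vulnerable binary inside an otherwise restricted command set.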
>>your normal logon ID
This would be the one with permanent back-end access then?
And don't give me that 'read only account' rubbish; there's no such thing, plus it's one step up the ladder towards privilege escalation.
>>I really don't see why you're all so suspicious
Probably because you're describing "ideal world" and not "real world"
p.s. you've disclosed enough about the systems (and your name) that I'm sure I know who you are ;-) Time for a social engineering 'hack'. I think it is the gullible people that get targeted :-o
I think I know what that is!
Given your very key choices of words, I suspect I know who your employer is (as I also work for them). Look me up to confirm it. :)
My fave security hole
Was Ken Thompson's lecture "Reflections on Trusting Trust" from 1984. ( http://cm.bell-labs.com/who/ken/trust.html is a reprint.) Namely, he made a proof-of-concept C compiler that did two special things. When detecting code for a login function, it would inject a back door into the code. When it detected code for compiling, it would inject the detecting code (both for login and compiler) into the new compiler. He then compiled the compiler with clean code and hid the detecting source code.
The result was a back door that was undetectable even if you had an audit of the source code, and recompiled the compiler to make sure.
Only way to be sure
Oh, yes, I know all about that. It's basically a classic sleight-of-hand manoeuvre, using the compiler source code as a blind to misdirect the audience's attention from the pre-compiled -- and gimmicked -- compiler binary. But there is one way to be sure you have a clean C compiler:
Rewrite the C compiler, in assembler, from scratch.
A slightly quicker method, functionally equivalent but using the computer to do more of the work for you:
Write a C *interpreter*, in assembler, from scratch, which understands just enough of the language to interpret the source code of the C compiler. Then you can run the compiler's source code interpretatively, and use this temporary compiler (which probably will be as slow as a snail swimming upstream in treacle, but you only have to use it once) to compile the real compiler.
Unless there's a backdoor right in the instruction set of the processor, you should be safe.
Only way to be sure is to dope your own silicon
Not even the assembler would help. The point of the exercise was that he could put the injection anywhere: in the assembler, in the firmware, in the chip itself. Heck, with the advent of hypervisor technology, you're not even sure of your system.
Point being, unless you make the system yourself (and nobody does that), it's all on trust. It has to be.
Who will guard the guards
And who will protect Google from itself? Corner cutting in motion yet again... Google is just out to make a quick buck, who can blame them?
Nice of them to name the individual concerned. Seems a bit OTT; in fact, legally I'd have thought (in America especially) that badmouthing even a bad employee this way is risky. Fire someone and all they can fear is a bad reference...
Spies don't just get fired, they get burned.
I feel the same way, since it'd essentially be a black-balling move on Google's part. However, it does send a strong message to anyone thinking of doing something similar: there are serious consequences for privacy invasions.
All said, of course Google wouldn't know how to keep a person's private info secret.
Why is the horned G logo on the article and not in the icons here
that is all ..
Note to self:
Never get fired by Google!
They will apparently douse you in gasoline and set you on fire in a public display. Sure, it might be libel, but what are you going to do? Sue them for a lifetime's worth of lost income? That's an inconsequential sum to them.
Google? Concerned about privacy??
Have I woken into some strange alternate universe this morning?
dropping the guy's name in public is a violation of his privacy
He's now learned his lesson, we hope. He'll need to change his name if he ever wants to work anywhere ever again. Dropping his name in public is a big no-no.
Not a violation of privacy
"dropping the guy's name in public is a violation of his privacy"
No it's not. It's bad luck for him that his name is so publicized, but all these investment firm guys that royally failed at investing had their names publicized, embezzlers have their names publicized, all kinds of fuckups, illegal or not, get their names publicized. The fact of the matter is he SERIOUSLY breached company policy and may have broken the law. Some companies choose to keep this stuff hush-hush but there's certainly no obligation for them to.