No thanks
I quite like my normal wallet, thanks. I don't think I'll be using any of this NFC malarkey until all these issues have been ironed out.
Turns out it's not necessary to decrypt the PIN, or even hack into Google's Wallet: just ask the phone nicely and it will let anyone root through its innards. The flaw was spotted by The Smartphone Champ, and unlike yesterday's efforts, which required root access and a modicum of brute force, this hack barely qualifies for the …
I can only think of one explanation: this oversight had to be a known issue to the Google team. There must have been a security-risk sign-off to get this product to market. There is far too much security focus at the Chocolate Factory for this not to have come up in conversation... I am certain of it. At the end of the day, some mods and fixes will make this a good product again, so I am disappointed that it was let into market in its current state.
Doesn't seem like it should be too hard; Google Wallet has access to muck about with the contents of the "Secure Element", so you'd think that, right before it wiped itself, it would wipe that too.
On the other hand, if my experience in the finance industry is any judge, there's fifty thousand pages of regulations and requirements governing that interaction, so who knows? There may be a rule that they can't, no matter how much sense it would make to do so.
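The wipe ordering suggested above can be sketched out. This is a hypothetical illustration only: `SecureElement`, `AppStorage`, and all method names are illustrative stand-ins, not any real Google Wallet or Android API. The point is simply that the tamper-resistant store holding the card credentials should be cleared before the app acknowledges its own wipe.

```python
# Hypothetical sketch of the suggested wipe ordering. All classes and
# method names are illustrative stand-ins, not the real Wallet API.

class SecureElement:
    """Toy model of the tamper-resistant chip holding card credentials."""
    def __init__(self):
        self.applets = ["prepaid_card"]
    def erase_payment_applets(self):
        self.applets.clear()
    def is_empty(self):
        return not self.applets

class AppStorage:
    """Toy model of the wallet app's own local storage (PIN state etc.)."""
    def __init__(self):
        self.data = {"pin_state": "set"}
    def clear(self):
        self.data.clear()
    def is_empty(self):
        return not self.data

def remote_wipe(se, storage):
    # Clear the Secure Element first, so the wipe can never complete
    # while live payment credentials are still provisioned on the chip.
    se.erase_payment_applets()
    # Only then remove the app's own data.
    storage.clear()
    # Acknowledge success only once both stores report empty.
    return se.is_empty() and storage.is_empty()
```

Whether the card networks' rules actually permit the app to touch the Secure Element at wipe time is, as noted above, another question entirely.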
"If (s/Microsoft/Google) were serious about security, they would not use a plain [whatever], but use a service which would reverse-resolve to a proper (s/microsoft/google) [something]. Maybe that would imply that (s/MS/GOOG) itself would do the [thing], but that is the price of proper security..."
I could play Mad Libs all morning. Any more choice paragraphs for me?
I guess bad design isn't unique to Redmond.
VAULT of their own...
If they get it right, they should rename it to "Google Vault", not wallet. No wallet is as secure as a vault. But, if they get it wrong again, they'll have to vault themselves to a higher standard.
Pay-by-bonk is bonkers if there is no wrist/IR/biometric link in the loop. The feedback loop should require a pulse, some warmth associated with consciousness and movement, some vocal input, and a confirmation phase. Otherwise, someone could probably walk up, tap someone's ass or hip and rip some money, forward it, and toss the phone used to vault the money elsewhere.
Maybe I'm missing something, but this is not inspiring confidence. The wallet is as much a sieve as Android's contacts list.
Wake the hell up, Google. Too much convenience will lead to easier fraud AND more ID theft.
But, one hell of a bonker-maker will be if bonker tools were surreptitiously installed in taxis, theatre seats, and restaurant chairs. It could take bonking to a new level: boinking. Who needs ID theft protection when bonking can fuse the tools to just rip into e-wallets?
Has anyone done a study of bugs/implementation flaws and correlated them to specific developers and/or teams? Is it possible that executing (well, firing) a dozen people could fix a majority of the problems?
The problem is that the kind of people who can find and exploit holes like this aren't usually able to get past HR or the second interview because, well, they aren't cut out to be employees. Or they try to get a job at Marriott.
That being said, around 15 years ago, while being a semi-inept programmer (which was not my job), I was quite good at defining and finding corner cases, but rather useless at coming up with elegant solutions to them.
The future of a technology which is already proving unwanted, unneeded and basically pointless.
Where is the hardship in reaching into a pocket for a handful of change? Takes seconds, doesn't rely on having a gadget with you which hasn't run out of battery power and doesn't encourage people to steal said gadget.
Funny, the way I see it you're less likely to get attacked as you won't be carrying any real money, only money loaded onto a phone that only you can use.
I can see it both ways but if muggers start to realise that nobody carries cash any more, then, eventually, opportunist theft should go down.
Does rely on the electronic wallet being secure though.
But chip-and-PIN didn't stop a mate of mine being mugged at knife-point by two guys, and having his card _and_ PIN stolen. Only when one of them had emptied his account did the other let him go. This could now be a one-man job: "Give me your phone and your PIN, or I'll stab you." How quickly can you get this stuff shut down? Presumably you would need to find a phone you could use first.
Collectively, after 30 or so years, we've just not caught on to IT security yet.
Centuries ago, banks (and just about everyone else) figured out that different levels of granularity were needed to secure items of differing value. Petty cash was adequately secured in a tin box in the office desk, whereas tons of gold bars were best secured behind steel safe doors half a metre or more thick.
In IT, presumably because we can't physically see it, there seems to be a fundamental conceptual problem in figuring out what's really needed for various levels of IT security. If that weren't so, then simple security fuck-ups such as this wouldn't happen.
All banks have an a priori understanding of the type of vaults they need to keep money secure, and they've a fair idea about the risks (and so do their insurance brokers), but in IT we've still a long way to go before we've fully worked out adequate protocols. Sure, there are many excellent security systems out there in IT-land, but security, and the differing degrees of 'hardness' needed for different requirements, simply hasn't become second nature to us IT professionals yet. If it had, a basic security-protocol checklist would have been invoked automatically and prevented this problem with Google Wallet.
If big smart corporations such as Google have such large problems with a relatively trivial security matter because their programmers can't sufficiently visualize the security model to avoid it, then it raises the serious question of what else programmers can't conceptually visualize in their software, which may leave it with fundamental flaws.
Of course, it's another a priori reason for having open source software: many eyes can check it.
Many years ago I participated in a focus group concerning a form of "digital money" that a major transit system wanted to employ, not only for fares but for use in shops. It was to be a refillable "gift card type" system. It would not even have a PIN and your balance would be shown on a public display whenever you made a purchase. I'm glad to say I was able to add my disapproval and squash this idea. Who needs to be mugged for money-on-a-card usable by anyone?
For the same reason, I would be very careful about flashing my Android phone for payments at Starbucks or elsewhere until this bug is fixed along with LOTS of publicity to get that information down to the mugger level.
Where you load some money on them and each time you use it, your balance is displayed.
These work in concept because:
1) You cannot use them in shops, only to buy travel on bus, train or tube.
2) People do not mug you for them for reason 1.
I like this use of NFC as it replaces a paper system that was inefficient at getting enough people through the barriers at rush hour.
However, I do not need a new form of proximity payment for a shop; cash and credit cards do very well, thanks. Also, the benefit of cash is that it is physical: once it's in my pocket it can be spent, then it's gone. No freezing of assets or misleading online bank balances (because things haven't cleared yet).
"If big smart corporations such as Google have such large problems with a relatively trivial security matter because their programmers can't sufficiently visualize the security model to avoid it, then it raises the serious question of what else programmers can't conceptually visualize in their software, which may leave it with fundamental flaws."
Except that the problem is not just with Google's programmers. Each phase of the process, from global design specs through functional design, technical design, build and test, should act as a review of the previous phase of the project. A functional designer should notice flaws in the assumptions of the architects; a tester finds problems with the build, whilst the programmer then discovers that the fault lies not with their code but with the technical design. In the meantime, the technical designer might already have realised that there is an ambiguity in the functional design and altered the technical specs. So it goes on and on.
You would have thought that such a process would have eliminated such a glaring flaw. Unless of course everyone working on the project thought it was a fantastic feature instead of a rather dumb idea.*
Perhaps the problem is that companies like Google employ too many "Wunderkinder" who are very good at design, programming, testing etc, from a technical point of view, but have insufficient experience and understanding of "the real world" to notice a stupid idea when it bites them on the bum.
*cf New Look Google.
If you're doing agile without design and reviews, you're doing it wrong. I don't know of a single agile methodology that dispenses with design and reviews. In fact, they generally explicitly require both. (Agile does put most of its explicit design emphasis on usability and functional design, not architecture and implementation; but that doesn't mean it omits the latter.)
Of course it may well be that most people do it wrong, but that's not the fault of the methodology.
The real problems here are the usual ones: lazy programmers and management that is not concerned with quality (mostly because the market doesn't reward it). What proportion of programmers have read an IT security book, like /n Deadly Sins of Software Security/ or Ross Anderson's /Security Engineering/ (the first edition of which is free online)? Or follow any computer security news, like BUGTRAQ (or even stories like this one)? In my experience, few bother with even basic competence in secure programming.
Open source safe combinations? So we can all check them? Surely, in this case the algorithms must be kept secret, at least to slow down casual attacks.
I just love this idea that clever people with an interest in improving security will, for nothing, spend hours of their spare time vetting code for banks and telling them how to fix it.
Do tell us, how much of your spare time do you devote to reading other people's "open source" code and delivering back reasoned, informed and accurate reviews? Do send us the links to your samples.
Ah, altruism for the sake of commerce. Good Lord, just saw a whole flock of spotted pigs fly by in question mark formation.
Anyway, according to its proponents, everything from Google is open source, so what went wrong? Did they just get fed up fixing code for nothing for one of the wealthier firms in the advertising business?
"Google will then disable the prepaid card to prevent the phone being used to pay for stuff with a tap on the till."
But how does that help us, considering that the link between the Google account and said phone still remains? All this does is make it harder for John Doe to access the account straight away, but if the link between phone and account remains then it's simply a matter of time before someone writes an app to circumvent this blockade.
Quite frankly this is /exactly/ why I don't use "Internet wallets". To me a wallet is a bag of cash: if I somehow lose my wallet I lose my cash, which is tough luck, but my big stash (bank account) remains safe. Yet all these "e-wallets" force you to link them with your bank or credit card account, so if something goes awry with the e-wallet you're at much more risk than you should be.
If "e-wallet companies" really cared about security as much as they claim, they wouldn't force their users into linking wallet accounts with bank or credit card accounts but would instead allow bank transfers (note that this comment isn't aimed solely at Google but also at the likes of PayPal).
... is to use a Virgin Money prepaid credit card. You load it up by bank transfer (a simple telephone call for me) or via the Virgin Money website. I never have more than £150 on it. I use it for all internet transactions and for Googly payments (such as Android Market). If the card gets lost/stolen/compromised, I can't lose more than the balance on the card, and if I report the loss/theft quickly enough then the card gets cancelled and becomes useless to anyone.
(For some reason, it won't work on those petrol pumps that take credit cards as noted in the Ts&Cs)
This is because, like a hotel which swipes your card on arrival, the pump 'locks' a notional amount to cover whatever you're buying (plus a margin on top).
It's only later (when the actual amount they're taking has been confirmed) that the surplus they'd locked is available again.
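The hold-then-capture mechanism described above can be sketched in a few lines. This is a toy illustration, not any real card-network API; `PrepaidCard` and its methods are invented names. It shows why a low-balance prepaid card fails at the pump: the notional hold exceeds the available balance even though the eventual purchase would not.

```python
# Toy sketch of a pre-authorisation "hold" on a prepaid card.
# All names are illustrative, not a real payment API.

class PrepaidCard:
    def __init__(self, balance):
        self.balance = balance   # loaded funds
        self.holds = {}          # ref -> locked notional amount

    def available(self):
        # Funds not currently locked by an outstanding hold.
        return self.balance - sum(self.holds.values())

    def authorise(self, ref, amount):
        # The pump locks a notional amount (expected purchase plus margin).
        if amount > self.available():
            return False  # low-balance prepaid cards fail here
        self.holds[ref] = amount
        return True

    def capture(self, ref, actual):
        # Later, the confirmed amount is taken and the surplus released.
        hold = self.holds.pop(ref)
        self.balance -= min(actual, hold)

card = PrepaidCard(balance=150)
card.authorise("pump-3", 99)   # notional hold of 99 locked up front
card.capture("pump-3", 40)     # actual fuel bought: 40; surplus released
print(card.balance)            # 110
```

A card loaded with, say, 50 would have the `authorise` call refused outright, even though 50 would comfortably cover the fuel actually bought, which is presumably the behaviour the Ts&Cs are warning about.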
Surely, though, the guy here wants it tied to the Google account; but if that happens, stealing people's Gmail and other Google usernames and passwords becomes a bigger security risk, as a thief could then just grab any phone with NFC, drop in the stolen details, and off they go.
I might be missing something here, but this system seems to have moved security to the device and locked everything to the device: it's trusting that the device itself is locked down with a good password/gesture/photo recognition etc. Once past that, you have access to someone's account. It does sound like that's where the emphasis on security has been placed in this case. A simple fix here would be to remove the in-app reset and have a PIN reset done through contacting the bank (with all the pain and suffering that usually involves).
Frankly, though, both ways of doing it have security issues. Mine's the one where I carry enough cash for a pint or two and a taxi home. Someone might nick my drink money if I'm mugged, but that's still way more secure than any of this malarkey.