Oh FFS
Sanitize your fracking inputs! When will companies realise that coders with a clue cost money?
Security researchers have published a more complete rundown of a recently patched SQL injection flaw on PayPal's website. The Vulnerability Laboratory research team received a $3,000 reward after discovering a remote SQL injection web vulnerability in the official PayPal GP+ Web Application Service. The critical flaw, which …
It's a lot worse than that. On a system the size of PayPal's (or even on a system a tenth that size, for that matter), there should be a framework in place that does this stuff for you, will ye or nil ye. The art of programming in the large is making sure that is easier to do things right than to do them wrong.
So either somebody broke out of the framework and this wasn't picked up in QA or there isn't an adequate framework. Either way, they have a structural problem.
What an insulting amount! What would it have cost for a security consultancy to have found that? Three grand US wouldn't pay for the first day's consulting, where the coffee and biscuits would be hit hard in the meeting room.
I know the result isn't a labour for money, but a bit of proportionality would seem appropriate.
While I do agree that the sum isn't really commensurate, this kind of research shouldn't be done for the money. The kind of research that is done for big bucks is the stuff that you generally don't hear a lot about. The Economist recently ran an interesting article about the sort of professional services that companies are willing to pay handsomely for.
The error is reprehensible for, as has been noted, allowing both SQL injection and excessive permissions. The spirit of openness and at least some kind of peer review should, however, be welcomed. If companies think that this can replace paying for proper reviews then they are likely to learn the hard way.
Granted, but they didn't have to do the research and they didn't have to hand over the results; they could have sold it on the "black market" for 10 times that. If I were running a company and managed to find someone willing to do a job almost purely for the love of it, just to receive a token payment, I'd do it too! I look good being charitable and open about my fuck-ups and it only cost a fraction of what it would have cost to have it done by industry pros.
"One born every minute!" springs to mind but also with a healthy dose of "You get what you pay for!" too.
I agree it isn't much but I bet folks @ Vulnerability Laboratory are pretty darn happy with the publicity.
"Ah, you may have heard of us, we found the SQL vuln in Paypal some time back. Sign here please".
Second, Paypal (whom I loathe) should be praised for engaging in this process, which has the potential to make them look stupid but an even bigger potential to fix actual problems. And if they can limit their costs to $3000, <10 hours' worth of Accenture, ahem, experts' security Powerpoints, then great.
FFS...not only pants down but greased up and ready for a right good shafting. They're not describing a flaw, they're describing a ****ing great hole that you could herd elephants through. Data access to their entire database must have been built by an intern during his summer holidays or something.
Having dealt with a few penetration testing companies on behalf of some of our customers, they seem largely to rely on the same OS tools that anyone else would use, and they provide a report that is simply the unfiltered output, including such gems as proposing that an ssh server is insecure because it reports its version (hint: this is required for interoperation between servers and clients).
So who were the incompetent penetration testers who missed this flaw?
Who briefed the testers on what to test for? You do realise that in a great many countries the kind of software you need to carry out penetration testing is legally restricted, and you may need waivers not just from the customer but also the software developers, the data centre operators, and maybe even special dispensation from the local law enforcement, etc. Even if you do get those permissions, testing takes time, and it is axiomatic in software development that no one ever allows enough time for testing.
Add to that the current paradigm of growing as quickly as possible with whatever works, which depends on keeping your best programmers sweet until you IPO, after which point you want to replace them with cheaper ones who are expected to manage, maintain and extend largely undocumented and untested (see above) code.
.....under PCI DSS scanning, you know, the requirement to make your systems rock solid against loss of customer card data. Or is it that, because they are a pseudo-bank, they, like the normal banks, don't have to carry out these scans on their own systems?
Posting anon, well, just because.
It could be that the address or URL wasn't provided to the tester, which I see as a failing of the testing org. They shouldn't take their client's word for anything, but that makes it cost more because they have to use ARIN (or APNIC/AFRINIC/RIPE), DNS queries, traceroute, etc.
I worked at a place that was applying for PCI DSS accreditation recently. Do you realise that at least part of the final application process is a SELF COMPLETED questionnaire..?! The whole system is as much about keeping sweet online customers as it is real security.
It's worse than that, if you answer the questions truthfully you'll never gain accreditation because they ALL have to be answered with a `YES`, even if they don't apply to your business whatsoever. It's a form of indemnification for the card processors because in the event of any loss, they can just point the finger at the business and say "Well, they answered yes to that question so it's not our responsibility!"
If it's in the GET parameter, a simple sqlmap scan would've revealed it: `./sqlmap.py -u (insert URL of choice here)`
This is worrying as it means they are missing a whole stack of security:
- They clearly don't have an effective automated vulnerability scanner in place (Acunetix, Qualys, Nessus, etc.). Even a manual pentester would've discovered this, as it doesn't even seem too complex; a blind SQL injection is harder to pick up on a manual scan, as it requires some kind of script to enumerate tables, etc.
- It also indicates the absence of a WAF (web application firewall), or one that isn't configured correctly. Any WAF will start ringing alarm bells if you're inserting SQL fragments into the GET parameter, like so: /page.php?id=3' or ''='
- Lastly, it means there's a deficiency code-wise. I don't remember whether PayPal utilises PHP or .NET, but in either case functions already exist to sanitise all this input - mysql_real_escape_string() in PHP, for example.
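To see why that `' or ''='` probe in the GET parameter is so dangerous, here's a minimal sketch using Python's sqlite3 as a stand-in database; the table and column names are invented for illustration, not PayPal's actual schema.

```python
import sqlite3

# Hypothetical page lookup fed straight from a GET parameter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id TEXT, body TEXT)")
conn.execute("INSERT INTO pages VALUES ('3', 'public page')")
conn.execute("INSERT INTO pages VALUES ('7', 'secret page')")

evil = "3' or ''='"  # the probe string from the WAF point above

# Concatenating the parameter turns the WHERE clause into
#   id = '3' or ''=''   -- always true, so every row comes back.
query = "SELECT body FROM pages WHERE id = '" + evil + "'"
leaked = conn.execute(query).fetchall()   # both rows, secret included

# A bound parameter keeps the quote as literal data: no match at all.
safe = conn.execute("SELECT body FROM pages WHERE id = ?", (evil,)).fetchall()
```

Any half-decent WAF or scanner should flag a request whose single quoted value quietly rewrites the whole WHERE clause like this.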
I admire the company that brought it to light - it would definitely have been worth far more than 3,000 USD on the black market and would've caused damage in the millions if it had been exposed via less savoury means - think pastebin.com or bitshacker....
Paypal escaped lightly this time.
As someone who programs but has never done SQL, can someone explain to me why SQL "evaluates" string variables as (potential) program code by default, instead of keeping data and code strictly separate (like C)?
It's all very well saying "sanitise your input", but I don't understand why the potential for trouble was created in the first place. Sure, it lets you write something akin to self-modifying code (but I thought we decided that was a bad idea yonks ago). Thanks.
The problem is people writing bad code.
They should be using parametrized stored procedures or queries, where they pass in individual typed values for each parameter, which are treated as such. Then it works pretty much as you say it should - variables and code are distinct separate things.
But if you write code to piece together an SQL string and then run that on the db, you're asking for trouble. Sanitizing inputs is just a sticking plaster: if you forget one, or don't do it right, you've got a vulnerability.
That's because SQL is an interpreted language (kind of). This property of SQL is sometimes used by lazy developers to build dynamic queries by simple string concatenation, e.g. "SELECT * FROM orders WHERE id = " + request.id(); Assuming that request.id() is a string provided by the client, it might be possible to put something "interesting" into it.
Of course there are many ways to prevent this from happening (the most popular are 1. use precompiled queries with parameters, 2. perform string sanitisation on data supplied by the client), but for "lazy developers" it's not worth the trouble. It seems more fun to just have the database hacked.
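That concatenated orders query can be demonstrated end to end; this is a sketch using sqlite3 as a stand-in database, with the client-supplied value hard-coded instead of coming from a real request object.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, item TEXT)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [(1, "mine"), (2, "someone else's")])

request_id = "1 OR 1=1"  # an "interesting" client-supplied value

# Lazy concatenation: the OR clause becomes part of the query text,
# so the client reads every order in the table, not just their own.
leaked = db.execute("SELECT item FROM orders WHERE id = " + request_id).fetchall()

# Option 1 above (precompiled query with a parameter): the whole
# string is compared as a single value, and nothing matches.
safe = db.execute("SELECT item FROM orders WHERE id = ?", (request_id,)).fetchall()
```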
That's because SQL is an interpreted language (kind of).
This is the key security issue. Although the risk has long been understood, and there are generally pretty reliable ways to pass data in separately so that it cannot, in theory, be run as code, it must be converted to SQL at some point, and AFAIK most SQL escaping techniques have been breached in the past, though I can't remember a server-side library having problems in the last few years. In the event of a breach, additional precautions can be taken to limit the scope of any subsequent attack. But all this takes time and planning, and you want to get your services out there as soon as possible.
I'm confused about how the security researchers discovered this without breaking the law. Presumably PayPal didn't contract them to do this research, so this means they just started fuzzing PayPal URLs till something crashed.
Don't get me wrong, I think that is valuable research and should be legal. However, just altering the URL a server gives you is enough to be considered hacking in the US (see the iPhone/AT&T snafu). How does one do security research and not get slammed in Gitmo?
On most web sites you would have to break the law to discover this. However, Paypal have a particular policy, whereby if a researcher follows their rules, they will not be prosecuted. They were the first web site to have such a policy. It was warmly received by the security community, and has since been copied by other web sites.
http://jeremiahgrossman.blogspot.co.uk/2007/11/paypals-vulnerability-disclosure-policy.html
1. Don't use stored procedures, use embedded SQL in compiled programs.
2. Don't use dynamic SQL. All statements static; parameter driven.
If someone has broken security by social engineering or ID hijack, they can damage or steal whatever that ID can damage or steal, but no more.
This is hard to stick to on something like eBay (custom search strings), but on a transaction processing system like PayPal, not so hard. We do this on the transaction processing app I work on at <censored for job retention>. Every year when we have our big security audit, we have to explain to the external auditors why we need no protection against injection: because it just can't be done.
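The "all statements static, parameter driven" rule above can be sketched like this; sqlite3 stands in for the real database, and the statement names and payments schema are invented for illustration.

```python
import sqlite3

# Every SQL string the program can run is a fixed constant; client
# data only ever arrives through bound parameters, so there is no
# code path that builds a query out of user input.
STATEMENTS = {
    "post_payment": "INSERT INTO payments (payer, payee, amount) VALUES (?, ?, ?)",
    "get_payments": "SELECT payee, amount FROM payments WHERE payer = ?",
}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE payments (payer TEXT, payee TEXT, amount REAL)")

def run(name, *params):
    # An unknown statement name fails loudly, and params can never
    # alter the SQL text -- injection has nowhere to happen.
    return db.execute(STATEMENTS[name], params)

run("post_payment", "alice", "bob", 5.0)
run("get_payments", "alice").fetchall()  # alice's payments only
```

With this discipline, the auditors' injection question really does answer itself: there is no dynamic SQL to inject into.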