We’ve all been hearing a lot about secure applications recently, or more accurately about insecure applications; specifically those that are exploited in identity theft raids or that we can be “tricked” into running on our PCs. Insecure applications are such a problem that Microsoft has spent the last five years and many …
Testing, and employment of hackers
If "All a hacker has to do is take the application and start feeding it random data or corrupted files as input", doesn't this also tell us that:
a) Testing of this kind isn't hard, merely tedious and we've got no excuse for not doing it;
b) We should be employing hackers to break our code in order to make it better?
Sloppy application programmers......
Just another case of sloppy app programmers... "Length and error checking? The system takes care of that while I concentrate on my beautiful code!"
Small change to CPU design could fix this at a stroke
One thing I've often wondered about buffer overflow vulnerabilities - why don't CPU manufacturers simply keep the data and code/return address stacks separate? This could be done in a way that is totally seamless to the software, and would fix this kind of vulnerability at a stroke. Seems to me that the hardware builder that employed such a chip would have an instant marketing advantage too. Seems so obvious there must be a very good reason why it's NOT done - but what is it?
Well, I did an "ethical hacking" course once, but that wasn't about employing hackers. Some of my best friends are hackers - but their exploits were long ago, when "experimentation" was a valid excuse.
Consider this. Why should the hacker (and sorry about misusing what was once the term used for clever UNIX coders) you employ do any serious testing? You'll be happy if he tells you you have no vulnerabilities and pay him anyway...
Or this. If the hacker does discover vulnerabilities, why should he tell you about all of them? Perhaps they have a market value...
Or this. How do you know your hacker is any good? A successful hacker only has to get lucky once; but the "ethical hacker" has to produce a complete risk/vulnerability assessment.
There are real issues in employing "reformed" hackers as security testers - you have to work out how you're sure they've reformed. As an IBM security consultant said to me once: "we usually find it easier to teach ethical systems programmers about hacking techniques than to teach hackers about ethics".
Although you do have to pick systems programmers with the right sort of "left field" mindset...
Separate code and data stacks
" why don't CPU manufacturers simply keep the data and code/return address stacks separate?" -- Graham
This actually isn't a bad idea. Some processor architectures (specifically, ones with a highly orthogonal instruction set such as the PDP-11, ARM, MIPS, 68000 and PowerPC families) could already support this without much effort. You just need two instructions: "put the contents of register Y into the address pointed to by register X and decrement register X", and "increment register X and load the contents of the address pointed to by register X into register Y". Then use any two registers as a "data stack pointer" and an "execution stack pointer" respectively. Obviously, you would need to keep the data stack at a lower address in memory than the execution stack so the former cannot grow into the latter.
It's still not perfect, because code could still deliberately modify the execution stack -- or directly alter the data stack pointer to point into the execution stack. But such code *wouldn't* be able to get itself executed by anything so trivial as a stack overflow.
As for why it's not done; well, I suppose that all dates back to the 8080, which had only one stack pointer, implemented as a simple up/down counter, with corresponding dedicated PUSH and POP instructions. The 8086 was designed so that 8080 assembly code could be mechanically translated to it, and it kept the single stack. Everything since then, all the way to the Core 2 Duo, has carried 8086 -- and therefore 8080 -- legacy baggage. We can't get away from the 80x86 instruction set as long as anyone has 80x86 binaries they need to run.
The fact remains that the BEST way to make sure code doesn't contain exploits is just never to run any software whose Source Code has not been audited by independent (i.e., not connected with the author) experts. Auditing of source code, possibly in conjunction with provision of patches for any bugs discovered, would seem to be a service which has Intrinsic Value (i.e., if you are a programmer, you could make money doing this for people).
Of course, this approach is somewhat incompatible with Microsoft's (and others') business model, where they keep the Source Code a jealously-guarded secret *even from the people who are using the software* ! They've been getting away with that for long enough now that most people don't even know what Source Code *is*, or why it's important to them. Access to Source Code is a prerequisite not only for auditing software for vulnerabilities (bear in mind that, for every dishonest hacker looking for something they can exploit, there are several honest hackers looking for the same problems with the intent to release a patch and cure them ..... it's a matter of definition that good guys outnumber bad guys), but also for adapting the software you use to suit the way you do business. Otherwise, you would have to adapt the way you do business to suit the software you use.
Mind your language
Graham - there have been CPUs designed with increased security in mind, but the main issue is backward compatibility, which is the issue that continually dogs Windows. It's no good having trusted computing when 80% of the software you use needs to be run in untrusted mode (and half your hardware drivers are by 'unknown' and unsigned - look at your Windows services to see what I mean).
Steven - application development would be a lot quicker, and safer, if we could trust our programming languages to do what they appear to say. The choice of C++ as the major application - rather than system - programming language is a problem in itself. The programming community has continually rejected safer languages (e.g. Ada) in favour of something powerful but unsafe.
There were, of course, other pragmatic reasons: runtime safety-checking every variable assignment against its type declaration is a performance hit, which is why C++ slaughtered higher-level OO languages.
It's worth reading Wirth's paper on a history of good / bad ideas in programming.
Lastly, however, some blame does still have to go to Windows itself - an application may open a back door, but it should have been a lot harder to download and execute an application without the user's consent, and near impossible to modify the system directories. Vista thankfully takes us closer to this point.
Followup regarding separate stacks
I'm not familiar with the 80xxx instruction set (though I am with a number of others), but I'm not fully convinced by the backward compatibility argument. For that to hold, code would have to be doing something very direct, such as an explicit "load the last address in the stack frame into the program counter", in order to do a subroutine return. In fact most ISAs have a separate "RTS"-type instruction, which does this operation implicitly. If that's the case, the internal implementation of the RTS is not subject to the backward compatibility constraint - it can load an address obtained from any stack into the program counter. Stack frame alignment might be subject to compatibility - in which case such a processor can simply push an unused value onto the stack frame as a placeholder.