The world of multi-core CPUs we have just entered is facing a serious threat. A security researcher at Cambridge disclosed a new class of vulnerabilities that takes advantage of concurrency to bypass security protections such as antivirus software. The attack is based on the assumption that the software that interacts with the …
No news at all
Parallel systems have been used for a very long time now -- do you really think these issues haven't been investigated thoroughly yet? Of course, newbie programmers and beancounters may not be aware, but since when are those the primary audiences of the Reg?
more details in the paper
De Zeurkous, I suggest you read the paper linked in the article.
The problem is real and confirmed by *working exploits* on current software/hardware.
RE: more details in the paper
``read the paper''
I'm sorry to inform you that I've had more than my fair share of annoyance this week. Besides, if the merit of this paper is nonexistent, why bother *at all*?
``The problem is real''
Yes, we already know that for quite some time, too.
``and confirmed by *working exploits*''
You've totally wasted your time here. We already knew there were problems and we've taken steps to correct them. What more do you offer?
``on current software/hardware.''
As Dijkstra would point out (although not quite in the same words), these problems arise from incorrect design, and can be fixed accordingly by making the design correct. Since this kind of software design is ages old, and the design issues have been explored thoroughly (and, hopefully, fixed in decently managed designs), this is a closed issue -- adding to it will only make the issue more opaque to newbies and the life of historians more difficult.
The time and effort wasted here would better be spent on either coming up with something new and/or actually finding and fixing remaining instances of such problems.
At the least, with the coming chip densities and virtualization software, this could really blow up.
We've got dual- and quad-core kit right now. Fact is, octo-core is next, and then what... sixteen cores. Chips are becoming clusters of systems. On the software side, virtualization is growing the size of exploitable code.
Forewarned is forearmed. And yes, design is the answer, because it is the problem.
I think it's a good reminder.
Pretty much we already knew this
If it was known before those up-to-date systems were even made available, it's not news.
Re: No news at all, Pretty much we already knew this etc.
You mean it's not news to you. If it annoys you, tough. Personally I'm just grateful that most of the articles are less annoying than reading the same comments posted time and time again. "I already knew that." "As a Reg reader I'm automatically a Real Techie." Along with the oh-so-clever variations of MS with a $ instead of an S. That actually got tired and sad when it had been around for more than a few minutes, years ago. The getting coat stuff is getting old too. Sad. And I'm even sadder for still bothering to read comments (though I don't think I'll bother from now on -- no one ever seems to add anything other than to stoke their own egos). Plonkers.
Software and hackers
I find it amusing that we don't have much in the way of useful software that can use a multi-core processor properly, but we have hackers out there making full use of the technology. I sometimes wonder, why don't companies hire the clever dicks who seem to know what they're doing?
RE: No news at all, Pretty much we already knew this etc.
``You mean it's not news to you.''
This is not Reg Dev. If you want to impress a bunch of newbies with your 1337 h4x0r skillz, go there.
``If it annoys you, tough.''
Indeed. What I see here is some selfish student proud to have published a paper at all, and only here to boost his/her/its ego instead of doing something really useful.
``Personally I'm just grateful that most of the articles are less annoying than reading the same comments posted time and time again. "I already knew that."''
Yes, that's /very/ personal indeed.
``"As a Reg reader I'm automatically a Real Techie."''
"As a paper author I'm automatically a Real Scientist."
``Along with the oh-so-clever variations of MS with a $ instead of an S.''
Those mutations serve the purpose of conveying one's general attitude towards the entity in question.
``That actually got tired and sad when it had been around for more than a few minutes, years ago.''
Instances of arrogant intolerance got tired and sad for more than a few seconds, millennia ago.
``The getting coat stuff is getting old too. Sad.''
Indeed very sad -- I'm sorry to inform you that my command of the English language is not sufficient to understand the expression 'getting coat' -- can you explain? :X
``And I'm even sadder for still bothering to read comments (though I don't think I'll bother from now on''
That's not a very scientific attitude.
``-- no one ever seems to add anything other than to stoke their own egos.)''
Yes, a perfect example turned up right here.
Adding insult to... insult?
Ergo: personal attacks are not going to make up for a mind. Nor do lame attempts at anonymity. You are not interested at all in science, only in fame and success. Well, let me tell you, that PhD will be worth nothing this way, /if/ you get it. I really hope your instructor, mentor, or whoever else oversees you will read and consider this before deciding to hand out one.
RE: Software and hackers
Your information is incorrect. Many UNIX systems are highly parallel, which covers the systems the article concerns itself with.
Re: Software Hackers
I think it's because the clever dicks have minds of their own whereas those working in cubes at M$ (just call me Rodney) have a corporate identity instead.
Re: Software Hackers
David, I wouldn't characterize the masses at M$ like that at all. They are either shackled by the M$ frat-house culture, or they aren't "smart and get it done" type people in the first place. There are more of the latter than the former. M$ complains that it can't hire enough smart people, but when they hire smart people the smart people can't do anything that would actually make a difference. Thus the smart people leave M$ for somewhere they /can/ make a difference.
Did this guy really discover
the same bug that was found in the original BSD Unix? Because the last time I read about this attack, the example was taken from a book written in the '70s. Avoiding it is easy: either write-lock the syscall data area for the duration of the call, or copy the arguments to kernel memory before checking them.
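The second fix mentioned above -- copy the arguments to kernel memory before checking them -- can be sketched as a toy model in a few lines of Python. Everything here is invented for illustration (`vulnerable_syscall`, `safe_syscall`, and the `between_check_and_use` hook are not real APIs; the hook stands in for the concurrent attacker that the comments below describe):

```python
def vulnerable_syscall(user_args, between_check_and_use=lambda: None):
    # Check-then-use on shared userland memory: the argument is
    # fetched twice, and can change between the two fetches.
    if user_args["path"].startswith("/tmp/"):        # fetch #1: check
        between_check_and_use()                      # the attacker's window
        return "opened " + user_args["path"]         # fetch #2: use
    return "denied"

def safe_syscall(user_args, between_check_and_use=lambda: None):
    snapshot = dict(user_args)                       # copy into "kernel" memory
    if snapshot["path"].startswith("/tmp/"):         # check the snapshot only
        between_check_and_use()                      # the window no longer matters
        return "opened " + snapshot["path"]          # use the same snapshot
    return "denied"

# A single-threaded stand-in for a concurrent attacker:
args = {"path": "/tmp/innocent"}
swap = lambda: args.update(path="/etc/shadow")
print(vulnerable_syscall(args, swap))   # opened /etc/shadow -- exploited
args["path"] = "/tmp/innocent"
print(safe_syscall(args, swap))         # opened /tmp/innocent -- safe
```

The point of the copy is that check and use operate on the same immutable snapshot, so nothing the other side does afterwards can change the decision.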
This is beginning to take on the smell of a Flame Forum.
If you are so smartie-pants that these articles B-O-R-E you, go write your own paper. These are valuable comments to the masses.
Maybe we have had parallel systems for years in UNIX. How many of the daily masses have a parallel UNIX system running in their home?
I enjoyed the article; I had musings on the subject 20 years ago. My reasoning required a "system/master" processor that all the other processors used as a gateway to the system. Processes have to be authorized. Since in today's multi-threaded programs/applets/processes, there is no "master" processor. The king and subject are the same.
Early on even M$ allowed the option of dedicated processors for certain tasks, unfortunately, it wasn't the system. Not that it would make much difference in their bloated malware.
I use windows because I have to in an academic environment. Almost all our labs are XP. We have some macs, but they are old PPC's. I use it because many applications are available for it. I don't like it because I don't trust M$ to do what is in my best interest.
``This is beginning to take on the smell of a Flame Forum.''
I only had to read the first sentence from your previous post to reach the exact same conclusion.
``If you are so smartie-pants that these articles B-O-R-E you, go write your own paper.''
I don't do papers. They obstruct research horribly.
``Maybe we have had parallel systems for years in UNIX.''
Except for some Big Iron and most embedded stuff, every single system that really matters.
``How many of the daily masses have a parallel UNIX system running in their home?''
Since when are n00bs and other assorted lusers of any relevance to computing?
``Since in today's multi-threaded programs/applets/processes, there is no "master" processor. The king and subject are the same.''
You're way off. Most -- if not all -- Unices boot on one processor and then initialize the others, and the kernel is the King. The latter was, is, and remains the most significant component impacted by security issues. This 'research' changes nothing.
``Early on even M$ allowed the option of dedicated processors for certain tasks, unfortunately, it wasn't the system. Not that it would make much difference in their bloated malware.''
If M$ crap is the issue at stake here, this research loses even more credibility.
``I use windows because I have to in an academic environment.''
Ha, in my lab, if such ethically questionable material was even brought up for consideration, there would be a redundancy in the department at short notice.
``We have some macs, but they are old PPC's.''
POWER is a hell of a lot better than IBM PC-based crap.
``I use it because many applications are available for it.''
Yes, there are /many/ applications available for it. However, most do about the same. Do you prefer drooling on little icons over actual research and subsequent progress?
``I don't like it because I don't trust M$ to do what is in my best interest.''
There are many Unices available for both IBM PCs and Macintoshes. I strongly suggest you take a look at them before conducting faux research and responding to criticism by flaming your arse off.
Anyway, on a much more personal note, this was it -- I'm withdrawing as an active participant from the engineering community (never had much to do with Ivory Towers, Inc.). I've been flamed, declared a kook, and subsequently either banned or ignored for life about everywhere I offer my honest opinion. I'm tired of treating this game like it's a matter of life (110V) and death (120000V). If someone wants to bear the fruits of my expertise, they'll have to ask /very/ nicely for it from now on. Congratulations on being the final straw.
I like it how
There's always someone that yells "old! I knew this first, glory to me. You think this is big? Well, I knew it first mister; so you are silly, and your base? Well, you know where that belongs."
Oh and there's always one saying "well, it might be old to you, but this is news to me, don't be such a hater, let's just all get along and not call each others moms anything"
And by the end of it, there's some guy buying a tonne of credibility citing "parallel Unixes" and maybe some obscure syscall, who then apologetically proceeds to explain why he's stuck with Windows.
And then, there's always a guy who thinks it's his job to bust some balls, so he gives a one man opinion of why everyone's a mouse in a maze. Dramatically ending his crusade with
"Oh, the internets... we hardly knew ye."
No-one is being banned, no-one is being ignored. And we are not going to moderate away / censor commenters - per se - for being combative or rude.
This is an interesting discussion and it will be just as interesting if the rhetoric is turned down a notch or three.
RE: 'Flame Wars'
I know, not here :) It was still the final straw, however.
Anyway, back on topic...
Science in a dark room
"I don't do papers. They obstruct research horribly."
Research that's not published is just wasted time.
"these problems arise from incorrect design, and can be fixed accordingly by making the design correct"
Zeurkous appears to be living in a bubble. The above statement is about as useful in the real world as the advice "If you want to get to the top of Everest then you simply have to climb up it in some fashion". The execution of both of these masterful plans is a great deal harder.
Well, enough feeding Mr Troll; I'm sure his ivory tower has people to do that for him.
re: No news at all
Well done De Zeurkous - have a sticker.
Personally, I DIDN'T know about this, as I'm not a security researcher. I therefore found the article interesting and informative.
Indeed, I might have found your original comment interesting and informative had it:
a] cited a source for your prior knowledge.
b] gone into some (indeed any) detail as to how this problem had already been addressed.
... unfortunately it didn't, which made it look a bit like a puerile attempt to massage your ego.
Of course, we all know that it wasn't that at all, don't we? And you're going to prove that by making a mature and reasoned response, rather than just adding further well-worded personal attacks and calling people n00bs.
Right, I'm off out to enjoy the sunshine now.
RE: RE: Software and hackers
I was thinking less MS products, more BioShock and Warhammer: Age of Reckoning. I hardly need uber processing power to type out the shite I'm asked for.
And as for Unix... look, I don't want to be the guy who stands in the pub explaining to people how superior it is to Windows. I'd rather be the guy standing at the bar failing to chat up the barmaid. At least that way I KNOW I'm boring someone to death.
Oh. My. God.
> As Dijkstra would point out (although not quite in the same words), these problems arise from incorrect design, and can be fixed accordingly by making the design correct.
Attention! Reality intrusion field on unknown transdimensional frequency!
> The time and effort wasted here would better be spent on either coming up with something new and/or actually finding and fixing remaining instances of such problems.
Captain! An aggressive know-it-all is decloaking right in front of us.
Evasive manoeuvres!! Fire missiles!
Obnoxious and Troll Like
That's De Zeurkous alright.
Definition of obnoxious:
1. highly objectionable or offensive; odious: obnoxious behavior.
2. annoying or objectionable due to being a showoff or attracting undue attention to oneself: an obnoxious little brat.
3. Archaic. exposed or liable to harm, evil, or anything objectionable.
4. Obsolete. liable to punishment or censure; reprehensible.
And if you disagree with my opinion, then you're wrong! It's as simple as that.
Oh, hang on, now I'm sounding obnoxious and troll-like too.
Easy isn't it!
Common courtesy is a really useful life skill. Learn it.
Here's what's new
This is about as bone-headed as web-sites that check passwords using client-side scripting. Oh, hang on, that's an interesting observation.
OK, so the 286 had call gates and 4 separate stacks (for each of the protection rings) precisely so that system call arguments were copied to a protected area atomically (by the CALL instruction). OK, so *that* probably only happened in the early 80s because Multics had done the same in the 60s. OK, so if you wander off to somewhere like comp.arch you will find people who still remember designing and working on systems from before then that recognised and solved this problem. This attack is older than I am.
What's new is that the AV software *itself* has re-introduced the vulnerability by implementing critical security functions in userland. It's a truly horrendous design error for a security company to have made. That it is a design error is especially significant. The overall design will have been seen and implicitly approved by just about every programmer connected with the product. This is not a case of some new guy making a typo. This is something the whole company missed.
Your AV software is designed and written by people who didn't recognise what is perhaps the all-time classic privilege-elevation attack. Now, how safe do you feel?
Calm down people...
I believe I sort of understand what De Zeurkous is saying - seeing stuff that was actually discovered and in many cases done _right_ decades ago be ignored for years and then come to public attention as a "new" invention (only this time quite often broken and it seems only because the authors simply ignored what was known about the subject decades ago) does have the potential to piss one off. Then again that's just an emotion and I'm not saying anything about how one should act on it.
On the topic of design and execution - someone remarked that it's the execution that matters. It's true that the execution is a challenge on its own but it's also true that you don't get very far if you completely forget and ignore the design during the execution. And that does seem to be happening a lot. It's as if you're trying to achieve the goal of avoiding head-on collisions on the road. The design is "when in risk of colliding with head-on traffic, turn (left|right) (depending on where you live)". No amount of masterful execution will replace that ingenious design as a general and complete solution of the problem.
I'm a Genius!
I've discovered perpetual motion. However, I can't be arsed to write down how I did it... I just hope I don't forget it when I sleep tonight...
Probably old news anyway...
Sunshine?? It's looking like rain here...
This is a legitimate discussion
I agree with Ken Hagan AND De Zeurkous.
I quibble with Ken Hagan only in the statement about "implementing critical security functions in userland", because we're talking about syscall wrappers here - this is kernelland.
Ken Hagan took everyone down the right road by pointing out that CALL atomically places arguments onto a privileged stack, which system calls read and act on, therefore seemingly leaving no opportunity for exploit (but this is actually not quite correct - see below).
What Robert Watson is mostly discussing is
a. what happens when you WEDGE something between the two?
b. Particularly when you can have concurrent processes with same access to memory where the args are?
When a userland process makes a syscall, the argument for strings/structures is merely a pointer to the string/structure in userland memory area.
While the pointer could not be changed due to the stack being used, the memory being pointed at, being in userland, certainly could.
If the execution sequence is
wrapper (reads string, accepts arguments as valid, or changes them)
<WINDOW OF OPPORTUNITY>
WINDOW 1: wrapper interrupted (yields time, scheduler interrupts, etc), wrapper gets a CPU back
WINDOW 2: another process, concurrently running in another CPU, changes userland memory
SYSCALL (real one) triggered
WINDOW #1 is the single CPU window.
Window #2 is the multi CPU/multi core window.
In these windows, another process with userland access (a child process with same memory access) can replace the string/structure contents before the real SYSCALL gets triggered.
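The interleaving above can be simulated deterministically with ordinary threads. This is a toy Python sketch, not anything from the paper: the buffer, the event names, and the path strings are all invented for illustration, and the two `Event`s merely force the scheduling that, in reality, preemption (WINDOW #1) or a second CPU (WINDOW #2) provides:

```python
import threading

buf = {"arg": "/tmp/innocent"}      # stands in for userland memory
approved = threading.Event()        # wrapper has finished its check
rewritten = threading.Event()       # attacker has rewritten the buffer
trace = {}

def wrapper_then_syscall():
    trace["checked"] = buf["arg"]   # fetch #1: the wrapper validates this
    approved.set()                  # <WINDOW OF OPPORTUNITY> opens
    rewritten.wait()                # stand-in for preemption / a second CPU
    trace["used"] = buf["arg"]      # fetch #2: the real SYSCALL acts on this

def attacker():
    approved.wait()                 # let the wrapper approve the argument
    buf["arg"] = "/etc/shadow"      # rewrite it inside the window
    rewritten.set()

threads = [threading.Thread(target=wrapper_then_syscall),
           threading.Thread(target=attacker)]
for t in threads: t.start()
for t in threads: t.join()
print(trace)   # {'checked': '/tmp/innocent', 'used': '/etc/shadow'}
```

The wrapper approved one string; the syscall acted on another. That is the whole attack in miniature.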
So, how do syscall wrapper application developers make sure there is no window of opportunity, particularly WINDOW #2?
As De Zeurkous says, "these problems arise from incorrect design".
The way this kind of issue is handled in a kernel is to use semaphore locking architecture, but this doesn't seem to work here, because you'd have to put semaphore structures around arbitrary userland memory used as arguments to function calls, and you'd have to trust userland apps to obey the semaphore.
A better approach would be to have the syscall wrapper application transfer data structures themselves (copy entire structures, not just pointers to structures), into protected syscall wrapper memory.
There may be even better approaches - but the point about an "incorrect design" being used to insert a wrapper between userland and kernel syscall does seem valid to me.
Great work guys.
This is almost as good as comp.lang.cobol !
try ( http://www.dbforums.com/showthread.php?t=1358734 )
Cobol? you ask. Those old guys really know how to get off, off, off topic, and every second thread ends up in a flame war of some sort.
The Reg commenters have a long way to go before they can reach such lofty heights, but are definitely showing promise.
P.S. To some commenters.
More pertinent, pithy, and downright insulting comments are possible if you RTFA first.
Isn't it about time that we put this one to sleep?
hitler, nazi, etc.