I thought I was buying a motherboard, not an Ouija board...
Researchers in Europe have developed a way to exploit a common computer processor feature to bypass a crucial security defense provided by modern operating systems. By abusing the way today's CPUs manage system memory, an attacker can discover where software components, such as libraries and RAM-mapped files, are located in …
That is certainly becoming increasingly necessary.
Massive Strategic Cock-up
When the world decided that it was just fine to go down the route of client side execution, it rashly assumed that this could be made secure. Well, it cannot.
The proper answer is server side execution, with a standardised remote display protocol being the only thing the browser implements. Things like X come to mind.
The need for client side execution has come to an end given that we all have broadband Internet.
You'd also not be giving away your application source code to each and every client.
Not true. Not everyone has broadband. Plenty are stuck on dialup, satellite, or low-end wireless. Plus what's stopping X servers from being attacked, not to mention servers full of juicy information. Frankly, I'd say the horse of privacy has bolted and will never return. Even if consumers abandon the Internet en masse, high speed private and government networks will continue.
"Not true. Not everyone has broadband. Plenty are stuck on dialup, satellite, or low-end wireless."
Well I wish someone would tell the world's website developers that.
"Plus what's stopping X servers from being attacked"
"not to mention servers full of juicy information"
They get attacked anyway, regardless. But an X server isn't in the same category as a server that stores information (such as a website), and in any case it runs at the client end, not the server end.
"Frankly, I'd say the horse of privacy has bolted and will never return. Even if consumers abandon the Internet en masse, high speed private and government network will continue."
This isn't about privacy.
Then the black hats will simply proceed to crack your instance of the server side application. That may or may not be more difficult, depending on the competence of the application developers with respect to security (usually dismal), and the competence of the server managers (yeah, right...).
Also you need bigger servers. Client-side computing distributes some of the load.
"Then the black hats will simply proceed to crack your instance of the server side application."
Er, the point is that a client user wouldn't have an instance of the app at all on their own hardware. The app would remain on the server.
It's far easier to secure a fully defined remote display protocol than to guard against arbitrary code that may or may not be malicious. If all that's flowing between server and client is information to be displayed and mouse click events, then the attack surface is significantly smaller. There would be no arbitrary code running on either server or client.
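To make the point concrete, here's a minimal sketch of what a parser for such a fully defined protocol could look like. The opcodes and fixed 9-byte message layout are made up for illustration, not taken from any real protocol: every message either matches the fixed format exactly or is rejected outright, and nothing received is ever executed.

```python
import struct

# Hypothetical minimal display-protocol event: a fixed 9-byte record
# (1-byte opcode + two 32-bit big-endian coordinates). Anything that
# doesn't match the format exactly is rejected -- data is only ever
# parsed and validated, never executed.
OPCODES = {1: "mouse_click", 2: "key_press"}

def parse_event(msg: bytes):
    if len(msg) != 9:
        raise ValueError("bad length")
    op, x, y = struct.unpack(">BII", msg)
    if op not in OPCODES:
        raise ValueError("unknown opcode")
    return OPCODES[op], x, y
```

Malformed input can still crash a buggy parser, but a fixed grammar like this is far easier to audit exhaustively than a Turing-complete script engine.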
"Also you need bigger servers. Client-side computing distributes some of the load."
Client side computing does distribute the load, but for the client, who generally has a battery powered mobile these days, that's a bad thing. Hence functionality-deficient mobile websites. Hence native apps. Hence separate iOS and Android native apps, and the two teams needed to develop and maintain both.
It would be far easier if you only had to develop the app once and have it displayed to a remote client through a standard protocol.
"The need for client side execution has come to an end given that we all have broadband Internet."
I'm not saying I agree with the wholesale move to client-side (what has effectively happened is that a general mechanism was let loose on the world instead of the standards designers thinking about the actual real-world problems), but all that standards work that was never done would still need to be done. And there would be a lot of it, with standards for many different application domains, and therefore a large attack surface.
You solve this by having somewhat intelligent protocols, so text input and other basic building blocks of UI interaction are handled on the client.
Sure - there's still an attack surface, but it's significantly smaller, and any bugs are much less likely to be exploitable when you don't have a Turing complete language to work with.
As to users on dialup and similar - today's web drunk on JS and media is already all but useless for those. Just the JS to drive a fancy "modern" site is frequently several megabytes. Not to mention the ads (and sites that refuse to work if you don't load them).
So the choices are either JS monsters that are useless on a crappy connection AND expose everyone to great security risks, or another solution which is also useless on a crappy connection but doesn't hurt everyone's security.
Fallback mechanism in either case is the same - basic HTML.
"You solve this by having somewhat intelligent protocols, so text input and other basic building blocks of UI interaction are handled on the client."
Anything *smart* you put in would be targeted by the malware writers, such as with malformed input. And then there's the matter of things like video players that MUST be on the client for performance reasons (Don't believe me? Try VNCing a video player...)
The reason is that exploiting a memory corruption vulnerability is no longer a matter of blindly firing off the right payload. You need a multi-step process that involves leaking/reading out addresses (to defeat ASLR) and often doing very precise manipulation of the memory to gain usable control. And you basically only have one chance to get it right, so no bruteforcing (like the good remote memcorrupt exploits of old).
There are exceptions to this of course, but we are talking reducing the exploitable vulnerabilities by several orders of magnitude, as well as removing one of the most significant attack vectors - the JS engine - entirely.
Plus, with a more "remote desktop"-oriented model, you could keep the client a lot more stable. Not as much need for new fancy features every week when the client is simply doing the presentation. So the amount of vulnerabilities would actually go down over time, instead of basically staying constant as it is today.
Yes, there are vulns in things like X servers and RDP implementations. But there are significantly fewer of them than in browsers, and they are much less likely to be successfully exploited in the wild.
"Plus, with a more "remote desktop"-oriented model, you could keep the client a lot more stable. Not as much need for new fancy features every week when the client is simply doing the presentation. So the amount of vulnerabilities would actually go down over time, instead of basically staying constant as it is today."
But the client will ALWAYS be a target. They'll just target the rendering engine and send it malformed inputs and so on. Hoping for perfect code when it's open to the outside is like wishing for unicorns.
Try VNCing a video player...
Actually, this works if you have a VNC client and server capable of "Tight" compression (like TightVNC or TigerVNC): the video (and other photo-like image parts) gets sent compressed with JPEG, so it is somewhat equivalent to streaming "Motion JPEG". Oh, you want to hear the sound also? Well... I think TigerVNC has some solution for this, but I have not tried that part in practice.
I use TightVNC, but the problem is that even on a home LAN setup, on-the-fly MJPEG doesn't get up to 30fps and involves a lot of tearing, plus as you said no sound. Plainly and simply, network bandwidth issues mean you really need the originally-compressed data stream (the optimal way to send the data) sent down the pipe and then decompressed locally to get the most performance. Some things like video and audio rendering are simply best done at the render point.
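A back-of-envelope calculation shows why. All figures below are assumptions for illustration (720p30, roughly 0.2 bytes per pixel for a JPEG frame, and a typical ~3 Mbit/s H.264 bitrate), not measurements:

```python
# Rough, assumed numbers: re-encoding video as MJPEG over VNC vs.
# shipping the originally-compressed stream and decoding locally.
width, height, fps = 1280, 720, 30
bytes_per_jpeg = width * height * 0.2   # assumed ~0.2 bytes/pixel/frame
mjpeg_mbps = bytes_per_jpeg * fps * 8 / 1e6
h264_mbps = 3.0                         # assumed typical 720p30 H.264 bitrate

print(f"MJPEG over VNC: ~{mjpeg_mbps:.0f} Mbit/s")   # ~44 Mbit/s
print(f"Native H.264:   ~{h264_mbps:.0f} Mbit/s")    # ~3 Mbit/s
```

An order of magnitude more bandwidth for the MJPEG route, before you even account for sound, which is why the decoder belongs at the render point.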
Removing your security slice by slice.
BTW "27 bits of entropy?" Does that mean this mean ASLR in Windows can randomize the start location anywhere in 2^24 addresses but this SW can identify a marker location in a specific data structure even if the structure start address was anywhere in 2^27?
Good work by the team.
BTW, as a core security technique of multiple OSes, you can bet this has been under attack by all the known TLAs from the time it was announced. "Security by delusion" is no more security than "security by obscurity."
It's good for people to realize this.
Why not modify web browsers to reduce and randomise the time-measuring functions available to any script?
I mean, when does a web page really need microsecond resolution? If the timing is jittered by a millisecond or so by some pseudo-random process, would it really break stuff that is talking to the web server via a TCP/IP link with delays typically of the order of 10s of milliseconds?
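The idea sketched above is simple to implement. Here's a rough illustration (the 1 ms resolution and jitter amplitude are assumed values, and real browsers would do this inside the engine rather than in script): coarsen the true timestamp into fixed buckets, then add pseudo-random jitter so a script can no longer resolve microsecond-scale cache-timing differences.

```python
import random

RESOLUTION_MS = 1.0  # assumed coarsening: snap to 1 ms buckets
JITTER_MS = 1.0      # assumed jitter amplitude

def fuzzed_now_ms(real_now_ms: float, rng: random.Random) -> float:
    """Coarsen a high-resolution timestamp and add pseudo-random
    jitter, hiding sub-millisecond timing detail from scripts."""
    coarse = round(real_now_ms / RESOLUTION_MS) * RESOLUTION_MS
    return coarse + rng.uniform(0.0, JITTER_MS)
```

Incidentally, this is roughly the direction browser vendors later took with `performance.now()`, clamping its resolution and adding noise.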
Auction sites are random number generators at the end. Just because it is USEFUL to participants who wish to time their bids does not mean it is NEEDED. And human beings can't time beyond 0.01 seconds anyway, so this is really the same thing as the nonsense in the stock markets.
0 reasons there. Got anything better?
And don't forget, if you ever do have a true need, you can always ship an independent application to handle the entire interaction. This is a discussion about a general use tool.
"And don't forget, if you ever do have a true need, you can always ship an independent application to handle the entire interaction. This is a discussion about a general use tool."
Because you're catering to John Q. Public who doesn't want to get saddled with yet another piece of software. You're talking the Facebook generation here.
Biting the hand that feeds IT © 1998–2019