And in other news
Apparently there is a new form of this attack called "PortSmash"
It seems to work by timing the use of the various execution units (ALU blocks) that the two hardware threads of a modern Hyper-Threaded CPU core share
Computer security researchers have uncovered yet another set of transient execution attacks on modern CPUs that allow a local attacker to gain access to privileged data, fulfilling predictions made when the Spectre and Meltdown flaws were reported at the beginning of the year. In short, these processor security flaws can be …
The chips are doing exactly what they were supposed to do. Namely, built for speed. If you want real security you have to physically separate different classes of users and not run them on the same chip/computer/memory system etc. My reaction to all of this is I want my computer to be fast and will secure it at the edge.
> My reaction to all of this is I want my computer to be fast and will secure it at the edge.
'Secure at the edge' would give a false sense of security, as once it's breached they have free rein over your computer. Better to have multiple encrypted data paths within the computer and processes that are designed to function in a compromised environment. The innovators are really going to have to massively improve their game to defend against networked computer attacks into the future.
"Better to have multiple encrypted data path within the computer and processes that are designed to function in a compromised environment"
How about instead accepting that the current hardware is not secure and should not be trusted with sensitive data?
Whilst I agree that absolute security does not exist, it must be said that the current hardware manufacturers have chosen to degrade what security there was to be found in earlier systems.
Since many people earn their crust via computing, it is going to take a long time and a lot of pain before this situation changes. Especially since computers in the workplace have replaced people and skills were lost during the original transition, meaning that reversing the trend is going to be vastly more expensive than pretending that all is still well — hence the pooh-poohing of the discovery of yet another CPU design fail.
It would be nice if the problem was addressed before it becomes a major issue but since those who could act are being paid not to, then everyone else has a lot of pain in their future that will continue until people again believe that putting all your eggs in one basket is a bad move.
"It would be nice if the problem was addressed before it becomes a major issue but since those who could act are being paid not to, then everyone else has a lot of pain in their future that will continue until people again believe that putting all your eggs in one basket is a bad move."
Unless you can only afford ONE basket. Then having all your eggs in there is preferable to trying to carry them in your arms.
@ "Unless you can only afford ONE basket. Then having all your eggs in there is preferable to trying to carrying them in your arms."
Ah, the old "change costs too much", ignoring that it is already costing too much and that without change the cost will only ever increase. So it is your belief that ignoring problems never makes them worse?
So it is your opinion that we wait until the wheels fall off and then stand around saying "who'd thunkit" instead? Good plan, genius.
This post has been deleted by its author
If you want real security you have to physically separate different classes of users and not run them on the same chip/computer/memory system etc.
Which is why these are of particular concern to cloud hosting providers. An important requirement for cloud systems is that a program running in one user's VM shouldn't be able to observe what's going on in another user's VM.
Just imagine that inside your box there were 2 devices: one which you trusted and which only ran code that you'd actually installed yourself, and another that ran everything else (e.g. Javascript) on "untrusted" hardware. Then you'd need to have a way of communicating safely between the two, with some user interface devices, the ability to send data from one to the other, and a fast bus/network between them. And some machines would have all the oomph in the trusted box and others in the untrusted box for games and stuff, and there'd be a hardware video multiplexer doing its clever stuff, like an updated version of what we had back in the days of video overlay cards on VGA, so that you don't need to try shoving 120fps video down the network pipe.
Then high spec machines would include extra separate modules hanging off the bus/network so that eg. game engines didn't interfere with google docs.
And there'd be some kind of manager thingy on the main computer to let you interact with the different untrusted-compute devices while maintaining isolation. Actually, maybe the display/HID ought to be a separate device, maybe with a really simple RO filesystem, and everything would work via that main UI box too. ...
Oh. Prior art: my UI box has just basically become an X server, hasn't it?
So they delivered the speed (which users can measure easily) and hoped that
a) no one would notice they'd relaxed the boundaries between running processes, and
b) no one would find a way to exploit the relaxed separation.
IOW the illusion of security without actual security.
I wonder how many process crashes over the years could also be traced to miswritten code influencing another process and crashing that instead? No way to know I guess.
More likely they just didn't think that there would be a security problem with speculative execution. After all, it's not exactly an obvious flaw, it took years for anyone to notice it in the first place, and it's taken almost a year for this fresh crop to be discovered, even when they knew where to look.
Always assume incompetence rather than malice and all that.
I wonder how many process crashes over the years could also be traced to miswritten code influencing another process and crashing that instead?
From transient-execution side channels? None, barring major CPU bugs that have mysteriously gone unreported.
I think you do not understand how Spectre-class attacks work.
>I'm up for that. Maybe I'll start work again on my 6502 assembler that I haven't touched in years then, seems people might need it.
You might want to pop over to 6502.org where this processor is discussed in detail, more than 40 years after it was designed. I am surprised the Z-80 didn't have the same longevity, given the enormous success of CP/M.
>Clue - they'll never actually be bug-free. It's not a perfect world, but it's the one we live in.
I am sure most readers here know that. The issue is not bugs as such but known bugs that can be exploited. Another bug that will not come to light for another 17 years is not what I am concerned about. An important part of security is to keep the time horizon in mind.
Itanium's VLIW architecture was safe in that the optimizations were mostly done at compile time, so things like speculative execution aren't done on the fly nearly as much. That approach has its own problems, though, like requiring a recompile for every new chip iteration.
The good news: You are one small target in a billion.
The bad news: If they want to access your system they will. If not by hacking then by simply getting a job at your company and gaining access.
Sort of good news: If you take proper precautions — regular off-site backups, complex passwords that you change regularly and don't reuse — it is less likely that the lazy will make the effort to hack you, and if you are hacked you can restore. And don't forget to have those financial company phone numbers handy in case you need to report fraud.
I thought the new conventional wisdom was NOT to change passwords so as to allow time for people to actually be able to memorize them and not have to rely on vulnerable mnemonics and sticky notes. Besides, anyone who managed to hack an account would only use it as a beachhead to set up a more-permanent independent access.
All of these attacks can be prevented by the addition of suitable caches, which are flushed at instruction retirement. Unfortunately, the caches are the bulk of the area of the chips. Furthermore, the wires and logic to drive them are also large.
But I'm seriously thinking about figuring out how to turn Spectre mitigations off when I'm gaming. Unplug the network, turn off Spectre mitigation, and see how much speed I gain. Stuff is getting unplayable.
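On Linux, at least, recent kernels (roughly 5.2 onwards) expose a single switch for this. A sketch — the exact set of mitigations covered varies by kernel version and architecture, so check your distro's docs before relying on it:

```shell
# See which transient-execution flaws the kernel thinks this CPU has,
# and which mitigation (if any) is currently active:
grep . /sys/devices/system/cpu/vulnerabilities/*

# To trade safety for speed, boot with mitigations disabled by adding
# this to the kernel command line (e.g. GRUB_CMDLINE_LINUX in GRUB):
#   mitigations=off
```

Note it's a boot-time parameter, not a runtime toggle — so "unplug, disable, game, re-enable" means a reboot each way.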