Virtualisation has always bothered me. This is perhaps an odd statement to make; after all, I am personally responsible for virtualising thousands of servers. But the truth of it lies in the special status the IT community has ascribed to hypervisors. When we nerds talk about virtualisation, especially in relation to servers …
A box without users
> Yet by and large, we tend to neglect the hypervisor, trusting it to just work.
That's not an unreasonable assumption, since hypervisors don't have idiot users surfing to pr0n sites on them, reading their bug-infested email, or trying to plug in some dubious thumb-drive/peripheral/phone.
When you rid your IT of all of those points of weakness, it's surprising how little effort is needed to keep a box secure, bug-free and reliable.
And where do the signatures come from? Can I make them up so I can run my own code on the hardware I own?
Or is this another attempt at TPM ?
@Tom Chiverton 1
Trusted Compute Pools use the TPM. All the noise around trusted compute pools talks - of course - about support from the big vendors. At this stage I suspect that this is an attempt at shipping out-of-box "trusted" hypervisors.
In theory, there is a process to "register" a hypervisor - such as an update - with the TPM, thus allowing the signatures for the update to be validated. That isn't going to happen automatically; think more along the lines of having to access a vPro-like interface on the hardware and manually register the signature of the new hypervisor.
Also: this tech should be usable for more than hypervisors. Want to make sure your Windows/Linux/Solaris server has not been tampered with? This should do it. Assuming those OSes produce signed modules of the type necessary for the TPM to play with...
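For readers unfamiliar with how the TPM "plays with" these measurements: the core mechanism is the Platform Configuration Register (PCR) extend operation, where each boot stage hashes the next one into a register before handing over control. Below is a toy sketch of that chaining in Python; the stage names are made up for illustration, and real TPMs do this in hardware (TPM 1.2 uses SHA-1, TPM 2.0 banks commonly use SHA-256).

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new PCR = SHA-1(old PCR || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

# Boot chain: each stage measures (hashes) the next before running it.
# PCRs start zeroed at power-on.
pcr = b"\x00" * 20
for stage in [b"firmware image", b"bootloader image", b"hypervisor image"]:
    pcr = pcr_extend(pcr, hashlib.sha1(stage).digest())

# A verifier compares the final PCR against a known-good "golden" value.
# Swap in a tampered hypervisor and the chained result diverges completely:
tampered = b"\x00" * 20
for stage in [b"firmware image", b"bootloader image", b"evil hypervisor"]:
    tampered = pcr_extend(tampered, hashlib.sha1(stage).digest())

print("clean boot PCR:", pcr.hex())
print("tampered PCR:  ", tampered.hex())
```

The point of chaining (rather than hashing each stage independently) is that there is no way to "un-extend" a PCR: once a bad measurement is in the chain, every subsequent value is wrong, so a compromised stage can't forge a clean-looking final state.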
I realise this doesn’t give you absolute and total control over every aspect of your hardware in the world’s most simple fashion. But if it were easy to do, then it wouldn’t be very secure! The ability to register this remotely through a Trojan or botnet would entirely remove any semblance of security the thing was designed to provide.
Might be all for naught, if the processor manufacturer forgets...
... to turn the internal debugger off:
The Register: 'Super-secret' debugger discovered in AMD CPUs
http://www.theregister.co.uk/2010/11/15/amd_secret_debugger/
Note that the above article references a feature specific to certain AMD processors; I have not heard as to whether Intel CPUs contain the same feature.
However, if someone DOES manage to stumble upon a hidden - and still functional - debug mode in an Intel processor that implements the aforementioned "trusted compute pools," then the security protections the hypervisor trust model brings to the table may not amount to much if an attacker can manipulate the CPU at an even lower level...
...someone manages to figure out a remote attack that registers new OS module signatures with the TPM.
...someone manages to disable the trusted compute pools functionality altogether and then insert a borked hypervisor.
...someone manages to load malware into [insert some add-in card with its own boot ROM that isn't part of the TCP scheme].
...someone manages to get hold of "measured in acres" computing power to compute an alternate byte sequence that hashes to the same signature but lets you run malicious code instead of the "proper" code.
It's a fun game! But it's still better than running the hypervisor with no checks or balances and just sort of praying. Traditional security measures (lock the damned door, keep the management interfaces on segregated networks and OFF THE INTERNET) are still absolutely required.
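To put a rough number on the "measured in acres" attack: finding a second input that matches an n-bit hash takes about 2^n attempts, so each extra bit doubles the work. The toy below brute-forces a second preimage against SHA-256 truncated to just 16 bits, which takes tens of thousands of tries; extrapolating to the full 256 bits is ~2^256 tries, which no acreage of computing power reaches. The payload strings are made up for illustration.

```python
import hashlib
from itertools import count

GOOD = b"proper hypervisor code"
TRUNC = 2  # match only the first 2 bytes (16 bits) of the digest

target = hashlib.sha256(GOOD).digest()[:TRUNC]

# Brute-force a *different* input whose truncated hash matches.
# Expected work is about 2^(8*TRUNC) = 65,536 attempts.
for i in count():
    candidate = b"malicious payload #%d" % i
    if hashlib.sha256(candidate).digest()[:TRUNC] == target:
        break

print("second preimage found after", i + 1, "tries")
```

Bump `TRUNC` to 3 or 4 and watch the runtime explode; that scaling, not any cleverness in the verification code, is what the signature-hash check rests on.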
cpu bugs too
"As I said before, hiding in this list are 20-30 bugs that cannot be worked around by operating systems, and will be potentially exploitable. I would bet a lot of money that at least 2-3 of them
For instance, AI90 is exploitable on some operating systems (but not OpenBSD running default binaries).
At this time, I cannot recommend purchase of any machines based on the Intel Core 2 until these issues are dealt with (which I suspect will take more than a year). Intel must become more transparent.
(While here, I would like to say that AMD is becoming less helpful day by day towards open source operating systems too, perhaps because their serious errata lists are growing rapidly too)."
The solution is pretty simple IMO - if you're that paranoid about security then you should make sure you have physically separate infrastructure for your "secure" systems vs your "insecure" systems. Security will never be perfect; isolation is your friend, so reduce your avenues of attack.
RE: Or if...
First off, good article, Mr Potts! Whilst locked-down hypervisors sound like an added tool, we need to make sure we have as many security tools as possible and use them in a joined-up fashion.
".....keep the management interfaces on segregated networks...." The single biggest hole I see just about everywhere I go is that those management interfaces are on Windows boxes, and those boxes have Internet access for Windows updates, and/or are used for downloading patches for other systems, and/or are accessible by servers and workstations on the corporate LAN. All of which means nasties inadvertently downloaded onto those management servers/stations can end up with access to the production servers. Using Linux for management devices is a slightly better option but often not a supported option by many of the vendors that make management software. Management software vendors need to address this. They also need to get better at hiding their management tools - we often see scans of our Internet-facing machines looking for particular ports associated with certain management products.
Our solution is to try and keep as large an air-gap as possible between the management servers/stations and the corporate LAN; keep the bridging connections down to a few, MAC-checked connections; and do not allow direct connections to the Internet. Even with all those measures, we still had a virus incident on a few of our production servers this year.
Security v. Freedom
I would like to add that in the age-old battle of security vs freedom, this is another step in the wrong direction. Yes, we all fight daily to make and maintain secure systems, but at what point did we give up our freedom of choice? You speak of RAID and network hardware BIOSes being verified by Intel. But what if I want to use non-Intel-approved hardware, like ATI/AMD or others which cannot get approval (for political reasons)?
What if the verification for a specific piece or class of hardware becomes compromised? Who updates the verification microcode? This system allows the controller of the microcode in your hardware to plan your system's obsolescence, and that controller is not *you*.
This feels like the next step of TPM, with only the approved software and hardware being allowed to run. Will my port of Xen receive Intel's approval? How much work will be involved? How much will it cost me, the end consumer, to get this "feature" I have no control over?
Come on guys, we're smarter than this, aren't we?
"But the truth of it lies in the special status the IT community has ascribed to hypervisors."
That sentence is missing the word "gullible". Work out where it fits.
This sounds terrible. Yes, it potentially locks me in to Intel-approved devices with Intel-approved firmware releases. How sure are you Intel will be approving new releases of rival products as quickly as their own - or at all? Maybe they start competing in a new field and use that as a little extra edge over the established providers, or favouring one partner over another. With ATI being part of AMD now, maybe nVidia will have an easier time getting Tesla approved?
For urgent patches, this will add a new delay: your new Broadcom firmware to fix a lockup from malformed packets giving an easy DoS attack? Sorry, have to wait for Intel to approve it now - and they're busy right now checking the same issue with Intel's own card...
Not to mention a new malware target to attack: trash the built-in key, suddenly your shiny new five figure 48 core virtualisation super-platform is a paperweight, unless/until you can get it rekeyed to boot properly!
A lousy idea in every possible way. To be secure, know what you're running, and be careful about it. There is no technical fix for user stupidity - and of course no guarantee at all that the "secure" genuine Intel code is actually secure, whether signed or not.
I think perhaps not all architectures are the same. For example, do POWER-architecture machines such as the p-series, hosting (say) Linux and AIX partitions, have rather more secure hypervisors?
Hmmm.... The EAL-4 hype that IBM likes to parade out whenever discussing LPARs is common to just about every OS you are likely to use (right back to Windows 2000!). You really need to ensure you are using good security practices (not just the vendor's recommended best practices, they are often slow to update them) rather than relying on the vendor's out-of-the-box security ratings.
One of the big discussions over hypervisor security is "do I want a full-fat OS layer as my hypervisor?" Essentially, all hypervisors are software acting as an OS, either a full-featured one (type 2 hypervisors, like the hp-ux OS host for the Integrity Virtual Machines software package) or a cut-down one where the software is bundled into the same package (type 1s, think the Windows Server 2008 base of HyperV, or the Linux that is in VMware's ESXi, or what underlies IBM's z/VM or LPARs). Even so-called real hardware partitioning relies on software, in hp's nPar case run on a management processor board, essentially a mini computer built inside the server.

The devil is in the detail: you will usually have network-based access to the virtualisation layer to allow remote administration, and this is the security hole. Crack the admin login to the management console and you can disrupt the VMs at will, or possibly introduce viruses into the images used to build VMs with some virtualisation products.
The pros for a cut-down OS are that it is much smaller and uses less system overhead, leaving more resources for the virtual machines. As there is less software involved in a one-task hypervisor, there is less of an attack surface presented to the network. If it is a separate mini computer then it actually doesn't use any of the main system's resources (hp's old hardware partitioning on the Integrity rack servers left 100% of the main server available). But, if you have a cut-down OS, how do you enforce network security? You have to take the vendor's word that they have locked it down. For example, I can't buy Symantec anti-virus for the cut-down OS in ESXi or HyperV, only for the full-fat OS VMs sitting on top. With older versions of VMware you actually had a Linux command line you could log into to poke around, but this has been removed in the latest version for security. That's great, as long as you didn't like being able to go into the underlying Linux.
With a full-fat OS layer as a hypervisor, you have to give up more system resources to the virtualising layer, but you have the same OS as is used for general tasks and so you can apply the same security policies and lock it down in a flexible manner, tailored as you require. If a new threat becomes apparent then you have full control of the virtualising layer to make configuration changes, whereas with a cut-down hypervisor you have to wait for the vendor to introduce a patch.
The main problem I see with type 1 hypervisors is that you don't have the ability to go in and check the security (a worrying thought given that HyperV is essentially half-fat Windows Server). If Intel (and hopefully AMD) do go for more checking of the hypervisor layer then that can only be a good thing.