Not just websites hit by OpenSSL's Heartbleed – PCs, phones and more under threat

While most of the buzz surrounding OpenSSL's Heartbleed vulnerability has focussed on websites and other servers, the SANS Institute reminds us that software running on PCs, tablets and more is just as potentially vulnerable. Institute analyst Jake Williams said the data-leaking bug “is much scarier” than the gotofail in Apple's …

COMMENTS

This topic is closed for new posts.
Bronze badge

Cardiac Arrest

For the heartburn I suggest a couple of Quick-Eze.

1
0
Anonymous Coward

Re: Cardiac Arrest

Presumably my Windows Phone is immune from this on the phone end at least!

0
0
Bronze badge

Never understood why malloc (and the rest of its extended family) didn't nuke the memory area by default before passing it to whoever asked for it.

0
0

Debug malloc

For most apps it's bound to be a useless waste of time, but I wonder how many security bugs in OpenSSL itself may have been prevented by always using a secure malloc.

0
0
Silver badge

@Neoc - Nuke the memory

That was my first thought, but then I read the detailed code analysis linked to in the article. The 64K sent back is copied from the attacker's payload. As the attacker's payload is only one byte, the rest comes from whatever is in process memory after the received payload.

1
0

malloc() on your platform might, but openSSL have their own wrappers around it, and cache the allocations before calling down to the O/S, so it neatly sidesteps that protection.

1
0
Bronze badge

re: Never understood why malloc ... didn't nuke the memory area by default

You're forgetting that it is only in comparatively recent times that CPU cycles have been plentiful on some systems. Zeroing memory was a nice-to-have but impractical outside the test lab when workstation CPUs typically ran at sub-12.5MHz...

0
0
Gold badge

why malloc doesn't nuke

It's because it serves no purpose to do so.

An OS will certainly zero pages before giving them to you because those pages could have come from almost any previous process and the security implications of that have been known since the 60s. However, all sane runtime libraries ask for big blocks from the OS and then implement their own sub-allocation scheme on top. Doing it in-process is a big performance win (because you don't have to cross privilege boundaries) and omitting to zero the sub-allocated memory in your own address space is not a problem because it was already visible to any thread in your address space. It's not a problem until you then squirt the dirty memory out of a socket.

Yes, it could have been avoided by using calloc() rather than malloc() everywhere, but it could also have been avoided by sanitising your inputs before responding to them. The former would pointlessly double the number of writes to memory. The latter is simply "correct". My vote goes for the latter.

Note also that debug versions of malloc nearly always do pre-fill the memory (and the matching version of free post-fills with a different pattern) but this is *because* it is pointless to do so. Or rather, because it bloody well ought to be pointless and therefore doing it is a simple way of flushing out a certain class of bug.

6
0
Silver badge

Re: Debug malloc

Actually OpenSSL comes with its own malloc, that's why you always get its data. And why, you ask, did OpenSSL use its own malloc? Because they thought the OpenBSD one was too slow (it's slow because it tries to be secure).

1
0

In my view seeing a naked memcpy call at all in supposedly secure code is like walking into a restaurant kitchen and seeing a big pile of rotting carrion on the floor. The staff may know not to handle it before dipping their fingers in the gravy, but it's a clear danger that you don't want to have around. It may cost to clear it up, but that's what you have to do.

memcpy is a big red flashing warning light that says "make damn sure you've checked and sanitised every bit of data that goes in and out of here" (not only memcpy, of course, but quite a few other C functions). In fact, I'd suspect simply looking for all the memcpy et al. calls is a pretty good way of finding vulnerabilities. The best approach is to wrap them up pretty tightly. Even that's not 100% secure, but it does make a difference and in security code it's 100% worth doing.

1
0
Silver badge

Windows XP

Interesting timing.

Are Windows XP clients vulnerable or did Microsoft fix it in the final set of patches? If not, maybe they should consider one final final patch.

2
2

Re: Windows XP

What's the point of having a drop-dead end-of-life support date if you keep trying to resuscitate it with another 'one last patch'?

3
0

Re: Windows XP

OpenSSL is not tied to Windows XP. It's a 3rd party component and I'd be surprised if anything shipped with Windows XP relied on the Win32 CryptoAPI.

0
0
Silver badge

@CadentOrange - Re: Windows XP

Fair enough - no problem then.

0
0

Re: Windows XP

It should read "...relied on it (OpenSSL) instead of the Win32 CryptoAPI". Brain fart.

3
0
Anonymous Coward

The issue is not that this leaks the site's certificate. I can download the certificate of any SSL website I like, using any browser.

The issue is that it leaks the SSL private key that's the other half of that certificate / key pair.

0
0

Few clients are vulnerable

IE, obviously, isn't vulnerable.

Firefox and Chromium use NSS, so aren't vulnerable.

Opera has OpenSSL statically linked in. The copyright string says "1998-2011" and the vulnerability appeared in OpenSSL in early 2012, so again it should be safe.

Android: most versions have heartbeat disabled, except for v4.1.1 (and possibly 4.1(.0)). Earlier versions use an earlier, non-vulnerable version of OpenSSL:

http://googleonlinesecurity.blogspot.co.uk/2014/04/google-services-updated-to-address.html

There's a client tester and a list of some vulnerable clients at

https://github.com/Lekensteyn/pacemaker

OpenVPN is vulnerable, however

https://community.openvpn.net/openvpn/wiki/heartbleed

4
1
Bronze badge

Re: Few clients are vulnerable

>IE, obviously, isn't vulnerable

Just because it doesn't use OpenSSL libraries doesn't mean it isn't vulnerable to this attack vector.

Not knocking MS, but it is worth noting that without testing we don't know if third-party (ie. non-OpenSSL) SSL implementations are vulnerable to this attack. Otherwise good comment, so upvoted.

1
0
Silver badge

Even worse than I thought

This is surely the worst security hole I have ever seen. However, I tested all my net facing servers and they were all OK. I just tested the copy of CURL I use all the time. Complete breakdown. It coughs up the private keys.

Now it occurs to me that a black hat might very well have exploited the vulnerability and then hacked into the system and patched it so that it is no longer vulnerable. It is a time honored practice with malware to gain control and then make the target invulnerable to attack by anyone but them.

I have seen this coming for a long time so I don't have anything particularly valuable to steal and I have long been prepared for the day when I had to change passwords and keys everywhere and finally lock things down properly.

Despite the fact that I was expecting this I am surprised at its scale and pissed at the work it is going to take to clean it up.

After the cleanup? Well, not everybody will clean up so attackers still have a good chance to hop from a compromised system to one that is not (yet) compromised.

Assuming you find your way around the above, what you are left with is the same shaky structure you had before -- a colander with a single hole plugged.

We need to have a much better understanding of these things in the technical community.

We should have long since demanded an end to IPv4. IPv6 is such a crappy alternative that it is understandable people have dragged their feet, but as lousy as it is it is entirely preferable to IPv4.

Security experts should have better explained the fact that even though a 128 bit key is invulnerable in theory, it is entirely vulnerable in practice and longer keys are better.

Public Key Cryptography is fine in principle and I would trust it if I could somehow verify the implementation was an honest one. The current case is an example of it falling down. It is not the first and will not be the last. The implementations are messy and poor. The PRNGs used to generate keys have time and again proven faulty.

The network is so fragile with respect to security that nobody in the know with something serious to protect will connect it.

Is our hardware compromised? Probably.

Can a state agency like the NSA mount side-channel attacks successfully? There can hardly be any doubt.

We cannot be sure of protection against the Military Industrial Complex. They control the factories that make our equipment, the infrastructure, law enforcement and the administration of 'justice'.

We *can* be sure against most attacks otherwise, but it requires much more than we have put in place. Non-technical people would have to take a lot of courses to fully understand the issues, but software developers should be able to understand this with a little digging *and* they should know about this anyway as a matter of course.

Given the truly horrible state of security, you have to wonder. Is this really that mysterious to everybody?

2
6
Bronze badge

Re: Even worse than I thought

The OpenSSL vulnerability/exploit shows why you don't use the same physical hardware to handle the encryption/decrypt of data streams with different levels of security.

Key length is irrelevant to this, if the key is in memory then it is possible to grab it.

0
0
Silver badge

Re: Even worse than I thought

Re: "Key length is irrelevant to this, if the key is in memory then it is possible to grab it."

If I understand how the vulnerability works a 1,048,576 byte long key would be very difficult to obtain with this exploit. One of the RSA inventors spoke about using objects on the order of a terabyte at one point for precisely the reason that a large object hobbles certain types of attacks:

"I want the secret of the Coca-Cola company not to be kept in a tiny file of 1KB, which can be exfiltrated easily by an APT," Shamir said. "I want that file to be 1TB, which can not be exfiltrated."

Your statement illustrates why our security is so tragically broken. For end-to-end security to be sound *both* this type of security hole *and* the keys have to be sound. Trivial one byte keys present a barrier low enough that this exploit is not needed. Non-trivial multi-megabyte keys raise the key barrier high enough so it is not a profitable point of attack.

One high barrier does not make the whole thing secure. But one low barrier can make the whole thing insecure.

To be secure, all the barriers have to be strong enough to render attack unfeasible. With IPv4 in place, you can scan entire sub-nets by brute force looking for a vulnerable IP address. With IPv6 that is significantly more difficult, and if configured correctly, effectively impossible.

Security depends upon a lot of different things, nearly all of which are in a poor state of repair in our systems. Key lengths are but one of those many things.

There is no sensible reason to build our systems with key lengths constantly on the edge of vulnerability. Any older backup of some dire secret that is hiding behind DES is trivial to hack.

I might be wrong, in which case, using a longer key length has no impact on security. On the other hand, you might be wrong in which case a shorter key length needlessly renders the system insecure. Which of those bits of advice gives better assurance of security?

At nearly every design point on the current network, it has fundamental security issues. This was a whopper of a security hole and we may not see its like again, but this is not the last breach we will see.

0
0
Bronze badge

Re: Even worse than I thought @btrower

Whilst you make some good points, there is a balance to be struck between security and utility. Currently we know that 128~256 bit key encryption is very secure and reasonably performant, i.e. you can use it for SSL etc. (yes, I know it wasn't that long ago that computing power made shorter keys secure, hence it is probably only a matter of time before longer keys are talked about). The problem, as you indicate, is the security of the keys themselves.

What this exploit reveals is that whilst care has been taken with respect to the encryption of communications, little care has been taken over the handling of the keys themselves. In some respects the OpenSSL vulnerability reminds me of a security office where normally only security personnel enter, but the door isn't locked and the keys are just left on the desk.

0
0

Re: Even worse than I thought

http://www.eviloverlord.com/lists/overlord.html - (c) 1997 - offers a similar rule for prudent evil masterminds in those days: "99. Any data file of crucial importance will be padded to 1.45Mb in size." And therefore couldn't be copied by enemies on a 3½ inch floppy disk, remember those? 1.44 MB capacity.

Having said that, do private and public keys have to be of similar size? It could take a very long time to log in then.

0
0

Not all clients are browsers

There are a whole bunch of client applications out there that aren't web browsers. So the browser you're using might not be vulnerable, but the mail client, IM client, game with internet connectivity etc might well be exploitable. And unless you're prepared and able to check that every one has no OpenSSL dependency (or if it has, that it's been fixed), knowing that you're vulnerable is actually quite hard.

Still, can we at least declare this the end of the nonsensical "many eyes make all bugs shallow" meme that FOSS advocates have been touting for years?

4
1

"the real benefits of stretching what is possible from an engineering standpoint have already been realized"

From a business standpoint they need to up the payload to carry a bunch of cameras and a couple of Hellfire missiles - then the money'll roll in!

0
0

Despite all the publicity, my bank (Clydesdale - Yorkshire will also be affected) is still vulnerable!

http://filippo.io/Heartbleed/#home2.cbonline.co.uk

Ouch!

0
0

Who Still Uses Malloc?

I stopped using it when I found the calloc() (clear and allocate) library function back in the early MS-DOS days. For those who don't know - calloc() clears the memory by writing "0" to each byte as it is being allocated.

It's just plain "Open Source" laziness - IMHO ;-) to keep using malloc() when calloc() will ensure that no latent data is passed from the heap to the calling function.

Who cares if the buffer is too big if all that is in the buffer is a long (64k) string of "0"?

Free advice - worth every penny you didn't pay for it.

0
3

Re: Who Still Uses Malloc?

People still use malloc because it's faster. Especially in embedded systems (where OpenSSL is also used quite frequently) this can make a difference. Besides this: many libraries don't use malloc for every allocation, they keep a memory pool available. One would have to call memset every time to clear that data, which is unnecessary in any well-written library or application.

1
0
Silver badge
FAIL

Re: Who Still Uses Malloc?

Any sane OS (basically all multiuser systems) already zeroes freshly malloced memory, otherwise it would be a trivial method of extracting memory the user wouldn't normally be privileged to see.

This bug is nothing to do with malloc - it's a basic overflow - the data returned is bigger than the allocated size, thus returning other parts of the process's memory/variables.

So even using calloc throughout would have made no difference here.

Please check before posting that you are secure on that high-horse of yours! :-)

1
0
Bronze badge

Re: Who Still Uses Malloc?

But is there a point in using calloc()?

The recipient still has to return the data to the sender and as far as the recipient is aware it's the stuff that's in the memory at the time. So whatever happens you are going to clear a buffer somewhere and then still return the wrong data because the recipient doesn't know, other than by the length byte, how long the data to be passed actually is.

The problem is the protocol, which suggests that something can be defined by the sender - a hangover from the earlier days of the friendlier internet (remember when we could ping a domain to find all the email addresses, before that was abused in the late eighties?).

The whole thing needs to be examined from top to bottom, re-specified and re-coded.

2
0
Silver badge

Re: Who Still Uses Malloc?

This bug is nothing to do with malloc - it's a basic overflow - the data returned is bigger than the allocated size, thus returning other parts of the process's memory/variables.

So even using calloc throughout would have made no difference here.

It's not a "basic overflow": there are no memory bounds being violated in this bug, which is why the automated code-checking systems, good as they are, didn't pick it up.

The bug is that the memory allocation code allocates one size of block, which being uninitialised contains whatever was in that memory space before (hence the problem), but overwrites this block with a different number of bytes. In this case a 64k chunk of memory is allocated, one byte of it is overwritten with the return data, and all 64k of it is returned.

3
0
Silver badge
Facepalm

The real bug

is having two length fields in the heartbeat packet that inevitably got out of sync! The potential for bugs like this is one reason why you should not duplicate information needlessly. The spec was bad.

2
0

Re: The real bug

No, the real bug is having a software development system that allows someone with insufficient experience to add code to a system that needs to be secure - and then not having a sufficiently robust review process in place - and then installing that software in a critical situation on huge numbers of servers around the world.

This bug is the sort of mistake beginners make (I believe the culprit was still at uni). I'd be embarrassed if I put a bug like that into a one-off throw-away lash-up. But somehow it got into openSSL which everyone regarded as secure.

It's a bit like the debt-laundering that took place before the financial crash. Everyone thought the debt was solid, but simply because no-one bothered to look at the fundamentals. I think this incident has shown FOSS security to be based on similar principles.

0
0
Silver badge

Re: The real bug

>This bug is the sort of mistake beginners make

And experienced people as well, occasionally! I have seen bugs of similar stupidity level made by long-timers (me included). Sometimes in code that has been in use for years.

There is no room for any holier-than-thou attitudes in programming. Anyone can goof up, therefore processes must be in place to catch and limit the damage.

What I want to emphasize is this starts at making sane specs that avoid unnecessary complexity (like the redundant length field).

4
0
Silver badge

Re: The real bug

@MacroRodent:

Correct. Upvote for you.

I have been programming for decades. I could easily mess up a bit of code tomorrow. In fact, I think I shall.

1
0

This post has been deleted by a moderator

