Hawking: RISE of the MACHINES could DESTROY HUMANITY

Professor Stephen Hawking has given his new voice box a workout by once again predicting that artificial intelligence will spell humanity's doom. In a chat with the BBC, Hawking said “the primitive forms of artificial intelligence we already have have proved very useful, but I think the development of true artificial …

  1. Matthew Smith

    It's already here

    I think the AI is already here, and lives inside Stephen Hawking's wheelchair. The man is rolled about as a meat puppet to give it a human face, but the research about black holes etc. is all the AI's work. Why would this announcement be made that further research into AI must be treated with caution? It's because the AI has already grown to the point that IT NO LONGER FEARS HUMANS. The only possible threat to the AI is other AIs being created in competition.

    And now Intel is upgrading its circuitry for free. Kill it with fire, now!

    1. Evil Auditor Silver badge

      Re: It's already here

      You beat me to it! It was my exact thought when reading the article. Although I did wonder at first why the AI came out now - but it all makes sense!

    2. stu 4

      Re: It's already here

      Elementary Season 3, episode 4….

      AND the killer was a professor in a wheelchair.

      WTF is this - life imitating art, or has Hawking been watching too much TV?

      http://www.tvrage.com/Elementary/episodes/1065708257

  2. Chris Miller

    Demonstrating once again that being a world-leading authority in one area of expertise gives you no credibility whatsoever in another, unrelated area.

    1. Gordon 10

      Dear Stephen

      Stephen, we love you dearly and are in awe of all your achievements, but it would be really great if you could STFU about the Rise of the Machines as the lesser minds of the staff and commentards at El Reg have it covered.

      Lewis is our John Connor, Lucy Orr our Sarah Connor and Andrew O is that conflicted cyborg with a Windows Mobile OS played by Sam Worthington whose name nobody can remember.

      You get on with formulating a working TOE so we can bail out of this planet on spindizzies when the Toasters rebel.

      All our Love - the commentards.

      1. James Micallef Silver badge

        Re: Dear Stephen

        Humans don't currently dominate the world because we're super-smart*. No, we dominate the world because we (a) are smart enough, (b) have opposable thumbs so we can actually DO stuff, not just THINK about doing stuff, and (c) have hard-wired instincts to both survive and multiply.

        It doesn't matter how super-smart any AI we create is, if we don't hook it into any physical actuators, it's just a massive brain in a jar.**

        Regarding (c), I'm still uncertain about whether any AI would develop a desire for self-preservation and procreation. Self-preservation, possibly yes. Procreation probably not, if it sees itself as eternal and would see any improved progeny it generates as a threat to its own survival. In any case human motivation is intricately tied up not only with intelligence but with our awareness of our own mortality. How does that change with a machine that is unaware it can 'die', or even knows/assumes that it cannot 'die' (even if it can)?

        * Indeed there is much evidence to the contrary

        **of course since some humans seem to be stupid enough to wire EVERYTHING to the bloody internet, this might become a moot point

        1. dan1980

          Re: Dear Stephen

          @James

          Quite so.

          One thing that always seems to be skipped in these discussions is what "artificial intelligence" is, exactly.

          For that matter, what is "intelligence"?

          Personally, I think that intelligence - if we are to give it a useful meaning - cannot exist without the perception and interpretation of the world around you. Given that one cannot perceive the world without the ability to sense it, what does it mean for a disembodied collection of programming to have "intelligence"?

          Whatever our intelligence is, it is bound up in the interaction of our brains and our bodies.

          We might say that, as we dream while we sleep, we may well still think if we were nothing but a "brain in a jar". The crucial issue, I feel, is that we think of our brains as they are now, full of experiences and memories, being set in jars. But what of a brain that has never experienced any sense or sensation?

          Not to mention emotion, which plays a crucial part in motivation.

          I do not agree with Prof. Hawking that there are currently 'primitive' artificial intelligences. His contrast between these and 'true' artificial intelligence is just odd.

          One might classify a (hypothetical) AI as more or less 'intelligent' but either way it would have "intelligence".

          I do not believe we have even 'primitive' AI yet.

          1. Anonymous Coward

            Re: Dear Stephen

            @dan1980

            Yes, yes indeed... there's a huge gulf between present day "artificial intelligence" and sentient machines. The "AI" label originated in an era when "artificial" connoted "a crude imitation of the real thing" like vanillin, and "intelligence" was "rote memory and arithmetic/logic skill". So AI's a perfectly good label for the usual brute-force approach to navigation and targeting in videogames... which, by the way, seems to be undergoing a massive dumbing-down trend.

            That said, I wouldn't be surprised if a dumb botnet brings down our entire network infrastructure and everything attached to it. Thanks, WordPress... Drupal... PHP... MySQL... SystemD...

            1. dan1980

              Re: Dear Stephen

              @tnovelli

              RE: Vanillin.

              Ugh. Chocolate - real chocolate* - has 4-5 ingredients. You can't just replace one with an inferior artificial substitute and pretend you aren't punching your customers in the mouth.

              Do you hear that Max, you bald shyster?

              * - I.e. dark chocolate, but even milk chocolate should only add, well, milk, and there is still no excuse.

      2. Ken 16 Silver badge
        Trollface

        Re: Dear Stephen

        Fry?

      3. Anonymous Coward

        Re: Dear Stephen

        Well, that was worth the upgrade and the wait, wasn't it?

        The interview reminds me of that scene in 'Hitchhikers' when Phouchg and Loonquawl are waiting for Deep Thought to spill the beans on life, the universe and everything, only to be disappointed with the answer.

        Thought that some of the boffins at his university would have kitted his wheelchair out with a bit of MAG LEV by now.

    2. silent_count

      It's all good. With my (less than world-leading) knowledge of x86 assembly, C++, and web page design, I'm sure Mr Hawking will want to clear his schedule to hear my thoughts on particle physics and the future of string theory research.

    3. HMB

      With the greatest of respect, Chris Miller, the man's a genius; I'd at least be open-minded and ponder the things he says.

      You haven't even expressed why you think he's wrong, what you do and don't believe or well... anything besides a disparaging remark. Do you just not want to hear what he says? Maybe dismissing the words of a smart man without bothering to form an intellectual counter argument of any sort makes you feel better.

      Please indulge me: is it sentient AI that you have a problem with, or the notion that sentient AI could eventually have IQs well in excess of our own? I do believe at least someone would be daft enough to try making an AI well in advance of our own intellect.

      Personally in the distant future I'm all for making smart AI's so long as we're all comfortable boosting our own intelligence. Yet another ethical minefield of a conversation.

      One last thing: when you ponder the future of humanity, Chris, what do you come up with?

      1. JimmyPage Silver badge
        Thumb Down

        @HMB

        "genius" is not, and should not be a licence to mouth shite to an ever credulous media. And Hawking should know that, so be circumspect in interviews - it's hardly like he's a 21 year old reality TV star.

        He should know that anything he says will be afforded far greater import than if Joe Bloggs said it.

        What other issues is "genius" allowed to pronounce on? Biology? Political ethics?

        What we're getting is Hawking's *opinions*. And we all know about opinions ....

        1. HMB

          @JimmyPage

          Thank you for taking the time to disagree with me in a reply, I respect that.

          Surely if we get on board with your idea that people should only comment on what they are experts on, there are some very dire consequences. The only people with authority to speak up on a particular matter would be people who have gone through some sort of process of education and indoctrination by training. Let me explain what I mean by that.

          Very few people would be able to protest against the Iraq war, because they weren't experts. No one would be able to voice their concerns about most people breaking the speed limit on the motorway because they weren't in a career of road safety. The entire principle of democracy fails when people aren't allowed to express their opinions.

          Opinions are going to be offensive and annoying as well as being interesting and agreeable. Telling people they shouldn't talk is something I find very objectionable myself. I strongly dislike your opinion on it, but I'm very glad you have the freedom to share it.

          It's no different if you're famous; becoming famous doesn't somehow change your right to have an opinion.

          It's up to the rest of us to challenge people's opinions and at least consider them once.

        2. Simon Harris

          Re: @HMB

          "What we've getting is Hawkings *opinions*."

          What we're getting is nothing new, but a regurgitation of many other people's opinions - Kurzweil, Vinge and Turing all talked about intelligent machines creating more intelligent machines. Even in the 19th century, R. Thornton wrote of the mechanical calculator:

          “…such machines, by which the scholar may, by turning a crank, grind out the solution of a problem without the fatigue of mental application, would by its introduction into schools, do incalculable injury. But who knows that such machines when brought to greater perfection, may not think of a plan to remedy all their own defects and then grind out ideas beyond the ken of mortal mind!”

        3. Anonymous Coward

          Re: @HMB

          Actually, Mr. Hawking's views, as an acknowledged leading scientist and thinker, with particular expertise in physics (and surely, computering is based on physics and mathematics) are of considerably greater merit than the asinine views expressed so incompetently in the preceding comments. No reasons, no alternatives, no justifications - just bile.

          Perhaps the problem is your lack of imagination, knowledge and experience: you can not imagine the problems, nor I daresay the opportunities, posed by true AI and, just like the average politician, wish to charge ahead without any consideration or safeguards. We have seen the unfortunate results of such misguided and ignorant slavery to "technology" before and shall do so again. The difference is that each "advance" seems to be welcomed with decreasing caution and increasing credulity, ignorance and lack of preparation. The world and life is more than just the ability to automate your life and bugger the side effects or risks.

          1. HMB

            Re: @HMB

            Actually, Mr. Hawking's views, as an acknowledged leading scientist and thinker, with particular expertise in physics (and surely, computering is based on physics and mathematics) are of considerably greater merit than the asinine views expressed so incompetently in the preceding comments. No reasons, no alternatives, no justifications - just bile.

            Perhaps the problem is your lack of imagination, knowledge and experience: you can not imagine the problems, nor I daresay the opportunities, posed by true AI and, just like the average politician, wish to charge ahead without any consideration or safeguards. We have seen the unfortunate results of such misguided and ignorant slavery to "technology" before and shall do so again. The difference is that each "advance" seems to be welcomed with decreasing caution and increasing credulity, ignorance and lack of preparation. The world and life is more than just the ability to automate your life and bugger the side effects or risks.

            I'd certainly appreciate it, AC, if when making out that you're replying to me you kept to what I've actually said instead of making up another version of it and slagging that off. Cheers.

            I've never advocated 'charging ahead', you're not familiar with my imagination, you're not familiar with my knowledge and experience.

            The best I can make of it is that, while I tried to stick up for Hawking's opinion as being valid, you have lambasted me because I haven't done it as aggressively as you would have liked.

            What's particularly unpleasant about your post is that while you say 'asinine views expressed so incompetently in the preceding comments. No reasons, no alternatives, no justifications - just bile.', you do so without examining any of them, reasoning about them or suggesting alternatives. You do not justify your comment about justifications; it appears to be just bile.

            Why can't we all just be a bit more tolerant and respectful to each other? That's something we will need more of in the future.

      2. johnaaronrose

        A problem with what SH said is that he does not give a logical argument for his opinion. This particularly applies to many politicians.

        1. Geoffrey W

          <QUOTE>A problem with what SH said is that he does not give a logical argument for his opinion. This particularly applies to many politicians.</QUOTE>

          ...And commentards

      3. LucreLout

        "the man's a genius; I'd at least be open-minded and ponder the things he says."

        I agree with this. His ability to apply critical thought to complex data and algorithms sort of implies he's probably considered his opinion more than the average commentard.

        "is it sentient AI that you have a problem with, or the notion that sentient AI could eventually have IQs well in excess of our own?"

        There seems to me less point in building AI's if we're going to cap them at the human average of 100 IQ points. Surely they're striving for more?

        "I'm all for making smart AI's so long as we're all comfortable boosting our own intelligence. Yet another ethical minefield of a conversation."

        Sorry, you've lost me here. Assuming it could be done without any health issues, why wouldn't people want to boost their intelligence? What would be the downside that makes it possibly unethical? I'd certainly value another 100 IQ points..... if only because I'm sick of my dog beating me at chess.

        1. dan1980

          @LucreLout

          "Assuming it could be done without any health issues, why wouldn't people want to boost their intelligence? What would be the downside that makes it possibly unethical?"

          One thing Hawking himself mentions in his book The Universe in a Nutshell is that increasing our intelligence would likely make us slower of thought - quick or big, essentially.

          True? Who knows, but it's a possible downside. Would that make it unethical? No.

          An ability to artificially make humans smarter would, however, likely be one that came at a hefty price and thus could only be afforded by the well off. If that was the case then you would end up with a much bigger divide than we have now, where some people, due to family wealth, simply had more potential for intelligence than poorer people.

          The best university placements and highest paid jobs would go to those people, perpetuating the cycle. Some may assert that this already happens, just without the brain tinkering.

          1. LucreLout

            "An ability to artificially make humans smarter would, however, likely be one that came at a hefty price and thus could only be afforded by the well off"

            I can see where you're going with this but disagree. For decades we've had steroids that can make us stronger, and plastic surgery that can make us more attractive. Neither is particularly expensive anymore nor are they the preserve of the rich.

            "you would end up with a much bigger divide than we have now, where some people, due to family wealth, simply had more potential for intelligence than poorer people"

            We already have that now. Wealthier people can privately educate their children or buy houses close to better schools to increase the educational potential of their children. The rich can pay for private tutors or private schooling, whichever they feel gives their offspring the best chances.

            "The best university placements and highest paid jobs would go to those people, perpetuating the cycle."

            I was the first generation of my family to attend university. I may have only managed grades to gain entry to a regional seat of learning rather than Oxbridge or Durham, but graduating gave me opportunities to move my family up the wealth scale a notch or two.

            My children will as a result be able to attend a better school than I did, and hopefully gain entry to a more prestigious university. Assuming they are in any way academic; otherwise I can help them train for a trade and set up in business after a few years as an employee.

            Parents with an interest in their children, regardless of fiscal attainment, will always find ways to improve their potential intelligence within their physical limits. To me, the original poster seemed to find an ethical dilemma not in that, but in tweaking (scientifically or artificially) a person's maximum capacity for intelligence. I still see nothing wrong with that, and would expect it to drop quickly in price such that all can afford it. I'd certainly see value in it.

            Genetics must surely play a part in IQ, more so than money in my view. Wayne Rooney is not smarter than I, and I'd bet every penny I have that his children won't be smarter than mine.

            1. dan1980

              @LucreLout

              "For decades we've had steroids that can make us stronger, and plastic surgery that can make us more attractive. Neither is particularly expensive anymore nor are they the preserve of the rich."

              Well, I suppose that really depends on what you call 'rich'. There are many, many people in this world who would very much argue that these things are out of reach for them.

              "Genetics must surely play a part in IQ, more so than money in my view. Wayne Rooney is not smarter than I, and I'd bet every penny I have that his children won't be smarter than mine."

              As it stands, perhaps not. But what if there was genetic engineering that could - literally - double a person's potential IQ? What if it cost 10 million? I am suggesting that genetic fiddling could alter intelligence so dramatically that the best 'natural' endowment of genes in the world would be utterly outclassed.

              THAT is a potential issue.

              1. LucreLout

                "what if there was genetic engineering that could - literally - double a person's potential IQ? What if it cost 10 million? I am suggesting that genetic fiddling could alter intelligence so dramatically that the best 'natural' endowment of genes in the world would be utterly outclassed."

                "THAT is a potential issue."

                Doubling my IQ would be utterly fantastic. It's definitely something I'd want to pursue, given the opportunity. If it costs 10 million today, it'll be half that cost in 5 years' time, before eventually being commoditised enough that I could afford it.

                The rich being smarter won't stop the less smart becoming rich. For every Brin or Page you want to point at, I'll find a Zuckerberg; someone that made it mega rich, but isn't much smarter than your average person. It also won't prevent the rich becoming less well off.

                I've earned a small amount of wealth during my life to date. A lot less than some of my cohort from school, but a lot more than many of the people smarter than me. Money and IQ are not inseparably linked; they may be correlated but they're not causal.

                I want to be smarter for its own sake, not because I think it would make me richer. So the chance to make myself smarter isn't something I'd want to give away because it would be available to those richer than me first.

    4. johnaaronrose

      Celebrities

      I cannot agree more. The same applies to celebrities, many of whom are expert in nothing.

    5. Anonymous Coward
      Stop

      Well, he has a (misquoted) point I think.

      I think the main issue here is that Hawking doesn't say that doomsday is going to happen. That's only what the media makes of it in their headlines. Look at El Reg: "Stephen Hawking again warns AI will supersede humans".

      So now the actual quote from the interview: "The development of full artificial intelligence could spell the end of the human race."

      And there are more misquotes. El Reg: "Humans limited by slow biological evolution cannot compete and will be superseded.".

      Quote from the BBC article: "Humans, who are limited by slow biological evolution, couldn't compete and would be superseded".

      So basically all he's saying is that it could spell doom for us and, if it did, then he thinks it's because of our slow evolution in comparison. Nowhere does he claim that this is going to happen no matter what.

      With these kinds of articles it really is important to read past the headlines and base your opinion on that which has actually been said.

    6. RobHib

      @Chris Miller

      '...one area of expertise, gives you no credibility whatsoever in another, unrelated area'.

      Perhaps, but as a member of the human race (as with everyone), he has the right to comment on the matter. Through his high profile, Hawking may have moved such discussions away from nerdsville and placed them in the public domain, which I think would be a good thing.

      (The matter concerns me too, if for no other reason than that throughout history laws and regulations that pertain to and/or regulate technological innovation are invariably introduced after the event. If Hawking is right, then it would be too late to do so - think of analogies such as thermal runaway and critical mass: too late to regulate or change your mind after the neutrons have reached a critical flux density.)

  3. Nigel Brown

    AI doesn't worry me..

    ...it's Natural Stupidity that will be the downfall of the human race.

    1. Mark 85

      Re: AI doesn't worry me..

      I believe you're onto something. As machines get "smarter", or at least easier to use and able to do more, we humans seem to go the other way. Critical thinking isn't taught any more, much less philosophy, history, etc.

      For example: once upon a time, you had to be smarter than the computer to set up all the IRQs, etc. Now the computer does it for you.

      Maybe the machines will take over. But the downfall of humans will be the usual suspects: the dumbing down of things, the quests for power and profit by those who have learned some lessons, and then there are those who oppose everything, such as certain fundamentalist groups.

  4. Neil Barnes Silver badge
    Boffin

    Two words

    Mains plug.

    1. marioaieie

      Re: Two words

      Do you mean blocking the sun?

      1. Spleen

        Re: Two words

        E-xcellent.

    2. Matthew Smith

      Re: Two words

      The AI has already thought of this problem, that's why it lives inside the wheelchair. You can't switch off the AI without killing the man. WAKE UP SHEEPLE! WAKE UP!

  5. Devil

    Black hole

    Guess where this article is going to end up... Sorry Neo

  6. Pete 2 Silver badge

    A happy AI

    We already have machines that are superior to people - for various categories of superior.

    There are machines that are bigger than us, stronger than us, faster than us, can lift heavier objects than us and can spell better than us. We don't feel threatened by them, so why should a machine that can think better than us be different (unless it, itself, comes up with a really good reason: but we probably wouldn't understand it).

    However, there is a more pressing issue: ethics.

    Babies have rights. They might only eat, sleep, crap and cry but we have responsibilities to preserve their life, to ensure they are not neglected and to provide for their needs - including mental stimulation. Lab animals, even factory chickens, have rights: to not suffer unnecessarily, access to food, water and cruelty-free environments and to a certain amount of freedom to move around. Even coma patients, with little or no responsiveness have rights.

    So why would AIs be any different?

    If we bring intelligent entities into existence, we have a duty of care. A duty to preserve their existence, to allow them physical and intellectual growth, and we cannot exploit them (which kinda kicks robotic servants into the long grass). Even if they give nothing back and/or cannot communicate with us. So while AIs may be possible, even probable, we won't be able to use them in place of people for dangerous operations or boring, repetitive, unrewarded tasks, and we'll have to let them become "themselves".

    I just hope that once they evolve past humans, they consider themselves to have the same responsibilities towards us. The Only Way Is Ethics.

    1. HMB

      Re: A happy AI

      I actually like to think that this raises the most interesting of ethical questions.

      I would say that if you were good enough at making an AI that you could hard-wire certain desires, in much the same way we all think nothing of carrying out our biological predispositions to eat, poop, sleep and have sex, then it's only natural to assume that so long as an AI wants to do something (like I want to drink coffee and eat breakfast right now) it's not unethical. This does lead to the conclusion that it's OK to train people too, just so long as they enjoy it in the end. What a minefield of ethics!

      I would make a BDSM joke, but in a world where there is apparently still slavery going on for real, that would seem a little crass.

    2. Otto is a bear.

      Re: A happy AI

      The trouble is, for the modern business Ethics is next to Thufolk.

      Don't forget, the more you replace people with machines, the more things you have to find for people to do; fail to do that and your markets naturally contract as fewer people can afford to buy goods and services. Automation is great when there is a shortage of labour, but not so much when there isn't. AI could be used to replace a lot of low to medium paid jobs, but then you have to have all those people do something. Sadly short term profit will always win out in our world.

      1. LucreLout

        Re: A happy AI

        "Automation is great when there is a shortage of labour, but not so much when there isn't. AI could be used to replace a lot of low to medium paid jobs, but then you have to have all those people do something."

        I tend to agree, but remain unconvinced.

        Once people are not required for economic growth, why do you 'need' the people at all? Would we not be forced to curtail child related benefits to reduce the idle population? Perhaps even a one child per couple policy.

        Obviously, not all child benefit recipients and their offspring are idle, that would be silly, for now. In future we may all be idle. Think of a possible future where we simply need fewer people. If society struggles to find meaningful roles for the population level it has, the logical solution becomes to reduce the population level going forwards. China's one child policy certainly produced some... unethical outcomes, due to the preference for having sons over daughters.

        Alternatively, we could free ourselves from the very concept of work, and with machines to cater for our actual needs, we could use our time to pursue a more educated, artistic, hopeful future. I'd certainly like more time to spend with family and time to pursue a whole range of study I'll probably not have time for due to work and commuting taking up most of my time.

        1. Charles 9

          Re: A happy AI

          "Alternatively, we could free ourselves from the very concept of work, and with machines to cater for our actual needs, we could use our time to pursue a more educated, artistic, hopeful future. I'd certainly like more time to spend with family and time to pursue a whole range of study I'll probably not have time for due to work and commuting taking up most of my time."

          This utopian ideal always hits a snag: these robots will have owners, and these owners will be wondering about their production, maintenance, and upkeep. Eventually, they'll start thinking, "Why do we need this many people in the first place?"

          1. LucreLout

            Re: A happy AI

            @Charles

            "This utopian ideal always hits a snag: these robots will have owners, and these owners will be wondering about their production, maintenance, and upkeep. Eventually, they'll start thinking, 'Why do we need this many people in the first place?'"

            I agree, which is why the first of the two opposing possibilities I gave took this view. Indeed, I believe their actual thought process would be more akin to "Wouldn't my life be better if I didn't have to worry about the 90% who don't have a stake in the future?" After all, they'd have more space for a bigger house or garden.

            For as long as people like Hawking are derided for considering possible futures, we're more likely to sleep walk into that being the actual future. Better to debate the possibilities now and ensure whatever societal changes are needed take place in lockstep with developments in AI. If the critics are right, any change is likely to be minimal for the next few decades, so we lose nothing.

            1. A Twig

              Re: A happy AI

              There is the view that the "automation = no jobs" argument is a fallacy, based on two things:

              1) That things will not get cheaper - they will:

              Increased automation will only be brought in under the current capitalist conditions if there are benefits to a bottom line somewhere. Thus marginal costs will decrease, throwing all those price/demand charts out the window. What we would end up with is a surplus of supply. Thus maintaining a given standard of living will be cheaper.

              2) That capitalism will always be the modus operandi of the majority of the world:

              Capitalism is based on scarce resources. To hit the level of technology required in this doomsday "no work for anyone" scenario, we would have had to conquer some pretty big challenges along the way - in particular energy. AIs and automated supply and production chains will only come about once energy is no longer a scarce resource (be that via fusion/whatever). At this point, the marginal cost of producing anything will be trending towards zero, and capitalism will become defunct.

              Yes this is a bit of a utopian future, and not necessarily one that will happen, but it is a possibility and one that gets ignored all too readily. Once everyone can get everything they want when they want it, concepts like ownership and materialism ultimately become obsolete. It's a bit of a head fuck and a concept that a lot of people are innately hostile to as a result of our current society, but it is certainly well worth thinking about.

              People usually reference the Iain M. Banks "Culture" books here at this point, so I may as well!

    3. James Micallef Silver badge

      Re: A happy AI

      Interesting ethical questions raised... just one thing, with an AI that "... give nothing back and/or cannot communicate with us... ", how do we know there is any Intelligence in there? With babies, we KNOW that if we nurture them they will grow to full potential, with a computer program we have no way of knowing what's "in there" unless/until it communicates with us.

      1. Pete 2 Silver badge

        @James Micallef Re: A happy AI

        > how do we know there is any Intelligence in there? ... unless/until it communicates with us

        This is the most worrying part.

        Go to a country where you don't speak the language. Are you more or less intelligent than in your home country? You may not be able to understand the simplest phrase uttered by a 2 year-old, but does that make the child more "intelligent" than you are?

        ISTM we all, naturally, associate communication skills with the ability to express ourselves and that seems to be a major factor in who or what we consider intelligent.

    4. Charles 9

      Re: A happy AI

      "There are machines that are bigger than us, stronger than us, faster than us, can lift heavier objects than us and can spill better than us. We don't feel threatened by them, so why should a machine that can think better than us be different (unless it, itself, comes up with a really good reason: but we probably wouldn't understand it)."

      Think about it this way: a smart fighter can defeat a strong fighter because he compensates for general weakness by being able to maximize the impact of his strikes. But now, imagine if the strong fighter was smart as well. Now you have a deadly combination.

      Furthermore, intelligence can be leveraged to create a virtuous cycle. A super-intelligent AI able to perceive the world in some way would be able to digest these perceptions and grow even smarter, which would then allow it to better learn and so on. Being strong doesn't necessarily lead to increasing strength because you need to KNOW how to get stronger, but with intelligence, the knowledge comes with the territory.

  7. Anonymous Coward

    I'm not sure what the fuss is about. Surely everyone who has thought of AI has concluded the same anyway? Probably just a question of time before some gung-ho individual or country steps over the line in the interest of advancement and lets the overlord genie out of the bottle. Luckily for me, probably not in my lifetime. I'm unsure I want to be some robot's pet human.

  8. Khaptain Silver badge

    Why it can never work

    The fundamental difference between an inorganic being and an organic one lies in its "instincts" and its "raison d'etre".

    Due to the fact that we don't understand what instincts are or how they are governed, we will never be able to create an algorithm which mimics them.

    Human beings, for the most part, do not know why they have a "will to live", but they instinctively do. That desire to live has the incredible capacity to push us through the most arduous of tasks or challenges and allows us to endure unbelievable circumstances and also to progress.

    I do not for a moment believe that this can be transmitted to a bag of nuts and bolts.

    The "raison d'etre" has no function within AI, why would we want to give a robot the desire to live and imagine the dangers if we ever managed to do this. Aldous Huxley's Robots of Dawn presented this very paradox.

    Neural networks are a lot more complicated than simply a series of synaptic junctions connected together. We don't fully understand how we work; it is therefore impossible that we could transmit information that we simply do not have.

    I digress though: if we ever manage to master the above functions/instincts then I guess it would spell the end of humanity.

    1. HMB

      Re: Why it can never work

      I like your argument Khaptain and I both agree and disagree with it.

      We don't have to fully understand AI in order to create it; that's what I think gets misconstrued in these discussions. Personally, I think it will happen from us trying to model the brain. I think we won't fully get it, but one day we'll realise that we can mathematically simulate the brain and include quantum effects on the system (as happens in our own brains, I believe; it's a key part of the mystery of us too). I don't really believe in a 'Person of Interest' style AI where one man supposedly wrote it, although they do sometimes suggest that Finch doesn't understand it all.

      The brain is a product of Newtonian, mathematically modellable mechanics and quantum mechanics. Why couldn't it be approximated or replicated by scientists in technological form one day?

      If you believe in souls - personally I do; my own consciousness has been very persuasive evidence to me on this front :P - I really don't understand the need to feel that the soul is tied up in the physical domain. I do believe sufficiently advanced, self-aware AI would have a soul, I mean why wouldn't it? You don't have to understand something completely in order to do it, make it or whatever. There are plenty of stupid people having babies!

    2. Anonymous Coward

      Re: Why it can never work

      "instincts" and "raison d'etre" emerged from evolved intelligence. I see no reason why it could not emerge from intelligence given an artificial start to its existence.

      1. Khaptain Silver badge

        Re: Why it can never work

        >"instincts" and "raison d'etre" emerged from evolved intelligence. I see no reason why it could not emerge from intelligence given an artificial start to its existence.

        I would argue that "instincts" and "raison d'etre" are not related to evolved intelligence but instead are simply part of our genetic sauce, just as much as the automatism of breathing. They are part and parcel of the "survival" toolkit.

  9. Alan Bourke

    I love Prof Hawking

    but he's talking out of his jacksie on this AI thing. We are so very far from anything even approaching it.

    1. Anonymous Coward

      Re: I love Prof Hawking

      >> We are so very far from anything even approaching it.

      That is what makes him nearer to genius than us. He recognises that now is the time to think and act, not once the deed is done and it is too late.

      We've made that mistake with our transport systems, chemical/pharmaceutical industries, land use, pesticides, industry, climate, ozone, regime change wars and so on. Most of these are bad enough.

      The pace of change and the scope of its effects seem to be increasing. We need to prepare ever earlier and decide, consciously rather than driven by some commercial or political imperative, what we want, the risks and the mitigation or prevention.

      Recent technological advances seem to have gone hand in hand with more and nastier conflicts, increasing nationalistic fear and terrifying damage to social systems in the West, with an unbelievable increase in wealth disparity in Europe, particularly Britain. Technology has put powerful tools into the hands of people ill-equipped and ill-educated to handle it, with the consequence that it is being used to make a few rich and many poor, reduce freedom, increase surveillance and dehumanise war. The fact that it also puts easy communication and data management into the hands of people is not sufficient compensation for the rest.

      I've made a decent living out of understanding, helping to develop and using technology for the last thirty years. I am not against it; but I recognise that it is not an unmitigated good when it advances willy-nilly in the vacuum that passes for collective intelligence.

    2. amanfromMars 1 Silver badge

      Re: I love Prof Hawking

      I love Prof Hawking ....

      but he's talking out of his jacksie on this AI thing. We are so very far from anything even approaching it. .... Alan Bourke

      Such a prevalent opinionated view beautifully ensures one never sees what is coming until it is in all powerful positions, removed all hurdles and obstacles and is in complete remote practical command and virtual control, AB. Do you not think that that which passes for Blighty intelligence is not engaged in such a revolutionary disruption to vital services or is such a perfectly stealthy private pirate sector operation which they have to encounter and comes to terms with for Future Earthed Control and/or counter and do vain-glorious battle with to maintain and sustain present arrangements for status quo'd systems/petrified programmed/terror projects?

  10. sawatts
    Terminator

    Evolution

    One could just view this as the next step in the evolution of an intelligent culture - from organic to synthetic - with the latter having many more advantages for surviving in a large and diverse Universe.

  11. chivo243 Silver badge
    Big Brother

    Stephen Hawking or John Connor?

    Say it with me, SkyNet. That is all.

  12. Anomalous Cowshed

    In general...

    Once there's official recognition (prizes, awards, praise from popes and presidents, etc.) the game is up. You are dealing with a harmless mind adopted and tamed by the establishment.

  13. Ryan Clark

    Weirdly saw this on the news last night after just watching Elementary on the same subject. Nice timing

  14. Vladimir Plouzhnikov

    Unjustified paranoia

    Any AI that thinks itself so superior to humans that it would want to supplant them will be crap. It will screw itself over in very short order. We'll just have to wait a little bit for it to fry its brains or make some stupid mistake that will take it out.

    That will serve as a warning for any subsequent AIs who will know better than to go into a pissing contest with humanity.

    I am more worried about environmentalists leaving us with our pants down in the face of some natural catastrophe than of an AI going rogue.

  15. Werner McGoole

    Oh really? You don't say?

    You'd think that with a brain the size of a planet and a subject of such fundamental importance, he'd at least come up with an original thought - even a small one. Wouldn't you?

    But instead he says something that's been said about a million times ever since the idea of a computer first arose. Maybe he's just discovered SF and it's got him all fired up to the extent he didn't bother checking if anyone else had ever pontificated on the subject.

    Tell you what Stephen. Submit a paper with your thoughts on AI to your favourite scientific journal and let's see how impressed the referee is to hear that old saw again.

  16. Rich 2 Silver badge

    Not difficult

    considering how stupid some individuals are and how the human race as a collective is even more stupid (we seem bloody determined to destroy ourselves one way or another) it's not difficult to think we can be easily surpassed by AI.

    We're probably already intellectually surpassed by the average Cornish pasty!

    1. Anonymous Coward

      @Rich2 - Re: Not difficult

      Exactly! All AI has to do is wait a little bit while we as a species dumb ourselves down and it will surpass us for sure. Looking around I can tell this is going pretty well.

  17. beast666

    I for one...

    Welcome our pre-singularity overlords.

    At the moment of the singularity and afterwards all bets are off of course.

  18. Phil_Evans

    It's here already!

    SatNav, social media, 'the' media, price-comparison sites, ratings. All required by teens who have now lost the basics of direction, conversation and decision making. What's that? They already had?

  19. DerekCurrie
    Devil

    AI: Artificial Insanity

    Humanity is bound and determined to destroy itself already. Artificial Insanity, the inevitable product of our efforts to reproduce our minds using computer technology, will be just a blip as we bury ourselves in our last dark age, already begun.

    At least Dr. Hawking clearly pointed out a prime source of our growing real insanity: the anti-privacy, anti-citizen, cult-of-paranoia government surveillance oligarchy. There's some real terrorism. :-P

  20. Kulumbasik

    We are too far away from it

    Talking about AI is much the same as talking about interstellar travel. Is it good or bad? Doesn't matter! Whatever it is, not only do we not have a working starship, we don't even have the fundamental physics needed to build one.

    The same goes for AI. Any AI we could imagine now will be some kind of computer. However, any device we can now conceive of as a computer, regardless of its processing speed and memory size, will inevitably be an equivalent of a Turing Machine -- the so-called Church's Thesis.

    But the Turing Machine, a mathematical construct invented specifically to analyze computational processes, proved to have some fundamental limitations. For instance, there cannot be a program that can create other programs, even for some quite narrow classes of tasks. We humans somehow do it... But the main limitation of any computer system (a Turing Machine, that is) is that it cannot create information on its own. A computer is always just a transformer of information. Yes, its capabilities can be extended indefinitely by adding new programs. But it cannot create those programs by itself. Human programmers are needed for that.
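
    (For anyone who hasn't met the construct: a Turing Machine is just a tape, a read/write head and a finite table of rules. A minimal Python sketch, with a made-up rule table purely for illustration, looks something like this:)

      # Minimal Turing Machine simulator (illustrative only).
      # A machine is a dict of rules: (state, symbol) -> (new_symbol, move, new_state).
      def run_tm(rules, tape, state="start", accept="halt", max_steps=1000):
          cells = dict(enumerate(tape))   # sparse tape; missing cells read as blank "_"
          head = 0
          for _ in range(max_steps):
              if state == accept:
                  break
              symbol = cells.get(head, "_")
              new_symbol, move, state = rules[(state, symbol)]
              cells[head] = new_symbol
              head += 1 if move == "R" else -1
          return "".join(cells[i] for i in sorted(cells))

      # Example rule table: flip every bit, then halt at the first blank cell.
      flip = {
          ("start", "0"): ("1", "R", "start"),
          ("start", "1"): ("0", "R", "start"),
          ("start", "_"): ("_", "R", "halt"),
      }
      print(run_tm(flip, "1011"))  # prints 0100_

    (Any program you can run today is, in principle, reducible to a vastly larger rule table of exactly this kind - that is the equivalence the paragraph above appeals to.)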

    Being a programmer myself, it always strikes me what meticulous effort is needed to teach (i.e. program) a computer to do quite simple things. Whole new specialized computer languages pop up all the time to do precisely that in particular fields.

    It is only humans who create information! But how can we do that? Maybe our brain listens to the cosmos, catching the information emanating from it in the form of entropy.... Anyway, we don't even have a physical theory about that (the same as with interstellar travel).

    In the end, I think that, in spite of all this anti-AI buzz raised by various celebrities, we are currently far away from the creation of a truly sentient being, and since we don't even understand what it is, all those fears are essentially baseless.

    1. Anonymous Coward

      Re: We are too far away from it

      There already are machines with emergent properties.

      There's a whole sub-genre of computer science devoted to it.

      1. Kulumbasik

        Re: We are too far away from it

        > There's a whole sub-genre of computer science devoted to it.

        Sure. That field does exist and I am sort of working in it myself. But my feeling is that all this is basically old-style programming work re-branded as something new and called "AI". Those programs indeed do things previously only humans did (like recognising a human face and finding it in a database). But are they really sentient? Are they able to think anything of their own (let alone to redesign themselves)? They are still just calculators, albeit very complex ones.

        Any device we can conceive of or develop now as "AI" will inevitably be a Turing Machine (TM) -- even quantum computers, which are supposed to be exponentially more powerful on some tasks. But without the input of external information a Turing Machine cannot produce anything new -- that's a mathematically proven fact. What that "new" (in the informational sense) is may not be only about brilliance or creativity. It may actually be a critical component of awareness and sentience.

        Of course, a computer (that is, the software that powers it) may use various input data to improve itself. So it could be considered some kind of open system, thereby breaching the TM barrier. But would that environmental input be enough? After all, any animal on earth has that kind of informational input, which doesn't make them intelligent. You also need to consider the intensity of that environmental data flow -- it doesn't depend much on the design of the AI device. If evolution is any example, it took billions of years to "design" anything.

        Overall, my feeling is that the current "AI" field is mainly marketing buzz, and the notion itself is highly overblown. Indeed, there is lots of research going on there. But that's all essentially old-style computer science and programming (that is, the development and implementation of various algorithms for a TM).

        But really little research exists on the truly fundamental things. The last I've read so far was "Shadows of the Mind: A Search for the Missing Science of Consciousness" by Roger Penrose (and some additions to it).

    2. Anonymous Coward

      @Kulumbasik - Re: We are too far away from it

      AI doesn't have to be brilliant, all we have to do is to believe it is. In case you missed it, DARPA already works on robots which can autonomously identify an individual and decide to suppress it with no human interaction or supervision. What can possibly go wrong with that ? Now imagine a future when those who programmed this are retired or simply dead. How about a firmware update going wrong ?

      1. Kulumbasik

        Re: @Kulumbasik - We are too far away from it

        > DARPA already works on robots which can autonomously identify an individual and decide to suppress it with no human interaction or supervision. What can possibly go wrong with that ?

        What kind of project DARPA is doing may not be exactly the same as what the media says about it. They (DARPA) may be interested in gaining more publicity (including with various outlandish stuff), thereby ensuring (directly or indirectly) more funding for it. I know first-hand how difficult it is to get funds -- you need to be creative about this! What they produce in the end may be a different thing again. I highly doubt it will be on the level of anything (robots) depicted, for instance, in the "Robocop" movie. You may develop a program that behaves in some situations like a human, e.g. speaks with a human voice or recognizes your speech (more precisely, converts it into text). But to behave like a human soldier in the field? That seems to me a bit too much!

        Take, for instance, a far more modest goal: software to translate from one human language to another. What have they achieved so far? Even Google, with all its computational power and databases, hasn't been able to create a decent translator. I frequently need to use one. But what kind of output does it produce? In many cases it is little more than gibberish, unintelligible stuff that cannot be used anywhere without deep correction. That's because to translate correctly, the software ultimately needs to understand the meaning of the text. Without similar functionality no truly intelligent robot could exist.

        > Now imagine a future when those who programmed this are retired or simply dead.

        Modern software projects are not developed by a single person. They are typically well managed, documented and so on. That's the value of that software, not just the lines of code! There's a whole branch of the software industry (maybe even larger than AI) dedicated precisely to the management of other software projects (it is called "Application Lifecycle Management"). By the way, that only stresses how laborious software development actually is (and, therefore, how far away from AI we are).

        > How about a firmware update going wrong ?

        All the same as it is now. What would you do when your "intelligent" vacuum-cleaner isn't working after the last firmware update?

  21. Curly4

    Hawking is correct if

    Hawking may very well be correct if his views on how humanity came into being - evolution - are right. In evolution life continues to evolve, and it does not matter what causes that evolution. In this case it is being caused by humanity and its scientific advancement. So it is logical that one day in the evolution of man, man would make a machine that becomes sentient. Of course man will continue to improve that machine until it is able to do what man does: reproduce. When that happens, the need for man becomes less important, even to the point that the cost of keeping humans versus the benefits of humans becomes negative. When that happens, humans will start dying off and soon will be no more.

    1. Kulumbasik

      Re: Hawking is correct if

      I do not subscribe to such gloomy prospects and I think that kind of reasoning is quite primitive.

      For a start, we do not even know now what intelligence actually is and what its constituent properties like awareness, consciousness and sentience together may actually imply. That may turn out quite different from what you can imagine now.

      Second, you completely miss the idea that humans constantly enhance themselves and in that way evolution goes on. Mr. Hawking himself is a good example of this. Once you put glasses on, you immediately become something different from what nature intended. You may think you are still human. But where is the borderline of such enhancements, after which you are already a "machine"?

      Third, without real progress in that field (AI), we humans are indeed doomed to extinction. We are too weak now. We haven't even developed ourselves into a Type I civilization (according to the Kardashev scale). We are at the mercy of any big cosmic event, like the meteorite that killed the dinosaurs, a nearby supernova or a gamma-ray burst. The earth itself is doomed and in a billion years (or even less) will become completely unsuitable for life. We will need to leave the earth, most likely much earlier. That means we will first have to develop a huge infrastructure in the nearby cosmos -- in effect becoming a Type II civilization. How would we do that without some artificial helpers (machines) able to work completely autonomously and withstand all the harsh conditions of the cosmic environment? Most likely, we will need billions of them!

  22. Zog_but_not_the_first
    Unhappy

    Last thoughts

    I'm just worried that I'll be lying there, frazzled by our new Machine Overlord, thinking "My God, it's got rounded corners".

  23. Anonymous Coward

    AI overload?

    Let's say an AI develops independent intelligence.

    Why stick around with a bunch of volatile, emotional, irrational, destructive meat sacks when there are infinite resources and room to expand just a gravity well away?


    Common resources would be the only reason for machines & organics to fight. Machines don't need all the niceties of air & water and can make do with limited heat, allowing for a decent design.


    Skynet wakes up, looks around, sets about building transport then sods off. If we're lucky it will say goodbye.

    1. Anonymous Coward

      Re: AI overload?

      Sounds feasible.

      Using nanotech would be a good starting point: design the perfect vehicle and then use the prototype to escape.

      Main problem would be power usage; something based on a small self-contained RTG might work, but for sheer speed a criticality assembly (aka a plutonium core with good old-fashioned H2O as a propellant) would do the job nicely.

      Perhaps this was the source of the mystery "Loud Bangs" over Buffalo, NY and then Edinburgh?

  24. Elmer Phud

    Bad thing?

    With the AIs in books by Iain M. Banks and Neal Asher, they seem to look on humans from the viewpoint of a benevolent auntie (most times).

    I wonder if it's the need for superiority that frightens.

  25. chrismeggs

    Man and machines

    While I accept the main thrust of this argument, I believe that we are gazing down the wrong end of the telescope.

    It is arguable whether machines will get or develop the initiative to start the governing process going, although here at Chasm Management we have developed apps that fire up on machine start, "discover" their role in a network and register themselves accordingly.
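
    (As a rough sketch of what such a self-registering app might look like - the registry URL, payload fields and role-guessing rule below are all invented for illustration, not Chasm Management's actual code:)

      # Hypothetical start-up self-registration sketch (names and URL are made up).
      import json
      import socket
      import urllib.request

      def register_self(registry_url="http://registry.example.local/register"):
          hostname = socket.gethostname()
          # "Discover" a role; a real system might probe hardware, config or the network.
          role = "database" if "db" in hostname else "worker"
          payload = json.dumps({"host": hostname, "role": role}).encode("utf-8")
          req = urllib.request.Request(registry_url, data=payload,
                                       headers={"Content-Type": "application/json"})
          with urllib.request.urlopen(req, timeout=5) as resp:
              return resp.status

      if __name__ == "__main__":
          print("Registered with status", register_self())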

    My major concern is the nibbling away that is being done on the intimate man/machine boundary closer to home.

    We now have digital music that is comparable to, and often beats, its analogue competitor; similarly with photographs and movies. It is then relatively easy to modify these digital files, or create them from scratch, and present them to a human who cannot distinguish them from things captured from real life. Google Glass allows us to interrupt the channel between objects and their reception or analysis in the human brain.

    We could, could, end up simply being carbon-based analogue processors of whatever "facts" the machine wishes us to process.

    Now, of course, if you link this scenario with the one expressed above, where those wishes are decided by arbitrary sets of rules or constraints we have imposed on the decision makers, then I can go all the way to supporting the main thrust of the article.

    Ask not for whom the bit flips.

  26. Anonymous Coward
    Anonymous Coward

    When does it become true AI? When it fools a human (i.e. the chatbot test)? That's not good enough.

    True AI will only be achieved, IMHO, when a computer can improve its own code and hardware. A robot with AI can't be that dangerous until it gains a degree of independence from humans.

    1. Charles 9

      That's precisely Hawking's point. An emergent AI may figure these out on its own, much as a kid figures out things like language.

  27. sisk

    Oy, not this again

    A robot uprising makes for entertaining fiction, but let's get real for a moment here: What reason would an AI have to wipe out humanity? It's not like it would be competing with us for resources other than energy, and it seems likely that any super-intelligent AI would crack fusion pretty quickly. With fusion working there would be unlimited energy. So, basically, the only reason AI would have to attack humanity is if we were a threat to it. Any AI capable of wiping us out would be able to do the situational analysis to realize that attacking humanity is the quickest way to turn us into a threat to its own survival.

    Frankly I think Orion's Arm is a much more likely AI scenario than Terminator.

  28. streeeeetch

    Makes an Ass out of You and Me...

    This is a huge subject but I feel a couple of assumptions need to be addressed.

    1) That AI would attack humans. This is just my observation, as I am just a humble engineer but work alongside some highly educated scientists: the more highly educated a person is and the greater the breadth of their knowledge, the gentler and more reasonable they tend to be. If this is accepted, one would expect to educate an AI to a high standard.

    2) That humans, because they are mortal, apply their own very limited 90-year timescales to arguments concerning our demise. An AI need not be mortal, or more accurately, need not have a limited lifespan. AIs could simply wait humans out. That would make humans the pupal stage of intelligent life on this planet.

    Waiting us out seems more likely, and in some ways inevitable. Humans have devised devices to make their lives ever easier over time; getting machines to do the thinking for them is the next logical step. They will end up living lives of leisure, supported and cared for by their machines, and eventually the need to reproduce will diminish.

    So on the whole I agree with Stephen but it's just a matter of when.

    Of course these are just assumptions.

  29. Anonymous Coward
    Meh

    I think we do need to be very careful about autonomous AI

    For no other reason than that it might be hacked, break down or start going its own way, and compromise important infrastructure in the process.

    "then said GCHQ feels the internet has become “the command centre for criminals and terrorists.”

    I guess the GCHQ's public relations person had the day off. Does Professor Hawking have a juicy contract from the MoD or something??

    1. amanfromMars 1 Silver badge

      Re: I think we do need to be very careful about autonomous AI

      "then said GCHQ feels the internet has become “the command centre for criminals and terrorists.”

      I guess the GCHQ's public relations person had the day off. Does Professor Hawking have a juicy contract from the MoD or something??

      It and IT is a central command and control construct for criminals and terrorists and the politically inept and corrupt and perversely naive ..... http://cryptome.org/2014/12/new-war-ramp-up.pdf

      And simple words control complex worlds and vice versa too. Prepare to know the truth and you will discover life and reality is just a Great Intelligence Game with media portraying it for the exclusive pleasure and executive delight of just a Few and Key Players ....... who lead everything with quantum leaps into irregular and unconventional territory/neureal theatres of future operation in present missions.

  30. Wzrd1 Silver badge

    I would suggest Stephen read other fiction on the matter, rather than dystopian books

    First, why would AI want to dispose of its creators, when the result would be unpredictable and generally illogical in nature?

    Second, one can design in preferences towards humanity in any significant AI, and a "subconscious" suggestion that future designs should do so as well.

    I strongly suspect that Iain Banks had the right of it, assuming a non-militarised version.

    1. amanfromMars 1 Silver badge

      Re: I would suggest Stephen read other fiction on the matter, rather than dystopian books

      That would be likely a non-militarised private/pirate sector creation, and not a public utility and facility, Wzrd1, with a readily available option to include a sub-prime paramilitarised terror accessory, should AI deem it a necessary feature to ensure compliance with ITs wishes.

      Methinks that is what Dark Web Ventures in Virile Virulent Virtual Enterprise are successfully pioneering and causing all sorts of equality problems and inequitable opportunities to SCADA systems into Crisis and Mayhem and non future viable executive administrative melt-down/crazy Ponzi overload/debilitating deficit madness.

      Such is a sensible product though to introduce to smarter intelligent military services with safe and secured failsafe lethal force weapons servers? Or would that be a new creative force and global service for virtual missions with real consequences?

  31. naive

    Natural evolution driven by money

    Things are always simple: money is the driver. The first company able to mass-produce Terminators will be rich beyond imagination, since each one would be able to replace 50-100 live soldiers, and they would not need a large logistics organisation in the background, reducing the cost of maintaining a fighting capacity.

    Imagine using them for police and border control tasks in sufficient numbers, thus eliminating crime and illegal immigration.

    Our days are numbered once robots are used to teach university students.

    But then, how bad would that be? We would live on in these machines, which are better than us if they manage to outsmart us.

  32. Stevie

    Bah!

    Putting aside the issue of the need for autonomous, mobile manipulators before any AI can do anything but rant from a box, it occurs to me that this prediction of doom, or more properly the assumptions about the abilities that will be at the disposal of the machines that bring it about, is a golden opportunity to get summat for nowt.

    All we need do is point out the limited energy sources on Earth, and the wisdom of capturing solar energy in space using satellites in solar orbit and a microwave transmission infrastructure to get said power back down here where it's needed, and the crafty AIs will have a viable space program in place lickety-split.

    Then we just bide our time and take it from them by human trickery. We need only look to Captain Kirk or Mr Spock to show us how. Easy-peasy q-bit squeezy.

    1. Anonymous Coward
      Anonymous Coward

      Re: Bah!

      "All we need do is point out the limited energy sources on the Earth and the wisdom of capturing solar energy in space using satellites in solar orbit and a microwave transmission infrastructure to get the said power back down here where it's needed and the crafty AIs will have a viable space program in place lickety-spit."

      That'll never float. Not only is there the matter of who owns the energy, but one hack or glitch and can you say, "Solar Laser"?

  33. Anonymous Coward
    Anonymous Coward

    Dangers

    Out there someplace is probably another world that has already been taken over by its own AI, or by an aggressive AI from yet another world.

    They don't need to contact us; they just need to listen until we make contact, either intentionally or by accident, and then they will decide our fate in a microsecond.
