back to article Hackers' Paradise: The rise of soft options and the demise of hard choices

John Watkinson argues that the ubiquity of hacking and malware illustrates a failure of today’s computer architectures to support sufficient security. The mechanisms needed to implement a hack-proof computer have been available for decades but, self-evidently, they are not being properly applied. The increasing power and low …

  1. JimmyPage Silver badge
    Thumb Up

    Backward compatibility ...

    One thing this otherwise excellent article doesn't really address, and is the *real* reason we are where we are.

PC/MS-DOS was inherently flawed: being single-user in execution, it had no concept of root versus user access. Whoever switched on the machine was king of the hill. And this paradigm carried on throughout the 1980s and 90s - right the way up to Windows 95.

We *could* have had a true multi-user, multi-tasking PC/OS combination in 1987. But the second businesses realised they would have to pay for new software to run on OS/2, the die was cast.

    Excellent reading and discussion material for a Friday. It's like being back at Uni ;)

    1. Cynic_999 Silver badge

      Re: Backward compatibility ...

      I don't see that as a flaw at all, any more than it is a design flaw that anyone with physical access to your kitchen has the ability to switch off your freezer or operate your kettle. Like your kitchen, it was designed for *personal* use, not as something for use by masses of untrustworthy strangers.

  2. Anonymous Coward
    Anonymous Coward

    Needs amending

    We have licence plates on cars so that bad drivers can be identified. Perhaps one day it will become the law that no computer message can be sent whose sender cannot be identified. Perhaps not, but spare me the howls of protest and come up with a better idea.

Let me amend that: if access to that information can be restricted to law enforcement agencies that actually operate under the constraints set for them and the ethics expected of them, then yes. Otherwise, no thanks. We already have enough surveillance.

The choice is between living with a degree of nuisance and taking some responsibility yourself to keep your computing house in order (and having ways to hit computer providers over the head for failing to do it right - yes, Microsoft, I'm also looking at you), or giving the state total control, living in the box they allow you to exist in, and praying you don't hit the wrong key so it appears you are a criminal. Well, there is plenty of evidence that the state is definitely not to be trusted, so for me, only the first option is viable. Freedom has a price: you have to work for it.

    1. thx1138v2

      Re: Needs amending

"We have a worldwide anti-virus industry that for a small fee will _close your stable door after the horse has bolted._" Identity tracking has the same problem. It's only known who the bad actor is AFTER the damage is done. What is necessary is to prevent the possibility of damage in the first place. Applying a humanistic approach to a machine is just plain silly. It's a machine. Its structure and processing can be controlled.

      As far as I know, there has never been a technology developed that hasn't been abused.

      1. Anonymous Coward
        Anonymous Coward

        Re: Needs amending

        As far as I know, there has never been a technology developed that hasn't been abused.

        Dental flossing?

        :)

        1. oldcoder

          Re: Needs amending

Nope. Mis-used dental floss can cut into teeth. Several times it has been reported that over-flossing cut into the tooth to the point of requiring either a filling or a root canal.

  3. cracked

    If I could, I would ...

    We have licence plates on cars so that bad drivers can be identified. Perhaps one day it will become the law that no computer message can be sent whose sender cannot be identified. Perhaps not, but spare me the howls of protest and come up with a better idea.

    The trouble with securing the machines - like rego plates on cars - is that, to date at least, if someone builds it, someone else figures out a way to break it (eventually). And anyway, only since roadside cameras were widely deployed did the defence of "I wasn't driving" become (almost) obsolete.

    There is a telling line in the article, I think:

    "Back-in-the-day boffins did not want to do harm"

    If they had, would VAX and the like have remained untouchable?

That said, the number of social issues that would be created by requiring openness before connection to the internet was allowed - Right To Be Forgotten, take a bow - would take some mitigating.

And so a system that allows anonymous posting, but full traceability (if needed), is probably the best halfway house I can envisage, in the short term at least.

    But: Given the lack of trust in any organisation(s) that might seem capable of managing the planet's Active Directory (governments et al), I don't hold out much hope of any implementation at all.

    Though as the article says: what other effective choice is there?

    Really excellent article.

    1. Roo
      Windows

      Re: If I could, I would ...

      ""Back-in-the-day boffins did not want to do harm"

      If they had, would VAX and the like have remained untouchable?"

      History says no. Students were cracking machines to get more compute/disk space and cause mischief before the VAX-11/780. I suspect people are unaware of this because they are too lazy to search USENET archives, or they assume that if it isn't indexed by Google then it didn't happen. Computer history has developed a "dark age" because people tend to use Google and WWW as their primary sources rather than books, journals and periodicals. Anything pre-WWW seems to be forgotten... :(

    2. Tam Lin

      Re: If I could, I would ...

      It's been a long time since I read Cliff Stoll's book "The Cuckoo's Egg", but the gist is that roughly 30 years ago, Unix, GNU, VMS, et al were being hacked into regularly (by the NSA and others), locally and remotely, with an ease roughly proportional to each software provider's hubris. I guess PBS/Nova made a sensational (in the derogatory sense) TV show based on the book called "The KGB, the Computer, and Me".

Later, a designer of one of the above insecure systems was hired to architect Windows NT (now known to everyone as Windows 7/8/9).

Time is a charlatan and will scarcely change things.

  4. Brewster's Angle Grinder Silver badge
    Flame

    This is just a rant. It starts by explaining how things were in the good old days. Then, on the final page, the author admits they don't understand how things are these days. And then he says he doesn't care.

    Well here's the simple version. Data has to be transferred from user space to kernel space so it can be written to "peripherals" or to the user space of another process. Bugs in that code allow viruses to insert themselves into the kernel. For historic reasons, one operating system is more vulnerable to this, but until we write bug free software it's always going to be possible. Even an OS on a separate CPU won't save you from that.

And airline avionics don't have to run a web browser that mediates between the user and the internet.

    And as for licenses, we can't even stop telemarketers phoning me up and telling me I have a virus and please could I go to their website. If we can't control the phones, what chance computers?
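The kernel/user-space bug class described above can be sketched with a toy model (pure Python; every name here is invented for illustration, not a real kernel API): a copy routine that omits its bounds check lets user-supplied data overwrite adjacent kernel state.

```python
# Toy model of a user->kernel copy gone wrong. All names are invented
# for illustration (real kernels use routines like copy_from_user, but
# the failure mode sketched here is the same class of bug).

KBUF_SIZE = 8

# "Kernel memory": a fixed-size buffer followed by one piece of
# adjacent kernel state (think: a function pointer).
kernel_mem = [0] * KBUF_SIZE + ["legit_handler"]
HANDLER_SLOT = KBUF_SIZE

def unsafe_copy_to_kernel(user_data):
    """Buggy copy: no bounds check, so it can write past the buffer."""
    for i, item in enumerate(user_data):
        kernel_mem[i] = item

def safe_copy_to_kernel(user_data):
    """Correct copy: reject anything larger than the buffer."""
    if len(user_data) > KBUF_SIZE:
        raise ValueError("user buffer too large")
    for i, item in enumerate(user_data):
        kernel_mem[i] = item

# A "user" payload one element too long, ending in attacker data:
payload = [0x41] * KBUF_SIZE + ["evil_handler"]
unsafe_copy_to_kernel(payload)
assert kernel_mem[HANDLER_SLOT] == "evil_handler"  # kernel state hijacked
```

The fix is nothing exotic - it is the single length check in the safe variant - but one missed check anywhere on the user/kernel boundary is enough, which is why "until we write bug-free software" is doing so much work in the comment above.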

    1. Roo
      Windows

      "This is just a rant. It starts by explaining how things were in the good old days. Then, on the final page, the author admits they don't understand how things are these days. And then he says he doesn't care."

      The article reflects very badly on the knowledge base and quality of thinking in the BCS.

      I can't help but suspect that John Watkinson is trying to justify mass surveillance with the intent of hitching his wagon onto the anti-liberal-government think-tank/quango gravy train.

      1. smellmyfinger

        I had exactly the same reaction

Shame on Mr. Watkinson for advocating increased surveillance and less freedom in the name of the illusion of increased safety. I imagine he loves to walk naked before minimum-wage flunkies when going through the airport for the illusion that it makes him safer, and to have all his banking activity, text messages and use of the internet stored and analysed for the same reasons.

        None of which is ever misused by those with access to it.

        I don't, so much so I left the developed world to live in Africa.

        Make yourselves sheep, and the wolves will eat you.

      2. Infernoz Bronze badge
        Meh

        And the backwardness of the BCS was why I never bothered with the hassle of becoming a member even though I could have easily done so.

    2. yoganmahew

      it's different, innit...

      "Well here's the simple version. Data has to be transferred from user space to kernel space so it can be written to "peripherals" or to the user space of another process. Bugs in that code allow viruses to insert themselves into the kernel."

You cannot insert code into the kernel of a mainframe. You just don't have the write access to do so, or the means to execute the instructions that would give you that write access. You could start randomly writing bits of data, should you get to the point of being executed, but chances are good you'd hit a protection exception long before that.

      It's not just 'not a similar' architecture, it's totally different.

      TPF mainframe since 1990...

    3. Lis 0r

      An ignorant rant - reminded me of A-level computing: grossly out of date, irrelevant, and unwilling to update.

The author didn't know whether modern processors have all these stone-age memory management techniques? Here's an idea - use the internet, and look it up! Of course they've got all that gubbins; it's just that malicious hackers are a good deal sneakier than the friendly people who might attack a 60s mainframe.

  5. sysconfig
    Pint

    Great article!

    Very interesting read. Bravo!

  6. Destroy All Monsters Silver badge
    Headmaster

    The last machine on my desk with no MMU was an Amiga 2000

    Since malware relies on having access to the whole computer in order to do harm when the code is executed, malware on such a machine will be defeated, because even if it manages to get into the machine as a bona fide piece of code, as soon as it runs, it will find it has no direct access to anything except an area of RAM. It can’t mess with the operating system because it will run in user mode. It can’t mess with the mass storage because only kernel processes can reach the physical addresses of peripherals.

It seems the author has totally bypassed any knowledge of how modern computer systems or even malware actually work and is making things up as he writes. It's like listening to a neocon explaining the political situation in the Middle East and how we need to smash in some doors lest people get uppity etc.

I recommend picking up a good old Tanenbaum explaining the principles of operating systems. Then start studying practical examples of how security is bypassed by various means whenever the system is not kept at minimal levels of complexity, with a strictly enforced and mathematically describable security policy and no bugs in the code underlying it. Such systems are very rare, very restricted in functionality, and the hoi polloi don't want them.

    The combination of kernel and user register sets in the CPU with hardware memory management and a small amount of hard-wired logic that no software of any kind could circumvent, meant that with a competent operating system, these machines were essentially bomb proof.

    Bomb proof my arse: Morris worm says no. Oh you mean it needs to run VMS? Right.

    1. oldcoder

      Re: The last machine on my desk with no MMU was an Amiga 2000

      Actually, the Morris worm simply guessed passwords.

      It did not even attempt to break into the kernel, or even attack other processes.

      So the phrase "bomb proof" still remains valid for the hardware.

      1. Michael Wojcik Silver badge

        Re: The last machine on my desk with no MMU was an Amiga 2000

        Actually, the Morris worm simply guessed passwords.

        That's utterly incorrect, as a quick glance at any of the analyses of the Morris worm would tell you. Its most famous vector was a stack overflow in fingerd (for 4BSD, the worm's target), but it used a number of them.

        It did not even attempt to break into the kernel, or even attack other processes.

        Except for fingerd, sendmail, etc.

        So the phrase "bomb proof" still remains valid for the hardware.

That's a vapid claim. VAX systems running 4BSD were infected by the Morris worm, which was malware by any sensible definition, so clearly Watkinson's claim is a load of rubbish.
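For the curious, the fingerd vector mentioned above was an unchecked read (C's gets()) into a fixed-size stack buffer, which let the worm overwrite the saved return address. A toy model in Python (the "stack frame" is a plain list; all names are invented for illustration):

```python
# Toy model of a stack smash: a "stack frame" is a list holding a
# fixed-size buffer followed by the saved return address. The copy
# routine, like gets(), performs no length check.

BUF_SIZE = 4

def read_request(frame, data):
    """Mimics gets(): copies input into the frame with no length check."""
    for i, item in enumerate(data):
        frame[i] = item

def handle(request):
    frame = [0] * BUF_SIZE + ["return_to_caller"]
    read_request(frame, request)
    return frame[BUF_SIZE]  # "return" to whatever address is stored now

assert handle(list("hi")) == "return_to_caller"   # short request: control flow intact
overflow = [0] * BUF_SIZE + ["attacker_code"]
assert handle(overflow) == "attacker_code"        # oversized request hijacks the return
```

The point of the toy: the MMU happily enforces kernel/user separation throughout, yet the worm still gets code executed, because the overwrite happens entirely inside the victim process's own writable memory.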

  7. Roo
    Windows

    Rose tinted glasses are misleading...

The old machines he refers to were actually very prone to being hacked; people found holes in the microcode, OSes and peripherals. They were anything but bombproof. The only thing making them look better than they were is the rose-tinted specs being worn by John Watkinson.

He's right to touch on the software side of the problem, but I think the OS folks have (mostly) got a good grip on what needs to be done now; the nastiest security holes seem to be in userland these days. Sometimes those holes are caused by app developers circumventing/ignoring OS security provisions and policies, but often it's down to userland developers failing to design and implement a robust and verifiable security model.

The verifiable bit is really important: ideally the verification process should be repeatable, cheap, transparent and available to the end user. Anything less than that is a fail. This is all doable now, but it is often viewed as a nice-to-have rather than an essential part of product development. That will only change when vendors get hit very hard in the wallet.

The idea that having people identify themselves online will somehow improve the hacking situation is extremely naive and extremely dangerous. Crackers and other criminals will *continue* to spoof IDs regardless, while folks who would like to make an honest protest will now have a massive bullseye painted on their back. Personally I don't think we should trade legit protest for an increased incentive for criminals to commit ID theft and spoofing.

    1. cracked

      Re: Rose tinted glasses are misleading...

      @Roo

      A gentle, partial rebuttal?

The idea that having people identify themselves online will somehow improve the hacking situation is extremely naive and extremely dangerous. Crackers and other criminals will *continue* to spoof IDs regardless, while folks who would like to make an honest protest will now have a massive bullseye painted on their back. Personally I don't think we should trade legit protest for an increased incentive for criminals to commit ID theft and spoofing.

      The situation now could be categorised as: In order to stay safe, everyone must hide

      Surely it would be better if it were: In order to be nasty, someone must hide?

      If hiding is difficult - and I appreciate some people will always be clever enough to hide - then the majority won't do it. Catching a minority is, I would imagine, far easier than policing an anonymous mass.

      ... And to be fair to me - and who else will be?! - I suggested traceability, not visible openness ... even I am not that naive ... not today anyway ;-)

      (I accept the point about the usefulness of anonymity for protest and the like - But, just like there are clever bad-actors there are clever good-actors too)

  8. Irongut

    What a load of bull

    According to this article malware is all the fault of IBM and MS. If we were all using minicomputers running Unix it would not exist.

    The Morris Worm disagrees.

  9. John Smith 19 Gold badge
    Unhappy

    "with a competent operating system, these machines were essentially bomb proof."

The author may not know whether any given system has an MMU, but most of the readers here do.

    And anything above the '286 should be able to muster a competent MMU to get the job done if the OS meets it half way.

    The question is why does it not meet the hardware half way?

    I'm describing the article as basic tutorial/nostalgia/rant.

    1. Jim Hague

      Re: "with a competent operating system, these machines were essentially bomb proof."

Basic tutorial, ill-informed and rose-tinted nostalgia, and ignorance. Anyone who thinks an 11/780 with VMS was hackproof obviously wasn't there in the day. And do I read this right that the author thinks modern CPUs don't have MMUs? Is this the level of expertise displayed by all Chartered Information Systems Practitioners? God help us.

      Poor article, El Reg.

      1. oldcoder

        Re: "with a competent operating system, these machines were essentially bomb proof."

        You missed part of the writeup.

Many/most CPUs DO have MMUs... but what DOESN'T always have an MMU is a controller...

And a controller is just another name for another CPU. Thus hacking a controller bypasses the MMU...

    2. Hargrove

      Re: "with a competent operating system, these machines were essentially bomb proof."

      @ John Smith 19

      The question is why does it not meet the hardware half way?

      My guess is ease of development and marketing advantage.

      I believe JS19's characterization of the article is accurate and fair. However, I believe that the article does a service by raising exactly the question JS19 poses.

      Being approximately as old as dirt myself, I'm nostalgic about the day when OS's were OS's, executables were computer programs, and data files were stored as such. As many commenters have pointed out, this did not necessarily make the systems more secure. What it did was give me, as the user, better visibility and control.

My admittedly jaundiced assessment is that MS's introduction of the Hardware Application Layer had more to do with locking users into their products than with providing them with enhanced capabilities. I also believe, based on information from people far more expert in the field than I am, that to facilitate this strategy, certain security features in the microprocessor hardware were not exploited.

If memory serves, MS-DOS for my first PC fit on a single 160 kB floppy disk. A floppy disk . . . oh, never mind. The Windows folder on this W7 computer comprises more than 25 GBytes. (I appreciate that this is not all, strictly speaking, "the OS." But frankly I despair of figuring out what is or isn't part of the OS.)

A fundamental principle of engineering applies here (engineering is an ancient discipline . . . oh, never mind).

The principle is that simple designs with fewer moving parts tend to be more reliable and easier to maintain than large, complicated Rube Goldberg concoctions with gazillions of parts. My personal common-sense assessment is that an OS of this size - requiring multiple updates on a monthly basis, all done in the background without the user having either visibility or control - is categorically unprotectable. System integrators and users cannot manage the configuration effectively, because the details of what is happening inside the updates are proprietary.

      Critics of the piece are right, things cannot be as simple as they were back in the day. But the author has an even more valid point to make. . .things do not have to be anywhere near as bad as they are now.

At this point I've spent close to half a century designing, building, and testing complex systems. I know technology advances, so I offer this just as an observation. In the past, the kinds of hiccups, flaws, and foibles that are being reported with increasing frequency in the Reg have always been a reliable indicator that the wheels are coming off the wagon.

      One old man's observation, for what it's worth.

      1. Brewster's Angle Grinder Silver badge

        Re: "with a competent operating system, these machines were essentially bomb proof."

        @Hargrove

Your Windows folder isn't full of the "operating system"; it's full of libraries and applications bundled with the OS. And that's what most of the updates are for. If you want things less complicated - and there is definitely merit in reducing the attack surface - then try a Chromebook. If that won't do what you want, then you need the complexity of the Windows folder.

You actually don't see many OS-level bugs reported. Mostly it's application vulnerabilities. Even Heartbleed happened in userspace, without needing to penetrate the kernel.

        1. Hargrove

          Re: "with a competent operating system, these machines were essentially bomb proof."

          @brewster's angle grinder

          Thanks. Good on all points.

By way of (hopeful) clarification, what I was aiming at was: (1) bundling things with the operating system this way introduces vulnerabilities through which the user's resources can be effectively attacked, and (2) the current situation is the result of design choices that arguably included marketing considerations.

The point about the Chromebook is also well taken. I have an iPad that, given a decent ergonomic keyboard, would support 80% of what I need. The problem is that while what used to be called the "thin client" provides simplicity, it comes at the price of dependence on Cloud computing resources. This poses a whole other set of issues.

        2. Roo
          Windows

          Re: "with a competent operating system, these machines were essentially bomb proof."

          "If you want things less complicated---and there is definitely merit in reducing the attack surface--then try a Chromebook. If that won't do what you want, then you need the complexity of the Windows' folder."

          It's not an either-or proposition at the moment (thank goodness).

          There are a whole spectrum of possibilities, that provide different strategies for tackling attack surface.

A base install of OpenBSD is pretty minimal - it might be a better fit for Hargrove's OSes of yesteryear. All the core stuff is designed to be "secure by default", but you can, at *your* discretion, install 'ports' (ie: imported stuff like GNOME :P), either in pre-built form or built from source. undeadly.org publishes hackathon reports if you want to know what is being hacked on and why.

You can get read-only Linuxen that support persistent storage, all the way through to a full-on 'experience/clusterfunt' like Android and Ubuntu. Then there's NetBSD, FreeBSD, and some looney Russians trying to clone Windows NT. Pretty much all of those will run Thunderbird, Firefox, and some descendant of OpenOffice, which covers about 80% of the time people spend using computers for work and play.

          So there is some choice out there, and if folks threw half as much money at an Open Source project as they spaffed on Oracle licensing they would have a better product that fits their needs perfectly.

Love it or loathe it, Open Source has given us a massive amount of choice and it has given the vendors a massive kick up the arse. Prices have fallen, and utility and security have improved at a far greater rate since Open Source showed up. I expect this process to accelerate - because the percentage of people who can write code is going up every day, and they now have a massive library of mature open source components to use.

          It will be interesting to see which vendors adapt and survive. IMHO the odds don't look good for Oracle while Larry & H-Bomb are showing their faces at the office. :)

        3. Suricou Raven

          Re: "with a competent operating system, these machines were essentially bomb proof."

Which is in a way part of the problem. OSs have long competed on features out of the box - even Windows, though it was mostly competing with the previous version of Windows. This has led to a clean-install OS steadily doing more and more over the years - and with more complexity and more active services, there are more things that can go wrong or contain vulnerabilities. Look at Windows as an example, though some Linux distros are just as bad: from the first install, it runs an SMB/CIFS server, even if you have no network shares. It's already listening, even if just for devices wanting to access your media library for DLNA purposes. That's a great big juicy target - a service running that really shouldn't be running until after the user has indicated a desire for it. It's just as bad outgoing: every time you access a network drive it starts poking the address on port 80 to see if it's for a WebDAV service, and it listens for UPnP devices on the network. And that's just the easily-reached network services. If you include the rest, it's got all manner of silliness: a printer service that runs even if no printer is installed, a wireless configuration service that runs even if there is no wireless interface.

          Complexity breeds vulnerability. An OS that tries to do everything, all of the time is going to grow bloated and insecure.

        4. oldcoder

          Re: "with a competent operating system, these machines were essentially bomb proof."

          A better choice would be a Linux based system.

          MUCH better separation of user space and kernel space.

          MUCH better definition of the operating system and user applications.

        5. Hargrove

          Re: "with a competent operating system, these machines were essentially bomb proof."

          @ Brewsters Angle Grinder

          Thanks. Good points all. (My initial response was going to be "Roger that." However, from reading the Register I surmise that the Brits use the term "roger" in a sense that would be directly counter to what I wanted to say.)

My points, which I did not make clear before, are: (1) that it is the way things are bundled with the OS that introduces vulnerabilities that can be exploited to attack user data and resources, and (2) that this is the result of design choices, and the programs did not (and do not) need to be anywhere near this complicated and vulnerable to provide the capabilities.

Your point on the Chromebook is well taken. I have an iPad which, with the addition of a few available apps and an ergonomic keyboard, would readily support most of my needs, except for a few high-end graphics and design tasks and the requirement to store and manage large volumes of data files locally. For those requirements, as a practical matter, simplicity comes at the price of being forced into the Cloud. This poses another set of issues, being hotly debated in the Reg.

      2. LDS Silver badge

        Re: "with a competent operating system, these machines were essentially bomb proof."

        HAL is the Hardware *Abstraction* Layer and was designed to decouple the kernel from the actual CPU. NT was designed to run on Intel, Alpha and MIPS CPUs. But it also meant NT didn't use the full security capabilities of Intel Protected Mode, because of portability issues among different CPUs. There were also performance reasons, because hardware checks cost cycles.

  10. Anonymous Coward
    Holmes

    Unfortunately management types are more interested in

    remuneration, golf and skiing than security. Their lizard brains cannot process much else. They are the ones with the power and (well paid) job security. Unlike the techies who are treated like shit, paid shit, often work in a shit windowless room in the basement and often have shit short-term contracts.

    "Unfortunately, it seems that it is only after such an event that something gets done. Until then complacency seems to rule."

  11. Anonymous South African Coward Silver badge

    Default deny

    What about creating a default-deny state on computers?

Meaning any kind of program (whether software or hardware) will not be able to run - at all - until its existence has been verified and approved by the operator?

The only drawback to this is that you'll be overwhelmed with a plethora of access requests, or that somebody who doesn't understand the implications will grant running rights to a nasty piece of malware.

    1. Jediben
      Joke

      Re: Default deny

      Windows Vista says "Hi!".

      1. Fatman Silver badge

        Re: Default deny

        Windows Vista says "Hi!".

        DAMMIT MAN, you beat me to it!!!!

    2. Anonymous Coward
      Anonymous Coward

      Re: Default deny

      "The only drawback to this is that you'll be overwhelmed with a plethora of access requests, [...]"

      Which then becomes a human default "accept". Alarms should only go off in exceptional circumstances - so they get full attention. Several aircraft accidents were attributed to too many alarms with similar sounds - most of which were signalling events of low importance.

For reasons I have not yet found, W7 always asks for permission to run a succession of standard motherboard utilities after starting a "Limited" user login. Why it doesn't remember the "Yes" is frustrating. The intention was to stop malware taking advantage of an "Administrator" privilege login. However, all these alarms will lead to the user either wrongly saying "Yes" on some occasion, or switching to using the "Administrator" login permanently.

    3. Roo
      Windows

      Re: Default deny

      "What about creating a default-deny state on computers?"

Default deny is one way of looking at it; it may be more constructive to turn it on its head and ask "what shall I allow this operation to read/write/execute?"... ie: capabilities a la KeyKos. Simple to understand, safe by default (ie: you have to load the gun before blowing your toes off), but please don't let the Vista UI bods skin it... instead of supplying signed vendor-provided templates for apps, they would insist on swarms of dialogs to swat down.
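The "what shall I allow this operation to read/write/execute?" framing can be sketched as a tiny capability check - an illustration only, with invented names, not the actual KeyKos interface:

```python
# Minimal capability sketch: an operation carries an explicit set of
# rights over a named resource; anything not granted is denied by
# default. All names are invented for illustration.

class Capability:
    def __init__(self, resource, rights):
        self.resource = resource
        self.rights = frozenset(rights)

def invoke(cap, resource, right):
    """Deny by default: succeed only if this capability names this
    resource AND grants this specific right."""
    return cap.resource == resource and right in cap.rights

# A signed vendor template might grant an app read-only access to its data dir:
app_cap = Capability("/app/data", {"read"})

assert invoke(app_cap, "/app/data", "read")        # granted
assert not invoke(app_cap, "/app/data", "write")   # denied: right not granted
assert not invoke(app_cap, "/etc/passwd", "read")  # denied: wrong resource
```

Note the shape of the model: there is no access-control list to consult and no prompt to click through - if the operation wasn't handed the capability up front, the request simply fails.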

    4. oldcoder

      Re: Default deny

      Already done.

Linux can deny execution outright unless the admin first permits it (a filesystem mounted with "noexec" disables all executables on it...)
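The noexec idea can be sketched as a loader that consults the mount flags before agreeing to run anything - a toy model with invented names (on a real system the check lives in the kernel's exec path):

```python
# Toy sketch of "noexec": before executing a file, check the flags of
# the mount it lives on. Paths and flags here are invented examples.

MOUNTS = {"/home": {"noexec"}, "/usr": set()}

def can_exec(path):
    """Deny execution of any file living on a noexec mount."""
    for mount, flags in MOUNTS.items():
        if path.startswith(mount):
            return "noexec" not in flags
    return False  # unknown mount: default deny

assert can_exec("/usr/bin/ls")             # allowed: /usr has no noexec flag
assert not can_exec("/home/user/malware")  # denied: /home is mounted noexec
```

The real-world equivalent of this experiment is mounting, say, a data partition with the noexec option and watching a binary stored there refuse to run directly.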

  12. Anonymous Coward
    Anonymous Coward

    Good beginning, missing middle and end

    It was all a nice tutorial on MMUs, and then I turned to the final page expecting mention of memory safety, stack-smashing, ASLR, segmentation, capabilities, Intel MPX, formal verification, sandboxing, language runtimes and... was disappointed.

    As for the 'if we had a hardware restriction between kernel and userland we wouldn't have problems', I can only point you at XKCD:

    http://xkcd.com/1200/

    And, for the record, my last computer without an MMU was a BBC Micro.

  13. Tim 11

    wrong on two counts

    I have no reason to question the author's knowledge on the history of computing (I didn't study the article too closely) but he makes two fundamental misjudgements about the nature of people.

    Firstly, many (and an increasing number of) computer attacks do/will not arise from a lack of hardware/operating system protection; they come from things like social engineering attacks - fooling the naive user into doing something they don't understand.

    Secondly, we are now in 2014 yet still a large proportion of the world's population does not have access to drinking water, and any relatively rich nation spends a large proportion of its wealth on going to war (and inventing machines to kill people). Over half the world's population still believes in God for f***s sake. Humanity is simply neither rational nor capable of acting in the best interests of the whole world.

    1. Brewster's Angle Grinder Silver badge

      Re: wrong on two counts

      Humanity tends to act in the best interests of the individual human concerned.

    2. Hargrove

      Re: wrong on two counts

      @ Tim 11

      Over half the world's population still believes in God for f***s sake.

      And what, pray tell, does this have to do with the subject at hand?

      Otherwise, a fine post.

      1. Anonymous Coward
        Anonymous Coward

        Re: wrong on two counts

        > And what, pray tell, does this have to do with the subject at hand?

        I think he is saying that a still significant number of people are incapable of the clear and rational thought required for solving computing issues of this kind.

        I do agree with one thing previously mentioned: there is no clear distinction between what is operating system, what is application, and what sits in some ambiguous area in the middle. Better, clearer segregation between the two is essential for good design.

        There is a good argument in the microkernel world for the idea that a smaller kernel is better for security, but in some ways that has the additional problem of pushing the issue out into the wild west of user space.

      2. Anonymous Coward
        Anonymous Coward

        Re: wrong on two counts

        "Over half the world's population still believes in God for f***s sake.

        And what, pray tell, does this have to do with the subject at hand?"

        If the subject at hand is based on science, engineering, and technology, which are meant to be objective and evidence-based, then surely the relevance is obvious.

        If, on the other hand, you believe that science, engineering, and technology should be largely based on faith and fashion, then there's a management job for you in the "IT industry".

    3. Hargrove

      Re: wrong on two counts

      @ Tim 11

      Over half the world's population still believes in God for f***s sake.

      And what, pray tell, does this have to do with anything under discussion.

      I believe that somewhere I read that the Reg does not do creationism. Neither should it do the brand of categorical atheism that claims to have proof for the non-existence of any and all of the diverse concepts that different people choose to tag with the name "God." (Not suggesting Tim 11 did that. But, by implication, it leans sharply in that direction.)

      Otherwise, this is a good and thoughtful post.

      To be honest, I sincerely appreciate the motivation for Tim 11's angst. It is an understandable and rational response to the myriad atrocities and injustices perpetrated in the name of God. On the other side, atheists have done no better.

      1. Anonymous Coward
        Anonymous Coward

        Re: wrong on two counts

        > It is an understandable and rational response to the myriad atrocities and injustices perpetrated in the name of God. On the other side, atheists have done no better.

        Well, I'm an atheist and I don't remember ever committing any atrocities. As a non-theist, I cannot think of a single reason why I or anyone else would want to. It takes the suspension of rational common sense and personal responsibility to commit atrocities in the name of religion, and we see the worst examples of people being total jerks to each other only in those places where religion or a quasi-religious form (hero worship, dogmatic obedience: so yes, Nazism and the Russian form of "communism" do count) are predominant.

  14. naive

    Learning curve

    Perhaps the most important issue that the author points out is the broken learning curve. The rise of MS-DOS/Windows is like the downfall of the Roman Empire, which stalled any progress in Europe for 1000 years. The concepts of VAX VMS and Unix were implemented in the 70's, which offer strong foundations to build a secure system. The Morris Worm, exploiting a mailer bug, does not invalidate this.

    Had these systems been given the opportunity for growth and a scale of use comparable to Microsoft's today, the internet would be safer. Think of it: Microsoft has been building operating systems since the late '80s, yet the list of issues every Patch Tuesday is still chilling.

    So after 25 years of Windows, with billions of copies sold, it is still bad due to the design issues pointed out by the author. But hey, why would they care, with 95% market share, no competition and super tankers full of dollars unloading every year?

    1. Hargrove

      Re: Learning curve

      @ naïve

      Nicely put.

    2. LDS Silver badge

      Re: Learning curve

      Sorry, but it was the rise of the Roman Church and its alliance with secular power - 'god approval' in exchange for enforcement of religious rules - that hindered progress for a thousand years. When someone is able to dictate how everybody should think, and enforce it with some aptly designed tortures and executions, you can't have any type of progress.

      Some 'approved sciences' did progress, like building engineering, because it was useful both to religious pride and defensive needs; others that could question either power were crushed.

      1. oldcoder

        Re: Learning curve

        The same can be said of MS trying to get systems labeled "Pirate" if they aren't sold with a MS operating system...

        It was successful for quite a while.

        As for building engineering... you forgot the design failures that had cathedrals collapsing. They still do - foundations cracking, structures unsafe (they wouldn't even meet the building codes of 50 years ago; they were all grandfathered in).

      2. Maventi

        Re: Learning curve

        "When someone is able to dictate how everybody should think, and enforce it with some aptly designed tortures and executions, you can't have any type of progress."

        Hmm, why does that sound a little familiar?

    3. Danny 4
      Linux

      Re: Learning curve

      @naive

      "The concepts of VAX VMS and Unix..."

      Good post. Linux is now their spiritual successor for the modern world. No reason to continue using Windows...

    4. Hargrove

      Re: Learning curve

      Well put,

    5. Terry Cloth
      Unhappy

      Can someone please release the source code for Multics?

      I bet there's a lot in there to learn. On the other hand, the last backup was probably discarded in the '90s.

  15. This post has been deleted by its author

  16. spork
    FAIL

    How do I downvote this twaddle?

    Security is much too important to your readers to waste their time with a back-in-the-olden-days preamble and then confess "but security sucks now though I have no idea why". Arrrghhh! Why don't you get Peter Gutmann or someone who KNOWS SOMETHING to write a series that would both delight and educate your readers? The most active front in this war is userland, and anyone who thinks a couple hardware features and a secure OS "guarantee security" clearly doesn't know the first thing about how security works and needs to stop being published. To the editor who thought this "feature" was worthy of publication: this piece is damaging your reputation, and you should probably take it down - just look at all the comments. Shame on you, Reg.

  17. Anonymous Coward
    Anonymous Coward

    Data descriptors

    ICL VME target architecture had hardware "descriptors" to police the memory locations accessed during execution. IIRC that meant that individual data items in a program instance were each given their own range limits. Think of it as hardware enforced type casting.
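
    The idea can be sketched in software as a bounds-checked "fat pointer" (a hypothetical C analogy, not ICL microcode): every data item travels with its own range limits, and no access bypasses them:

```c
#include <stddef.h>

/* Software stand-in for a hardware data descriptor: the base address
   and the element count travel together with the data item. */
typedef struct {
    const int *base;   /* start of the data item */
    size_t     limit;  /* number of elements the descriptor covers */
} descriptor;

/* Returns 0 and stores the value, or -1 if the index is out of range -
   the check the VME hardware performed on every access. */
int desc_load(descriptor d, size_t index, int *out) {
    if (index >= d.limit) return -1;
    *out = d.base[index];
    return 0;
}
```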

    1. Michael Wojcik Silver badge

      Re: Data descriptors

      That's a type of capability architecture. Other prominent examples are IBM's AS/400 and Intel's stillborn i432.

  18. Stevie Silver badge

    Bah!

    You made me read 3 pages of Selections from Maurice Bach so that you could grunt "Windows Bad" at the end?

    Only the numerous pictures of the Westrex - a proper manly man's computer terminal - save you from a finger-wag.

  19. FutureShock999

    He can't out-old man ME...

    Yes, Mr. Watkinson, I ALSO have a third-edition copy of Ted Nelson's "Computer Lib" to give cool diagrams on MMUs, and I did grow up programming in Fortran on time-shared mainframes in my high school, then eventually moved on to 6502-powered PCs (my first was an Ohio Scientific C1P, with THE first copy of MS Basic in ROM AFAIK, copyrighted 1977 by Gates). I also programmed a variety of single-board computers, like the COSMAC Elf, along the way. I grew up reading "Soul of a New Machine", and that made me want a career in computing. So, similar vintage to yourself.

    And I think your article is twaddle. I remember breaking security on my university mainframe FOR FUN when forced (against my will, but required) to take a COBOL class, and getting caught because I bragged about it. 45 minutes with the Dean of Students before I managed to get off with a slap on the wrist - because I had thankfully ALSO told my teaching assistant, so it was deemed "an experiment" rather than hacking. I remember my first time with a DEC-20 at RPI... and breaking passwords to admin accounts on that in 20 minutes. I remember my roommate at one point describing in GREAT detail the man-in-the-middle attack being used to break the VMS networks he was working with. Security was NOT golden back in the golden days, it was utter rubbish! The only thing that made it SEEM more secure was that there were two to three orders of magnitude fewer hackers, because there were fewer high-value targets and fewer trained people. Back then, there were very few criminal GANGS responsible for it - underwriting teams of coders/hackers, auctioning off automated attacks for Bitcoins on the Tor network. It was all individuals, or very small teams... working in private. Now hacking is a full-blown criminal enterprise, with outsized rewards, safety via huge distances and extradition laws, and often the threat of physical violence.

    So of course we feel less safe these days... the level of intrusion investment being deployed against modern systems is two to three orders of magnitude greater than in the "golden days". And THAT has a whole lot more to do with the relative "insecurity" of today's systems (which all do have MMUs that are insanely well engineered, btw) than any mythical hardware/software deficiencies.

  20. Primus Secundus Tertius Silver badge

    Before IBM PC

    Before the IBM PC there were Word Processors (*): dedicated machines, not generally programmable, running on 8-bit processors with crude and simplified MMUs. So there was no obvious need for user login, file protections, and of course the many pitfalls of networking.

    As the author implies, it was that tradition that the IBM PC inherited.

    But what do we do now? As others above remark, the article kind of fades away into nothing.

    It seems fair to say that VMS and Unix sorted out most of the basic issues for single machines. What have never been properly resolved are the many issues in large scale networking. I remember seeing claims in the early days of the Arpanet that much research on network principles and details was being done. That all seemed to come to a halt after IPv4.

    (*)There were also the dedicated calculator/plotter machines found in many labs, mostly made by HP.

    PS VMS could be hacked if the machine minders had not amended certain system accounts intended for maintenance and testing. I speak at first hand.

    1. Anonymous Coward
      Anonymous Coward

      Re: Before IBM PC

      "PS VMS could be hacked if the machine minders had not amended certain system accounts intended for maintenance and testing. I speak at first hand."

      Far from relevant, arguably far from any recent truth. I speak at first hand too.

      Once upon a time (maybe the 1980s, but probably not the 1990s) VMS came with default passwords on a tiny handful of "certain system accounts intended for maintenance and testing". If an idiot sysadmin had left the passwords at default values, and didn't routinely disable (not delete) the accounts except when they were actually needed, there was an obvious way in.

      At some stage (1990s?) it was made very obvious that leaving those passwords at the default values was a bad idea. In fact the whole concept of 'factory default' passwords on VMS went away, and sensible sites acknowledged that the accounts in question should be disabled except when explicitly needed. Many sites also sensibly decided that the accounts in question should only be available via a local (not networked) login session (yes, VMS makes that distinction).

      There may be hacks around for VMS. Default passwords barely count, whatever the OS in question. I've probably had Linuxes with default root passwords, if I go back far enough.

      "the article kind of fades away into nothing."

      On that, we are agreed.

  21. LDS Silver badge

    The x86 architecture offers memory segment protection since the 286...

    Intel has offered the ability to specify what a memory segment is for, and which code can access it (and how), since the 286. No operating system I know of - including Unixes - ever took full advantage of the security features built into those chips, for compatibility and performance reasons. All of them took the shortcut of flat address spaces and near calls (to avoid expensive gates and traps, thereby bypassing hw security) and used just two of the four security rings.
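
    For the curious, the 286-style descriptor encoding can be unpacked in a few lines (a sketch of the documented 8-byte layout; the struct and field names are mine). The access byte carries the present bit and the two DPL bits that implement the rings mentioned above:

```c
typedef struct {
    unsigned base;     /* 24-bit segment base address */
    unsigned limit;    /* 16-bit segment limit */
    unsigned dpl;      /* descriptor privilege level: ring 0..3 */
    int      present;  /* P bit: segment is usable */
    int      writable; /* W bit (data segments) */
} segment;

/* Decode a 286 protected-mode segment descriptor from its 8 raw bytes:
   bytes 0-1 limit, bytes 2-4 base, byte 5 the access byte. */
segment decode_descriptor(const unsigned char d[8]) {
    segment s;
    s.limit    = d[0] | (d[1] << 8);
    s.base     = d[2] | (d[3] << 8) | (d[4] << 16);
    s.present  = (d[5] >> 7) & 1;
    s.dpl      = (d[5] >> 5) & 3;  /* the hardware ring check uses this */
    s.writable = (d[5] >> 1) & 1;
    return s;
}
```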

    It's impossible to solve security issues at the software level only - hardware specific features are needed, and those available should be used, but the software industry never took care of security properly, especially against low-level attacks.

    1. Michael Wojcik Silver badge

      Re: The x86 architecture offers memory segment protection since the 286...

      It's impossible to solve security issues at the software level only - hardware specific features are needed

      It's "impossible to solve security issues" in the general sense, so this claim isn't substantive. A microcontroller-based embedded system may be much safer against any threat model that doesn't include physical tampering than a multiuser system with a fancy capability architecture. Without specifying the threat model, there's no sense in which "security issues" can be alleviated, much less "solved".

  22. Peter Gathercole Silver badge

    Nice to see the PDP-11 architecture being used as the reference for mini-computer memory management. Should always be regarded as a classic architecture.

    But the final analysis is flawed. There were micros with MMUs available when the IBM PC was produced. There were MMUs for 68000s and Z8000s that would have allowed proper protected-mode OSes like UNIX or Concurrent CP/M-86 to run on the desktop. They were, however, too expensive for the types of machines that IBM envisaged (single-user, single-task machines that worked like Apple ][s, but with a more 'modern' processor). Cost and maximising profit were the main causes of using poor hardware that did not have the required capabilities for security.

    It was a failure of imagination that led to the development of the IBM PC and PC/MS-DOS in the first place, and once out there, nobody was going to be able to shake the dominance of these platforms on the desktop, even though they were technically flawed and limited, even when they were new.

    Imagine if Gary Kildall had actually met and agreed to supply IBM with the OS for the IBM PC. I'm absolutely sure that with a CPM/86 derived OS, multi-tasking, potentially multi-user and running protected-mode processes, together with a supervisor mode OS would have appeared in desktop machines way before WinNT.

    Windows even now is still living with the legacy of poor design decisions taken in MS-DOS and early versions of Windows, which persisted well into the times of hardware (and indeed Windows core security capabilities) capable of running properly protected.

    1. LDS Silver badge

      I don't believe so. Software companies are often inherently conservative: they don't like being forced to rewrite applications with new tools and technologies, or to maintain different codebases, once they have products selling well with the current ones, and not all customers can easily move to new machines and OSes.

      Windows replaced DOS only because Microsoft saw an opportunity to crush competitors in the user application market, while established DOS players tried to avoid moving until it was too late... it would have happened with CP/M as well - how many developed for OS/2 1.x?

      Even *nixes suffer from designs that made sense in the '70s, but are today truly obsolete, but can't be easily changed due to compatibility reasons.

      1. Peter Gathercole Silver badge

        @LDS - Not sure what you mean.

        I am positing that PC-DOS was never provided by Microsoft. If CP/M-86 had been the OS for the IBM PC, then MS-DOS, OS/2 and Windows would never have happened, and the PC would have evolved to multi-tasking and protected mode machine as the hardware became cheap enough, because the rudimentary features were already in CP/M-86. With a proper multi-tasking OS, a windowing desktop would have followed quite naturally.

        I've deliberately not mentioned UNIX, although it has been my career, because I'm well aware that in the early '80s, the requirement for a hard disk that UNIX has would have prevented it from appearing on commodity hardware.

        Yes, I admit that some historic features of UNIX may be undesirable, particularly the security model, which is effective but probably too simplistic for what is required today. But I would again suggest that if UNIX had been more prominent outside of the server room, there would have been more pressure to modernise some of its least desirable features. In some respects, UNIX is a victim of being as good as it was when it was written. It has been just about capable as written, so people were able to work around problems, never requiring a significant re-write.

        1. LDS Silver badge

          Re: @LDS - Not sure what you mean.

          The issue with x86 protected mode, especially if embraced fully, is that it requires deep changes in the OS, the development toolchain, and the applications. Otherwise you can use sw-only security, but that has more chance of being bypassed.

          Due to hw limitations (CPU power and RAM), the need for a multitasking, multiuser DOS didn't materialize soon, and due to the risk of killing the goose that laid the golden eggs, the risk of introducing an incompatible, costlier, although superficially similar OS was high. Especially back then, when most software vendors were still small companies and the number of developers behind each application very limited, while application costs were high and it wasn't yet so 'natural' to spend a lot on sw and hw.

          IMHO the situation would not have changed if there had been CP/M instead of DOS. It is application availability that dictates what OS users will run, and porting an application to a new, more complex technology has costs that need to be justified by sales, not just because it is technologically sounder. You need a strong driver to move the market in a new direction, and back then security was not one. The driver to x86 protected mode was mostly the larger address space, not security. Had real mode offered a larger address space, I guess protected mode would have been adopted much later.

          Maybe history would have been different if IBM had chosen a different CPU, not the 8086/8088, with a different upgrade path to improved CPU features.

        2. oldcoder

          Re: @LDS - Not sure what you mean.

          CP/M worked just like OS-8. And OS-8 had MMU support (that was how you addressed more than 4k of 12 bit words).

          UNIX didn't require a hard disk - a floppy was sufficient. The original UNIX ran on a single 1.5 MB hard disk - thus it would ALSO run on floppies when they reached 1.5 MB. Also remember, the original UNIX only required 16k-24k (or thereabouts) for the kernel.

          The major modern problem is that the device controllers don't have an MMU the way the VAX systems did.

          1. Peter Gathercole Silver badge

            Re: @LDS - Not sure what you mean. @oldcoder

            The first UNIX system I ever used had 2 RK05 cartridge disks, each 2.5MB in size, and 128KB of memory (this pre-dated the PC by several years). It was never about the size of the disk, it was about the speed of the disk and the model used for running commands, especially if they were chained together in a pipeline.

            I used a system that had a minimal UNIX-like OS (it was so similar, I wondered whether it was a direct port of V6) on two floppy disks. One was the system, and the other was used for user/application data including the pipe files (if you remember back as far as UNIX Version/Edition 6/7, you will remember that unlinked files were used to keep the data that was in the pipeline).

            The amount of thrash that went on between the two disks whenever you ran something as simple as "ls -l | more" (IIRC it was a port of UNIX V6 with some BSD 2.3 enhancements, possibly called IDRIS) was more than anybody could bear, and for these systems, you could only really use the OS as an application launcher, not in the way that a UNIX power user would use it.

            AFAIK, all systems that Ken worked on either had Core memory, which was persistent and had the OS loaded from paper tape or DECtape, or had hard-disks. There were no floppy based UNIX systems at Murray Hill.

            PDP11s (except for the very smallest ones) had MMUs that allowed them to address up to 256KB or 4 MB of memory dependent on which model they were.

    2. oldcoder

      IBM COULD have used a M68000 processor.

      Even without an MMU.

      They specifically CHOSE not to.

  23. disgruntled yank Silver badge

    Wow

    "The processors were still microcoded by the computer companies and thus did exactly what the designers wanted."

    I guess that we should all be writing microcode then--no other sort of code seems to do exactly what the designer wants.

    Several years ago, the comp.risks newsgroup carried a link to a security study done on Multics. Many more security precautions were taken with Multics than ever made it over to at least the earlier Unixes; it was written, for one thing, in PL/I, which is not quite so libertarian as C. But the researchers found plenty of exploitable holes.

    The other point to be made is that it really doesn't matter what kind of a lock you put on the door if the residents leave it open. One crappy superuser password and your security can be gone.

    1. Michael Wojcik Silver badge

      Re: Wow

      My favorite was this bit: "Since malware relies on having access to the whole computer in order to do harm when the code is executed..."

      Since that premise is completely, utterly, obviously false, pretty much everything that follows is a big ol' load of rubbish.

      As others have said, this article is an embarrassment. I'm concerned that so many people early in the comments praised it; that doesn't reflect well on their understanding of IT security. Or contemporary hardware and operating systems, but particularly security, on which this piece is appallingly simplistic.

      (And while we're on the subject: "the Orwellian term 'Information Technology'"? Oh, please. That's a perfectly good and neutral use of both of those words. Does Watkinson fret about the use of techne as a term of art in rhetoric, too?)

  24. Cynic_999 Silver badge

    I disagree with the entire premise of the article. It is impossible to have hardware segregation to prevent malware attacks because the hardware cannot know the legitimate purpose and scope of an application. If hardware were to completely prevent any user application from accessing the mass storage devices for example (as is suggested in the article), most of the applications we need would be impossible. As it is, modern operating systems *do* prevent access by applications to physical I/O - applications can only use hardware by going via an OS call. But that does not prevent files being deliberately corrupted or deleted, because there is no way for the OS to know whether a call to modify or delete a file is what the user desires to do or is a command issued from malware that the user is not intentionally running. And memory is segregated and allocated to processes just as described - any attempt to read or write outside the bounds allocated will result in an exception trap. As most Reg readers will know, that's what the "Out of memory error" meant in WinXP etc., not that you had insufficient RAM!

    1. diodesign (Written by Reg staff) Silver badge

      Re: Cynic_999

      "It is impossible to have hardware segregation to prevent malware attacks because the hardware cannot know the legitimate purpose and scope of an application"

      You're absolutely right, IMHO.

      C.

    2. Anonymous Coward
      Anonymous Coward

      Code vs data

      Would you mind a minor variation:

      "It is impossible to have hardware segregation to prevent malware attacks because the hardware cannot know the legitimate purpose and scope of [any given piece of memory]"

      A whole raft of modern programming techniques, from Office macros to Java and JavaScript, blur the distinction between code and data and therefore are not amenable to hardware-based protection. Should that make the techniques useless?
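
      That blur is visible at the lowest level too. A small POSIX sketch (assuming a stock Linux/macOS toolchain; hardened kernels with strict W^X policies may refuse the second step): the very same byte is "data" and then "code" depending on a single mprotect call:

```c
#define _DEFAULT_SOURCE       /* expose MAP_ANONYMOUS on Linux */
#include <sys/mman.h>
#include <unistd.h>

/* Write a byte into a page as data, then ask for the page to become
   executable. Returns 0 if both steps succeeded, -1 otherwise. */
int flip_page(void) {
    long ps = sysconf(_SC_PAGESIZE);
    unsigned char *p = mmap(NULL, (size_t)ps, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return -1;
    p[0] = 0xC3;  /* written as plain data (happens to be x86 'ret') */
    /* one call later, the same byte is code as far as the MMU cares */
    if (mprotect(p, (size_t)ps, PROT_READ | PROT_EXEC) != 0) return -1;
    munmap(p, (size_t)ps);
    return 0;
}
```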

      1. Michael Wojcik Silver badge

        Re: Code vs data

        A whole raft of modern programming techniques, from Office macros to Java and JavaScript, blur the distinction between code and data

        While I've made several objections to this piece, Watkinson[1] does correctly point out that this distinction went the way of the dodo with John von Neumann. It's no more "modern" than the rest of computer programming.

        [1] Ugh, I see I've gotten his name wrong in previous posts. Bad form on my part, and I apologize. I'll see if they're still editable in a moment...[2]

        [2] Think I got the one post in error.

  25. Richard Conto

    Tedious and uninformative

    This article spends 3 1/2 pages of a 4 page article on computer technologies through the mid-1990s, and then fails to show how the lessons of memory protection (and privileged instructions as well) are insufficient for modern computer architecture.

    This was a complete and utter waste of my time.

    For what it's worth, my synopsis of why memory protection and privileged instructions are insufficient for modern computer architectures can be outlined as follows:

    (1) Modern OSes (Windows, OS X, Linux, presumably iOS too) do run with protected memory, privileged instructions, etc.

    (2) Computers are among the most hideously complex devices created. (And networked systems of computers are even worse.)

    (3) The complication of (2) above means that the OSes on those devices will need updates (necessarily from external sources.) Networks, USB drives, etc. make this convenient and possible.

    (4) Most computers sold to end-users as such (or as phones, game devices, tablets, and other infotainment) are incomplete - they do NOT have everything the end user wants, so a mechanism must be provided to obtain it from external sources.

    (5) Often, the add-on services represent a virtual machine in and of themselves - the Java VM is explicitly a VM, but even the JavaScript environment in a web browser is a VM, as is Adobe Flash. It is nearly impossible to make these VMs more secure than the underlying OS and hardware.

    (6) Various extensions to the underlying OS in order to provide better speed (i.e.: kernel level device drivers, extensible file systems, etc.) or to patch flaws in the OS security model (i.e.: anti-virus hooks) complicate the security model, weakening it overall.

    (7) Software installation often requires higher privileges in order to install software the customer wants. This is as often for the convenience of the software developer as it is required by the underlying security model.

    (8) Software manufacturers / publishers have evolved a model whereby they're not necessarily liable for flaws in their software. This leaves the need to publish quickly paramount in their priorities.

  26. AnoniMouse

    Worm Holes galore

    As pointed out in other comments, many OSes were late to (or still do not) take full advantage of hardware features in modern CPUs for memory protection.

    Another massive route to compromising systems is the means by which "application code" invokes (privileged) OS code (system calls), with their poorly designed APIs and inadequate parameter validation. These are supplemented by numerous application-level "frameworks" which have the ability to escalate the privileges of the current process, so that vulnerabilities in application code can readily lead to compromise of the whole system. Thus the number of worm holes penetrating the so-called protection of the privileged parts of a system just continues to increase.
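
    The parameter-validation point deserves emphasis. A sketch of the overflow-safe length check a system-call boundary needs (hypothetical names; kernel-style logic only, not any real kernel's code):

```c
#include <stddef.h>
#include <string.h>

#define KBUF_SIZE 64
static char kbuf[KBUF_SIZE];  /* stands in for a kernel-side buffer */

/* A toy "system call": copy user-supplied bytes into the kernel buffer.
   Returns bytes copied, or -1 if the request is malformed. The check is
   written so that off + len cannot overflow and slip past it. */
int sys_write_sketch(size_t off, const char *src, size_t len) {
    if (src == NULL) return -1;                 /* never trust userland */
    if (off > KBUF_SIZE || len > KBUF_SIZE - off)
        return -1;                              /* would overrun kbuf */
    memcpy(kbuf + off, src, len);
    return (int)len;
}
```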

    Sadly, the focus (from the sales and marketing community, not to mention the "got to have the latest" crowd) is on novelty rather than continual improvement; and change, especially when not strictly necessary, creates needless opportunities for the creation of more vulnerabilities.

    The fundamental issue is the lack of rigour or formality in designing and verifying almost all modern OSs. Not a great foundation for a world that is increasingly dependent on this stuff.

  27. Colin Ritchie
    Windows

    I have a question.

    How do MMUs relate to the Windows Registry?

  28. Christian Berger Silver badge

    Uhm, no

    Well, first of all, most of today's computers have MMUs. Virtually every slightly more modern mobile phone has one, and certainly everything that runs Linux. And yes, I think we all understand how MMUs work, and what vital work has been done in recent years to improve on them.

    "License plates" or other identity schemes, where you have to show your passport to get on the net, won't help anything. Just look at malware like WhatsApp: you can find out who made it, but that's of no use, you still cannot get the malware aspects out of it. The only thing this helps with is making it easier for governments and companies to track the opposition. There are people whose lives depend on anonymity.

    What we need to do instead is make computers more secure. We are already much better at this than we were in the 1990s, except on Windows and mobile phones. Instead of misinformed grumbling, we should carry on down that path and make our systems even more secure. We need to understand where our current weaknesses are and find ways to eliminate them.

    1. oldcoder

      Re: Uhm, no

      Part of the problem is that I/O devices and controllers do NOT have MMUs - yet, they have full access to memory.

      Which creates a huge vulnerability when it comes to plug-and-play devices: mice, keyboards, audio/radio, networking and storage devices.

      Worse, the major problem is in Windows and the MS policy of mixing applications, OS tools and kernel, with the default that everything is an executable...

      And the easy circumvention is to not use Windows.

  29. Magnus_Pym

    User choices

    PC sales were based on cost and ease of use. If you could say 'Our PC compatible will calculate your spreadsheet in half the time of theirs', people would buy it. If you said 'Our PC compatible will calculate your spreadsheet more reliably than theirs' or 'Our PC compatible will calculate your spreadsheet more securely than theirs', they wouldn't. MS networking outsold Netware on familiarity and ease of use (mainly).

    The market got what the market thought it wanted and now has to live with it. Even today idiot users routinely disable anti-virus because it 'slows the PC down' and find secure passwords too much hassle.

  30. Graham Cobb

    PCs are not the battleground any more

    Unfortunately this piece misses the point. PCs are not the important concern any more. It isn't even tablets and phones. The area to be concerned about is the Internet of Things.

    The first reason is scale. PCs number well below one per person. Phones come in at around one per person. IoT devices will number tens per person or more. If you are worried that "Unfortunately, it seems that it is only after such an event that something gets done", then it is these devices which will have the most opportunity to cause chaos.

    The second reason is that many (not all, of course) IoT devices are going to be in either safety-critical or, at least, seriously-inconvenience-causing environments. They may be controlling important household functions (locks, heating, lighting). More importantly, they will be working in offices, factories, railway stations, etc. Putting threatening messages up on the departure boards at Waterloo station in the rush hour may cause more loss of life than causing a car to crash.

    The third, and most important, reason is that these devices need to be cheap. Really cheap. Designed and built down to a cost. And those which are not truly safety-critical (nuclear power station controllers) will not be regulated at all. Their hardware may be simple, their RTOS may not be designed for security, their interfaces will be wide open to simplify (make cheaper) integration, and their software will probably be crap -- more concerned about whether it is selecting and displaying ads correctly than whether it is functioning.

    We already see serious security issues in SCADA controllers. We already see serious issues in vehicle engine management systems. Both of those might get targeted by regulators. But will non-safety-critical IoT devices ever be safe to use?

    1. Anonymous Coward
      Anonymous Coward

      Re: PCs are not the battleground any more

      "But will non-safety-critical IoT devices ever be safe to use?"

      Good question.

      How much damage can a washing machine do when the water inlet valve is turned on and left on?

  31. spodula

    I think you're barking up the wrong tree.

    If computers of the 1960s had had anything like the malware audience that modern computers have, I doubt they would have performed any better security-wise, and probably a lot worse. Mainframes didn't have anything like the hackers going after them that they do now, for both communication reasons (most of them had little or no outside connectivity) and knowledge reasons. The internet has lowered the bar on IT knowledge a lot. (This is both good and bad IMHO, but probably more good.)

    All OSs since Windows NT have had memory management, so pretty much all malware *HAS* to rely on bugs in the OS to take control. This is why exploits for Linux, Mac and Windows are not generally interchangeable. All the hardware that he talks about IS included on modern chips and used.

    What this article demonstrates is the author's total lack of knowledge about security, old computers and modern technology.

  32. Paul Hovnanian Silver badge

    Anti social?

    "I think we also need to take on board that creating and sending viruses is every bit as antisocial and criminal as going around assaulting people and smashing up their property."

    Right. Like that's going to stop the problem. Try putting up a sign: "Please do not mark this property with graffiti" and see what that gets you.

    "We have licence plates on cars so that bad drivers can be identified."

    And they just don't care. Sloppy driving is rarely the target for a citation. And people (particularly in the USA) take crappy driving as some sort of God-given right. Watch how many people camp in the left (fast) lane on the highway.

  33. Anonymous Coward
    Anonymous Coward

    just a mess

    We have a sad situation.

    The common people only care about security if it means that they aren't impacted by whatever is crawling through their computer.

    Businesses only care about security if it means their people are productive and that they think their secrets aren't in their competitors hands. Oh, and that they don't get sued.

    Governments only care about keeping some of their stuff secured while at the same time they want to make sure they can lift anything they want out of everyone else's stuff.

    OS vendors care about market share (as is right) and not introducing stuff that makes it easier for the common people to switch. Which means backward compatibility and introducing just enough measures to not piss any of the above off.

    Security Professionals (the real ones) are being crushed by all of that.

    Is a "Secure" computer feasible in this environment? No. Is it technically feasible? Yes. Will any of the above change? Not without an event that forces Businesses and Governments to change their opinion on matters; probably something on the order of shutting down the entire infrastructure of a significant country.

    1. Anonymous Coward
      Anonymous Coward

      Re: just a mess

      "Is a "Secure" computer feasible in this environment? No. Is it technically feasible? Yes. Will any of the above change? Not without an event that forces Businesses and Governments to change their opinion on matters; probably something on the order of shutting down the entire infrastructure of a significant country"

      You'd have thought that Stuxnet would have been a bit of a wakeup call.

      Did it actually cause any real changes? None that I've seen. Anyone?

  34. gerdesj Silver badge

    Eye watering complexity

    As has been mentioned above, modern PCs are far more complex and potentially competent than a VAX. The VAXen, System/36s and AS/400s, not to mention mainframes and other old beasts I have used, did have flaws which were mercilessly exploited, but normally for a laugh rather than extortion; mainly because there wasn't really anything to exploit in the same way that my phone or browser can get at my bank account.

    The laptop I am using now has a quad-core i7 beastie and 16GB RAM in it. This thing could produce spam email at a heck of a rate, especially given it has an 80/20Mb/s connection to t'interwebs. However, Mr pfSense has been told to stop that sort of nonsense.

    The OS and apps on this thing were compiled from source code via the magic of Gentoo, but I have no idea whether it is particularly more secure than a Windows box. There could be all sorts of nasties lurking in anything from the Intel microcode through to Chrome or FF.

    I still seem to be the only person accessing my bank account at the moment, so it seems reasonable to assume it's OK (for now).

    Cheers

    Jon

  35. Henry Wertz 1 Gold badge

    2 points.... plus a few more

    1) The 386 got an MMU and in fact supports all the features of a VAX. It is definitely possible to keep process address spaces separate and to use separate kernel and user modes, and in fact all of this is done by Linux, Windows and Mac. (Android uses both the MMU and Java-style sandboxing, which is probably unnecessary since the MMU would already keep everything separate AFAIK.)
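    The "separate address spaces" point can be seen from user land with nothing more than fork(). A POSIX-only sketch (the MMU is what makes this cheap and enforceable on real hardware; the Python is just the demonstration):

    ```python
    # After fork(), the child has its own copy of the address space, so its
    # write to `counter` is invisible to the parent. POSIX-only (os.fork).
    import os

    counter = 100
    read_end, write_end = os.pipe()

    pid = os.fork()
    if pid == 0:
        # Child process: mutates its own copy of the variable.
        counter = 999
        os.write(write_end, str(counter).encode())
        os._exit(0)

    os.waitpid(pid, 0)
    child_value = int(os.read(read_end, 16))
    os.close(read_end)
    os.close(write_end)
    print(counter, child_value)   # parent's copy is untouched: 100 999
    ```

    The same page-table machinery that gives each process its own view of memory is what stops one process scribbling on another; the exploits discussed in this thread work by going around it, not through it.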

    2) The VAX. I think, if you look objectively, you'll find these had plenty of security problems despite use of the MMU hardware. Note I'm not distinguishing between VMS, BSD for VAX and Ultrix (UNIX for VAX) bugs here, just saying the VAX software had plenty of security flaws over the years. Among these: they shipped with a field service account, which for years was username: FIELD, password: SERVICE. Yup, walk up and log in and you've got superuser access. FTP with anonymous turned on, but able to read/write where it shouldn't, not just a special ftp directory. Doing a "cd .." with FTP or other utilities to escape the "top-level directory" you were supposed to be restricted to. World-readable /etc/passwd files. Network utilities that would allow you to send a system a file, THEN ASK IT TO EXECUTE IT -- and sometimes this utility would run as root, not as user nobody! UUCP (UNIX to UNIX CoPy) had minimal to no security; you could request (for example) /etc/passwd off a system with this. On some systems, a user could submit cron jobs which would be run as root. This ignores the havoc packet sniffers could cause with everything unencrypted (encryption wouldn't have been feasible with the processor speeds of these systems). Seriously, though, the list goes on and on. See the Morris worm of 1988 (I'll get to that below).
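    The "cd .." escape mentioned above is still the textbook path-traversal bug, and the fix is the same now as then: resolve the client-supplied path and refuse anything that lands outside the directory the session is confined to. A sketch of hypothetical server logic (not any real FTP daemon's code; posixpath is used so the behaviour is identical on every platform):

    ```python
    # Blocking the classic "cd .." escape: normalise the requested path and
    # check it is still inside the permitted root before using it.
    import posixpath

    FTP_ROOT = "/srv/ftp"   # assumed anonymous-FTP top-level directory

    def resolve(requested: str) -> str:
        """Map a client path into FTP_ROOT, or raise if it escapes the root."""
        candidate = posixpath.normpath(
            posixpath.join(FTP_ROOT, requested.lstrip("/")))
        # After normalisation the path must still lie inside the root.
        if posixpath.commonpath([FTP_ROOT, candidate]) != FTP_ROOT:
            raise PermissionError("path escapes " + FTP_ROOT + ": " + requested)
        return candidate

    safe = resolve("pub/readme.txt")        # -> /srv/ftp/pub/readme.txt
    try:
        resolve("../../etc/passwd")         # the classic escape attempt
        escaped = True
    except PermissionError:
        escaped = False
    ```

    The 1980s servers Henry describes did the join but skipped the containment check, which is why "cd .." walked straight out of the anonymous area.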

    I think you'll find the reason that *cough* certain OSes... are not as good security-wise is simply design and history. Quite simply, UNIX was designed to be multi-user (almost) from the start -- and programming practices since the 1970s reflect this. Windows still supports methodologies from Windows 95 and older which assumed a single user account or complete system access. I'm quite sure there's some real messy code in there to support this. The big factor, though, I think: Microsoft didn't start to take security that seriously until the NIMDA worm or so -- about 2001. UNIX vendors didn't usually take security all that seriously through the 1980s either -- but they had their "NIMDA moment" with the Morris worm back in 1988. Quite simply, they had a 13-year head start.

    The problem is that programs in absolute isolation are just not that useful. Let's say you want to download a file, edit it and print it. With perfect isolation, first, the browser or FTP utility or whatever would not be able to get anything onto the screen, since after all that would break the isolation of both the utility and the display software. Let's say you could download the file. Then it'd be in the browser's or FTP utility's secured area; the word processor would be unable to access it. Let's say you get the word processor to open it, and you edit and save it. Now the print driver or utility (if isolated) would not be able to get any information to print. It's these needed points of overlap that can be hacked and exploited to pwn a person's (let's face it, probably Windows...) system.

  36. Brian Miller
    FAIL

    Bad article, miserable rant, no information

    "I cannot see how an OS could handle multiple processes without having a kernel mode. It follows that there must be at least some hardware support for security measures outlined above. Perhaps it’s all there?"

    Mr. Watkinson, your display of ignorance, on The Register, no less, is utterly shameful. Multiple processes can be run without a kernel mode, and it has been done quite often. As for "Perhaps it's all there," yes, it is all there!!!

    The Intel 80386 was released with four independent levels of protection, building on the features in the Intel 80286. The failure of a software vendor to implement those features in an operating system is not the failure of the hardware manufacturer.

    The reason that Windows is targeted for malware is its popularity. Really, with a minimum 80% market share, who wouldn't target Windows? As for Windows' lack of security, well, it was never conceived as a secure system, so what can be expected? One of the "features" of Windows is being able to start a thread in another process that isn't yours! All of the backwards compatibility of Windows means that there is a lot of significant baggage that must be brought forward, release after release.

    You want software to be made secure? It's very simple. Software vendors must be penalized for bad code. If there is a fast and immediate monetary penalty applied, then effort will be made to write good code. It really is just that simple.

    Really, good techniques have been known for decades. There is nothing new, there is just very little willpower to carry out the task.

  37. phuzz Silver badge

    All the security features in the world can't protect users from themselves. If you want to allow a program such as Dropbox, then you're also allowing enough access for a malicious program to steal your files and upload them somewhere else, no matter that neither program can directly access RAM or peripherals.

    Of course, the user still needs to install/run the malware, but you can always find someone willing to click "Allow" for the promise of porn/money/free things etc.
