Every major OS maker misread Intel's docs. Now their kernels can be hijacked or crashed

Linux, Windows, macOS, FreeBSD, and some implementations of Xen have a design flaw that could allow attackers to, at best, crash Intel and AMD-powered computers. At worst, miscreants can, potentially, "gain access to sensitive memory information or control low-level operating system functions,” which is a fancy way of saying …

  1. A Non e-mouse Silver badge

    Re-Education

    The Register expects plenty of OS developers are about to be sent to compulsory re-education sessions on the x86-64 architecture

    The x86 architecture is so big and complex I'd be surprised if anyone knew it completely. The instruction set reference guide from Intel is over 2,000 pages long alone. And that's just one of the four volumes!

    The more complex a device, the less likely you are to fully understand how it works and the easier it is to get it (horribly) wrong.

    1. TRT Silver badge

      Re: the easier it is to get it (horribly) wrong.

      No exceptions?

      1. Warm Braw Silver badge

        Re: the easier it is to get it (horribly) wrong.

        No exceptions?

        Merely traps for the unwary...

        1. John Brown (no body) Silver badge

          Re: the easier it is to get it (horribly) wrong.

          "Merely traps for the unwary..."

          Who was that masked interrupt?

    2. stephanh Silver badge

      Re: Re-Education

      Especially since this whole "pop ss" hack is a throwback to the 16-bit segmented DOS days.

      The expectation being that the next thing you do is adjust the sp register, thereby restoring the entire segmented ss:sp stack pointer to some previous location. If an interrupt handler ran in between, it would smash some arbitrary memory at new-ss:old-sp.

      So no sane application program has been using this for >20 years but of course the complexity-induced insecurity remains with us.
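
      The failure mode described above can be sketched as a tiny C simulation of real-mode address arithmetic; the segment and offset values below are invented purely for illustration:

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Real-mode linear address: segment * 16 + offset (the classic 8086 rule). */
      static uint32_t linear(uint16_t seg, uint16_t off) {
          return ((uint32_t)seg << 4) + off;
      }

      int main(void) {
          /* Old stack at 2000h:0100h, switching to a new stack at 3000h:0200h. */
          uint16_t old_ss = 0x2000, old_sp = 0x0100;
          uint16_t new_ss = 0x3000, new_sp = 0x0200;

          /* Step 1: POP SS loads the new segment, but SP is still the old one. */
          uint16_t ss = new_ss, sp = old_sp;

          /* If an interrupt fired here, the CPU would push at ss:sp, i.e. at
             new-ss:old-sp - an address nobody ever intended to be a stack. */
          uint32_t smashed = linear(ss, sp);
          assert(smashed == 0x30100);
          assert(smashed != linear(old_ss, old_sp));  /* not the old stack (0x20100) */
          assert(smashed != linear(new_ss, new_sp));  /* not the new stack (0x30200) */

          /* Step 2: MOV SP completes the switch; only now is ss:sp consistent. */
          sp = new_sp;
          assert(linear(ss, sp) == 0x30200);
          return 0;
      }
      ```

      Which is exactly why the CPU delays interrupts for one instruction after a write to SS.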

      1. Anonymous Coward
        Anonymous Coward

        "is a throwback to the 16-bit segmented DOS days."

        Different instructions can manipulate SS, because there are situations where you need to switch stacks - e.g. going from user mode to kernel mode and back (SYSENTER/SYSEXIT change SS, and INT may as well). A CALL through a call gate will push the caller's SS:ESP on the callee's stack (so parameters pushed on the caller's stack can be accessed).

        Intel CPUs were never designed around a single OS (or a few) - that's a mistake usually only AMD makes - so instructions that today's mainstream OSes may not use do exist, and for good reasons, and could be used by other OSes.

        x86 segments were far more secure than the page bits used now to protect memory - just slower - exactly because of all the security checks performed when an instruction went through a segment boundary. For this reason, one day they will be back: not as a physical<->virtual memory mapping solution, but as a way to protect memory properly.

        1. Primus Secundus Tertius Silver badge

          Re: "is a throwback to the 16-bit segmented DOS days."

          Segmented address spaces are an utter pain for software development, and programmers welcomed the unified 32-bit address space when it appeared. But the worst situation I met was a system designed by hardware people, with hardware memory test logic screwing up data on addresses at 2**13 boundaries. However, as the software was in assembler one could avoid putting code or data there.

          The real problem here seems to be that code is interruptible at a weak point.

          1. Anonymous Coward
            Anonymous Coward

            "Segmented address spaces are an utter pain for software development,"

            Sorry, you don't understand segments. Small 64K segments like those of the 16-bit days were a pain, true, because you had to reload code and data segment registers too often, and allocating and swapping memory at the segment level was painful too, because segments became simply too large. With 32-bit segments and paging, these limitations went away. You can use segments and paging at the same time.

            But the flat 32-bit address space, where code and data segments are exactly the same, was a very ill-fated decision, because it opened the doors to a lot of vulnerabilities and attacks. With proper use of segments, data wouldn't be executable. Buffer overflows would raise exceptions. Code wouldn't be readable or writable, and techniques like return-oriented programming would be very hard if not impossible to use.

            A sounder implementation would have used just a few segments to keep code and data separated. No need to use multiple segments - except for very secure implementations. But flat spaces were easier to implement, and were compatible with other architectures. Most of these designs were forced by the need to be compatible with older processors that weren't as sophisticated as Intel's, and because, yes, security has performance implications.

            We now pay the fee in terms of far less robust architectures - up to Meltdown, which stems from the stupid decision to map kernel memory into user space to save cycles when entering kernel mode.

            Anyway, this vulnerability has nothing to do with segmentation, and POP SS isn't a relic of segmentation either - even on other architectures you may need to change stacks...
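
            Segments aren't coming back any time soon, but the code/data separation described above is approximated on today's flat-memory OSes with page permissions. A minimal POSIX sketch (assuming Linux or another mmap/mprotect platform; the buffer and its contents are made up for illustration):

            ```c
            #define _DEFAULT_SOURCE
            #include <assert.h>
            #include <string.h>
            #include <sys/mman.h>
            #include <unistd.h>

            int main(void) {
                long pagesz = sysconf(_SC_PAGESIZE);
                assert(pagesz > 0);

                /* "Data segment": readable and writable, but never executable. */
                unsigned char *data = mmap(NULL, (size_t)pagesz,
                                           PROT_READ | PROT_WRITE,
                                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                assert(data != MAP_FAILED);
                memset(data, 0x90, 16);          /* writing is fine here... */

                /* "Constant segment": drop the write permission after setup.
                   Any later write would now fault instead of silently corrupting. */
                assert(mprotect(data, (size_t)pagesz, PROT_READ) == 0);

                /* Read-only access still works. */
                assert(data[0] == 0x90);

                munmap(data, (size_t)pagesz);
                return 0;
            }
            ```

            The granularity is the page, not the segment, which is precisely the coarseness the comment above complains about.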

      2. Jaybus

        Re: Re-Education

        "If an interrupt handler would run in between it would smash some arbitrary memory at new-ss:old-sp"

        Yes, which is the reason for the special interrupt-delaying handling of POP SS in the first place. Even the decades old 80386 manual stated "A POP SS instruction inhibits all interrupts, including the NMI interrupt, until after execution of the next instruction. This action allows sequential execution of POP SS and MOV ESP, EBP instructions without the danger of having an invalid stack during an interrupt [1]. However, use of the LSS instruction is the preferred method of loading the SS and ESP registers."

        Note that last sentence!

        Now note the footnote indicated at the end of the second sentence. The footnote states:

        [1] Note that in a sequence of instructions that individually delay interrupts past the following instruction, only the first instruction in the sequence is guaranteed to delay the interrupt, but subsequent interrupt-delaying instructions may not delay the interrupt. Thus, in the following instruction sequence:

        STI

        POP SS

        POP ESP

        interrupts may be recognized before the POP ESP executes, because STI also delays interrupts for one instruction.

        The manual seems pretty clear on the subject of interrupt-delaying instructions, even going so far as to point out the exception when a sequence of interrupt-delaying instructions exists and, more importantly, to strongly suggest the use of the LSS instruction to load SS and ESP in an atomic manner.

        So, is it lack of clarity in the manual, or is it failure to RTFM?

    3. Steve Davies 3 Silver badge

      Re: Re-Education

      2000 pages and 4 volumes?

      I lean back in my office chair and look lovingly at the 3/8ths of an inch thick 'tome' that details the PDP-11 Instruction Set.

      Sigh. I really am getting old and need to give up this sofware development malarkey.

      1. JohnArchieMcKown

        Re: Re-Education

        I remember the IBM S/370 "Principles of Operation" manual back in the 1970s. It was about the same size. The great^n grandson of that manual, for the IBM z14 machine, is 1902 pages long. I just looked.

      2. onefang Silver badge

        Re: Re-Education

        My arms used to be stronger, from all that constant lifting up of heavy programming manuals. Now I'm getting old, and the only exercise I get most of the day is moving my mouse around. I should go back to sysadmin work, where at least I get to juggle servers every now and then.

        1. TRT Silver badge

          Re: My arms used to be stronger,

          Hence the term "manual labour".

      3. bobajob12

        Re: Re-Education

        I lean back in my office chair and look lovingly at the six-foot-thick stack of 'slim manuals' that detail the Nortel DMS-10 Instruction Set. From the days when the manuals arrived on a shipping pallet all their own.

        1. Steve Davies 3 Silver badge

          Re: Docs arriving on a Pallet

          Shudder.... That reminds me so much of the VMS V5.0 release. AFAIK, almost every printer in Ireland was used to print the damn things.

          1. Anonymous Coward
            Anonymous Coward

            Re: Docs arriving on a Pallet

            almost every printer in Ireland was used to print the damn things.

            But it was worth it, for real documentation.

      4. CrazyOldCatMan Silver badge

        Re: Re-Education

        look lovingly at the 3/8ths of an inch thick 'tome' that details the PDP-11 Instruction Set

        We recently threw out our old POPS manuals (IBM S/370 assembler) left over from our TPF days. Their main use for the last 25 years has been to prop up various bits of tat in the garage.

        1. onefang Silver badge

          Re: Re-Education

          "Their main use for the last 25 years has been to prop up various bits of tat in the garage."

          That's what I use the Yellow Pages for, a monitor stand. The base of my main monitor is almost precisely the same dimensions as a copy of Telstra Yellow Pages, almost as if the book was designed for that purpose.

    4. HmmmYes Silver badge

      Re: Re-Education

      x86 is big. It's not complex - well, it is, but more than that, it's a mess.

      It's like an instruction set with old crap bolted onto it.

      Maybe Intel can license POWER and just concentrate on caches and inter-chip links?

    5. panoptiq

      Re: Re-Education

      There needs to be a purging of the x86 architecture; in fact, it should have started 20 years ago. Oh well, long live unfulfilled computing potential.

      1. Anonymous Coward
        Anonymous Coward

        Re: Re-Education

        More than an x86 purge, Intel did try to begin again completely with Itanium ~20 years ago. I think the (approximate) death of Itanium is a shame. Sure, at the time it was expensive and under-performing. The constraints on instruction combinations in the VLIW code meant it was full of NOP instructions, a waste of cache among other things. Compilers couldn't handle the default parallelism, so there were barriers (stops) everywhere.

        With relatively abundant cache and possibility of high clock speeds today, these things would matter less - especially since the Itanium is (allegedly) immune to Spectre.

        Sadly, the lasting effect of Itanium seems to have been to kill off other promising architectures (e.g. Alpha) as almost everyone except AMD seemed to hang around waiting for Itanium to become much good.

    6. Anonymous Coward
      Anonymous Coward

      Not the major OS maker then?!!

      "Every major OS maker"

      Notice the biggest, most widespread OS isn't on that list... Android has more active devices than Windows these days. Let's not forget that, so clearly it's not every major OS maker...

  2. malle-herbert Silver badge
    Facepalm

    So all of this is just a case of...

    RTFM ?

    1. spodula

      Re: So all of this is just a case of...

      Well, RTFM followed by a session of guessing what the hell was going through the Intel engineers' heads, cos it sounds like the manual was far from clear.

      1. Archtech Silver badge

        Re: So all of this is just a case of...

        "Every major OS maker misread Intel's docs".

        From which it follows that the docs were unclear. If OS developers - highly intelligent people with excellent knowledge and understanding of their domain - misunderstood, almost by definition the documentation was faulty. The principle has been clearly understood (and enunciated) for more than 1900 years:

        "One should not aim at being possible to understand, but impossible to misunderstand".

        - Marcus Fabius Quintilian

        And he was just talking about trivial matters like law and politics. Computer architecture is far more important.

        1. Anonymous Coward
          Anonymous Coward

          "From which it follows that the docs were unclear. "

          Or that developers copied each other - happened even before StackOverflow was available...

          1. FuzzyWuzzys Silver badge

            Re: "From which it follows that the docs were unclear. "

            "Os that developers copied each other - happened even before StackOverflow was available..."

            I was thinking that too. Is this simply a case where Fred was the first to read the manuals, he wrote about it and didn't get it quite right, then others have simply thought, "Sod reading the manuals, this looks OK.", tested it and it worked? Then that knowledge passes into lore, and before you know it, it's simply the "done thing" without question.

            1. Fred Flintstone Gold badge

              Re: "From which it follows that the docs were unclear. "

              is this simply a case where Fred was the first to read the manuals, he wrote about it and didn't get it quite right then others have simply thought, "Sod reading the manuals, this looks OK.", tested it and it works.

              Hang on, I'm not involved :)

              (a) The number of years since I was that close to a CPU in programming is measured in decades and (b) it then involved a mere 8 bit chip.

              I plead the Shaggy defense :)

          2. sw guy

            Re: "From which it follows that the docs were unclear. "

            Linux code copied from Windows ?

            Non-GPL copied from Linux code ?

            I wonder which one is the less probable.

            However, OS developers sharing a common way of thinking, and reading the same (obfuscated) manual, this is something already seen elsewhere.

            In the past, an experiment was done by providing the same high-level specification of a radar system to several teams, and they made comparable errors in the same areas - that is, where the specifications were not clear enough.

            1. LDS Silver badge

              Re: "From which it follows that the docs were unclear. "

              Kernel developers are not a very large community. People move from one company to another; they share knowledge and ideas on newsgroups and forums, at meetings, etc. etc.

              Some approaches, after having been shared, could have become "facts" before they were fully validated.

            2. stephanh Silver badge

              Re: "From which it follows that the docs were unclear. "

              "Linux code copied from Windows ?

              Non-GPL copied from Linux code ?"

              Both copied from BSD? Windows has used code from BSD for networking, so it's not so far-fetched to think they also looked there for "inspiration" on other topics.

        2. Munchausen's proxy
          Pint

          Re: So all of this is just a case of...

          "If OS developers - highly intelligent people with excellent knowledge and understanding of their domain"

          Try to work with memory overcommit in a shared HPC environment, then try to repeat that with a straight face.

          A kernel house of cards designed for no other real purpose than to make bad code run (for a while).

    2. oiseau Silver badge
      WTF?

      Re: So all of this is just a case of...

      Hello:

      RTFM ?

      Hmmm ...

      Me thinks not.

      You see, when someone writes a manual and everyone interprets the same text/set of instructions in that manual in the same manner (ie: wrong), the problem quite obviously does not lie with those who interpreted it.

      It lies solely with those who wrote it, and who obviously did so in a manner that could not be interpreted properly.

      ie: Intel

      Just my $0.02.

      Cheers,

      O.

      1. Iain 14
        WTF?

        Re: So all of this is just a case of...

        The problem with "RTFM" is that it makes inherent assumptions regarding the quality of the manual and the ease by which the reader can correctly interpret it. The "polite" meaning of RTFM that I was always taught was "Read The Fine Manual", and I like this because it puts some of the onus back on the manual writer - i.e. if the manual contains material that's wrong or open to misinterpretation, it's not "Fine", and therefore you can't really blame the reader for any resulting fallout.

        As someone who's currently having to rewrite someone-else's manual because an error in it caused an engineer to waste several days trying to work out why the hell his installation didn't work, this is a bit of a sore point at the moment - and I'm sure many here have had similar experiences...

      2. JohnArchieMcKown

        Re: So all of this is just a case of...

        IBM likes, or liked, to put in a sentence in some of their programming manuals which said something similar to:

        This manual describes IBM implementation of <some standard> as of <some date> as interpreted by IBM.

        or maybe it was "as understood" instead of "as interpreted".

    3. Vanir

      Re: So all of this is just a case of...

      Read the flawed manual?

    4. Anonymous Coward
      Anonymous Coward

      Re: So all of this is just a case of...

      RTFM.

      If the manual was anything like a Novell Netware manual, they probably just guessed.

  3. Teiwaz Silver badge

    So....

    What's the expected performance hit when the fixes get applied?

    There's going to be one, right?

    These things are getting like unexpected tax bills.

    1. Captain TickTock

      Re: So....

        Well, if it's all to do with the segmented memory from the 16-bit days, it should hardly ever come up, no?

      1. defiler Silver badge

        Re: So....

        it should hardly ever come up, No?

        Firstly, I am not a programmer. Secondly, it strikes me that you're correct in normal use.

        However, somebody intent on causing trouble can pop this into a malicious program to cause havoc. It's like saying the bullet will be safe so long as it's kept in the box.

        Can anyone offer a reason for using this segmented crap in any 32-bit system? If we're needing backwards compatibility for shitty old 16-bit applications, surely that can be isolated in an emulator or something these days?

        1. Anonymous Coward
          Anonymous Coward

          "Can anyone offer a reason for using this segmented crap "

          Would you like the kernel stack to be accessible by user code? Even if not using Intel "segments", you may want to change "registers" allowing access to different parts of memory when switching from one privilege level to another, to disallow less privileged code from accessing the data of more privileged code.

          There's a misconception about segments, because Intel used them for two separate tasks. One was to map physical memory to virtual memory, and this is handled better by paging.

          The other was to set privileges and rights on a block of virtual memory. You can set what privilege level a segment belongs to, and what can be done with it - read/write/execute. You can have segments the CPU can execute but whose contents code can't read - nice, isn't it?

          Paging doesn't have such granular controls - even the NX bit was added later, when it became clear that being able to execute any address was very dangerous. And paging doesn't perform the privilege access checks segments do - which unluckily also makes segments slower.

          Yet better-designed OSes could have used at least an execute-only code segment, a read-only segment for true constants, and a read/write but not executable segment for data - at least for non-script applications - instead of having a flat read/write/execute address space.

          Such an OS would be much more secure, but incompatible with most old code - so you'll never see it.
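
          For the curious, the read/write/execute rights described above live in the "access byte" of each descriptor (per the Intel SDM layout: P, DPL, S, Type). A rough C decoder - the struct and function names here are made up for illustration:

          ```c
          #include <assert.h>
          #include <stdbool.h>
          #include <stdint.h>

          /* Decoded rights of a code/data (S = 1) segment descriptor. */
          struct seg_rights {
              bool present;
              unsigned dpl;        /* privilege level 0..3 */
              bool code;           /* executable (code) vs data */
              bool readable;       /* code: may be read; data: always readable */
              bool writable;       /* data segments only */
          };

          /* Decode the access byte of an x86 segment descriptor:
             bit 7 = present, bits 6:5 = DPL, bit 4 = S, bits 3:0 = type. */
          static struct seg_rights decode_access(uint8_t a) {
              struct seg_rights r;
              r.present  = (a & 0x80) != 0;
              r.dpl      = (a >> 5) & 3;
              r.code     = (a & 0x08) != 0;   /* type bit 3: 1 = code segment */
              r.readable = r.code ? (a & 0x02) != 0 : true;
              r.writable = r.code ? false : (a & 0x02) != 0;
              return r;
          }

          int main(void) {
              /* 0x9A: the classic ring-0 code segment - executable, readable, never writable. */
              struct seg_rights code = decode_access(0x9A);
              assert(code.present && code.dpl == 0 && code.code);
              assert(code.readable && !code.writable);

              /* 0x98: execute-only - the CPU can run it, code can't even read it. */
              struct seg_rights xonly = decode_access(0x98);
              assert(xonly.code && !xonly.readable);

              /* 0xF2: a ring-3 data segment - read/write but not executable. */
              struct seg_rights data = decode_access(0xF2);
              assert(data.dpl == 3 && !data.code && data.writable);
              return 0;
          }
          ```

          Note there is no read/write/execute-only combination page tables could express before NX arrived.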

          1. defiler Silver badge

            Re: "Can anyone offer a reason for using this segmented crap "

            There's a misconception about segments, because Intel used them for two separate tasks. One was to map physical memory to virtual memory, and this is handled better by paging.

            A-ha. I'd assumed that the segmentation was for the nasty old 16-bit-style memory paging, which (in my opinion) should be dead and gone by now. If it's actually still used for memory protection then I can't really complain.

            Told you I wasn't a programmer!

          2. Daniel von Asmuth Bronze badge
            Windows

            Re: "Can anyone offer a reason for using this segmented crap "

            Read Tanenbaum's book on computer architecture, particularly the part about the design of the virtual memory system for Multics, and you'll find no reason to think there is anything wrong with their combination of segmentation and paging.

            Then read about the Intel x86 design and figure out that their trick of combining 16-bit address registers with 16-bit segment selectors to produce 20-bit addresses was easy for Intel, but hard for developers.... Intel & Microsoft: a marriage made in hell.
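
            The trick in question is just a shift and an add - with the side effects that many segment:offset pairs alias the same byte, and the top of memory wraps around. A quick C check (the helper name is invented):

            ```c
            #include <assert.h>
            #include <stdint.h>

            /* 8086 address formation: a 16-bit segment selector shifted left
               4 bits, plus a 16-bit offset, gives a 20-bit address. */
            static uint32_t addr_8086(uint16_t seg, uint16_t off) {
                /* & 0xFFFFF models the 20-bit wrap (A20 line masked). */
                return (((uint32_t)seg << 4) + off) & 0xFFFFF;
            }

            int main(void) {
                /* The same byte is reachable through many seg:off pairs... */
                assert(addr_8086(0xB800, 0x0000) == addr_8086(0xB000, 0x8000));

                /* ...and the top of memory wraps around to zero, 8086-style. */
                assert(addr_8086(0xFFFF, 0x0010) == 0x00000);
                return 0;
            }
            ```

            The aliasing is exactly what made segment arithmetic such a minefield for application programmers.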

            1. Anonymous Coward
              Anonymous Coward

              " together with 160-bit segment selectors to produce 20-bit adresses"

              That is a matter for system developers only. It's totally transparent to application developers. The OS sets up the segment descriptors; from an application perspective, loading a segment register with a real-mode memory address or with a selector is transparent - they're just numbers - and the CPU does whatever is required behind the scenes to get the physical address.

              Anyway, just look at how the paging mechanism works - especially when more levels of indirection are used - it's not much less complex, and it's a little simpler only because it lacks the security features of segments.

              Of course, adding the security features at the page level makes little sense, because the checks would have to be performed whenever a different page is accessed, and that would be a bigger burden than making them at the segment level.

              So it really makes sense to use pages to manage memory mappings, and segments for security.

            2. Archtech Silver badge

              Re: "Can anyone offer a reason for using this segmented crap "

              "Intel & Microsoft: a marriage made in hell".

              But bloody profitable.

          3. Jack of Shadows Silver badge

            Re: "Can anyone offer a reason for using this segmented crap "

            Ignoring Ring N-3 (i.e. IME), I hope you do realize that no one, to my knowledge, uses anything except Ring 0 and Ring 3. Would that we did have the other two in use, our security models just might improve a smidgen.

            1. Anonymous Coward
              Anonymous Coward

              Re: "Can anyone offer a reason for using this segmented crap "

              OSes use only two rings because most CPUs other than Intel's had only two privilege levels, supervisor and user. So for compatibility (even NT supported MIPS and Alpha) designers used only two. Moreover, the more ring transitions, the worse the performance.

              But looking at it from a security perspective, the Intel design - ring 0 for the core kernel, ring 1 for I/O routines, ring 2 for system libraries, ring 3 for applications - was very sound and clever, and would have led to a much more secure OS (albeit quite a bit slower). It was defense in depth.

              It's just that, for a long time, and probably still, most companies have been obsessed with performance only - and we see how many avoidable security bugs surface each month.

              But as long as many people remain brainwashed into believing that designs made forty years ago around much more primitive CPUs are the best ones and don't need to be revised and updated, we'll keep having to face big vulnerabilities.

              Even more so when CPUs start to be designed around outdated OSes, instead of vice versa. As the more advanced features disappear from CPUs, it will become impossible to design and build a more secure OS. Let's keep on living in the '70s... and keep '70s security.

              It's impossible to create a secure OS without hardware support; software-only security is far less robust. But people usually understand such issues only when they get hurt badly.

              1. kirk_augustin@yahoo.com

                Re: "Can anyone offer a reason for using this segmented crap "

                Yes, the problem is that Intel processors do not support OS security needs. If Intel had done paging right and not called it segmentation, then Intel could have done segmentation right and had a pair of guard registers for every user process. This is very old stuff and no one had to reinvent anything; Intel just did it all wrong.

      2. Anonymous Coward
        Anonymous Coward

        "t should hardly ever come up, No?"

        Non-malicious code will never contain such instructions. But malicious code can use them explicitly to exploit the vulnerability, of course. AFAIK POP SS is not a privileged instruction, so it can be used by user code without being trapped.
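
        That segment-register instructions run fine in ring 3 is easy to confirm. A GCC/Clang inline-asm sketch - guarded so it only does anything on x86 - that reads SS from user mode and checks the requested privilege level encoded in the selector's low two bits:

        ```c
        #include <assert.h>

        int main(void) {
            unsigned long ss = 3;   /* default so non-x86 builds still pass */
        #if defined(__x86_64__) || defined(__i386__)
            /* Reading SS (like executing POP SS itself) is not privileged:
               plain ring-3 code may do this without any trap. */
            __asm__("mov %%ss, %0" : "=r"(ss));
        #endif
            /* The low two bits of a selector are the RPL; 3 means user mode. */
            assert((ss & 3) == 3);
            return 0;
        }
        ```

        Loading SS with an arbitrary selector is a different matter - the CPU validates it - but the POP SS instruction itself is available to any user process, which is what makes the bug reachable from userland.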

    2. Archtech Silver badge

      Re: So....

      A tax bill should never be unexpected to an educated adult.

      Any more than death...

      1. Anonymous Coward
        Anonymous Coward

        Re: So....

        DON'T MIND ME. CARRY ON WITH WHATEVER YOU WERE DOING. I HAVE A BOOK.

        1. Norman Nescio Silver badge

          Re: So....

          DON'T MIND ME. CARRY ON WITH WHATEVER YOU WERE DOING. I HAVE A BOOK.

          A pony on Binky each way in the Gold Cup, then please.

      2. Primus Secundus Tertius Silver badge

        Re: So....

        After you are dead, you cannot say you did not expect that.

    3. Fruit and Nutcase Silver badge
      Joke

      Death and Taxes

      "...in this world nothing can be said to be certain, except Death and Taxes"

      "...in this world nothing can be said to be certain, except Death, Taxes and Computer Bugs"

      Benjamin Franklin

  4. Dan 55 Silver badge

    Which is worse?

    1) No documentation. You know you're on your own.

    2) Documentation which just lists functions or commands or instructions or whatever on separate pages, without giving you an overview of how things fit together, so you walk straight into a bear trap - just like every OS vendor on the planet did here.

    1. Charlie Clark Silver badge

      Re: Which is worse?

      No documentation is always worse. Insufficient documentation leads to different errors and shows a distinct lack of interest by the authors in the subject. Maybe someone needs to mention liability to them.

      1. sabroni Silver badge

        Re: No documentation is always worse

        I disagree. Bad, error filled documentation is worse than no documentation. With no documentation you see how the thing behaves and treat it accordingly. With bad documentation you assume you've done something wrong and spend ages trying to get it to work.

        Neither situation is ideal, obviously, but I prefer working it out over reading instructions that are wrong.

        1. Charlie Clark Silver badge

          Re: No documentation is always worse

          With no documentation you see how the thing behaves and treat it accordingly.

          With computers this is often much harder to determine than you think, ie. there are more unknown unknowns than with poor documentation, and liability is clearer. But it quickly becomes sophistry to discuss this without a specific example.

          1. Norman Nescio Silver badge

            Re: No documentation is always worse

            Hmm, an old programming team leader once told me documentation was like sex. When it's good, it's very, very good, and when it's bad, it's better than nothing.

  5. Pen-y-gors Silver badge

    I'm impressed

    by any mekon-brain who can understand all this sort of hyper-low-level stuff. Could we please go back to IBM/370 Assembler? That was vaguely understandable (and the quick reference guide fitted onto one folding card)

    1. defiler Silver badge

      Re: I'm impressed

      Could we please go back to IBM/370 ARM2 Assembler

      There - FTFY. 16 instructions, and a debate over whether to include Multiply...

      1. Anonymous South African Coward Silver badge
        Trollface

        Re: I'm impressed

        Could we please go back to IBM/370 ARM2 Assembler

        There - FTFY. 16 instructions, and a debate over whether to include Multiply...

        Z80 CPU's in parallel rather?

      2. Wilseus

        Re: I'm impressed

        "16 instructions, and a debate over whether to include Multiply..."

        I never thought it was as few as 16 instructions, but I think you might be right if you don't count all the condition codes, other flag bits and intrinsic shifts which on many other CPUs would be separate instructions.

        As for multiply, all commercial ARMs had it, no debate! It was divide that didn't come until later.

        1. defiler Silver badge

          @Wilseus Re: I'm impressed

          You're right. There were (from memory - it's been a _long_ time) 16 basic operations, and each one could be run conditionally based on the status register (allowing you to inline a few instructions that you'd normally have to JMP over), and there was a flag to have the instruction _set_ the status register too. It wasn't mandatory. So, whilst there were many permutations of these options, it all came down to 16 simple instructions (which was ideal for learning assembly).

          Yep, all commercial ARMs had MUL, but I'm pretty sure I recall there being a debate whether they would at the time. The thinking was that it might be too CISC-y, and you could multiply in software. Compared to other instructions on the chip it took a long time too.

          I miss my Archimedes.

    2. Archtech Silver badge

      Re: I'm impressed

      Say what you will about companies like IBM and DEC - they produced extremely clear, comprehensive, professional documentation.

      I used to know a DEC technical writer who knew so much about the VMS file system that the developers used to consult her when they were in doubt as to just how something worked.

      Sort of the exact opposite of this present situation.

      1. Doctor Syntax Silver badge

        Re: I'm impressed

        "I used to know a DEC technical writer who knew so much about the VMS file system that the developers used to consult her when they were in doubt as to just how something worked."

        But if the documentation was as good as you say why would they need to ask?

        1. Anonymous Coward
          Anonymous Coward

          Re: I'm impressed

          "But if the documentation was as good as you say why would they need to ask?"

          Because otherwise they'd have to read? :-) There are many places documentation fails: the people who don't want to write it, the people who don't want to maintain it, and the people who don't want to read it.

          If you want documentation to work, you have to show the benefits. I tend to find in corporate-level IT (rather than the IT industry) that nobody really gets the benefits; everyone works with isolated knowledge and little desire to share. Their perceived value to the company seems to be their limited skill-set, rather than their ability to adapt to, learn and apply (new) technology. Documentation exposes this limitation as people become less depended on, so it is shunned.

        2. Voyna i Mor Silver badge

          Re: I'm impressed

          "But if the documentation was as good as you say why would they need to ask?"

          Because the existence of good and accurate documentation does not imply the existence of a developer sufficiently intelligent and wide-ranging to understand it without further help.

          It's a kind of version of the Watchmaker Fallacy in reverse; the existence of a design manual does not in fact imply the existence of a designer.

          Disclaimer: I am terrible at understanding documentation without sample code.

          1. oldcoder

            Re: I'm impressed

            DEC documentation quite often included the example code - with before and after samples of what every instruction did.

        3. The Mole

          Re: I'm impressed

          Easy, most people are too lazy to actually read the documentation.

          It's a bit more justifiable when you know it is documented somewhere, but not which particular document set you have to look in.

          And yes documentation (and some test teams) are often the people who get the biggest picture of how a complex system/application works. Most other people are too low level (concentrating on one particular component), or too high level (understand the architecture but not implementation details).

          1. CrazyOldCatMan Silver badge

            Re: I'm impressed

            most people are too lazy to actually read the documentation

            AKA - "I'm calling support because I want you to do my thinking for me"..

        4. I ain't Spartacus Gold badge

          Re: I'm impressed

          But if the documentation was as good as you say why would they need to ask?

          Also, it depends on the question.

          With an easy question, great documentation is all you need. Especially if it's searchable. How does this one command work? Well easy, I type it in and the info comes up. Now what if you know you want to do something, but don't know the command name? Well as long as you know the right terminology, you should be able to find that with 3 or 4 searches. So maybe that takes ten times as long to find, but still quick, once you find the info you need.

          What if your question is about how five different commands interact with a particular system or sub-system (and each other)? Then you need to read the documentation on all 5 of those, plus other stuff. At this point you need a much deeper understanding of what's going on. And that's where human help is useful.

      2. A Non e-mouse Silver badge

        @Archtech Re: I'm impressed

        Good (technical) product documentation is rare, nowadays. It takes a certain type of person to write it, and they need time to write it.

        However, companies nowadays see this effort as overhead and ripe for cutting. (I've even used a product where the supplier said they refused to write any documentation!)

        1. stiine Bronze badge
          Thumb Up

          Re: @Archtech I'm impressed

          Absolutely. I just posted a similar comment on ARS, that good Tech Writers are expensive, and for good reason. Twenty years ago, a good tech writer could get over $120/hr.

    3. John 48

      Re: I'm impressed

      I remember the first time I encountered protected mode assembler on a '386... it only took a couple of days to get a grip on the changes to the instruction set from 8086-style real mode stuff.

      The problem was it then took *months* to fully get your head around the vast changes in architecture and how they all fitted and played together. The documentation of the day was a single 3/4" thick Intel programmers reference guide (small print, thin paper!) that was pretty dense and hard going.

      The segmentation alone is vastly different and more sophisticated - but you could see that lots of it was engineered to get you from a place you would rather not start at (i.e. DOS programs all hitting the hardware directly for maximum performance), and allow a transition to a system that could run several such programs concurrently and not have them fight to the death.

      1. Dan 55 Silver badge

        Re: I'm impressed

        Not to say that the 386 wasn't needed, but I'm pretty sure an 286 could have run a pre-emptive multitasking OS (without memory management) and had a hardware abstraction layer, it's just everybody ran DOS so hardware was used to solve the problem. Somebody's even managed it with a Z80.

        x86 is far too complicated and over-engineered for what it delivers and this is why it's creaking at the seams now.

        1. Anonymous South African Coward Silver badge

          Re: I'm impressed

          Wow... Symbos.de just blew my mind.

          Now that is proper programming within the hardware limits...

        2. Brewster's Angle Grinder Silver badge

          Re: I'm impressed

          "but I'm pretty sure an 286 could have run a pre-emptive multitasking "

          There's nothing magical about preemptive multitasking. All you need is a timer: dump the registers, switch stacks, restore the previous registers and resume. We used to do it on 8 bit machines. You could do it on DOS if you didn't reenter the OS.
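
          The mechanics described above - dump the registers, switch stacks, restore the previous registers, resume - can be sketched in user space. A minimal sketch in C, assuming POSIX ucontext (makecontext/swapcontext, as found in glibc) stands in for the register save/restore a timer interrupt would trigger; the task here yields cooperatively to keep the demo deterministic:

          ```c
          #include <assert.h>
          #include <stdio.h>
          #include <string.h>
          #include <ucontext.h>

          static ucontext_t sched_ctx, task_ctx;
          static char task_stack[64 * 1024];   /* each task needs its own stack */
          static char trace[64];

          static void task(void) {
              strcat(trace, "slice1 ");
              /* save our registers, restore the scheduler's - a context switch */
              swapcontext(&task_ctx, &sched_ctx);
              strcat(trace, "slice2 ");
              /* falling off the end resumes uc_link (the scheduler) */
          }

          int main(void) {
              getcontext(&task_ctx);
              task_ctx.uc_stack.ss_sp = task_stack;
              task_ctx.uc_stack.ss_size = sizeof task_stack;
              task_ctx.uc_link = &sched_ctx;       /* where to go when task() returns */
              makecontext(&task_ctx, task, 0);

              swapcontext(&sched_ctx, &task_ctx);  /* run the task's first slice */
              strcat(trace, "sched ");             /* task yielded; "schedule" again */
              swapcontext(&sched_ctx, &task_ctx);  /* resume the task where it left off */

              assert(strcmp(trace, "slice1 sched slice2 ") == 0);
              printf("%s\n", trace);
              return 0;
          }
          ```

          On a genuinely pre-emptive system the swapcontext step happens inside the timer interrupt handler rather than at a point the task chooses - which is exactly why only a timer is needed.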

        3. Anonymous Coward
          Anonymous Coward

          "286 could have run a pre-emptive multitasking OS"

          Protected mode was implemented to allow multitasking - with hardware support. It introduced virtual address spaces and many other features the 8086 lacked to support multiple concurrent processes in a secure environment. It's not over-engineered - it introduced advanced security features that mostly went unused, for speed and compatibility reasons. For example, a call gate means you can't jump to an arbitrary address.

          The 286 had memory management, it was just implemented at the segment level. With segments capped at 64K, that looked feasible. Just like pages, segments can map physical memory to virtual addresses (with coarser granularity). Of course, it's not good for larger segments.

          When a segment is accessed, the CPU checks if the segment is in memory or not (only the segment descriptor needs to stay in memory, the referenced memory doesn't). If it is not, an exception is raised. The exception handler can allocate the space, load the memory contents from external storage, swap other memory to make space, etc. and then execution can resume.
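
          For illustration, the descriptor being described can be modelled as a C bit-field struct - a sketch that assumes GCC/Clang bit-field packing on x86, with field names of my own choosing (the authoritative layout is Intel's SDM). The "present" bit is what lets the CPU raise the not-present fault so the OS can load the segment on demand:

          ```c
          #include <assert.h>
          #include <stdint.h>
          #include <stdio.h>

          /* An 8-byte GDT/LDT segment descriptor, 386 layout.
           * The 286 used the same 8 bytes but only a 24-bit base
           * and 16-bit limit, with the top two bytes reserved. */
          struct seg_descriptor {
              uint64_t limit_low  : 16;  /* segment limit, bits 0-15 */
              uint64_t base_low   : 24;  /* base address, bits 0-23 */
              uint64_t type       : 4;   /* code/data type and access bits */
              uint64_t s          : 1;   /* 0 = system, 1 = code/data */
              uint64_t dpl        : 2;   /* descriptor privilege level, 0-3 */
              uint64_t present    : 1;   /* 0 => access raises a not-present fault */
              uint64_t limit_high : 4;   /* 386+: limit bits 16-19 */
              uint64_t avl        : 1;   /* available for OS use */
              uint64_t reserved   : 1;
              uint64_t db         : 1;   /* 386+: 16- vs 32-bit default size */
              uint64_t g          : 1;   /* granularity: limit in bytes or 4K units */
              uint64_t base_high  : 8;   /* 386+: base bits 24-31 */
          };

          int main(void) {
              /* every descriptor slot in the GDT/LDT is exactly 8 bytes */
              assert(sizeof(struct seg_descriptor) == 8);
              printf("descriptor size: %zu bytes\n", sizeof(struct seg_descriptor));
              return 0;
          }
          ```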

          "Hardware Abstraction Layer" is not something a very hardware device like a CPU can implement - just in a multitasking OS you need to protect shared resources like I/O ports and physical memory addresses like the screen buffer from concurrent, non coordinated accesses. Protected mode allows to set which privilege levels can access IN/OUT instructions, and map specific physical addresses - usually the kernel, or anyway code running at an higher privileged level than user applications. You get hardware checks, so a rogue app can't easily create havoc.

          It's the OS that needs to implement a "HAL", so applications don't need to access the hardware directly.

          The trouble was, DOS applications were written to access memory and I/O ports directly, and would not have worked easily under a 286 protected mode OS, because it was tricky to trap those accesses and manage them.

          That's why Intel had to introduce Virtual 8086 mode - in this mode the CPU explicitly traps those attempts and lets the OS handle them transparently.

          In the end, in 286 times few users had the more than 1MB of RAM that would have made a real multitasking OS useful. The few who did were happy enough to use EMS or XMS to allow for bigger spreadsheets, and by the time the 386 came out, it had far superior features and it was time for GUI systems.

          1. Dan 55 Silver badge

            Re: "286 could have run a pre-emptive multitasking OS"

            few users had more than 1MB of RAM that would have made a real multitasking OS useful

            You can always tell who those who never used an Amiga are...

            1. anonymous boring coward Silver badge

              Re: "286 could have run a pre-emptive multitasking OS"

              Indeed. Useful applications would have been in the kB to tens-of-kB range in those days. Multitasking would have been very useful indeed! (A real, pre-emptive one.)

            2. Anonymous Coward
              Anonymous Coward

              "You can always tell who those who never used an Amiga are..."

              Did Lotus 1-2-3 run on the Amiga? It was one of the few successful applications that could often require more than 640K, and it spawned the need for memory-expansion add-on boards and, later, software ways to access more memory. But PC memory was expensive in those days, many business applications used most of the available memory, and swapping on those slow disks would have been painful...

              1. Dan 55 Silver badge

                Re: "You can always tell who those who never used an Amiga are..."

                If Maxiplan or Superbase wasn't good enough for you, you could run DOS.

                1. TchmilFan

                  Re: "You can always tell who those who never used an Amiga are..."

                  Superbase!

                  There’s a Proustian rush I wasn’t expecting today.

                  1. kirk_augustin@yahoo.com

                    Re: "You can always tell who those who never used an Amiga are..."

                    Amiga was still my favorite computer.

          2. Fruit and Nutcase Silver badge
            Thumb Up

            Re: "286 could have run a pre-emptive multitasking OS"

            @AC

            "Protected mode was implemented to allow multitasking - with hardware support."

            Exactly - the 286 was the target processor for OS/2 1.x

            1. kirk_augustin@yahoo.com

              Re: "286 could have run a pre-emptive multitasking OS"

              But no OS can prevent bypassing security on any Intel processor. The fact that you could run a pre-emptive multitasking OS on a 286 does not mean it would be secure. To be secure, you need a guard register at both ends of memory for each process, and to prevent any crossover access. Intel does not provide hardware support for that. That is because guard registers are for segmentation, and Intel uses the word segmentation for their bizarre form of overlapped paging.

        4. CrazyOldCatMan Silver badge

          Re: I'm impressed

          but I'm pretty sure an 286 could have run a pre-emptive multitasking OS

          It did (sort of) - it was called QEMM (later QEMM/386). My old PS/2 50z[1] with an expanded RAM card did it quite happily. Enabled me to run Ultima (6?) while the IBM 3270 emulator sat in the background (and it was pretty finicky about being able to respond to incoming events..)

          [1] The old IBM sort, not the new-fangled games machine. Had a 50khz 286 chip with *zero* wait states for the memory. What a beast it was. Could run OS/2 (the early versions - not Warp) and was used for travel agency machines.

    4. CrazyOldCatMan Silver badge

      Re: I'm impressed

      Could we please go back to IBM/370 Assembler? That was vaguely understandable

      And, even more importantly (under TPF anyway) didn't use stacks..

      Of course, what it *did* use (a dedicated 4k block that every programme segment in the chain had access to) was just as bad. You put some data into your reserved address (EBW000+150), only to find that some numpty down the chain was also using it (but hadn't told anyone) and so when control gets passed back to you your data is essentially randomised.

      That's why good mainframe shops have QA departments with real teeth - to stop idiocy like that.

  6. Steve Button

    Most importantly...

    what's it CALLED? If it don't got a name, we ain't tekkin it serius.

    1. Tomato42 Silver badge
      Pint

      Re: Most importantly...

      called? Ha! If a vulnerability doesn't come with an interpretative dance now, it's not worth your time!

      1. Bronek Kozicki Silver badge

        Re: Most importantly...

        ... and does it have a logo?

        1. defiler Silver badge

          Re: Most importantly...

          ...and a dramatic theme tune?

    2. Anonymous Coward
      Anonymous Coward

      Re: Most importantly...

      Failed user code known execution design up privilege

      1. Fatman Silver badge
        Thumb Up

        Re: Most importantly...

        <quote>Failed User Code Known Execution Design Up Privilege</quote>

        I like the acronym.

  7. JimmyPage Silver badge
    Alert

    Segmentation ...

    I didn't like it then, I don't now.

    It had nothing to do with performance or features, and everything to do with keeping a stranglehold on the market with "backwards compatibility".

    We're starting to see the silicon equivalent of antibiotic resistance, as all those cumulative trade-offs make it impossible to secure a processor.

    1. Brewster's Angle Grinder Silver badge

      Re: Segmentation ...

      "It had nothing to do with performance or features"

      Did anybody ever say that? It was a hack to allow a 16 bit architecture to use 20 bit addressing without implementing a full 32 bit architecture.
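
      That 16-bit hack is simple enough to sketch. A hypothetical helper computing the classic segment*16 + offset mapping shows both the aliasing (many seg:off pairs hit the same byte) and the 1MB wraparound that the A20 gate was later added to control:

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      /* 8086 real-mode address arithmetic: a 16-bit segment and a 16-bit
       * offset combine into a 20-bit physical address, letting a 16-bit
       * CPU address 1MB. The mask models the 20-bit address bus wrapping. */
      static uint32_t phys(uint16_t seg, uint16_t off) {
          return (((uint32_t)seg << 4) + off) & 0xFFFFF;
      }

      int main(void) {
          /* many seg:off pairs alias the same physical byte */
          assert(phys(0xB800, 0x0000) == 0xB8000);   /* CGA text buffer */
          assert(phys(0xB000, 0x8000) == 0xB8000);   /* same byte, different pair */

          /* addresses past 1MB wrap to zero on an 8086 - the behaviour
           * the A20 gate was added to emulate (or disable) on later CPUs */
          assert(phys(0xFFFF, 0x0010) == 0x00000);

          printf("all real-mode address checks passed\n");
          return 0;
      }
      ```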

      1. Anonymous Coward
        Anonymous Coward

        "without implementing a full 32 bit architecture"

        It wasn't easy nor cheap to add all the required silicon structures to implement a full 32 bit architecture.

        8 bit CPUs used even weirder ways to access more than 256 bytes...

        1. Brewster's Angle Grinder Silver badge

          Re: "without implementing a full 32 bit architecture"

          "It wasn't easy nor cheap to add all the required silicon structures to implement a full 32 bit architecture."

          I'm sympathetic to this. I spent enough time programming 8086 assembly that I have a soft spot for all its quirks. But the MC68000 arrived a year after the 8086 and showed what could be done.

          "8 bit CPUs used even weirder ways to access more than 256 bytes..."

          The ones I used in anger all had 16 bit address registers and 16 bit address buses. Though most had legacies of darker days.

          1. This post has been deleted by its author

      2. Herring`

        Re: Segmentation ...

        I think you're confusing the old 16 bit mechanism with the 386 protected mode segmentation. The latter was (is?) complex and allowed some pretty fine-grained control. No OS designer fully used it though - lots of overhead and hassle.

  8. Anonymous Coward
    Anonymous Coward

    Moral of the story

    Treat your technical writers well, or there will be consequences.

    1. Anonymous Coward
      Anonymous Coward

      Re: Moral of the story

      Treat your technical writers well, or there will be consequences.

      Too late, they are a dying breed. These days companies assume that it's sufficient to have the developers write a blog entry. Just like the way they replaced admins with web tools, the result is a crappy job done more slowly by people who are paid more than the original workers. Productivity drops, but so does headcount & that's all the beancounters care about.

  9. Anonymous Coward
    Anonymous Coward

    PC 2.0

    Quite a simplistic view on my part, but for a while I've thought we've gone far enough with x86 - isn't it time for a new platform with no backwards compatibility? What we do with computers today in some respects hasn't changed since their inception, but our dependence on them has changed massively. Backwards compatibility isn't always a good thing, and while premature planned obsolescence doesn't go down well, we must acknowledge some things are just crap and need taking out of support.

    Personally I think it's time we ditched the dependency on legacy and started again. Yes, I know it's expensive, and yes I know it would take time and the initial release would be probably crap. You have to start somewhere though.

    I know this post will divide opinion (hence the AC), but answers on a postcard. What do you think? Are there merits to starting again, or am I talking bollocks? **like a rag to a red bull asking you lot if I'm talking bollocks ;) **

    1. BinkyTheMagicPaperclip Silver badge

      Re: PC 2.0

      You're basically talking bollocks, it was tried before by Intel and called 'Itanium'. IBM tried it and called it 'OS/2 on the PowerPC'. Apple vaguely managed to make a go of it with PowerPC Macs but that couldn't withstand Intel either.

      The PC is actually slowly (very slowly) ditching legacy code. Classic BIOSes are disappearing and being replaced by UEFI (yes, people will now point out that UEFI has some issues, but BIOS really is a mess). Lots of legacy ports have been removed, and the classic physical A20 gate is disappearing from modern processors.

      Pre ACPI SMP operating systems won't have run on hardware for the last few years, as practically all UEFI/BIOS now lack the MPS table needed.

      There are always more dependencies than you expect. I thoroughly recommend reading both os2museum.com and TheOldNewThing to understand just how extensive backwards compatibility is, why it's required, and the effort Microsoft makes to support legacy code.

      A fresh break is usually not a good idea, and people do not like expense for no clear reason, this has been proven often enough it's not even a debatable point.

      1. Dan 55 Silver badge

        Re: PC 2.0

        A fresh break is usually not a good idea, and people do not like expense for no clear reason, this has been proven often enough it's not even a debatable point.

        It can be done. Apple have changed architectures twice, in the same amount of time all MS has managed to do is flail about with a schizophrenic UI.

        1. Voyna i Mor Silver badge

          Re: PC 2.0

          "It can be done. Apple have changed architectures twice, in the same amount of time all MS has managed to do is flail about with a schizophrenic UI."

          This is related to the old joke about how God created the world in only 6 days but it took IBM years to do an OS upgrade. Because God didn't have to worry about the installed base.

          Apple were able to change architectures three times (four if you count the one with the 12 bit CPU) because nothing really mission critical ran on their hardware. Once MS were in the server room for real, things got a lot more difficult.

          1. Amused Bystander

            Re: PC 2.0

            Microsoft tried it with WinRT - remember the original Surface that would only run RT code? I heard both the people who bought one were happy with the security.

            We hear much the same speech every day from younger coders - "This is all rubbish, lets start again and do it properly". That's how start-ups work:

            1) come up with elegant, simple, "obviously correct" design

            2) get successful

            3) fix bugs and introduce new features

            4) fix bugs and introduce new features

            5) fix bugs and introduce new features

            6) rinse and repeat

            7) become the legacy platform

            8) get replaced by a start-up

            The trick is to become so successful that other start-ups struggle to break in to the market. Hey I got an idea - write the code in your spare time (i.e. while not in your paid employment as a coder) and give it away, no one can compete with that.

            Fire extinguisher primed and ready :-)

    2. Primus Secundus Tertius Silver badge

      Re: PC 2.0

      I think it would make sense to design a computer architecture for use by modern high-level languages. Arithmetic in binary-coded-decimal is wonderful for financial applications. At a simpler level, the DEC PDP-11 was designed for a then-modern stack architecture.
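
      The point about decimal arithmetic is easy to demonstrate without any special hardware. A sketch in C, using integer cents as a stand-in for hardware BCD, against binary floating point:

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* binary float: 0.10 has no exact binary representation, so every
       * addition rounds and the error accumulates */
      static double sum_float(int n) {
          double total = 0.0;
          for (int i = 0; i < n; i++)
              total += 0.10;
          return total;
      }

      /* decimal bookkeeping (integer cents standing in for BCD):
       * every intermediate value is exact */
      static long sum_cents(int n) {
          long total = 0;
          for (int i = 0; i < n; i++)
              total += 10;
          return total;
      }

      int main(void) {
          assert(sum_cents(1000) == 10000);   /* exactly 100.00 */
          assert(sum_float(1000) != 100.0);   /* the float total has drifted */
          printf("float total after 1000 x 0.10: %.17g\n", sum_float(1000));
          return 0;
      }
      ```

      This is why financial code either uses decimal/BCD arithmetic (as the VAX and mainframes supported in hardware) or scales everything to integers.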

      "All one has to do then" is to implement Microsoft Office and javascript for the new system.

      1. Anonymous Coward
        Anonymous Coward

        Re: PC 2.0

        Arithmetic in binary-coded-decimal is wonderful for financial applications.

        IIRC the VAX architecture had instructions for BCD arithmetic. It was replaced by alleged RISC architectures that were faster and simpler, most of which have now added so many new specialized instructions that they are more complex than the CISCs they replaced.

        1. Munchausen's proxy
          Pint

          Re: PC 2.0

          "IIRC the VAX architecture had instructions for BCD arithmetic."

          The VAX had instructions for EVERYTHING. Including a single machine instruction for solving polynomials.

          1. Voyna i Mor Silver badge

            Re: PC 2.0 - The VAX had instructions for EVERYTHING

            The only thing lacked by the VAX was a Temazepam dispenser to deal with the stresses caused by all the tape swapping at month end.

            In those days we had adequate CPUs but inadequate RAM and storage devices. Now everything is adequate and a flashlight app can root a phone containing a supercomputer larger and faster than a lot of earlier Crays.

            Progress!

          2. Steve Davies 3 Silver badge

            Re: Solving Polynomials on a VAX

            Ah yes, the POLY instruction.

            It could be interrupted (by Hardware level interrupts) at (from memory) six different points in its execution.

            Needless to say, it was used only by the more masochistic programmers.

            My favourite PDP-11 Instruction was RLB (Rotate Left Byte).

            1. BinkyTheMagicPaperclip Silver badge

              Re: Solving Polynomials on a VAX

              Everyone also needs to look at this from the viewpoint that software drives hardware sales, and the two do not operate in isolation. Since the late 80s at the very least, and almost certainly considerably before then, the CPU manufacturers have asked the major OS producers what they want (starting with Microsoft asking for a faster way of switching from V86 mode to protected mode on the 386).

              If the CPUs aren't in a state you think they should be, the largest OS creators don't want it to be that way, probably due to compatibility concerns, or unnecessary effort.

              When intel went on their itanium crusade, AMD picked up the parts of the market that intel wasn't addressing - namely large memory support for the general x86 market[1]. The changes in architecture to allow many more registers in x64 mode, and the limited number of protection rings were almost certainly driven by asking OS creators what they actually used (OS/2 is one of the very few x86 operating systems to use more than two protection rings, and then only rarely), coupled with limitations on what AMD could actually get to market fast enough to create a commercial advantage.

              The PowerPC chips used by Apple, and by IBM in their i/p series were different (cheaper, less features for Apple). AMD have provided custom CPUs to Apple.

              Then you have AMD's desperate APU architecture, which is almost certainly driven entirely by them and not by the OS manufacturers in order to sell more chips. It's never been included in AMD's high end (so they're not serious about it), and software support has been lacklustre.

              I'd also note that for everyone that says (quite accurately) that a lot of modern tasks can be run in a browser, that even the non Windows world is extremely intel centric. It's true that whilst Linux runs on several (I can't be bothered to count) platforms, NetBSD on 57 platforms, OpenBSD on 13, and FreeBSD on 8ish, all platforms are not equal. A lot of those systems don't have a usable X server, and of the ones that do, many don't have a functional mainstream web browser as it involves a set of dependencies way longer than your arm.

              The one architecture that might be considered a vaguely viable modern intel alternative, ARM, is extremely fragmented, nowhere near as fast as intel, and plagued by binary blobs. POWER systems are beyond the reach of the average user, even in their cheaper (~£3Kish) configurations.

              [1] Yes, I remember PAE. It's horrid, and driver support was buggy.

              1. Anonymous Coward
                Anonymous Coward

                "the CPU manufacturers have asked the major OS producers what they want"

                The problem is they ditched security in exchange for performance in the very wrong times.

                While multiple cores, GPUs, and other hardware improvements helped a lot to increase performance even without reducing cycles or increasing clocks, software security became much more important as soon as systems became highly interconnected and attacks more and more sophisticated.

                Without adding security features to CPUs, and using them to make software layers more separated and resilient, it will be impossible to have far more secure OSes. One bug, and you're executing in the kernel, at ring 0... BOOOM!

                1. BinkyTheMagicPaperclip Silver badge

                  Re: "the CPU manufacturers have asked the major OS producers what they want"

                  They really didn't ditch security for performance at the wrong time. Performance has been acceptable for most end user purposes roughly since the Core2Quad came out. Spectre has affected everything since the pentium, that's fourteen years of 'everyone needs more speed'. You can argue over the necessity of embedded management engines as well, but modern chips have additional features that (in general) improve security.

                  Security does not sell product for anything other than a very specialist market. Witness the horrendous whinging that occurred upon the release of Vista and the UAC - it was very slightly over the top, but not particularly so, and Windows 7 and subsequent releases are less secure by default.

                  I don't see people putting their money where their mouth is and buying more secure systems, so it's the same as usual : commodity platforms with good enough security and occasional patches.

                  1. Anonymous Coward
                    Anonymous Coward

                    "They really didn't ditch security for performance at the wrong time. "

                    Wrong. When AMD designed x86-64, it ditched security features for performance - because it's a company that has always been obsessed with competing against Intel on performance alone. In turn, Intel delayed security checks to compete too.

                    AMD also designed the instruction set around the OSes *actually* in use - all of them now pretty outdated from a security point of view - thus hindering any new, more sophisticated design.

                    Sure, that sold well - until now, when OSes look like Gruyère.

                    Sure, Spectre - but Meltdown? What was the fix? Stop mapping kernel memory into the user address space, and reload the selectors when needed.

                    Efforts should have been made into reducing the overhead when using the security features, not removing them.

                    There's no way to protect software from itself but with hardware support. And defense in depth is better than a simple 1 bit fence between privilege levels.

                    1. BinkyTheMagicPaperclip Silver badge

                      Re: "They really didn't ditch security for performance at the wrong time. "

                      Thanks for proving my point. AMD needed to get something out into the market, managed it, and gained a competitive advantage for a while. intel couldn't react fast enough and still thinks it isn't worth a notable redesign to provide extra security. I don't see any clamouring from most users, either.

                      A 'new, more sophisticated design' of OS is utterly pointless unless it addresses any of the major OS players. There hasn't been a new OS that stands a chance of success for well over a decade, in fact it's closer to twenty years, as examples such as WebOS, Sailfish, and Android are all based on a Linux kernel.

                      Until now, it hasn't been necessary to increase the performance of instructions used to mitigate the recent exploits. This will of course change in new chips in addition to specifically adding defences against side channel attacks, and people will get on with their lives.

                      Bear in mind that even the initial Spectre/Meltdown patches on intel kit offer performance many times higher than one of the unaffected ARM based systems.

                      Fundamentally I agree with you that improved hardware support would be a decent idea, but that doesn't sell product to most people, neither does it guarantee support from coders.

          3. CrazyOldCatMan Silver badge

            Re: PC 2.0

            The VAX had instructions for EVERYTHING

            Ha! I bet it didn't have a TCF instruction! (Terminate and Catch Fire)..

    3. Daniel von Asmuth Bronze badge

      Isn't it time for a new platform with no backwards compatibility?

      You mean the iPhone or iPad?

    4. ridley

      Re: PC 2.0

      The replacement is already here.

      Applications in the browser will take over for most needs. No need to be hardware-dependent then - any device running the browser will do.

  10. Dave Bell

    Be careful about version numbers.

    Readers should know this, but the Linux Kernel version numbers don't look right.

    I checked the Ubuntu link, and the version numbers they use are different. I'm currently running Ubuntu kernel version 4.13.0-39-generic and the patch is in version 4.13.0-41-generic, which has just come up as an update. I don't know why they don't use a format such as 4.13.41 but they have lists, they have versions for different processors, and they all have that extra zero in the version string. So do other Linux suppliers.

    The difference between you and the rest of the world looks so consistent that I am wondering just how reliable your reporting is.

    1. Steve Graham

      Re: Be careful about version numbers.

      The kernel source uses the x.y.z format. So referring to that format is unambiguous, whereas distro-makers might be doing their own thing.

      I compile my own kernels anyway. Distro kernels need to cover diverse hardware, while mine are specific to the machine they run on.

      1. gerdesj Silver badge

        Re: Be careful about version numbers.

        " whereas distro-makers might be doing their own thing." Oh they do ...

        Ubuntu take a stock kernel version eg 4.13.0 and then stick with it but backport fixes etc. Hence you get versions like 4.13.0-41-generic which is the 41st version of the Ubuntu version of 4.13.0 - in a generic way 8) This on the other hand: 4.16.5-1-ARCH is the first Arch iteration of the stock 4.16.5 kernel.

        Both kernels will have some stock mods applied before distribution so my 4.16.5-1-ARCH will be different to what you get direct from Linus and Co.

    2. Bronek Kozicki Silver badge

      Re: Be careful about version numbers.

      The article is referring to version numbers of the upstream kernel, not distributions. For the very simple reason that there is one upstream kernel and many distributions (each with its own set of patches).

  11. Frumious Bandersnatch Silver badge

    F00F!

    Just like that!

  12. anonymous boring coward Silver badge

    "Which – gulp! - isn’t a very far-fetched scenario, unless you run a tight ship of no untrusted code."

    I would assume that any non-trusted code running on a machine will one way or another be able to gain access to everything.

    I don't think we are at a stage where you could install a program from an untrusted source and think that because you didn't give it admin privileges it would somehow be a safe thing to do.

    It's not even true for supposedly much tighter platforms, such as Android and iOS.

  13. amanfromMars 1 Silver badge

    Patches? Against Immaculate Nectar? That would be equivalent to an AI Placebo.

    What Patches for and/or against what are Invisible Almighty Resources? Embrace and Engage with Them for True Sources Creating the Future LOVEnvironment ..... An Advanced IntelAIgent Space where Stellar Races are Run and Succeed to Generate Heavenly Reward as the Always WinWin Surprise in Live Operational Virtual Environment Enterprises. ...... Offering New Landers the Pleasures that Plunder Treasures but so Oft is Abused and Badly Used too. That's a Crying Shame of a Crime for which One can be Both Persecuted and/or Prosecuted to a Perfect Conclusion/Immaculate ReSolution.

    That is certainly not a Simple Path to Take Though for it is Full of Troubles and the Troubled and Angry Cloud Crowds/Angry Crowded Clouds.

    Leave them alone with the following instructions that lead wherever you really care for to share and dare go to .......... Full Deep Immersion and Sweet Surrender of Oneself to Simple Bliss is Testing For Special Services Server Administration by Decree with Right AIRoyal Command.

    Is Fabulously Crazy a Gift or a Burden, and do the Mad and the Bad and Genius see Eye to Eye and would aspire and inspire and even conspire to desire the future of their choosing and have IT Delivered for Root ACTivation Boot/Private Friends Missing ACTioN. Call. A Wonderfully Safe Haven Space that is Attractive and Attracted to Private AIMaster Pilots with Infinitely SMARTR Pirates Taking their Turns at Running the Asylum and Feeding the Crowd words they call news which tells tales of far-off views which changed instantly to something completely different, for the moment is forever gone to remain a mystery for the Past to Ponder, should it ever need to.

  14. croky


    Wow! I feel much safer with my K6-2 now...

  15. Cynic_999 Silver badge

    As a hardware and firmware guy ...

    I cannot recall the last time I read a manual or datasheet that was *not* unclear in many significant ways. Or gave information that was just plain incorrect.

    My best example was of a manual for a hardware module that stated: "Note that connector P5 is upside down".

    Well, P5 was mounted on the solder side of the board, but I didn't see why that would need a special mention. But it was not working as expected. I eventually twigged that what the manual actually meant was, "All logic signals on P5 are inverted". Obviously the result of a translation by a non-technical person.
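For anyone who hasn't hit this before: "inverted logic signals" just means every line is active-low, so the raw reading has to be complemented before use. A minimal sketch in Python (the 8-bit width is an assumption for illustration):

```python
def decode_active_low(raw, width=8):
    """Complement an active-low reading: a wire held low (0) means
    the signal is asserted (1), and vice versa."""
    mask = (1 << width) - 1
    return (~raw) & mask

# A raw read of 0b11110101 on an inverted connector actually means
# bits 1 and 3 are asserted.
print(bin(decode_active_low(0b11110101)))  # 0b1010
```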

  16. Nebra

    Bah, not an IBM fan then

    Disappointed with the headline - doesn't AIX/PowerPC count for some of the market?

  17. sisk Silver badge

    Imagine you're teaching a math class. You have one kid who's failing but the rest are doing fine. Odds are you have a bad student. If, on the other hand, all your students are failing odds are you're a bad teacher. You are failing to convey the concept to your students and your poor explanation is not their fault.

    If everyone misread the documentation then the problem is with the documentation.

  18. JBowler

    Ah, the challenges of having an ICIS (Insanely Complicated Instruction Set)

    Of course, ARM Ltd are going that way too but fortunately for ARM the historical architecture was simple; they're just making it more complex, whereas Intel of course are trying, but failing, to go in the opposite direction.

    Maybe there is a lesson, not a political one like "RISC", but a real one, like "start from scratch every 30 years". Intel started disclosing iAPX publicly at the start of the '80s and the first ARM chips were available at the end of the '80s.

    Personally I like CircuitPython and I can't see a reason for having a processor that does anything more than implement whatever CPython requires, but that's just me.

    John Bowler

  19. Anonymous Coward
    Mushroom

    Hahahahahahaha

    Thanks, The Reg, I needed a good laugh!

    Now we can wipe out computers and software in one fell swoop and start anew - get rid of the bugs and the bullshit once and for all. Just keep Mr Bill and others occupied until we get it arranged.

  20. dlc.usa
    Boffin

    Linux Patch Was Developed in 2015

    ...according to Alan Cox who posted this link in support of the statement:

    https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-8897

    so the question becomes: why wasn't it picked up until a few weeks ago?

    Well, the third internal link explains that:

    On Linux, the issue is fixed by commit d8ba61ba58c8 ("x86/entry/64: Don't use IST entry for #BP stack"), which has been available in Linus' tree and -stable kernels for some time. (Yes, the patch really was written in 2015. I fixed the issue as part of related work by accident, but I wasn't aware that the issue was at all urgent at the time, so the patch was never pushed out.) Most other vendors should have their own advisories and fixes available now.

  21. kirk_augustin@yahoo.com

    Sorry, but the article is blaming the OS programmer, and that is wrong. It is Intel that made the mistake. In fact, Intel did everything wrong with the x86, and it is really a single user processor, incapable of being secure. What Intel calls segmentation is actually a really bad form of paging, and each user process is supposed to get one code segment and one data segment, which are of variable length. That gives you guard registers, and is what segmentation is supposed to be for. The code segment is the process start in memory, and the data segment is the process end in memory, with them both growing towards each other. That is the way computers have implemented security since the 1950s. Intel just got it all wrong.

    1. Anonymous Coward
      Anonymous Coward

      I suggest you read an Intel manual, because you've got it all wrong. Since the 386 you can largely ignore segmentation and just use paging.

      But, for example, Intel segments have a defined size precisely so that checks can be made that they don't crash into each other: nothing can be accessed or, worse, written outside a segment, and code can't be overwritten by data. Usually only stacks grow downwards - growing towards another area is very dangerous if hard limits are not imposed and truly enforced.

      And it's exactly because OSes keep on using old and outdated implementations that they are less secure than they could be. Hardware and software are not like wine; they don't age well. They just become outdated.
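The bounds checking the reply above describes can be modelled in a few lines. This is a hedged toy model of a protected-mode segment's base/limit check, not real descriptor semantics (real descriptors also carry type, privilege and granularity bits):

```python
class Segment:
    """Toy model of a protected-mode segment: any access beyond the
    limit faults instead of reaching into a neighbouring segment."""
    def __init__(self, base, limit):
        self.base, self.limit = base, limit

    def translate(self, offset):
        # The hardware's limit check: out-of-range offsets raise a
        # general-protection fault rather than touching other memory.
        if offset > self.limit:
            raise MemoryError("#GP: offset %#x beyond limit %#x"
                              % (offset, self.limit))
        return self.base + offset

code = Segment(base=0x1000, limit=0x0FFF)
print(hex(code.translate(0x10)))  # 0x1010 - within bounds
# code.translate(0x2000) would raise, rather than silently reading
# whatever happens to sit after the segment.
```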

  22. Morat

    I understood every word of that.

  23. HieronymusBloggs Silver badge

    OpenBSD is not affected

    Theo: "We didn't chase the fad of using every Intel cpu feature."


Biting the hand that feeds IT © 1998–2019