Windows slithers on to Arm, legless?

Perhaps more by accident than design, Microsoft managed to lower expectations for Windows on Arm last week – but not set them so low as to kill off interest in the platform. In fact, the "redacted" advisory about limitations of Windows confirms what we already know – hairy hacks won't work, nor will various plugins that hook into …

  1. alain williams Silver badge

    Wedded to Intel

    That is part of Microsoft's problem. One CPU/instruction-set has really made things simpler for them and their users. However, it is really difficult for them to change; 16 -> 32 -> 64 bit has caused enough problems.

    This is something that Unix has addressed from the start. I remember 30+ years ago porting my programs to at least 3 different instruction-sets before I considered it ready for others to test. So different platforms have always been part of *nix programmer & user expectations, part of the culture.

    Yes: you need to think a bit more, but once you have the idea it's not that hard.

    1. ZanzibarRastapopulous

      Re: Wedded to Intel

      Windows, at least from NT onward, has always supported multiple processor architectures. NT originally targeted the Intel i860.

      1. kryptylomese

        Re: Wedded to Intel

        Windows does, but not the software that runs on it, and that is the problem!

    2. BinkyTheMagicPaperclip Silver badge

      Re: Wedded to Intel

      Have you forgotten that Windows NT has historically supported five different platforms (x86, MIPS, Alpha, PowerPC, and Itanium) and had incomplete ports for two (i860, Sparc)?

      The issue is not the cross platform nature - it generally worked well - it's the lack of software. Alpha was the most successful, partly because DEC and Microsoft made the effort to keep the platform up to date, and partly because it included x86 emulation.

      This was then repeated with Windows CE: multiplatform, but limited software.

      Porting is likely to be considerably easier than between Unix variants, as the Windows API is standardised, but probably more expensive. This was definitely prior to the years of free compilers from Microsoft, however, and I'm not sure how well gcc/mingw coped with non-x86 platforms. Third party compilers would undoubtedly be pricey.

      1. John Smith 19 Gold badge
        Unhappy

        "Windows NT has historically supported five different platforms "

        Note those words

        "Windows NT," and "historically"

        How many alleged "from the ground up" re-writes ago was that?

        1. ZanzibarRastapopulous

          Re: "Windows NT has historically supported five different platforms "

          > "How many alleged "from the ground up" re-writes ago was that?"

          To be fair, almost all those architectures failed in one way or another; MS would look a bit daft maintaining an Alpha port. I think mips is about the only one still operating and that's hardly desktop level.

          1. jabuzz

            Re: "Windows NT has historically supported five different platforms "

            > I think mips is about the only one still operating and that's hardly desktop level.

            Don't know - just last year I bought a couple of Ubiquiti EdgeRouter Infinitys. They have in them a 16-core 1.8GHz 64bit MIPS chip with 16GB of RAM. Allegedly the chip is available in a 2.2GHz variant and has a couple of SATA ports too, and a PCIe x4. So a version of that chip that dropped the eight 10GbE ports and added a GPU would be a seriously decent desktop machine. From memory it is on a 22nm process, so goodness only knows what it would be like on a 14nm process.

            1. ZanzibarRastapopulous

              Re: "Windows NT has historically supported five different platforms "

              >> I think mips is about the only one still operating and that's hardly desktop level.

              >Don't know - just last year I bought a couple of Ubiquiti EdgeRouter Infinitys.

              I honestly didn't realise MIPS had reached such lofty heights, but that is still a router and an embedded application.

          2. Joe Montana

            Re: "Windows NT has historically supported five different platforms "

            MIPS is still competing in the embedded space, although they are way behind ARM, and really missed their chance to get ahead of ARM in the transition to 64bit.

            MIPS had a 64bit variant *long* before ARM; it's been around since the early 90s and has mature compiler support, as well as hardware available easily and cheaply. ARM64 was only announced in 2011, and took a while to get OS and compiler support.

      2. Charlie Clark Silver badge

        Re: Wedded to Intel

        > The issue is not the cross platform nature - it generally worked well

        All the way until NT 4.0, when MS moved lots of stuff (notably GDI) into the kernel for better performance on x86. This effectively killed off support for other architectures because the microkernel design was lost.

        1. Anonymous Coward
          Anonymous Coward

          "when MS moved lots of stuff into the kernel"

          That was made to avoid costly ring transitions. As long as it is code that doesn't use any CPU-specific feature or instruction, it can be recompiled for any CPU you like just by using a different compiler; it really doesn't matter where it runs.

          For that matter, even Linux moved a lot of its graphics code into the kernel, because you can access graphics hardware only from there - and if you keep going back and forth from user space to kernel, performance takes a big hit. Why else did OSes map kernel memory into user space, protected only by a paging bit?
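
          The cost being discussed is measurable. A minimal C sketch (Linux-specific; it uses syscall(SYS_getpid) to force a kernel entry on every iteration, and the loop count and timing method are arbitrary choices of this illustration, not anything from the comment above):

          ```c
          #define _GNU_SOURCE
          #include <stdio.h>
          #include <time.h>
          #include <unistd.h>
          #include <sys/syscall.h>

          int main(void) {
              const long N = 200000;       /* enough iterations to average out noise */
              struct timespec t0, t1;

              clock_gettime(CLOCK_MONOTONIC, &t0);
              for (long i = 0; i < N; i++)
                  syscall(SYS_getpid);     /* one user->kernel->user round trip */
              clock_gettime(CLOCK_MONOTONIC, &t1);

              double ns = ((t1.tv_sec - t0.tv_sec) * 1e9
                           + (t1.tv_nsec - t0.tv_nsec)) / (double)N;
              printf("approx cost per syscall round trip: %.0f ns\n", ns);
              return 0;
          }
          ```

          Even a trivial syscall typically costs on the order of hundreds of nanoseconds, which is why chatty user/kernel crossings get moved into the kernel.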

          And Linux too is not a microkernel architecture, yet it runs on many different CPUs...

          1. Charlie Clark Silver badge

            Re: "when MS moved lots of stuff into the kernel"

            > That was made to avoid costly ring transitions.

            x86-specific like wot I said. The Alpha didn't have the same problems.

            > And Linux too is not a microkernel architecture, yet it runs on many different CPUs...

            Yes, but the ports are not easy. Compare it with a truly multi-CPU arch such as NetBSD, or a microkernel (hint: why is Google thinking of dropping the Linux kernel for Fuchsia?)

            1. Anonymous Coward
              Anonymous Coward

              "x86-specific like wot I said. The Alpha didn't have the same problems."

              But those *are* the security checks. Look at what happened when Intel decided to ignore some checks during speculative execution for speed...

              Ring transitions on Intel chips are quite complex operations, and the protected mode of Intel chips is quite sophisticated. I don't know enough about Alpha chips because I never programmed one, but on Intel it's not just a matter of checking and flipping some bits, because descriptors have far more data to check.

      3. Richard Plinston Silver badge

        Re: Wedded to Intel

        > Alpha was most successful, partly because DEC and Microsoft made the effort to keep the platform up to date, and partly because it included x86 emulation.

        ... and partly because DEC settled with Microsoft over Dave Cutler reusing design work he had done at DEC when implementing NT.

        """Rather than suing, Digital cut a deal with Microsoft. In the summer of 1995, Digital announced Affinity for OpenVMS, a program that required Microsoft to help train Digital NT technicians, help promote NT and Open-VMS as two pieces of a three-tiered client/server networking solution, and promise to maintain NT support for the Alpha processor. Microsoft also paid Digital between 65 million and 100 million dollars."""

        http://www.itprotoday.com/management-mobility/windows-nt-and-vms-rest-story

      4. Joe Montana

        Re: Wedded to Intel

        Chicken and egg... Vendors won't port to a platform with no users, and users won't buy a platform with no software.

        The unix world was always different, you had several large well established vendors each with their own OS and later their own processor architecture (many started off on m68k before developing their own). Developers of software for windows on the other hand have never really had to deal with portability, they typically never considered processors with a different byte order or pointer size.
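
        A minimal C sketch of the two portability assumptions mentioned above - the printed values are what you get on a typical 64-bit little-endian x86 machine; on other architectures they differ, which is exactly the point:

        ```c
        #include <stdio.h>
        #include <stdint.h>

        int main(void) {
            /* Pointer size: 4 bytes on a 32-bit system, 8 on a 64-bit one.
               Code that stuffs a pointer into a fixed-width int silently breaks. */
            printf("sizeof(void *) = %zu\n", sizeof(void *));

            /* Byte order: the same 32-bit value is laid out differently in memory
               on little-endian (x86) and big-endian (e.g. classic MIPS) CPUs. */
            uint32_t v = 0x01020304;
            unsigned char *bytes = (unsigned char *)&v;
            printf("first byte in memory: 0x%02x\n", bytes[0]);
            return 0;
        }
        ```

        Code that never ran anywhere but x86 can bake both assumptions in without anyone noticing until a port is attempted.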

        Then there is the open source nature of many unix systems, especially today. Not only is most software portable, anyone can recompile it for a different architecture. You don't get the chicken and egg problem, as the vast majority of software is a recompile away once you have a unix-like kernel and gcc ported to the new architecture.

    3. Lysenko

      Re: Wedded to Intel

      You seem to be confusing Seattle with Cupertino.

    4. Anonymous Coward
      Anonymous Coward

      "However it is really difficult for them to change"

      "16 -> 32 -> 64 bit has caused enough problems"? What problems? There were more problems with applications written without taking into account the far more protected and restrictive NT environment than running them on a 32 bit system. Going from 32 to 64 bit was so simple nobody really noticed it happened, but the availability of far more RAM.

      Backward binary compatibility has always been excellent, unlike some Unix where you can't run applications on a newer system unless you recompile them because the binaries won't work. Windows users do expect compatibility at the binary level.

      The only problem came when AMD dropped the Virtual86 mode in 64 bit mode so Windows 64 bit could no longer run 16 bit applications outside a full virtual machine.

      It was an AMD decision; neither Microsoft nor Intel took it.

      If you write everything only in a high-level language and libraries, all you need is a compiler/interpreter, and it has been so for decades, regardless of the OS. And there are *nix applications available for a single instruction set as well.

      At the application level, the only issue you can have is where you have carefully crafted hand-made assembler code - which may happen when you need heavy optimization - or rely on processor-specific libraries, for the same reason.

      For example, Adobe lately worked with Intel to improve performance on multi-core systems, and a lot of the resulting code won't be portable to ARM without specific effort.

      1. Ken Hagan Gold badge

        Re: "However it is really difficult for them to change"

        "And there are *nix applications available for a single instruction set as well."

        Such as Android Studio.

        (I find that example *particularly* odd. I can imagine that Google wouldn't want to port the emulator portion, but the editors and build tools are surely written in a portable language.)

        1. bombastic bob Silver badge
          Devil

          Re: "However it is really difficult for them to change"

          "Such as Android Studio."

          well if you're emulating ARM on ARM, maybe it will RUN FASTER now? but yeah it probably means a re-write of the ARM emulation. You should expect any kind of virtualization and/or emulation to be like that. It's probably a lot of assembly code, and uses virtualization registers and other hardware-specific stuff.

      2. Baldrickk Silver badge

        Re: Going from 32 to 64 bit was so simple nobody really noticed it happened

        Somebody never tried Windows XP-x64...

        Horrible driver support, terrible application support. It was a terrible mis-step.

        MS did improve on this with Vista - though that for many was still a 32-bit OS, and it's only with Windows 7 that 64-bit became mainstream for Windows users at home. By that time MS had had many years to get device manufacturers on-board with driver support. Even now, how many apps are 32-bit only?

        It's a bit like the Millennium Bug. Basically, nothing happened, but only because a lot of work was done to prevent it from happening.

        1. Mage Silver badge

          Re: Going from 32 to 64 bit was so simple nobody really noticed it happened

          Some older 32 bit apps (esp VB6) will work on Win7 & Win10 32bit, but are impossible on 64bit Win7 & Win10. Also, why the removal of WOW16 and NTVDM? Dosbox works.

          1. Colin 29

            Re: Going from 32 to 64 bit was so simple nobody really noticed it happened

            I have VB6 running on Windows 10 64-bit, no problems

            1. david 12 Bronze badge

              Re: Going from 32 to 64 bit was so simple nobody really noticed it happened

              VB6 runs on Win10 64. Some Apps don't run on Win64 (It really has nothing to do with VB6, which is, after all, the VS C++ compiler under the covers).

              In particular what doesn't work is some 32 bit OLE controls used by some Apps.

              Given that OLE is a technology for using independent and disconnected objects, there really is no technical reason why 32 bit OLE isn't supported alongside 64 bit OLE. It's a marketing and support decision from MS.

          2. Anonymous Coward
            Anonymous Coward

            "so why removal of WOW16 and NTVDM? Dosbox works."

            Once again: AMD removed the Virtual86 mode from CPU in 64 bit mode. Without that CPU mode, WOW16 and NTVDM can't work.

            DosBox is a software CPU emulator, while Virtual86 mode allows real-mode software to run directly on the CPU while the CPU is in protected mode, automatically trapping operations forbidden in protected mode and allowing the OS to perform them as required.

            If older VB applications use 16 bit libraries or installers, they won't work on 64 bit Windows exactly for that reason.

            With virtual machines easily available, it takes little to run DOS in one if you need to run 16 bit code.

            1. Richard Plinston Silver badge

              Re: "so why removal of WOW16 and NTVDM? Dosbox works."

              > Once again: AMD removed the Virtual86 mode from CPU in 64 bit mode.

              While the _Intel_ 64 bit design also removed Virtual86 mode _and_ x86 32bit mode.

          3. bombastic bob Silver badge
            Boffin

            Re: Going from 32 to 64 bit was so simple nobody really noticed it happened

            WOW16 can't work on x64 because it doesn't have 16-bit support. That's just the way the architecture is.

            NTVDM - there's no Virtual 8086 mode any more on x64. Hence, can't have that either.

            As I recall, in 32-bit mode, a code selector is marked as 'USE16' or 'USE32' with a bit. I haven't looked, but I suspect in 64-bit mode, it switches between 'USE32' and 'USE64' in a similar way. To understand how the GDT and LDT work, you can start HERE:

            https://en.wikipedia.org/wiki/Global_Descriptor_Table

            (it's incomplete and oversimplified but you'll get the 'gist' and there are links to better resources)
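
            As a rough sketch, one of those 8-byte GDT/LDT entries can be modelled in C like this (field names are illustrative, not from any header; the layout follows the usual descriptor diagram, with the D/B "USE16/USE32" bit sitting in the flags nibble):

            ```c
            #include <stdio.h>
            #include <stdint.h>

            /* One 8-byte GDT/LDT entry; field names are illustrative. */
            struct gdt_descriptor {
                uint16_t limit_low;       /* limit bits 0-15  */
                uint16_t base_low;        /* base bits 0-15   */
                uint8_t  base_mid;        /* base bits 16-23  */
                uint8_t  access;          /* present bit, DPL, segment type */
                uint8_t  limit_high : 4;  /* limit bits 16-19 */
                uint8_t  flags      : 4;  /* AVL, L (64-bit), D/B (USE16/USE32), G */
                uint8_t  base_high;       /* base bits 24-31  */
            } __attribute__((packed));

            int main(void) {
                printf("descriptor size: %zu bytes\n", sizeof(struct gdt_descriptor));

                struct gdt_descriptor d = {0};
                d.flags = 0x4;    /* set the D/B bit: a 32-bit ("USE32") segment */
                printf("D/B bit set: %d\n", (d.flags >> 2) & 1);
                return 0;
            }
            ```

            The D/B bit is exactly the "USE16 or USE32" switch mentioned above; in long mode the adjacent L bit marks a 64-bit code segment instead.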

            I used to play with selectors and call gates and stuff like that "back in the day", even wrote a program for Win 3.x and '9x that would dig into the guts of the 'virtual DOS' (aka VxD) stuff and even make calls into the kernel via the interface. You could find out useful stuff about running processes, figure out what VxD's were loaded, dump the global and local descriptor tables, read the page tables, stuff like that, by playing with "all that" and then list it conveniently in a simple Windows application. But naturally it had to be a 16-bit application to do it's magic, and only worked with '9x and 3.x (because they had VxDs).

            1. bombastic bob Silver badge
              Boffin

              Re: Going from 32 to 64 bit was so simple nobody really noticed it happened

              oh, one more thing I just found out.

              from https://en.wikipedia.org/wiki/X86-64#Architectural_features

              apparently in 64-bit mode (which they call "long" mode) you can have 16-bit protected mode processes (which actually surprises me quite a bit). However, you can't have 'real mode' nor 'virtual 8086' mode. Interesting.

              So maybe dumping WOW16 had something to do with legacy 16-bit applications and 64-bitness in the kernel. Because, in theory, a "well behaved" 16-bit application SHOULD be possible to run in 64-bit mode...

              1. Anonymous Coward
                Anonymous Coward

                " in theory, a "well behaved" 16-bit application SHOULD be possible to run in 64-bit"

                The problem is not only the bitness of the application, but the whole environment it expects to run in. DOS applications expect to be in real mode, to access a given memory layout (interrupt tables, video memory, etc.) and to be able to change segment registers; Win16 applications expect their memory layout and the Win16 API entry points within their address space, etc. etc.

              2. Richard Plinston Silver badge

                Re: Going from 32 to 64 bit was so simple nobody really noticed it happened

                > apparently in 64-bit mode (which they call "long" mode) you can have 16-bit protected mode processes

                16-bit protected mode is 80286 native mode. The 80286 was brain dead so nobody cares.

                > Because, in theory, a "well behaved" 16-bit application SHOULD be possible to run in 64-bit mode...

                It may be that "in theory" a particular design could run 16bit V86 and 64bit together, but the AMD design does not. Programs run by executing instructions. Instructions have particular bit layouts. These have a number of bits assigned to the op-code. You cannot have more different op-codes than the number of bits allow. AMD-64 long mode requires additional op-codes so they reused some numbers that overlapped stuff that was 20 years out of date.

                Virtual86 and Real86 are for running 8086 programs, that design is from 1978 - 40 years ago.

            2. Joe Montana

              Re: Going from 32 to 64 bit was so simple nobody really noticed it happened

              NTVDM uses vm86 mode on 32bit x86, but on other architectures (mips, ppc, alpha) it would emulate the cpu... There's no reason they couldn't use an emulation mode for 64bit x86 too; dosbox works fine like that.

        2. Anonymous Coward
          Anonymous Coward

          "Somebody never tried Windows XP-x64..."

          Just, Windows 2003 64 bit worked flawlessly, and it ran 32 bit code without issues. I know because we used it to run several 32 bit processes at once, each with its own 4GB memory space, on a Windows machine that could finally use more than 4GB of RAM without PAE/AWE trickery.

          XP 64 derived from 2003 64, but it is true it was somewhat "unfinished", and not many hardware OEMs bothered to make drivers for it. But it is also true that very few had a real reason to use it. On the server side, OEMs provided 64 bit drivers for their systems.

          From a desktop user's perspective, especially a consumer one, there is very little benefit in moving to a 64 bit OS until you have more than 4GB of RAM, and low-end machines with 8GB of RAM or more became common only recently. Very few consumer applications would benefit from 64 bit arithmetic; most of the benefit comes from the larger address space, but you also need applications that can use it.

        3. Anonymous Coward
          Anonymous Coward

          Re: Going from 32 to 64 bit was so simple nobody really noticed it happened

          It is a problem for Apple as well. They have served notice that 32bit apps will not be supported but there are still holdouts. The Drobo desktop is still 32bit.

          Linux can be a PITA with lots of stuff still needing 32bit libraries. Come on Distros, get it sorted.

          I wish MS well in getting everything moved to 64bit (I probably need to go and sit down in a darkened room). The hardware has been 64bit for at least 5 years now so it really should be time to pension off 32bit binaries.

          1. Richard Plinston Silver badge

            Re: Going from 32 to 64 bit was so simple nobody really noticed it happened

            > Linux can be a PITA with lots of stuff still needing 32bit libraries.

            In what way? If you need 32bit libraries they are still there in most distros. Install a 32bit app from the repository and the appropriate libraries will automatically be installed.

            > I wish MS well in getting everything moved to 64bit (I probably need to go and sit down in a darkened room). The hardware has been 64bit for at least 5 years now so it really should be time to pension off 32bit binaries.

            And you complained about losing 20 year old 16bit stuff !!!

            Actually, Microsoft is reviving 32 bit with its ARM/x86 hybrid that will only run 32bit x86. Users are going to be pissed when it won't run the software they use on their desktops.

        4. BinkyTheMagicPaperclip Silver badge

          Re: Going from 32 to 64 bit was so simple nobody really noticed it happened

          XP 64 was intended for very specific uses, not general purpose, as it was basically a neutered version of Windows Server 2003. It would run the apps that needed access to plenty of memory (as they would normally be designed for Windows Server) without the cost of a Windows Server license.

          To be fair, Alpha and Itanium were the only non-x86 architectures that really succeeded. The other platforms didn't survive beyond NT4 SP3 (I think some gave up at SP1, i.e. immediately..).

          It's also worth mentioning, as has been noted above, that platforms are decidedly unequal on Unix. For the major commercial Unixes, I don't think Solaris Intel was *that* different to Solaris Sparc, except for functionality specifically enabled by the Sparc architecture. On the free Unix side there are many differences: the booting process varies wildly, some platforms don't have X, or support only a very limited set of graphics cards. Then, after finding that the platform does support X and networking, none of the major browsers are supported, because the build process and dependencies are considerable...

      3. This post has been deleted by its author

      4. Doctor Syntax Silver badge

        Re: "However it is really difficult for them to change"

        "Backward binary compatibility has always been excellent, unlike some Unix where you can't run applications on a newer system unless your recompile them because binaries won't work."

        Yup. It was an absolute scandal that Solaris binaries wouldn't run on HPUX.

        Wasn't part of Windows' problem that sometimes they had backwards bug-compatibility, because stuff like use-after-free was used in "important" applications?

        1. Anonymous Coward
          Anonymous Coward

          "Yup. It was an absolute scandal that Solaris binaries wouldn't run on HPUX."

          No, it's an absolute scandal that you could not run the same binaries on different releases of the same Linux distro... because breaking changes in libraries didn't allow it.

          Or a 64-bit-only distro you couldn't run 32 bit applications on unless you explicitly configured it.

          "Wasn't part of WIndows' problem that sometimes they had backwards bug compatibility"

          I guess you are referring to old DOS applications like Flight Simulator - anyway, Windows has a lot of specific support for allowing old applications to run. Read Raymond Chen's "The Old New Thing" blog to discover how far Microsoft went to ensure backward compatibility with badly written applications, because some Fortune 500 company relied on them and MS had to support them.

          1. Hans 1 Silver badge
            Coat

            Re: "Yup. It was an absolute scandal that Solaris binaries wouldn't run on HPUX."

            > No, it's an absolute scandal that you could not run the same binaries on different releases of the same Linux distro... because breaking changes in libraries didn't allow it.

            LD_LIBRARY_PATH is your friend. Now, this works on all ELF platforms: Linux, as well as a bunch of UNIX systems, including the BSDs, Solaris and HP-UX* on iTanic.

            No such thing as DLL hell on *NIX.

            HP-UX has readelf: http://www.polarhome.com/service/man/?qf=readelf&tf=2&of=HP-UX&sf=1

            1. Anonymous Coward
              Anonymous Coward

              Re: "Yup. It was an absolute scandal that Solaris binaries wouldn't run on HPUX."

              Try to run on CentOS 6 anything that requires an updated GLIBC - LD_LIBRARY_PATH is *not* your friend....

      5. Richard Plinston Silver badge

        Re: "However it is really difficult for them to change"

        > Windows users do expect compatibility at the binary level.

        Most Windows users do not know that there are computers that are not x86. 'Binary incompatibility' is not a concept they are aware of. When Windows RT was available they expected to be able to run their existing applications on it. When Windows 10 IOT was announced for running on Raspberry Pi they thought they would be able to use a $35 computer to run a full desktop and Halo 5.

        > The only problem came when AMD dropped the Virtual86 mode in 64 bit mode so Windows 64 bit could no longer run 16 bit applications outside a full virtual machine.

        > It was an AMD decision, nor Microsoft nor Intel took it.

        Intel and Microsoft were perfectly free to continue developing Itanium for their 64bit systems. Of course that didn't do Virtual86 either, and neither did it do x86 32bit (except by emulation).

        > Going from 32 to 64 bit was so simple nobody really noticed it happened, but the availability of far more RAM.

        That was directly the result of an _AMD_ decision !!! Microsoft and Intel had to change course from their 'Itanic' decision to follow AMD.

        The instructions added to make x86 into AMD64 overlapped with some of the old 16bit instructions. This was a technical issue because the instruction set has a finite number of different operation codes. Thus the chip can _either_ do V86 _or_ AMD64.

        The 8086 (and later) couldn't do 8080 or 8085 either*. That was an Intel decision. Old stuff gets dropped, get over it.

        * actually the NEC V20 and V30 chips could do both 8086 and 8080.
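
        The overlap can be illustrated with the classic example (assuming the standard x86 one-byte opcode table): the byte values 0x40-0x4F were one-byte inc/dec instructions in 32-bit mode, and AMD64 long mode reassigned exactly that range as REX prefixes. A small C sketch that just prints the two decodings of the inc half of the range:

        ```c
        #include <stdio.h>

        int main(void) {
            /* In 32-bit mode, 0x40+r encodes a one-byte "inc r32" (and 0x48+r
               encodes "dec r32"). AMD64 long mode reassigned 0x40-0x4F as REX
               prefixes, so the legacy meaning could not be kept. */
            const char *regs[8] = {"eax", "ecx", "edx", "ebx",
                                   "esp", "ebp", "esi", "edi"};
            for (unsigned b = 0x40; b <= 0x47; b++)
                printf("0x%02X: 32-bit mode = inc %s, 64-bit mode = REX prefix\n",
                       b, regs[b - 0x40]);
            return 0;
        }
        ```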

      6. Joe Montana

        Re: "However it is really difficult for them to change"

        Binary backward compatibility on unix is excellent too at the kernel level; the problems people encounter are due to distros not shipping the expected old versions of libraries, but there is nothing stopping you adding those libs and having everything work...

        Microsoft ship with mountains of backwards-compatibility libs; Linux generally doesn't, because 99% of applications come with source and can thus be recompiled against the newer libs.

    5. david 12 Bronze badge

      Re: Wedded to Intel

      Not to downplay the commercial success of Wintel, but MS was a multi-platform company right from the start. Their product, MS Basic, was provided on scores of OS and hardware platforms. As was MS-DOS:

      Like unix at the time, non-Intel MS-DOS wasn't binary compatible; you had to compile for the specific platform. And non-PC-clone MS-DOS was only binary compatible if you used the OS primitives instead of PC-mapped hardware. That meant no DMA, no memory-mapped graphics, etc.

      So different platforms have always been part of the MS programmer expectation, part of the culture.

      1. Richard Plinston Silver badge

        Re: Wedded to Intel

        > MS was a multi-platform company right from the start.

        As was the whole personal computer and microcomputer industry from the mid 70s.

        > As was MS DOS:

        MS-DOS was Intel 8086/8088* only. It could be used on many different architectures (as long as they used an 8088 or 8086) because each OEM had to write their own MS-BIOS to deal with the actual I/O hardware. This mechanism was copied from DRI's CP/M. It wasn't "provided" for scores of architectures; the OEMs had to do it.

        > So different platforms have always been part of the MS programmer expectation, part of the culture.

        That was true a couple of developer generations ago. MS Basic on the 6502 and 6800 was 40 or more years ago. The last MS-DOS that ran on non-IBM-PC-clones was 4.x. Sure, if you're a developer now retired or moved to management then you may have dim memories of a time before.

        > So different platforms have always been part of the MS programmer expectation, part of the culture.

        Not for the current generation of developer and users it isn't. They were confused by RT and failed to buy it in droves, and returned it when they did buy it. Windows is Windows, if it doesn't run program x, then it is a failure.

        * There was MSX-DOS for MSX machines that ran on Z80 but it was a CP/M clone, not a version of MS-DOS. There was also MS-DOS 4.0 and 4.1 (not to be confused with the later 4.01) that was 80286 based but this was soon dumped.

  2. Blotto Bronze badge
    Facepalm

    Oh no not again...........

    did they not learn from last time?

    funny how the press are whipping this up to be great again..............

  3. Refugee from Windows

    Playing catchup

    This is a game that Microsoft haven't put much effort into, and I'm afraid to say they are so far behind I don't think they'll ever catch up. Being tied down to x86 has turned them into a one-trick pony, and their speciality is wearing thin. ARM is likely to become the most popular platform all the way from embedded through to servers, if it hasn't overtaken already, so they are primarily in a declining market. They're playing second fiddle to Google.

    They've not even woken up to this, let alone smelt the coffee.

  4. Andy Mac

    What really struck me with the folly of RT was calling it Windows at all. No-one ever complained that an iPad couldn’t do what a Mac could do, because they were clearly separate products (plus Apple had set the scene with the iPhone first).

    But calling something Windows which can’t run Windows applications seems a bit, well, dumb.

    1. Jay 2

      Can't agree enough. I commented at the time that no good would come of having three different (and non-compatible) products called Windows 8 sharing the same interface. An interface that was an OK idea on touch devices, but was stunningly unsuited to a traditional keyboard/mouse setup. The most stupid thing is that as soon as the public saw/used Windows 8 on a PC in test/beta they said it was shit, and Microsoft decided they were all wrong and continued as planned.

      Windows 8 x86/x64 != Windows 8 RT != Windows Mobile 8 (or whatever it was called)

      1. Richard Plinston Silver badge

        > Windows 8 x86/x64 != Windows 8 RT != Windows Mobile 8 (or whatever it was called)

        You missed Windows 8 IOT (later there was Windows 10 IOT) which was completely different yet again.

    2. John Smith 19 Gold badge
      Unhappy

      "But calling something Windows which can’t run Windows applications seems a bit,..dumb."

      Not at all.

      "Compatibility" is basically why people put up with Microsoft's s**t.

      I've lost count of the number of times Microsoft has claimed "Our new tablet/phone/watch is compatible with your desktop Windows" (going back to the "Pocket PC" days) and it turned out to be complete BS. That spreadsheet you wrote. Sorry, not going to run. That Word doc you wrote. Nothing you can't cut n paste from, eh?

      If they didn't call it "Windows" WTF would you buy it?

    3. Wade Burchette

      Agreed. People use Windows because their programs from 20 years ago still work. It is the legacy compatibility. Windows on ARM takes that away. The whole purpose of it, like Windows 8 and 10, is to pad Microsoft's wallet, not to give the customer what they want. With Win on ARM you are limited to the Windows Store, which conveniently gives Microsoft a cut of all sales. That is why Microsoft wants it to succeed: now you will have to buy new apps from them. Windows on ARM will never succeed unless I can still install programs released 20 years ago and 1 year ago.

      1. Anonymous Coward
        Anonymous Coward

        "Windows on ARM takes that away"

        Windows RT did. Now they're trying to run x86 applications under emulation, but emulation is always a risky approach: not everything will work, not everything will work well, and as long as it somehow works developers won't have much incentive to do a full port.

      2. Anonymous Coward
        Anonymous Coward

        The legacy OS for legacy software.

      3. Hans 1 Silver badge
        Windows

        Windows on ARM will never succeed unless I can still install programs released 20 years ago and 1 year ago.

        Well, Windows on ARM can only run 32-bit applications, NOT 64-bit Windows applications ... so ... it really depends ;-).

        Who wants a resource hog like Windows on ARM??

  5. ForthIsNotDead Silver badge
    Stop

    .Net

    Surely the answer to the ARM/Intel software compatibility conundrum is .Net, where your code runs in a VM on top of the native processor?

    Microsoft have been developing .Net continuously since, what, 2000, 1999?

    Maybe we'll see more .Net uptake in software shops IF the ARM version of Windows gains any traction.

    If you want your applications to run in the Microsoft world, be it Intel or ARM, then .Net is the obvious answer.

    If you want to run on Intel or ARM on Windows or Linux, then Java is the obvious answer.

    1. Anonymous Coward
      Anonymous Coward

      Re: .Net

      Except that not all developers are keen on drinking only the .Net kool-aid.

      A lot of Windows applications are written using native-code compilers because .Net - like the Java it was copied from - has many limitations and adds overhead. Not all applications are suited to being forced through a garbage collector for memory management.

      Porting these applications to a platform where you have no choice but .Net would be very expensive, and without a successful platform you have no incentive - yet for a platform to be successful it needs first-tier applications.

      It's no surprise that MS made several U-turns, often returning to C++ from .Net/Silverlight/whatever. One of the reasons Windows Phone 8 required little resources and was snappier is exactly that it ran native C++ applications, instead of virtual machines and garbage collectors.

      .Net IIRC was released in 2002 or 2003; fifteen years later it hasn't taken the world by storm, just as Java didn't before it, despite all the hype. MS itself does use .Net for some management applications/utilities (which became far slower and clumsier to use), but stays away from it for flagship applications like Office.

      Many high-end developers don't like to be constrained by virtual machines and neutered languages - they appeal mostly to software sweatshops where managers can hire cheap developers who can't do much damage with bad code.

      Geez, even a simple application like KeePass needs startup tricks or it takes ages to open.

      One of the reasons for Linux's very low desktop penetration is the lack of good desktop applications - Java ones are always a pain in the ass to use.

      1. Frank Gerlach #2

        @AC / Efficient Languages

        Indeed, if you want good usability and soft-realtime response of an application, you cannot use fully automatic garbage collection. The GC run will come at the worst possible moment from the user's perspective.

        For example, you want to accept a call on the phone, but the UI freezes with a 3 second GC run. That will confuse the user and drop the call.

        That is why Apple uses Objective C and Swift.

      2. Richard Plinston Silver badge

        Re: .Net

        > One of the reasons Windows Phone 8 required little resources and was snappier is ...

        You are confused. Windows Phone 7 "required little resources". It ran on WinCE, which was like the MS-DOS of phones: single-task, no background tasks except a sort of TSR-like process, and tombstoning. It was promoted as 'requiring little resources' because it couldn't handle more than one core, so there was no point in giving it more. Windows Phone 8 was "snappier", or appeared so, because it required a dual core and dedicated one core to the UI. This impacted background tasks, but most apps still did tombstoning anyway.

    2. mrdalliard
      Coffee/keyboard

      Re: .Net

      >> If you want to run on Intel or ARM on Windows or Linux, then Java is the obvious answer.

      Java you say? There is no question to which Java is the obvious answer. Everything about it is an exercise in frustration (and I speak as a Tomcat admin). And then there's that 5 second...

      .

      .

      .

      .

      ....pause that's never been eliminated.

    3. Steve Davies 3 Silver badge

      Re: .Net

      Has become just a set of patches on top of patches on top of patches.

      Once upon a time it was pretty lean. Bloat has made it unwieldy and slow.

      I developed an app on .Net V1. It runs slower on V4 despite a hefty CPU bump.

      Stopped writing code for .Net after that. Gave it up as a bad job. There were always better solutions.

      1. Warm Braw Silver badge

        Re: .Net

        Bloat has made it unwieldy and slow

        .NET core seems to have addressed some of those problems, but that might be because it's not yet finished...

    4. Zippy's Sausage Factory
      Mushroom

      Re: .Net

      Surely the answer to the ARM/Intel software compatibility conundrum is .Net, where your code runs in a VM on top of the native processor?

      Speaking as a .Net developer, that's not quite the case. It's more like Java actually, in that it's a bytecode that is JIT compiled (in theory).

      .Net Core is designed to run on Mac and Linux as well as Windows, and I suspect they will port it to ARM fairly swiftly.

      That said, the point of Core is to be portable, and I suspect it will use the genuinely awful Windows Presentation Foundation for GUI apps on Mac and Linux. So you'll be able to write horrible-looking apps that don't blend with the operating system on three platforms instead of one.

      1. Anonymous Bullard

        Re: .Net

        I like .NET Core.

        C# is a nice language, it's just a shame it's been tied to Windows. That's a show-stopper for anyone outside of "Enterprise".

        .NET Core has allowed us to use a decent language on a solid OS, with a mature library/framework without the fat.

        Not requiring VS is the icing on the cake.

    5. Richard Plinston Silver badge

      Re: .Net

      > If you want to run on Intel or ARM on Windows or Linux, then Java is the obvious answer.

      You seem to be hammered with down votes. Java is obviously _not_ the answer, or not the only answer, because I have a RaspberryPi alongside my other Linux machines and it has all the software that I need without it being written in Java.

      I write in Python and that is all good wherever I want to run it.

    6. bombastic bob Silver badge
      WTF?

      Re: .Net

      "then .Net is the obvious answer."

      that was a joke, right? (you forgot the 'joke alert' icon)

      /me won't get trolled into ranting on '.Not' yet one more time, today

  6. disorder

    They could have had desktop apps on RT; they chose, specifically - to disallow that.

    I won't belittle those who accomplished it by suggesting it is as trivial as a mere recompile, but the existence of a completely functional Quake 3 on RT (in a semi-dead, jailbroken software state), and MS's own Office RT (2013), shows a pretty full set of APIs present and functional, including USB mouse/audio drivers.

    Office on RT, which shipped on-device, implicitly acknowledged desktop apps as important. But oh no - no one /else's/ software is important: make a Store version (so we can have 30% of your sale).

    Would Adobe/whatever have ported their suites; probably not. They never had to decide.

  7. Anonymous South African Coward Silver badge

    Lack of apps will be a serious problem.

    This happened to OS/2.

    1. davidp231

      Native ones, yes. The fact it could run a full (or seamless, akin to how XP mode works in 7) Windows 3.1 session AND add programs from your existing Windows install alleviated things somewhat.

      1. LDS Silver badge

        "alleviated things somewhat."

        But only for a brief period - 32-bit Windows arrived too, and most new software was being written for it, so the pool of useful applications for OS/2 soon starved, because real 32-bit OS/2 ones never materialized - and in those days a new version often added a lot.

        Also, not all Win 3.1 applications worked without issues; I was never able to run Borland Delphi 1 under OS/2 successfully. Much like the issues you may encounter under WINE.

        Emulation may be useful, but when you end up always running everything under emulation, you start to ask why not use the native environment....

        1. Richard Plinston Silver badge

          Re: "alleviated things somewhat."

          > Emulation may be useful, but when you end up always running everything under emulation, you start to ask why not use the native environment....

          Windows 3.x on OS/2 was a real, full, actual Windows 3.1. It also ran Microsoft's Win32s.DLL. What Microsoft did next was add a completely spurious memory access to an address greater than 2 Gbytes. This did nothing useful except exceed OS/2's virtual memory limit and stop the latest versions being used. Then MS could break applications by requiring the latest version.

      2. Richard Plinston Silver badge

        > The fact it could run a full (or seamless, akin to how XP mode works in 7) Windows 3.1 session

        That was why it died. Developers could have developed for Windows 3.x or for OS/2 Presentation Manager, but when IBM added Win3.x to OS/2 then developers could target that and get their applications running on both Windows and OS/2. Then there was no point in having OS/2.

        When Windows 10 adds an X server to its Linux compatibility then, maybe, developers will target Linux to get it running on Windows and the system that they use.

    2. Anonymous Coward
      Anonymous Coward

      This happened to OS/2.

      And Windows 10 mobile. Remember that?

      1. Anonymous Coward
        Anonymous Coward

        Windows 10 mobile?

        I only vaguely remember WinPho 8 and 7.

        Version 10 was virtually non-existent, because the new CEO didn't commit to it.

  8. Charles 9 Silver badge

    Yawn. Call me when it can do a Crysis-type game at 1080p @ 60fps. Or when either Sony or Microsoft adopts ARM for its next console.

    1. Neil Alexander

      You're missing the point. It's not about being able to run Crysis. It's more about all the people who don't want to run Crysis.

      It's about the people who value portability and battery life over processing power, of which there are plenty.

      1. Baldrickk Silver badge

        But what if we want to do both?

        I could see a VM running via a hypervisor, able to switch from a low-powered chip for browsing on the go to a fully fledged processor and graphics card as and when the power and demand are there, satisfying both sets of workloads.

        Some "gaming" laptops are getting thin now, and the addition of a low power processor would not really have a very big impact.

        At a high level, it could be no different than when a laptop switches from using the on-die graphics (for low power consumption) to using the discrete graphics card (for performance), only it's shifting the CPU workload over, instead of the GPU workload.

        1. Neil Alexander

          What you're referring to, effectively, is ARM's big.LITTLE architecture (and the various other equivalents). That's already in today's smartphones and tablets, shifting workloads between low-power and high-power cores and powering down unused cores when not needed.

          For a lot of people, that kind of architecture would work perfectly well and that's a big part of how we would get better standby times whilst remaining connected. It also means that time spent staring at an article or email isn't using up power on more expensive cores when it isn't needed.

          What it really needs is the support of developers to actually support the target architecture to get native execution performance rather than just lazily expect that the emulation layer will take care of it for you and then blaming the architecture when it doesn't perform as well as you want. That's the hard part.

          1. Richard Plinston Silver badge

            > What it really needs is the support of developers to actually support the target architecture to get native execution performance rather than just lazily expect that the emulation layer will take care of it for you

            From the late 70s through the 80s and 90s I (and my clients) ran DRI multiuser operating systems, from MP/M, Concurrent-DOS-386, and DR-Multiuser-DOS to Systems Manager. These could run MS-DOS programs and actual Windows 3.11 (in fact could run several simultaneously). The major problem was that DOS developers would use 'keyhit()' to know when a key had been hit, and this sat in a tight loop waiting for a keystroke, using up all the CPU cycles it could grab - not good for a multi-tasking, multi-user system. They probably still do that.
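The polling pattern described above can be sketched in C. This is a hedged illustration, not DOS code: `keyhit` and `yield_cpu` here are hypothetical callbacks standing in for the real BIOS/DOS calls.

```c
#include <assert.h>
#include <stdbool.h>

/* Busy-wait version: spins flat out, grabbing every CPU cycle it can.
   Harmless on single-task MS-DOS, hostile on a multi-user host. */
long wait_for_key_polling(bool (*keyhit)(void))
{
    long spins = 0;
    while (!keyhit())
        spins++;            /* tight loop: no yield, no sleep */
    return spins;
}

/* Cooperative version: hands the time slice back while waiting, e.g. via
   a blocking read, select(), or an OS sleep/yield call. */
long wait_for_key_yielding(bool (*keyhit)(void), void (*yield_cpu)(void))
{
    long polls = 0;
    while (!keyhit()) {
        polls++;
        yield_cpu();        /* let other tasks/users run */
    }
    return polls;
}
```

On a single-user, single-task MS-DOS box the spin loop costs nothing anyone notices; under MP/M or Concurrent DOS it steals cycles from every other user on the machine, which is exactly the complaint above.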

      2. Charles 9 Silver badge

        "You're missing the point. It's not about being able to run Crysis. It's more about all the people who don't want to run Crysis.

        It's about the people who value portability and battery life over processing power, of which there are plenty."

        No, YOU'RE missing the point. Crysis is the benchmark for a VERY popular and VERY demanding program. There are lots of people, myself included, who have no choice but to stick to Windows because lots of applications are ONLY for Windows. That includes a LOT of demanding applications, of which there are plenty, too: probably more than the power-sippers. Indeed, there are probably plenty of intersects: people who want power-sipping at points and performance at others, all from the same device. Everyone wants everything yesterday, and they expect results.

        1. Neil Alexander

          "No, YOU'RE missing the point. Crysis is the benchmark for a VERY popular and VERY demanding program. There are lots of people, myself included, who have no choice but to stick to Windows because lots of applications are ONLY for Windows. That includes a LOT of demanding applications, of which there are plenty, too: probably more than the power-sippers. Indeed, there are probably plenty of intersects: people who want power-sipping at points and performance at others, all from the same device. Everyone wants everything yesterday, and they expect results."

          Sure, there's a whole demographic of gamers and power users out there and truthfully they are probably not going to be well-served by Windows on ARM. If you need to run demanding Intel-targeted applications that only run on Windows then obviously you would be better with a Windows machine running natively on Intel instead of a low-power-resource-constrained-Windows-on-ARM-emulating-Intel machine. Why would you think otherwise?

          The point of Windows on ARM is not to satisfy everyone. It isn't meant to be the perfect intersection - it's an entry point. It's to satisfy the people who want cheaper and more efficient and leaner mobile computers that Microsoft struggle to cater for. It's to satisfy the people who would quite likely otherwise go and buy an iPad. It's to satisfy the people who don't even know or care what Crysis is.

          There is no device that is going to satisfy everyone. That's why gaming rigs exist in a wholly different category to ultra-portables. It's a pipe dream to think that's going to change anytime soon.

          1. Charles 9 Silver badge

            Then why the push for ARM on servers? Servers are probably one of the areas of computing that happens to be MORE demanding than gamers. IOW, if you can satisfy enterprise workloads with ARM iron, you probably have an inroad into the gaming sector as well. Which is why I think the big move will be when Microsoft or Sony make the shift to ARM on their mainline consoles (Nintendo has been using ARMs since the Game Boy Advance and has transitioned its mainline consoles to ARM with the Switch).

            1. Neil Alexander

              "Then why the push for ARM on servers? Servers are probably one of the areas of computing that happens to be MORE demanding than gamers"

              The workloads we put onto servers are often quite different to those of gamers.

              To use a web or application server as an example, the majority of the work is being done using non-complex CPU instructions, the workload is mostly repetitive and, more often than not, is not architecture-specific. For this kind of work, ARM chips are fine - you can take comparatively inexpensive ARM hardware and ramp up the density hugely without consuming much more electricity and that's fine for generic server workloads. That's exactly what HP did with the Moonshot systems.

              A lot of computer games aren't general-purpose compute applications. They are far more sensitive to architecture-specific optimisations and countless extended CPU instructions, not to mention memory bandwidth, bus speeds, etc.

              Maybe Sony or Microsoft will start building consoles with ARM chips, but that doesn't bring us any closer to a "one-size-fits-all" ARM machine. They're going to have to make big changes and compromises to squeeze out the kind of performance they will want or need. We will just end up with high-powered-power-sucking-ARM vs low-powered-battery-sipping-ARM.

              Sounds familiar - ah yes, Xeon vs Atom.

            2. Richard Plinston Silver badge

              > Then why the push for ARM on servers?

              For the savings on power and cooling. Servers with dozens or hundreds of cores can switch off all the unused ones. It is about the money.

              > you probably have an inroad into the gaming sector as well.

              No. Gamers want all the processing power running flat out all the time and don't care about the cost. It seems they also want expensive and elaborate cooling systems for street cred.

  9. Milton Silver badge

    Suez?

    Suez gets mentioned a lot these days because it was the last occasion on which the British government did something breathtakingly, suicidally stupid that resulted in national disaster. It's the only comparison from the last hundred years that comes close to Brexit, which makes it a handy reference point.

    But I am not convinced it needs to be trotted out for every cockup in every walk of life, especially when repeated by journos whose words give rise to the teensy suspicion that they have no idea what the Suez Crisis actually comprised—beyond the fact that it involved a waterway.

    In short, you could swim in it: but that was not and never will be the point.

    PS: Yes, Microsoft betrays, as it always has, obsessively impatient greed and short term thinking. Some things are most unlikely to change.

    1. Frank Gerlach #2

      Re: Suez?

      Well, it looks more like that intelligent reasoning is frowned upon at MSFT.

    2. This post has been deleted by its author

    3. Anonymous Coward
      Anonymous Coward

      Re: Suez?

      I always got the impression that Suez's importance in historical terms- as much as the specific details of the conflict- lay in the fact that it really laid bare and made concrete for the first time just how seriously Britain's power and influence had declined since the end of the Second World War. That the Empire was over. That it couldn't do things that way any more.

      That despite the fact "we" had won the Second World War (conveniently forgetting, for reasons of national pride, how important the Americans were to that), when push came to shove it was no longer able to pull this sort of thing off as it might once have done.

      That Britain was no longer top dog and that when the aforementioned Americans were no longer in support, but actively opposed to this latest military adventure, *they* were the ones in a position to dictate that Britain call off the whole thing or face a punitive financial response.

      The same blinkered arrogance that today's "Suez never happened" hard right Tory Brexiteers think will let them dictate terms to India- a country of 1.3 billion people rapidly progressing in economic importance. That lets them look back on Empire as shared history, as if British rule of India will be remembered in the same nostalgically whitewashed manner by them as it is by "us".

      As someone observed, it's the schoolyard bully at a reunion 20 years later assuming the same playground power dynamics are still in place, that it was all just a bit of fun and they'll be welcomed by others that have long moved on and are far more successful. The Little Englanders who voted for Brexit to control foreigners are going to be in for a shock when they find out what India wants in exchange for a beneficial trade agreement- spoiler, it's much greater access to the UK labour market.

      Another spoiler; "we" are going to find out why many Americans are so keen on negotiating unilateral trade agreements with a partner several times smaller than themselves, as opposed to the EU.

      Watch out for May - or, more likely, whatever self-serving hard-right Tory Brexiteer succeeds her - scrabbling for a trade deal, desperate for any spin to cover the fact that the Americans are able to dictate terms on imports (expect their notoriously shitty food standards to arrive along with a shipment of pink-slime-containing minced "beef") and the right for their corporations to get further entrenched in the NHS and public life, and to sue if they don't get their way - only to realise too late how well off we were in the EU.

    4. /dev/null

      Re: Suez?

      Don't forget that Suez was a joint Franco-British-Israeli operation; it wasn't just a unilateral British intervention. It also led to France falling out with the USA and withdrawing military cooperation with NATO.

  10. Anonymous Coward
    Anonymous Coward

    Wake me up when Windows and all its applications compile natively on ARM with no emulation required. I guess that's my lie-in secured until at least 2040.

    1. Warm Braw Silver badge

      Wake me up when ...

      It depends what the end-goal is. When Apple transitioned between processor architectures the goal was to permit the legacy applications to run while new applications would be compiled for the new architecture. That seems like a reasonable decision for a user base that doesn't actually know what a compiler is.

      If this is meant as a way of transitioning to a new architecture without destroying customers' investment in software, it's perfectly reasonable. If it's just going to be yet another platform that application developers have to support in the long term, it's likely not going to happen.

      1. Anonymous Coward
        Anonymous Coward

        >If this is meant as a way of transitioning to a new architecture

        10 years too late though.

  11. Naselus

    Satnad remains the problem

    This'll be abandoned, just like everything else that doesn't fit into his 3 main products. Literally everything aside from Azure, Office365 and Win10 x86/x64 is constantly on notice and can be cancelled at any moment, which means buying into any of them is impossible.

    SatNad has done well on the core 3 - Azure is in a strong position, even if AWS still overshadows it, Win10 is growing fairly well and is on course to become the world's default desktop OS over the next 2-3 years as businesses undergo hardware refreshes, and Office 365 is doing surprisingly well considering that there are free products which perform all the same functions. But his ruthless disregard for everything else (seems to be a 2-strikes-and-you're-axed policy for all and everything) is causing MS to drop even competitive offerings after one or two bad years.

    It's a policy of permanent retrenchment, and it's hampering Microsoft's ability to grow or innovate, leaving them mostly just iterating yesterday's cash cows rather than pushing forward with new ideas.

    1. Dan 55 Silver badge

      Re: Satnad remains the problem

      He's only copying Google. Again.

    2. John Smith 19 Gold badge
      WTF?

      " and it's hampering Microsoft's ability to grow or innovate, "

      Hahahahahahahahahahahahahahahahahahahaha,

      I'm sorry, which universe did you say you were from again?

    3. Doctor Syntax Silver badge

      Re: Satnad remains the problem

      "It's a policy of permanent retrenchment, and it's hampering Microsoft's ability to grow or innovate, leaving them mostly just iterating yesterday's cash cows rather than pushing forward with new ideas."

      A little unfair. What he's doing is moving to a subscription model. That's the future's cash cow. Yesterday's enforced re-buying of products based on lock-in and periodic introduction of changes to data formats wasn't as predictable.

      1. Anonymous Coward
        Anonymous Coward

        Re: Satnad remains the problem

        moving to a subscription model

        If that means no more change for the sake of a new release, then go for it! I'm sure people would pay for that - just a shame it's 2 OS releases too late.

      2. bombastic bob Silver badge
        Devil

        Re: Satnad remains the problem

        "What he's doing is moving to a subscription model"

        and doing everything possible to leverage everyone into it.

        Micro-shaft's attitude towards their customers is their biggest problem. SatNad would be #2. Pun intentional.

    4. LDS Silver badge

      "is on course to become the world's default desktop OS over the next 2-3 y"

      Only because Windows 7 support ends within the next 2-3 years... and new applications may not support it anyway.

      Windows 10's user-data slurping and continuous upgrades really pissed off a lot of users. It gave a lot of users who previously had no issue with Microsoft a reason to hate it.

      Windows 10 on ARM will suffer from the same stigma.

      And it's still interesting that despite first being offered as a free upgrade, and then employing every malware tactic - bar those that would have had Nadella jailed - to install itself surreptitiously, it's still NOT the world's default desktop OS... if I were Nadella, I would ask myself why - but that would require modesty, a quality people like Nadella don't understand.

      1. Sandtitz Silver badge

        Re: "is on course to become the world's default desktop OS over the next 2-3 y"

        "Windows 10's user-data slurping and continuous upgrades really pissed off a lot of users."

        Perhaps most of us technical people. The average Joe Sixpacks out there are oblivious to the 'telemetry' slurpage. People I interact with (end users) have never mentioned the whole thing when served with a new Win10 computer. Equally the same end users are buying Android phones en masse.

        "And it's still interesting that despite first being offered as a free upgrade, and then employing every malware tactic - bar those that would have had Nadella jailed - to install itself surreptitiously, it's still NOT the world's default desktop OS..."

        I think the people who updated to Win10 were mostly home users and some users at smaller businesses. I sure as hell couldn't be bothered to upgrade any of my clients to Win10, because while the update was "free" it would still require lots of work and in the end Win10 wouldn't run the users' Office and other software any better. The Win7 death date has been pretty far in the future so more likely the computers would be replaced before that date anyway.

        Being 'free' did help Win10 market share at home users because people usually like the notion of having free stuff, whatever it is. Those home users are also more likely to invest in the Store app.

        1. Richard Plinston Silver badge

          Re: "is on course to become the world's default desktop OS over the next 2-3 y"

          > are also more likely to invest in the Store app.

          Software, especially Store apps, are _not_ an 'investment', they are a cost.

  12. Doctor Syntax Silver badge

    "Suez, did you say? Never heard of it. Is it a fish?"

    No, it's a waste disposal company. French but operating in the UK. Come Brexit will we have to tidy up our own waste?

  13. Anonymous Coward
    Anonymous Coward

    Sinofsky is a dick

    Not an intelligent comment, I know, but it has to be said.

    1. Anonymous Coward
      Anonymous Coward

      Re: Sinofsky is a dick

      Trashed Windows, and made off with millions. Smart guy, for those who don't need to endure his creations.

      1. bombastic bob Silver badge
        Unhappy

        Re: Sinofsky is a dick

        not just Sinofsky at fault here. THIS person too:

        https://en.wikipedia.org/wiki/Larson-Green

        inventor of "the ribbon", "the metro", and other horrible things.

  14. Anonymous Coward
    Anonymous Coward

    Not as bad as RT

    is hardly a glowing recommendation to use it.

    I would agree with the comments above, however, that it is perfectly straightforward to run legacy programs in a VM.

  15. Sil

    Not so bad

    Windows RT was a bad idea.

    The communication on Windows RT was even worse, as was the confusion with Windows 8.

    Still, the Surface 3 was an outstanding tablet, which I used daily until it died, and I never regretted its purchase..

    For the money, it was a very interesting proposition: Office for free (Office for iOS & Android would come years later), a year of Skype calling, an outstanding screen, USB, microSD expansion, the only good stand in the industry, and a great keyboard cover (extra $).

    Would I have preferred to get a full Windows? Absolutely. At the time, I was debating whether to wait for a Dell 8-inch tablet.

    Still, for a mostly email-internet-Office usage, with goodies like Netflix, Audible, and other decent Windows Store apps, and the occasional game for young children, it was a great tablet.

    1. hplasm Silver badge
      Paris Hilton

      Re: Not so bad

      "... it died, and I never regretted its purchase.."

      It died. And that wasn't a regret?

      1. Anonymous Coward
        Anonymous Coward

        Re: Not so bad

        It meant (s)he could buy something different.

  16. John Styles

    I bought a very cheap Windows 8 tablet (not RT). It was bafflingly bad. Like someone had never seen a tablet, but had one explained badly to them while not really listening, and then half-heartedly implemented it from that description whilst watching YouTube videos. Or something like that.

    The level of group-think required to think this was remotely a good idea beggars all belief.

  17. JimmyPage Silver badge

    68000

    Even at uni, I remember thinking the 68000 family was much better thought out for memory management than the upgraded Intel 8080, which is basically what every Intel chip since has been.

    Who remembers the Sinclair QL ?

    1. Anonymous Coward
      Anonymous Coward

      Re: 68000

      Maybe, but IBM chose the 8086/8088 for its PC and couldn't stop it being cloned... if the same had happened to Apple's machines, 68000 descendants would probably rule the world today - but then Apple probably wouldn't exist any more.

    2. Anonymous South African Coward Silver badge

      Re: 68000

      IIRC Linus started Linux on the QL... May be wrong though, if so, apologies.

      And Magnetic Scrolls' The Pawn was also written for the QL.

    3. bombastic bob Silver badge
      Unhappy

      Re: 68000

      doesn't the 68k have 16k "pseudo-segments" due to jump limits (or something similar)? I remember dealing with that while experimenting with PalmOS.

      once we got 32-bit mode on x86, those problems disappeared. 68k never really overcame that, as far as I know.

  18. karlkarl Bronze badge

    Microsoft... Just two hints...

    1) Don't make it a locked down piece of *sht* and this platform will dominate.

    2) Keep your terrible mobile build systems to yourself and let us developers use standards like CMake; we developers will do the rest and churn out more apps than you could ever dream of.

  19. karlkarl Bronze badge

    Different CPU? Who cares

    Why do people keep mentioning that ARM is awkward for them? Remember, C solved this for us idiots back in the early 80s. If done properly, a C / C++ developer doesn't even need to know what CPU architecture their code is running on.

    The issue with Windows RT is that it was so crippled we had to fsck around cross compiling from the terrible Visual Studio.
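    [A small illustration of the "done properly" claim above - not from the commenter, just a sketch: reading a little-endian 32-bit value byte by byte, so the same C compiles and behaves identically on x86, ARM, or anything else, regardless of the host's endianness or word size.]

    ```c
    #include <stdint.h>

    /* Decode a little-endian 32-bit integer from a byte buffer.
       No casts through uint32_t*, no endianness assumptions: the
       shifts spell out the byte order, so the result is the same
       on every architecture. */
    static uint32_t read_le32(const uint8_t *p) {
        return (uint32_t)p[0]
             | ((uint32_t)p[1] << 8)
             | ((uint32_t)p[2] << 16)
             | ((uint32_t)p[3] << 24);
    }
    ```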

    1. Anonymous Coward
      Anonymous Coward

      "If done properly, a C / C++ developer doesn't even need to know"

      Sorry, but that is not true for a whole class of applications that need highly optimized code targeting a specific processor - i.e. anything using SSE instructions, or highly parallel code where optimizing the sharing of work across cores is very architecture-specific, and sometimes even CPU-specific. You may code it yourself, or use specific libraries, but you still have a dependency on the CPU.

      Sure, if all you write are some simple command line utilities...
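      [To make the point above concrete - my sketch, not the commenter's code: the same four-float addition written three ways. Which path compiles depends on the target CPU, which is exactly the dependency being described; the scalar fallback is portable but loses the SIMD speed-up.]

      ```c
      #include <stddef.h>

      #if defined(__SSE2__)
      #include <emmintrin.h>              /* x86 SSE2 intrinsics */
      static void add4(const float *a, const float *b, float *out) {
          _mm_storeu_ps(out, _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b)));
      }
      #elif defined(__ARM_NEON)
      #include <arm_neon.h>               /* ARM NEON intrinsics */
      static void add4(const float *a, const float *b, float *out) {
          vst1q_f32(out, vaddq_f32(vld1q_f32(a), vld1q_f32(b)));
      }
      #else
      /* Portable scalar fallback: correct everywhere, fast nowhere. */
      static void add4(const float *a, const float *b, float *out) {
          for (size_t i = 0; i < 4; i++)
              out[i] = a[i] + b[i];
      }
      #endif
      ```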

      1. karlkarl Bronze badge

        Re: "If done properly, a C / C++ developer doesn't even need to know"

        Sure, fair point... but what Windows Store app really requires that kind of technical knowledge?

        Heck, most [cr]apps are just drag and drop Unity3D games half-arsed ported to WinStore by clicking the little windows logo in the build settings ;)

        1. Anonymous Coward
          Anonymous Coward

          Re: "If done properly, a C / C++ developer doesn't even need to know"

          That depends on what users and Microsoft expect from "Windows Store Apps". If they have to be replacements for actual Windows desktop apps, they need to be far more powerful than the average mobile/tablet application, which may be limited by its input and output capabilities.

          If Windows on ARM wants to be a real alternative to Windows on Intel you may need applications like Adobe Lightroom, for example.

          And for the latest 7.2 release, Adobe asked for Intel's help to optimize its performance on multicore machines using Intel's TBB library. This is a C++ library which is portable across several OSes, but I guess it is designed around how multithreading needs to be implemented on the x86 architecture, taking advantage of specific instruction sets.

          In many ways, it was simpler to write portable code when CPUs had only one core and applications were single-threaded, with no need for advanced architecture-specific extensions like SSE. You can still write such code, and many applications don't need such advanced coding, but there is a whole class of applications that does.

          Are you going to accept far fewer features and lower performance on ARM machines, and use them only for simpler tasks? That doesn't look like a recipe for success.

      2. Anonymous Coward
        Anonymous Coward

        Re: "If done properly, a C / C++ developer doesn't even need to know"

        How many fart apps require CPU specific code?

      3. bombastic bob Silver badge
        Meh

        Re: "If done properly, a C / C++ developer doesn't even need to know"

        "highly optimized code targeting specific processor"

        in the open source world, this is often dealt with by proper software design and the use of 'autotools' (aka the 'configure' script) when compiling.

        I guess this might be a problem if there are too many variations in the ARM world, though. you'd need a different binary for every architecture permutation. Given how floating point might be implemented/emulated on the different ARM platforms, this is NOT unrealistic.

        1. Anonymous Coward
          Anonymous Coward

          "dealt with by proper software design and the use of 'autotools"

          Evidently, you never wrote one.

          Autotools and configure won't help you when you need to use very different libraries with very different APIs and usage patterns, which will impact the organization of your very own code as a result - even lots of #ifdefs won't help you much.

          Do autotools help you code for Win32 and X at the same time? Not at all; the APIs and usage patterns are so different they are of no use. Either you use libraries that attempt to hide the differences, at the price of reduced performance and features, or you code natively to exploit all the features.

          With CPUs, it is no different. But keep on believing in pink unicorns, especially the open source ones...
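          [For what it's worth, the kind of thin #ifdef wrapper this exchange is arguing about looks like the sketch below (mine, not the commenter's). It works for a leaf function like sleeping, which is roughly the autotools-era pattern; the commenter's point is that it does not scale to whole divergent APIs like Win32 vs X11.]

          ```c
          /* One portable entry point, two platform-specific bodies. */
          #ifdef _WIN32
          #include <windows.h>
          static void sleep_ms(unsigned ms) {
              Sleep(ms);                      /* Win32 API */
          }
          #else
          #include <time.h>
          static void sleep_ms(unsigned ms) {
              /* POSIX: split milliseconds into seconds + nanoseconds */
              struct timespec ts = { ms / 1000, (long)(ms % 1000) * 1000000L };
              nanosleep(&ts, NULL);
          }
          #endif
          ```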

  20. Anonymous Coward
    Anonymous Coward

    It's the lack of commitment that will kill Windows on ARM

    MSFT management are a bunch of quitters, these days.

  21. jelabarre59 Silver badge

    Surface, but...

    Actually I wouldn't mind getting a SurfaceRT tablet/KB setup, presuming two factors: one, that I can get it dirt cheap and two, that it can be hacked to run Linux.

    1. Anonymous Coward
      Anonymous Coward

      Re: Surface, but...

      Even the microsoft fanboys are moving to Linux...

    2. John Styles

      Re: Surface, but...

      Better to support hardware whose manufacturer does support Linux.

      (Don't get me wrong, some of my best friends spend their time hacking Tesco Value Toasters etc. to run Linux. But).

  22. Anonymous Coward
    Anonymous Coward

    Microsoft has an identity crisis.

    Also, a case of ADHD ('attention deficit hyperactivity disorder'), but I attribute this to its CEO SatNad.

    1. bombastic bob Silver badge
      Facepalm

      Re: Microsoft has an identity crisis.

      "Also, a case of ADHD"

      ~groan~

      I'll avoid acting like an easily offended snowflake and simply state that, like "Ass-burgers", AD[H]D (aka non-linear mindedness) is actually an ADVANTAGE for engineers, artists, musicians, and other 'creative' types. Thom Hartmann's "Hunter/Farmer" model explains it pretty well.

      no downvote from me, either. just a groan. your ignorance is forgiven.

  23. Paper
    Flame

    Useless gift

    I got the Windows RT tablet as a gift one Xmas. At first I thought, how cool. Then when I realised how limited it was, it was basically useless. It's sitting in my cupboard collecting dust.

    Microsoft quickly discontinued the product and pretty much abandoned its users. That pretty much put me off buying anything Microsoft ever again. Never was a customer, never will be. Congrats, Microsoft.

    Microsoft refuses to open it up to allow people to develop apps openly for it, or to add Linux or Android to it. Whenever they find a potential hole in the RT OS that could allow people to jailbreak it, they release updates to patch it away. I hate them.

  24. aqk
    Megaphone

    Windows on a Snapdragon desktop/laptop?

    I'm a geezer with failing eyesight. As such, I tend to eschew doing much on a smartphone, except to USE the phone, and perhaps tinker with Android.

    I'm waiting for a Snapdragon system with a LARGE SCREEN - a laptop (15"+ preferably) - that will run Win10 and/or Linux. And not go dead after 2 hours.

    Heck even a mainboard that I could plug my own peripherals into and run a couple of 26" monitors.

    Yes, I know about that little ASUS Qualcomm-powered tablet. But it is still too pricey.

    Anyone have any news on projects that will satisfy me?

  25. The Sharpinator

    Ahh the hopes and dreams

    About spot on!

    Anyone remember 2001? Looking back at the old headline news of "United States vs Microsoft Corporation" is actually comical. Poor, poor Netscape, if only it hadn't been bullied by big bad MS! Yeah, I think not! The browser was a pile of steaming dung back then and is now a dead steaming pile buried under a boatload of fail.

    It is not really that cut and dried, developing software at that level. Let's create a system that will do everything awesomely and sell it to the majority. That is what MS did with "Windows XP + Office", and then it got slapped like a harlot wife by the government. So after having lived through that once, one would suspect a tad bit of apprehension about just jumping in and trying to take over the world again.

    So with that in mind, one would suspect that it is not so much a question of ability as of desire. The mobile world is still evolving, much like the PC world did in the 80s. I can see it from MS's perspective: we are still sitting on a boatload of cash... do we really want to relive that again? How about we sit back and let someone else fail! As dynamic and ever-changing as the tech industry is, there is one thing MS has lived and learned the hard way: success is not permanent and failure is not fatal.


Biting the hand that feeds IT © 1998–2019