AMD: Windows-8-on-ARM app compatibility is relative

COMMENTS

This topic is closed for new posts.
  1. JDX Gold badge

    .NET

    Is it reasonable to assume any .NET application will port just fine?

    Is it reasonable to suggest a tool could recompile x86 .EXE -> ARM .EXE?

    1. Ian Yates

      Certainly, in theory, pure .NET apps will be fine assuming MS release an ARM-based .NET framework.

      I know a lot of companies that just wouldn't bother considering Win8 on ARM if they don't.

      WOROW (Write Once, Run On Windows)

      1. confused one
        Stop

        .NET on ARM already exists

        .NET on ARM already exists on Windows CE. Microsoft already has a fair amount of experience with the ARM platform.

    2. Ru

      Mostly

      Any mixed assemblies (eg, native C or C++ and C++/CLR in the same binary) would need to be recompiled, and any assemblies performing unsafe operations might well fall over because programmers suck.

      Pure CLI stuff should be freely portable.

    3. JEDIDIAH
      Linux

      Porting...

      Of course stuff should "port" just fine. The real issue is whether the relevant software vendors will bother with that process. It's not really a question of feasibility but of whether the vast majority of publishers will ever bother.

  2. Tom 7

    This isn't Windows 8

    It's Windows 'please please please buy a whole new set of software right from day one rather than discovering you have to upgrade everything later'

    I do feel that MS's inability to understand how their old software works and transfer it to ARM raises far too many questions about their ability to write decent code in the future.

    1. Anonymous Coward
      Anonymous Coward

      Have I missed something? I'm sure this is about non-MS applications not working due to the arch shifting from x86 to ARM - which isn't too unreasonable.

      I'm sure most people were/are expecting a x86 VM, though, similar to what Apple did in the reverse.

      1. rav
        Thumb Up

        yes you have missed something

        All the software that you know and enjoy will not run on Windows 8 if you buy a notebook with an ARM CPU. But if you buy a notebook with an AMD APU or Intel CPU and run Windows 8, it will probably run fine. Maybe. Who knows what evil lurks in the hearts of the boys from Redmond.

      2. cloudgazer

        an intel VM on arm was never an option

        Not given the fundamentally lower performance of ARM CPUs compared to Intel's. For that kind of emulation to work you need the new platform to have a significant raw performance advantage.

        MS will have to chivvy their developers into either recompiling for the new hardware or switching to a bytecode based language that can run on both.

    2. Anonymous Coward
      Anonymous Coward

      @Tom

      Eh? One thing that MS are very good at is making sure that your old software runs on your new version of Windows. They are very good at understanding how their old software works, sometimes to the detriment of the new OS.

      1. Tom 7

        But to get the software to run

        they just copy the code and make sure the new version of the OS behaves like the old - hence the performance hit.

        Also consider why they spent so much money fixing the OOXML standard so that it would allow x86 binaries to be run within the standard. Also consider why, if you get some old Word documents out of storage, they don't format well, if at all, and Excel spreadsheets generate random numbers.

        If that's understanding how your software used to run...

  3. Gordan

    For the umpteen bazillionth time...

    ... this can be done without the software vendor porting the code, and not only can it be done, but it has been done before with good results. And not only has it been done, it has been done three times in commercial products:

    1) x86 programs running on DEC Alpha: http://en.wikipedia.org/wiki/FX!32

    2) x86 programs running on Transmeta: http://en.wikipedia.org/wiki/Code_Morphing_Software

    3) x86 programs running on PowerPC: X-Box 360 runs X-Box (x86 CPU) games

    And that's just the examples of non-emulated solutions that achieve near-native speeds by using real-time cached binary cross-compiling.

    Now can we please stop hyping the "your old Windows apps won't run on ARM" stories uttered by technologically ignorant corporate stooges? It's up to MS - they'll either provide a caching binary cross compiler or they won't. And since they have already clearly demonstrated on the X-Box 360 that they have the required technology available, the evidence is strong that they can provide this with the ARM version of Windows 8.

    1. Voland's right hand Silver badge
      Devil

      You are missing the point

      The power of ARM is in the offload.

      Arm SOCs have offloads for anything and everything - media, network, security, encoding, decoding, etc.

      These have been designed _WITHOUT_ a common software architecture in mind. There is no way in hell to abstract them to a common API which high-level programs designed for Windows can use. It is a thoroughly balkanized platform, and this is the reason for it being so popular - you can create whatever obscenity you want to satisfy reqs from business development.

      That may not matter for things like Word which need little or no offload, but it will be a serious hindrance for the biggest market driver in the windows world - Games.

      In any case, whatever AMD, Intel, etc are saying here is irrelevant, as they are not the ones facing the exact part which makes ARM a thoroughly balkanized platform. It will be more interesting to hear what people like Carmack, Romero and the like say about it. They are the ones actually facing what makes ARM powerful and a nightmare at the same time. Funnily enough, they are strangely silent on the matter...

      1. TeeCee Gold badge
        FAIL

        "...the biggest market driver in the windows world - Games."

        Gosh, really?

        All those millions of people in the world sitting at their desks are all playing games? Who knew?

        Meanwhile, on *this* planet, if there is actually one gaming machine for every 1000 "bog" Windows installations I'd be really rather surprised.......

      2. Gordan

        Re: You are missing the point

        It seems to me that you don't actually have a clue what you're talking about.

        1) Regarding "offloads": the only "offload" engine reasonably commonly available on ARM is for AES crypto. Marvell Kirkwood has it, and it is pretty effective; see here:

        http://www.altechnative.net/?p=174

        Tegra2 also has it, but it isn't as fast (about 1/3 of the performance of doing it in software).

        2) Binary cross-compiling has nothing at all to do with abstracting anything to a common API. It is about translating binary machine code for one architecture into binary machine code for another architecture.

        3) Gaming machines constitute a very small minority (albeit a vocal one) of Windows deployments. The vast majority of Windows machines are used as corporate desktops or by people at home who never play games. To corroborate this, consider that Intel's GPU market share (measured by the number of shipped GPUs) exceeds that of ATI/AMD and Nvidia combined. Now consider what Intel GPUs' gaming performance is like, and you can draw your own conclusion about what fraction of Windows users are actually gamers.

      3. JEDIDIAH
        Linux

        Acceleration APIs

        > These have been designed _WITHOUT_ a common software architecture in mind. There is no way in hell to abstract them to a common API which high level programs designed for Windows can use. It is a throughly [...]

        You mean no stuff like OpenGL, OpenCL, and VDPAU?

        The fact that such "common software architecture" exists is why cheap PCs can run circles around ARM gear. PCs use the same tricks that ARM does. Everyone has been using the same tricks as ARM does for not just years but decades. It's pretty standard industry practice across the board.

    2. rav
      Thumb Down

      ... and therein's the rub!

      NEAR NATIVE SPEEDS. That translates to significantly less.

      1. This post has been deleted by its author

      2. Gordan

        Re: ... and therein's the rub!

        Not necessarily significantly less at all. The Transmeta Crusoe compared very favourably in performance to its contemporary Pentium III (excluding floating point calculations). Similarly, the performance of the PowerPC CPU in the X-Box 360 isn't actually all that much greater than the Pentium III's, depending on what you are doing with it (I did some comparisons between the 360's Xenon CPU and the Pentium III using some home-brew, highly optimized code, and the Pentium III actually beat the Xenon, though that could be partly because GCC sucks even more on non-x86 than it does on x86). Anyway, the point of mentioning the X-Box is that the original X-Box games run at least as fast on the 360 as they did on the original console, without a huge amount of extra CPU power required to make that happen - which proves that this process can be made very efficient.

    3. CheesyTheClown
      FAIL

      Glad you said it, but insanely wrong

      Let me respond to each item .

      1) FX!32 was utter trash; it didn't work very well at all, and that's possibly the #1 reason why people chose far less capable processors from Intel rather than buying Alphas. Additionally, the fact that smaller developers couldn't afford Alphas to develop on pretty much killed the platform. On top of that, developing applications for Alpha required the Visual C++ 4.0 subscription model, which cost substantially more than a copy of Visual C++. This completely eliminated the possibility of mom and pop vendors slapping together a blind port of their software hoping it would work.

      2) Transmeta's processors were an entirely different circumstance. Think of them as running an entire emulated system from the very start. Their code recompiler started during boot and translated all code to the new architecture. This would be more similar to running QEMU than to providing an application compatibility layer, and it is a SUBSTANTIALLY EASIER TASK than making an app for one processor run on the OS of another processor. Apple managed the latter using Rosetta and still had tons of compatibility problems, but including the PPC system libraries with the x86 OS made it much easier. It also allowed Apple to hand-code some of the more difficult thunks.

      3) The X-Box emulator on the X-Box 360 is more similar to a full system emulator than to an app emulator. It is still difficult as hell, and the little-endian to big-endian conversions still consume a massive amount of the X-Box 360 system's capabilities, but the fact is, this is still a full system emulator. There is an entire copy of the x86 system software with many hand-coded thunks (possibly a tremendous number, given the possibility of automating much of it) to make the emulation smoother. There were still problems with the emulation, but Microsoft did a great job of it anyway. However, performing full dynamic recompilation of an entire system when the host system is a speed demon relative to the emulated system is trivial compared to developing a subsystem.

      There are some issues involved with making x86 work on ARM which should be addressed.

      1) ARM is the same endian as x86, which allows high-end dynamic recompiler optimizations to be performed. In fact, it is entirely possible to treat an x86 executable or DLL as a source code listing and recompile the file to ARM without needing dynamic recompilation in many cases. This will be a total flop in the case of hand-coded instruction-level parallelization (or code generated by the Intel compiler for SSE). But endian conversion is the #1 limiting factor when emulating another system.

      2) ARM handles unaligned memory access far worse than Intel does. Hell, x86 is the only processor that doesn't suck on a biblical scale at this task. It has to do with Intel always trying to run 8-, 16- and 32-bit code faster than the previous generation, so they designed their cache subsystems and prefetchers to handle this beautifully. Sadly, ARM really sucks big old ding dong on this task. But if you have code written for and tested on ARM, this isn't an issue. When emulating x86 on ARM, however, it will be necessary to handle this gracefully or performance will suffer terribly.

      3) Code compiled for instruction-level parallelization will not function properly under dynamic recompilation. The parallelization will most likely need to be unrolled (similar to unrolling loops), which can produce chains of as many as 16 times more instructions per task than otherwise. NEON does not map even remotely closely to SSE. IMNSHO, NEON is often more similar to the less complex Altivec instructions than to SSE, and therefore dynamic recompilation would require the recompiler to be a long-instruction-chain auto-parallelizing compiler in order not to destroy performance completely. The good news is, auto-vectorizing code which was previously vectorized is not too hard compared to auto-vectorizing code which was otherwise linear in nature.

      So, there's an option here. Unlike in the days of FX!32, which as I mentioned earlier sucked (as does the x86 emulation on Itanium), Microsoft shipped Windows 7 with a full copy of Windows XP in a virtual machine. This design allowed applications that ran better on Windows XP to run almost fully integrated with the desktop, using RemoteApp via Remote Desktop on the virtual machine. Using the same technology, it would be possible to install Windows XP or Windows 7 in an emulated virtual machine and integrate it the same way. Performance will still be utter rubbish compared to an app running natively - applications like video players will be awful and games will suck BADLY - but pretty much all applications will run nicely this way.

      Now this brings us to the last issue.

      Developer seeding.

      I'm not going to write the same bloody application for two architectures. I'm not going to carry around two laptop computers so I can hand-optimize my code on both chips before releasing anything. Therefore, NVidia or whoever decides to emulate an x86 to allow legacy x86 apps to run on ARM had better also provide a similar ARM emulator for x86 laptops. The laptop I'm coding on right now (though I'm writing this while I download something) is a Core i7 with 16 gigs of RAM and a GeForce GT540M. I would be in shock to see anything that comes close in performance to this machine in the ARM world for a while to come. I buy a new development laptop once a year and try to nearly double performance over the previous one each time if possible. So, until the ARM guys put out a competitive development computer, I'll need some way of coding for ARM if I'm going to write anything for ARM. I do not code .NET; I write native code and often count clocks. .NET apps are nifty for the surrounding applications, but when it comes to video codecs, video filters, etc... those have to be coded in native code.

      So, while you made some interesting remarks regarding what you thought you knew about emulation, you were REALLY far off.

      1. Gordan

        Re: Glad you said it, but insanely wrong

        1.1) You are discussing why FX!32 was a commercial failure on an economic and political basis, not a technical one. So I'm going to ignore this argument entirely.

        1.2) I fundamentally disagree that it is easier to run everything in cross-compile mode than to run just one process in cross-compile mode. What are you basing that on?

        1.3) My experience of the Xenon PPC CPU in the X-Box 360 is that it actually performs very poorly. Judging by its performance running natively compiled C code compared to x86, I would say it hasn't got a snowflake's chance in hell of running the original X-Box's Pentium III code in full emulation at the same or better performance than the original console.

        On the issues that need to be addressed on ARM:

        2.1) ARM has its own SIMD engine (NEON), and it should be possible to translate SSE assembly to NEON assembly in a reasonably straightforward way.

        2.2) x86 performance suffers if your data is straddling cache lines, just as much as on any other CPU. The difference is that x86 has always had non-word-alignment fix-up done transparently in hardware. ARM has only got that with ARMv7 - and the same performance penalty still applies. The transparent fix-up merely means that your dodgy code (and yes, it is a coding/compiler error, not an architecture issue) will actually work, rather than read garbage for its data on an unaligned dereference and crash. On ARMv5 the alignment issue can be fixed in software, but that comes with an additional performance penalty. In general, on ARM a non-word-aligned memory dereference is treated (rightly!) as a bug. On x86 the mentality is that if you can't see it, it's not a bug - something that is, sadly, coming to ARM with ARMv7, no doubt purely to pander to the masses of incompetent programmers who do things like allocate arrays of char (byte-aligned) for use as buffers, read in a data packet, then cast it into a struct (word-aligned), and then wonder why the data they read back is garbage.

        Finally - if your idea of an average (and optimal for average use) machine is a Core i7 with 16GB of RAM, then you are welcome to continue being the bleeding-edge power user who thinks that the faster and more powerful their machine is, the better programmer they are. This is akin to chess players who think that the harder they bang the chess piece on the board, the stronger the move. So perhaps you need to grow up before partaking in this conversation. Personally, I make sure that my programs build and run correctly on multiple architectures with multiple compilers, and that performance is acceptable even when running on a SheevaPlug. Well-written programs don't require a 4GHz CPU and 16GB of RAM to run.

      2. Tom 7

        Core i7 16 gigs of ram gt540m I wish...

        While ARM would be of little use to you, a dual-core ARM at 1.2GHz should be more than the equivalent of the 400MHz Pentium that is more than adequate for most people's office requirements - once you get past Wx. I've seen an 800MHz ARM handle OpenOffice with positively lively performance compared to a friend's Vista on a 1.7GHz dual-core Intel attempting Office 2003.

        I would guess W8 for ARM would be aimed at handhelds, which would not even be used for generating documents, merely viewing them.

        When our finance officer gets one and expects the whole world to upgrade to W8, we will just use a Linux box to generate OpenOffice docs from his originals - it's very easy to 'optically' compare the MS Office and OpenOffice docs and send any (if there are any) that show a noticeable difference for manual correction!

      3. asdf
        Coffee/keyboard

        lol

        > sucks big old ding dong

        That is all.

    4. JEDIDIAH
      Linux

      Now for the zillionth time...

      Emulation of x86 binaries worked on something like Alpha only because, at the time, the Alpha architecture was MUCH BETTER than what it was emulating. There is always a huge performance hit for emulation; if you have enough extra capacity/performance then this is not a problem.

      This is why Alpha emulation of x86 binaries worked and why stuff like vmware works somewhat.

      ARM is on the other side of the equation. So "emulation" will likely be painful and unpleasant.

  4. John Smith 19 Gold badge
    Happy

    @Gordan

    "It's up to MS - they'll either provide a caching binary cross compiler or they won't. And since they have already clearly demonstrated on the X-Box 360 that they have the required technology available, the evidence is strong that they can provide this with the ARM version of Windows 8."

    So if it doesn't happen, it won't be because they can't.

    It will be because they don't *want* to.

  5. rav
    WTF?

    Of course x86 apps will have to be recompiled for ARM...

    ...that's a no-brainer.

    Lets define some nomenclature first.

    Software runs on x86 computers for real-world design and gaming, etc. Applications are tiny pieces of code written for ARM RISC.

    In fact they are so tiny and insignificant that nobody in the world can bring themselves to use the word "application" to describe them. So they are "apps" - an appropriate shortening of the word, as the act of shortening further emphasizes an "app's" insignificance. Nobody is calling CATIA or MathCAD a "soft". Hmmmm, it would be interesting to see how ARM benches Super Pi. Not likely, as it would have to be recompiled and debugged first.

    Software like AutoCAD Civil 3D, which is barely stable on Windows 7, will never be compiled to run on an ARM RISC CPU.

    What x86 developer is going to recompile and debug a few hundred thousand lines of code so that it can run unstably on a tiny little CPU in a handheld device that is barely readable and has no keyboard?

    How many x86 developers recompiled their code to be PC and Apple compatible?

    And why is the industry so willing to grant ARM - this insignificant, puny little ant of a CPU - the status of usurper? Why is the industry so willing to go to the additional expense of compiling compatible OSes?

    The thousands of legacy titles out there that run on Windows XP will no longer be viable in a Microsoft ecosystem native to ARM.

    So basically the consumer is getting screwed again. At least Mac buyers chose to get hosed.

    So the software that you now own, running on whatever computer you are using to read this, will be dead in the water on Windows 8 if it's feeding an ARM RISC CPU.

    So much for STANDARDS.

    1. Ru
      Facepalm

      Point: Missed!

      x86/x64 is great for desktop systems, the sort of high power big box workstations that will run content creation software that requires hefty resources. It sucks for the sort of small, low power, easily portable devices that content *consumers* are more interested in. Who the hell wants to run an IDE or a CAD package on a pocketable tablet?

      This single-entity 'industry' you're grumbling about doesn't exist. And even if it were to pop into existence, it wouldn't care. You seem to be searching for some sort of 'WinARM' conspiracy here. There isn't one.

      As for hosing consumers, that's a little trickier. Most people use remarkably few bits of software, and the bits they do use are generally under current or recent development - pretty far from the sort of legacy software that is at risk here. If you want to continue running all your old crap, feel free to keep running it on your desktop. MS can't afford to simply abandon Intel/AMD; the platform will be around for years.

      1. rav
        WTF?

        RE: Point: MISSED

        I do run Civil 3D on my notebook in the field, and many other surveyors and engineers do also, as well as other civil design, GIS and GPS post-processing software (NOT APPS). ARM can't handle that workload, which at a minimum requires a 2.5GHz multi-core CPU and at least 2 gigs of RAM (more is much better), not to mention a decent GPU.

        And it all runs fine on XP. It crashes constantly on Vista, and not so much on 7 - which only emphasizes the point that another OS will not make life simpler and less expensive for the consumer.

        And when AMD APUs start appearing in smaller, more efficient notebooks, then I'll load up my OEM WinXP (I've got a few stashed) and all my software and take an even smaller notebook into the field. And when I decide to upgrade my software on my schedule, not when MS marketing tells me to, then I'll buy a new OS. And you can bet that ARM RISC will not take a moment of my time. ARM RISC competing with x86 in the real world? Laughable.

        THAT IS MY POINT.

        And besides ARM is running off at the marketing mouth about how good it will be for laptops and desktops: NOT.

    2. CheesyTheClown
      WTF?

      Like sentence fragments much?

      "How many x86 developers recompiled their code to be PC and Apple compatible?"

      Autodesk, for example, does this in their new applications. They're developing now using Qt, which allows them to easily compile for both platforms.

      You screwed the pooch with technical gibberish better than I ever have in the past. I wonder if in person you are able to assemble coherent thoughts with greater depth. BUT....

      You don't compile an app for Windows and Mac (PC apps run just fine on Apple hardware, btw, if you boot into Windows on it). You PORT an application to run on Windows and Mac OS X. It requires developing a great deal of new code to support the new user interface and system libraries. Developing cross-platform code has become a great deal easier over time, but in reality, the differences between GCC (and LLVM) and Microsoft Visual C++ are substantial. System headers have nearly nothing in common. Code accessing the network is a mess, especially when you consider that the BSD sockets library and the Windows Sockets library are similar but not directly compatible. The UIs have absolutely nothing in common.

      On the other hand, your comparison would make more sense if you asked "How many Mac PowerPC developers compiled their code for x86 when the tools came out?" The answer would be about 90%, and 99% of them did it successfully - and the same happened in the reverse direction.

      Recompiling code isn't the problem. The problem is chip level optimizations. The problem is architecture specific profiling. The problem is memory alignment. And a few others. The fact is, most applications will port quite easily between platforms. But just because the code has been ported doesn't mean it will perform well.

      AutoCAD will probably run pretty well through emulation. Also, it uses almost no chip-specific code; it doesn't need it. If Autodesk judges there will be a market for AutoCAD on Windows ARM, I'm sure they'll port it to the new platform. But NVidia and the others will have to prove there is a market base for it.

      I'm a bit amazed that NVidia isn't trying to sell developer systems to developers to start getting code ported.

      The thousands of legacy titles you're referring to in your posting may not be an issue. Most of those thousands of titles are most likely not processor-intensive applications anyway. Running them in an emulated virtual machine (QEMU-style) and integrating them with the desktop, similar to how Microsoft does the XP emulation for Windows 7 through Virtual PC, will be just fine. With a decent set of drivers for the guest OS, it might even be good enough for AutoCAD.

      1. rav
        Thumb Down

        @cheesy

        Actually Autodesk just recompiled for the Apple OS. How many years has it been?

        Since you are NOT a CAD user, you don't know how unstable AutoCAD C3D is running on Windows Vista and Windows 7. It crashes constantly. So a new operating system running on CPUs that are not qualified by Autodesk or Dassault or any of the other real-world design software developers is only asking for trouble.

        Minimum qualification for Civil 3D is a 2.5GHz CPU with discrete graphics, 2 gigs of RAM (4 is better) and a 300 gig hard drive. There is no ARM CPU or ARM bus that is this capable.

        Your criticism of my writing does not support your argument. It is only the criticism of a pompous butt.

    3. Anonymous Coward
      Anonymous Coward

      The more of your posts I read

      the more I realise you have no idea what you're talking about.

  6. Mage Silver badge

    DOS

    DOS programs will run better than on NT4, Win2K, XP, Vista or Win7 ...

    As long as you install the third-party DOSBox for ARM: the x86 DOSBox VM runs DOS programs better than NTVDM does on x86 XP. It also runs them on Linux.

    16-bit Windows will run *IN* DOSBox on ARM quite well too.

    Java, Flash, Air, .NET applications will all run.

    It's 32bit and 64bit Native Windows applications that either won't run at all, or like a pig in a VM.

  7. takuhii
    Paris Hilton

    I don't see what all the fuss is about. And no, I'm not trying to be funny or clever.

    Basically, Windows 8 will run on ARM, but it will still be primarily aimed at x86 type processors, right?

    ARM processors tend to be system-on-chip, so surely Android and Apple type hardware. This means more people can branch out into Windows tablets and other portable devices without relying on Windows Mobile or a hacked version of Windows.

    I just don't understand the whole "my x86 version of MS Word won't run on ARM" fuss. Did it run on ARM to begin with? If MS were abandoning x86 (Intel - how would you refer to the new chips, 586, 686?) chips altogether, then fair enough, kick up a stink. You don't abandon your core chipset for an "inferior" one.

    REALLY, what is all the fuss about? Someone enlighten me ;)

    A Paris tag cos she's just as confused as I am?

  8. Anonymous Coward
    Anonymous Coward

    Is this a modern day version of

    Advanced RISC Computing?

  9. Sean Baggaley 1

    Blimey.

    Anyone would think nobody had ever done anything remotely like this before. Except they have: Apple have transitioned their platform from Motorola's MC680x0 CPU series to the PowerPC series.

    And, more recently, they did it again, moving from PowerPC to Intel.

    The solution was to provide "Universal" builds for applications, including code compiled for both the legacy platform and for the new one. The OS knew what platform it was running on, so it ran only the code it knew would run on the user's system. For popular legacy PowerPC applications that were unlikely to be upgraded in time, Apple also threw in a PowerPC emulation system. (They even had a full-on emulator for their pre-OS X operating system for a while.)

    Now, Apple made it clear that their approach was a transitional one. (They killed off their PowerPC support completely with the recent Lion release, but PowerPC Macs haven't been manufactured for some time now.)

    However, there's no reason why Microsoft couldn't adopt a similar "Universal Binary" approach for their own applications. My only worry is that Windows' install / uninstall system may need some major surgery to make it work—perhaps moving to a system closer to that adopted by OS X where an application is really a special folder with all its resources inside.

    Those who might complain about storage space should consider that code requires a fraction of the space taken up by a typical application today. The rest is assets—graphics, icons, sounds, etc. So you're not going to see a huge increase in application sizes overall. Games are almost entirely assets, so the code size increase will be negligible for most of those. And if Windows 8 does catch on, expect to see much more asset-rich non-games applications.

    As mobile device sales increase, flash manufacturers will catch up with demand and prices will start to fall again. SSDs are also finally becoming affordable to the high-end consumer market, and will likely be big business too, so don't expect these devices to be stuck at a maximum of 64GB + whatever SD card you can throw at it.

  10. Mr Young
    Coat

    I wonder?

    So, are Microsoft looking at a clean(ish) slate for ARM development or are they just going to add even more to the existing x86 code?

  11. Anonymous Coward
    Anonymous Coward

    isn't it about time that they forked Windows?

    Windows carries a lot of old binaries that MS is forced to try to support, plus some relic DLLs that they are stuck with. Shouldn't MS consider releasing a new Windows that removes all the old stuff (maybe supporting it as a subsystem), and also releasing another fork based on Windows XP?

    Both systems would share the very same drivers, and all new binaries would work on both. But old binaries would only be guaranteed to work with one fork, while the other one _might_ support them via an interpreter that attempts to convert the old calls to new ones.

    They could do what Apple did with OS X, then dump the old system a few years from now!

  12. Richard Plinston

    It is about control

    Microsoft putting W8 (Wait?) on ARM is not about selling tablets to users, it is about controlling the industry.

    When netbooks were first made, they were simple and cheap devices built from DVD-player screens and Atom CPUs, and they ran Linux. This was a threat to MS, so they went to the OEMs and 'suggested' that they only use WinXP, otherwise they would lose their 'discount' on all copies of Windows.

    Later they had to install Win7 'Starter', which increased the hardware requirements and the price. Naturally the netbook market has been nearly killed off, as proper laptops are not much more expensive and run better versions of Win7.

    Now with ARM tablets and netbooks (or transformers) MS are not able to strong arm (ahem) the OEMs into installing Windows, ...yet.

    It seems that the plan is to ensure that ARM tablets or netbooks must only be sold running Windows, or the OEM will lose their discount on all Windows computers they make.

    Whether an ARM tablet runs the user's XP programs is irrelevant because MS want to kill off the ARM machines and have the OEMs make Intel/AMD tablets that _will_ run Windows programs.
