After Microsoft announced that the next release of its Windows operating system would run on ARM chips, Chipzilla headman Paul Otellini and his staff must have received a memo from Intel Thought Central. That missive would have said something along the lines of: It's not news. We welcome it. We will grind the bones of our …
It's not news
There is more to making Windows work for tablets than simply porting to ARM.
You have to assume that both MSFT and Intel know this - unless they are both total idiots.
It's like the joke from 20 years ago: MSFT invent a voice-controlled computer.
At the demo Bill walks up to the microphone and says "see colon slash command dot com, enter".
Of course there is more than that
Windows has tried porting before, and while in theory it works pretty well, the problem is that 3rd party software doesn't follow. It's a pain in the backside to support multiple platforms and always has been. Even if it's a straight recompile, you still have to QA it all. And usually it isn't a straight recompile: a dependency on some 3rd party library or package dooms the effort from the get-go. Without robust x86 emulation built in, any Windows port immediately finds itself in a seriously disadvantaged position.
Compare to OS X, where Apple basically burnt their ships to the old world but provided tools to devs and users to ease the transition. Fat binary support and G3 emulation made the move virtually seamless for users. If anything, Apple are better positioned if they decide to jump ship again to ARM.
Compare to Linux, where virtually all the apps people use are available as source code to begin with. Moving architectures has been part and parcel of Linux from the beginning, and usually just requires recompiling all the apps for the new target. Some flies in the ointment would be WINE, Flash and some other bits and bobs, but largely it's painless.
On Windows it isn't painless unless MS implement a somewhat decent emulation layer. Without that, any port will find itself in a niche. Perhaps Microsoft's game plan this time around is to use Windows as a shell for tablet-y apps but allow a desktop when the user plugs in a dock. The desktop would at least allow users to run MS Office or something, and maybe some other apps that bother to produce a native port.
I wonder why MS doesn't embrace LLVM or similar to reduce some of the pain for C++ apps. If devs compiled to a low-level bytecode then it largely doesn't matter what architecture is underneath. It doesn't help the vast swath of existing x86 apps, but it's better than the current situation.
There seems to be no word from Microsoft as to why they're doing it. Is it just for some publicity, or is there a clear vision?
If they're finally going to properly re-engineer the GUI for the touch screen market, then *finally* they're getting with the programme.
If it is for mainstream Windows, then what do they do for existing x86 software? We all know Microsoft is the king of backward compatibility.
Re: Of course there is more than that
"If devs compiled to low level byte then it largely doesn't matter what architecture is underneath."
Binary translation of non-self-modifying user-mode x86 code is no harder (and no less efficient) than JVM or CLR code. (The hard part about full virtualisation is in handling kernel-mode code and self-modifying code. The former just isn't an issue for applications and the latter is both detectable and extremely rare, apart from a handful of well-known framework libraries.) Therefore, we already have an architecture-neutral machine code format. It's called x86.
Obviously I don't know if that's what MS plan to ship, but it has been done quite a few times in the past and at least once by MS themselves, so the only reasons not to offer it would be political ones. It will be an interesting test of whether MS actually want Big-Windows-on-ARM to succeed.
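To make the block-translation-and-cache idea concrete, here is a toy sketch in Python. Everything in it — the mini guest ISA (add/jmp/halt), the program encoding, the register names — is invented purely for illustration; a real x86-to-ARM translator does the same bookkeeping at the machine-code level:

```python
# Toy sketch of user-mode binary translation: guest "instructions" are
# translated a basic block at a time into host code (here, Python
# closures) and cached by guest pc, so each block pays the translation
# cost only on its first execution.

def translate_block(program, pc):
    """Translate one guest basic block, starting at pc, to a host function."""
    steps = []
    while True:
        op, *args = program[pc]
        pc += 1
        if op == "add":                     # add reg, constant
            reg, const = args
            steps.append(lambda regs, r=reg, c=const:
                         regs.__setitem__(r, regs[r] + c))
        elif op == "jmp":                   # a branch ends the block
            target = args[0]
            def block(regs, body=steps, t=target):
                for step in body:
                    step(regs)
                return t                    # next guest pc
            return block
        elif op == "halt":
            def block(regs, body=steps):
                for step in body:
                    step(regs)
                return None                 # stop the guest
            return block

def run(program, entry=0):
    cache = {}                              # guest pc -> translated block
    regs = {"r0": 0}
    pc = entry
    while pc is not None:
        if pc not in cache:                 # translate only on first visit
            cache[pc] = translate_block(program, pc)
        pc = cache[pc](regs)
    return regs

# Two basic blocks: add 5 then jump; add 5 again then halt.
prog = {0: ("add", "r0", 5), 1: ("jmp", 3),
        3: ("add", "r0", 5), 4: ("halt",)}
print(run(prog)["r0"])  # 5 + 5 = 10
```

Self-modifying code breaks the cache (a cached translation goes stale), which is why the comment above singles it out as the hard case.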
I didn't say JVM or CLR bytecode
LLVM bitcode is far lower level than JVM or CLR bytecode. You use the same C/C++ toolchains and libraries, but they generate an abstract machine language. At runtime a code generator or JIT translates the abstract machine language into the equivalent ARM instructions. These can presumably be cached somewhere to speed up subsequent calls. The intent of LLVM is that while you're writing to a neutral platform, at runtime the generated native code is extremely fast and efficient.
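As a toy illustration of the division of labour (the three-address IR and the "arm"/"x86" mnemonics below are simplified stand-ins, not real LLVM bitcode or real assembly): the developer ships one neutral, low-level program, and each device lowers it to its own instruction set at install or run time.

```python
# One architecture-neutral IR, lowered per target at run time. The IR
# opcodes and both output flavours are invented for this demo; real
# LLVM bitcode is far richer, but the shape of the pipeline is the same.

IR = [
    ("load", "t0", 6),              # t0 = 6
    ("load", "t1", 7),              # t1 = 7
    ("mul",  "t2", "t0", "t1"),     # t2 = t0 * t1
    ("ret",  "t2"),
]

def lower(ir, target):
    """Emit target-flavoured mnemonics from the neutral IR."""
    out = []
    for ins in ir:
        if ins[0] == "load":
            imm = f"#{ins[2]}" if target == "arm" else str(ins[2])
            out.append(f"mov {ins[1]}, {imm}")
        elif ins[0] == "mul":
            out.append(f"mul {ins[1]}, {ins[2]}, {ins[3]}")
        elif ins[0] == "ret":
            out.append(f"ret {ins[1]}")
    return out

def execute(ir):
    """What the generated native code would compute, whatever the target."""
    env = {}
    for ins in ir:
        if ins[0] == "load":
            env[ins[1]] = ins[2]
        elif ins[0] == "mul":
            env[ins[1]] = env[ins[2]] * env[ins[3]]
        elif ins[0] == "ret":
            return env[ins[1]]

print(execute(IR))            # 42
print(lower(IR, "arm")[0])    # mov t0, #6
print(lower(IR, "x86")[0])    # mov t0, 6
```

The same IR list produces different instruction streams per target, which is the whole point: the shipped artefact never mentions the architecture.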
By contrast, emulating x86 would be incredibly difficult. The instruction set is massive and complex. Emulation would be painfully slow, although Windows might be able to thunk x86 Win32 calls through to their native equivalents so at least that portion ran natively.
I think both techs have their places. I think MS would do well to think of the future here. If ARM is to take off then it needs apps. Some of these will obviously be written in .NET, but others need to be ports. Devs hate porting and maintaining 2 platforms so it would make sense to take that obstacle away. If someone has to port code it may as well be to a platform neutral bytecode that subsequently runs over any architecture including on x86.
RE: DrXym -- LLVM is not panacea
"LLVM is far more low level than JVM or CLR bytecode."
Lower level doesn't mean faster. LLVM can't effectively compile C++ to machine-independent bytecode.
LLVM works the same as GCC at this point, except it is slower. It compiles to machine code. They claim it is possible to make arbitrary C++ code work platform-independently as if it were Java, and that they will do it in future. But that is a pipe dream. C++ and C are not made for that (unlike Java, which is made for exactly that). To make C++ run at acceptable speed on a VM, you need to modify the language. So why not just use Java? C++ was never ported to the JVM for that exact reason. It just can't work at acceptable speed, and some programs can't run at all. Microsoft made some C++/CLI mutant that is supposed to run on .NET, but that is not C++ at all, it is a different language. Which also proves my point. Some C and C++ programs just can't run on a VM. Hence, some language constructs must be removed to be able to run, or some things need to be emulated. So it can't be much faster than emulating x86.
"Bill how do I format a disk?"
"Bill are you sure?"
All PCs in the room start whirring....
one BIOS to rule them all
Do you know when I'd start to be afraid, in the shoes of Intel? When ARM publishes a comprehensive, uniform and open BIOS-like interface, for the ARM architecture = a software compatibility standard that would allow you to boot any operating system on a broad range of ARM machines, without you having to customize a bootstrap loader for the particular hardware model (i.e. read HW docs, modify obscure C/ASM code, compile, flash over JTAG or some other hardware probe). Another condition for being afraid would be: it would have to get adopted by the gadget makers.
*that* would be competition to the x86 PC's - to their core virtues: universal compatibility and openness.
Is that the kind of feature any of the gadget makers want? No no, quite the opposite :-) Security (vendor lock-in) by obscurity and "architecture fragmentation" reign supreme. ==> the uniform commodity x86 market is still safely in the hands of Intel (and its formal x86 competition).
Oh wow, hahahaha
"and at one point, [it was slated for] ARM on the Vista program that they dropped. So this is nothing really new from that perspective."
ARM procs in 2007 maxed at what, 620 MHz? Not trying to slam ARM here, but Vista? M'F'ing Vista? The king of bloated, inefficient OS... on ARM? No surprise at all that didn't work out.
Business speak crap
"our proven ability to create broad ecosystem support"
Does Otellini ("read my lips, there will never be a 64bit XEON") not realise how stupid and incomprehensible that phrase is?
Intel got the 3rd part right - they just recently ground AMD's CEO into dust and cast him into the four winds!
@"one BIOS to rule them all"
You do know that outside the Wintel world, EFI (or its UEFI successor) has replaced BIOSes now, as on Macs, right?
Did you also know that UEFI and ARM already go together?
As others have already said, if Windows running on ARM has any worthwhile impact, it will be because it gets ARM systems to market with a consistent underlying hardware (and firmware) platform for ARM system builders to base their efforts on. There are lots of ARM chips and boxes around now, many of them will run a Linux, but they're not exactly a consistent underlying hardware and firmware platform.
Mind you, the Advanced RISC Computing consortium (Alpha/MIPS/PPC) had such a common platform spec, and boxes to match, but back then it didn't do them much good.
We'll find out soon enough.
Otellini: "there will never be a 64bit XEON"
Tee hee. See also "Itanium is the answer for industry standard 64bit computing".
Or, as per the title of one of Todd's relatively recent LPs: Liars.
I preferred that old track of yours, "Sons of IA64" though.
@"one BIOS to rule them all"
Good, thanks for that link :-) ARM joined the "UEFI Forum" in March 2008. It's about time for some open hardware with ARM+UEFI to start appearing on the shelves.
In the PC world, I haven't been aware of UEFI very much. Some name-brand BIOSes do contain that interface, but as far as I can tell, mostly I still walk the PC BIOS side of things... Or maybe it's just that I'm not aware that Windows actually calls the UEFI stuff rather than the legacy BIOS. Maybe when drives >2TB become common as system drives, I will notice :-)
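The >2TB remark comes down to arithmetic: a legacy MBR partition entry stores start and size as 32-bit sector counts, so with 512-byte sectors it tops out at 2 TiB, while the GPT scheme that comes with UEFI uses 64-bit values. A quick back-of-the-envelope check:

```python
# Why legacy BIOS/MBR runs out at ~2TB: an MBR partition entry holds
# start and size as 32-bit sector counts, so with 512-byte sectors the
# largest addressable range is 2^32 * 512 bytes.
SECTOR_BYTES = 512
mbr_limit = (2 ** 32) * SECTOR_BYTES
print(mbr_limit)                       # 2199023255552 bytes
print(mbr_limit / 2 ** 40)             # 2.0 TiB exactly
print(round(mbr_limit / 10 ** 12, 2))  # ~2.2 TB in decimal units
```

So "drives >2TB as system drives" really is the point where booting forces the UEFI/GPT question for Windows users.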
As for Mac hardware (EFI), that's a bit of a special case - the hardware is legcuffed to MacOS, and it takes a bit of tweaking to get e.g. Windows running on it - provided that the proud Mac owner would ever want to dual-boot into Windows, which somehow defeats the purpose of buying an Intel Mac :-)
Anyway I don't mean to argue with your good point that attempts at "ARM BIOS" have been there for quite some time...
Windows on ARM isn't about selling systems, it's about controlling the OEMs.
MS controls the OEMs through discounts and co-operative marketing projects. In principle this 'saves' the OEM millions of dollars. In practice it means that if the OEM doesn't do as it is told then it is 'fined' millions through losing the discounts.
When netbooks first appeared they ran Linux. Vista couldn't run on them so the OEMs thought their discounts were safe. MS brought back XP which would run on netbooks, but they restricted the machines it was allowed on so that the OEM couldn't sell it on real machines.
The effect was that Linux disappeared which is exactly what MS wanted.
Now netbooks and tablets are coming out with ARM and once again run Linux. MS desperately want an ARM based Windows solely to threaten the OEMs into dropping Linux. If ARM Windows is useless then no matter, as long as it is there to control the OEMs into dropping Linux. If ARM machines don't get made then that is a benefit to MS too.
"threaten the OEMs into dropping Linux."
This worked last time with netbooks, but there is something (maybe two things) different between the netbook picture a year or three back and the ARM picture a year from now, things which will benefit customers but will not help MS this time round.
The first is hardware; back then we were comparing a standard x86 with a lightweight x86 and really the differences weren't vast. So when MS said "move back to Windows, or else" it didn't actually change a great deal in hardware terms, especially as they had cut-down XP in the toolbag.
Once ARM is in the picture, that changes. There will be bigger differences in hardware e.g. performance/battery life/weight/cost tradeoffs. If the benefits to the OEM from staying with ARM/Linux rather than x86/Windows outweigh the impact of the Microsoft arm-twisting, the ARM netbooks will still be a success. ARM/Windows may well mostly be a distraction, despite its theoretical technical merits.
The second is visibility. Linux back then was barely known outside the geeks, and it could (almost) plausibly be argued that if it didn't run Big Windows, it wasn't a proper computer and that users might reject it. That picture has changed, largely courtesy of iStuff and Android.