There must be more to it than this. The article makes it sound like they've just put an MMU on it. Hardly masses of silicon required there.
If that's the case, then this is less a triumphant announcement and more a case of "about b*ddy time".
The ARM RISC processor is taking a few baby steps closer to being a credible alternative to x64 processors for servers, according to ARM Holdings, the British company behind the popular chip. According to notes for a speech delivered at the Hot Chips conference at Stanford University on Tuesday, ARM Holdings architecture …
The whole point of a RISC is you don't need 'masses of silicon' to get things done, it just needs to be the right silicon. It also helps if you are designing the hardware before the OS that will run on it is already built, and therefore don't need masses of silicon to emulate legacy behaviour in every absurd detail.
I am a server programmer in the financial industry, and I know many applications need 64-bit addressing, as process sizes are already well above 4 GB. Yes, single processes with 7 GB memory consumption are used quite often in very central places.
Some applications could be split into more parallel process instances, but applications like OLAP need as much RAM as they can get. All the legacy software needs as much memory as possible, and it is already well into the post-4 GB range. Modern applications are designed to be spread over many process instances and physical machines, so maybe they can deal with the 4 GB limit. But things like CAE and CAD also need as much memory as they can possibly get to design a new CPU, or something like the Airbus A400M or the Boeing 787.
ARM had better come up with a clean solution to address at least 1000 GB of process memory with simple C/C++ pointers. Whether it must be full 64-bit pointers I don't know. Maybe they can devise something that needs less storage. After all, current 64-bit pointers carry at least 3 bytes of redundant data EACH.
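For what it's worth, those redundant bytes can already be squeezed out in software today. A hypothetical sketch in C (not anyone's actual implementation): if a process keeps its heap inside one contiguous region, a pointer can be stored as a 32-bit offset from the region's base, which is the same idea the JVM calls "compressed oops".

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Sketch: store pointers as 32-bit offsets from a single heap base.
 * 4 bytes per stored pointer instead of 8, still covering a 4 GB
 * region (or 32 GB if you also shift out 3 alignment bits). */

static char *heap_base;            /* set once at startup */

typedef uint32_t cptr;             /* compressed pointer: offset from base */

static cptr compress(void *p) {
    return (cptr)((char *)p - heap_base);
}

static void *decompress(cptr c) {
    return heap_base + c;
}
```

Of course this only works while everything fits in one region, which is exactly why a clean architectural answer is still needed.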
I remember the launch of Windows NT. The Hardware Abstraction Layer - yielding the choice of PowerPC or Intel. It never worked.
So, while I wish for:
"it will be much more fun for the rest of us if ARM becomes a credible alternative to the x64 architecture for servers, netbooks, and maybe even other PCs at some point. Stranger things have happened."
... I fear there will never be real choice unless the masses move away from Windows to Linux / Android.
...Linux is already the dominant operating system. At least for the non-trivial stuff.
For example, Deutsche Börse (the German Stock Exchange in Frankfurt) currently runs on Solaris and VMS, but their next generation system will be all Linux, x86-64 and C++. They can easily port to any 64bit Linux platform.
Google is all Linux. Facebook also, IIRC. IBM is pushing Linux servers. New York finance is using Linux heavily. Most web servers are Linux. Most scientific number-crunching is done on Linux.
I can smell the Angstschweiss (fear-sweat) of Mr Ballmer.
** Largest computer of all: http://en.wikipedia.org/wiki/National_Center_for_Computational_Sciences#Jaguar
** NYSE and Chicago Mercantile Exchange run their servers on Linux:
(besides Amazon, Peugeot, and Union Bank of California)
** CERN's Large Hadron Collider runs on Linux
Also, Linux is part of Android, webOS and MeeGo. Soon there will be vastly more CPUs running Linux than Windows, simply because Linux will proliferate in phones.
This must scare the living devil out of The Chairthrower of Redmond.
Never underestimate the Carnivorous Penguin!
Didn't we finally get all that address-obfuscating workaround-hack clutter *out* of the x86 arch? I'd prefer something very ARM-like but with an instruction set restructured around a 48-bit flat address space. Or an x86-64 with all legacy instructions/addressing modes stripped might be interesting.
I'd like to see ARM make a dent in Intel's monopoly.
It's interesting to note how ARM is tackling the 32-bit limitations with techniques very similar to the 32-bit x86 Physical Address Extension (PAE) Intel introduced in the Pentium era, where an individual process had a 32-bit limit, but multiple processes could have completely different 32-bit address spaces within up to 64 GB of physical memory.
So it would seem that 64-bit is difficult to achieve. I'd like to point out that even on x86-64, the full address space isn't implemented in silicon:

    address sizes   : 36 bits physical, 48 bits virtual
People assume more bits is better, but one should not ignore that the extra transistors consume more power, generate extra heat, and incur a performance penalty. ARM's decision to stay with 32 bits may have been made along these lines.
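To put numbers on that "36 bits physical, 48 bits virtual" line: the reach of an n-bit address is just 2^n bytes, which is easy to sanity-check.

```c
#include <assert.h>
#include <stdint.h>

/* How much memory can an n-bit address actually reach?  2^n bytes:
 * 32 bits -> 4 GB, 36 bits (PAE / typical silicon) -> 64 GB,
 * 48 bits (x86-64 virtual) -> 256 TB. */
static uint64_t addressable_bytes(unsigned bits) {
    return (uint64_t)1 << bits;
}
```

So even a "64-bit" chip that only wires up 36 physical address bits already covers 64 GB of RAM, far more than most boxes of the day shipped with.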
Best TPM article for yonks. Best of luck to ARM. But they've got an uphill battle on their hands, and not just at a technical level.
"I remember the launch of Windows NT"
But maybe you don't remember it very well.
"The Hardware Abstraction Layer - yielding the choice of PowerPC or Intel. It never worked."
It may never have worked on your PowerPCs, I don't know.
In the rest of the world, it was also supposed to work on any conforming box from the Advanced RISC Computing consortium, whose members used not just PPC but MIPS and Alpha.
The HAL concept worked fine on the NT/Alpha boxes I used, from Multia to AlphaStation 400 to DIGITAL Personal Workstation PWS500a and indeed various DIGITAL Server boxes. I probably still have the relevant MSDN CDs somewhere, paid for by my (not my employer's) money.
At least, it worked as far as Gates allowed it to, and as far as the pre-virtualisation miracle that was FX!32 allowed it to. FX!32 was DEC software for NT/Alpha that allowed on-the-fly translation (not emulation) of x86/Win32 apps to Alpha/Win32 apps, as had been pioneered earlier by their VAX to Alpha translators.
Other ARC players didn't have equivalents of FX!32 so they were even more reliant on recompiled-to-native versions of Windows apps, which of course Gates never really provided except for x86.
Then after a little while Gates said to DEC: "You can either be Larry's friend or you can be my friend" and DEC's top folks didn't choose Bill. The rest is history.
Speaking of history, now that the history of massive anti-competitive backhanders and/or commercial blackmail (call it what you will) over many years between Intel and Dell is finally emerging into daylight, it would be interesting to know what kind of sweetheart deals went on between Intel and Microsoft so that Intel didn't have to face any real competition in the chip market. I know from personal experience that Intel were quite happy to twist people's arms if they were threatening to look at AMD in any significant way.
Fortunately that kind of rubbish doesn't (can't) go on in the open source market.
I'm sure ARM is working on a full 64-bit architecture, but this MMU extension is a reasonable interim measure. Many servers run a huge number of processes each of which requires only modest memory (web servers are one example of this), and having an MMU divide a large physical address space into a lot of smaller logical address spaces for a lot of processes makes good sense. Having several cores share this MMU does too, and since ARM already has multicore CPUs, this is likely to happen.
But in the longer run, ARM will need a 64-bit architecture. When Acorn Computers started to design ARM, most were transitioning from 8-bit to 16-bit processors, but Acorn decided to skip the 16-bit stage and move directly to 32 bits. While none of the original designers now work for ARM (that I'm aware of), I'm sure this story is not lost on the present designers. But skipping the 64-bit stage and making a 128-bit processor is, perhaps, too much to expect. I can't really imagine 64 bits ever becoming too little to address physical memory, but there might be something to say for having a logical address space that is much bigger than any physical address space will ever be: Data structures can be allocated far apart in the logical address space, so they have space to grow almost without limit. And for such things as cryptology, 128-bit arithmetic would be nice. Still, I find it more likely that the next generation ARM will be 64 bits rather than 128 bits.
I now wonder if NT on Alpha/PPC was done solely to prevent Intel from thinking they were getting MS' monopoly by proxy. As soon as AMD had a credible alternative CPU, this game was no longer required. I eval'd an Alpha machine once with NT 4.0; it was weird because it had IE (and I ran POV-Ray on it), but no dice getting a Netscape build for that. It also ran the 8086 code in VGA BIOS ROMs, by emulation, so that video mode switching would work. See, Intel? That's how you deal with old binaries: run them, but no faster than before, so they will go away sooner and not plague you for decades with layers of legacy garbage. Of course, then you can't sell a CPU into a market which wants to move their entire OS and apps unchanged onto the new machine and have it go faster.
Lou, don't forget there's a difference between what a chip architecture supports in principle, and the subsets which may be implemented in any particular implementation of an architecture. x86 isn't an architecture, it's a collection of engineering kludges with an extensive set of supporting software.
PDP11s were nominally a 16-bit architecture, but you could have 4MB of memory on some of them, so more physical address space than logical address space is not exactly new to the world. The magick lies in doing it cleanly so that transitions between different implementations don't cause nightmares (e.g. if "clever" people have found creative uses for "unused" address bits, say "to save space").
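That "creative uses for unused address bits" jab deserves spelling out, because it's exactly the trap waiting for any architecture that widens its addresses later. A hypothetical sketch of the classic trick:

```c
#include <assert.h>
#include <stdint.h>

/* The classic "unused address bit" hack: on a chip whose virtual
 * addresses only occupy the low 48 bits, the top 16 bits look free,
 * so people stash a type tag up there.  The moment a newer
 * implementation actually decodes those bits, every such program
 * breaks -- hence the migration nightmares. */

#define TAG_SHIFT 48
#define ADDR_MASK (((uint64_t)1 << TAG_SHIFT) - 1)

static uint64_t tag_pointer(void *p, uint64_t tag) {
    return ((uint64_t)(uintptr_t)p & ADDR_MASK) | (tag << TAG_SHIFT);
}

static void *untag_pointer(uint64_t t) {
    return (void *)(uintptr_t)(t & ADDR_MASK);
}

static uint64_t get_tag(uint64_t t) {
    return t >> TAG_SHIFT;
}
```

It works beautifully right up until the day it doesn't.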
AMD64 seems reasonably clean in this respect (it's not really x86-64 is it, whatever Intel would prefer).
The 22-bit addressing extension on PDP11s was completely transparent to application programs. The address mapping was handled by the OS (as long as you weren't running early versions of RT-11, IIRC), and every application ran in a private address space that was either a 16-bit 64K combined address space, or one 64K Instruction space and one 64K Data space on separate-I&D systems like the PDP11/70. The OS would control the segmentation registers to do the virtual-page to physical-page mapping, so the programs knew nothing about it.
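The arithmetic of that mapping is simple enough to sketch (a rough model of the 22-bit scheme from memory, not cycle-accurate hardware): a 16-bit virtual address splits into a 3-bit page number and a 13-bit offset within an 8K page, and the per-page Page Address Register supplies a physical base in 64-byte units.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of PDP11-style relocation: page number selects one of 8
 * Page Address Registers (PARs); the PAR holds a physical base in
 * 64-byte units, yielding a 22-bit (4MB) physical address. */
static uint32_t pdp11_translate(uint16_t vaddr, const uint16_t par[8]) {
    unsigned page   = vaddr >> 13;        /* which of the 8 pages  */
    unsigned offset = vaddr & 0x1FFF;     /* 13-bit byte offset    */
    return ((uint32_t)par[page] << 6) + offset;
}
```

Since only the OS ever wrote the PARs, a program's 16-bit pointers stayed valid no matter where in the 4MB its pages actually lived.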
There was an additional complication on UNIBUS (rather than MASSBUS or QBUS) systems, where the DMA disk adapters (like RL11s) could only write into an 18-bit real address space managed by the Unibus map, limiting where the OS put its disk buffers. This meant that if you wanted mapped DMA I/O from a disk directly into your program's address space, you had to be very careful about asking the OS to set up your address space correctly.
I had a real oddity, a system called a SYSTIME 5000E, which was, as far as I know, the only system based on a PDP11/34 which had 22 bit addressing (they normally only had 18 bit), but it did not have separate I&D. All other systems, mainly from DEC themselves, were either 16 bit non-I&D, 18 bit non-I&D, 18 bit I&D, or 22 bit I&D. SYSTIME could do this, because the processor was made using TTL logic chips on 5 or more boards plugged into a backplane, and it was possible to buy the basic processors from DEC, and build your own memory management unit, disk controllers and backplane.
This gave me real problems when I was trying to get BL Unix V7 to use the full 2MB (we could not afford the other 2MB, it would have cost about £40K) on this system! I eventually worked out that I needed to turn on 22 bit addressing and 18 bit Unibus addressing, and had to use the Calgary disk buffer modifications, and fix the start address of the buffers at 64K physical, otherwise I would get an address wrap-around during a DMA disk transfer, and wipe out the I/O vectors in the first 256 bytes of real memory about 5 seconds after booting the system. Panic!!
It's rather sobering to think that I ran a 12 concurrent terminal multi-user system on a 16 bit machine with 2MB of real memory, 64MB of disk space (and that was quite a lot for this class of machine in the early '80s), with each program being limited to 64KB of memory, supporting a community of over 100 users. And more than this, it ran Ingres as well (albeit slowly). My phone has much more resource than this now!
...under your definition. There is not a single CPU out there which can physically address 64 bits of memory. All of them have something like 36 to 42 (or so) wires for addressing.
x86-64 is quite clean in this respect. The 8086-to-80286 "real mode" was a pain in the proverbial, though. But today everybody uses "protected mode" with a clean, linear address space. ARM is suggesting something less optimal, actually.
along with Cornish pasties and pease pottage, since they no longer have domestic manufacturing as they once did in Morris, Leyland, Plessey, Marconi, etc. Even screws, nuts and bolts are imported.
Even my Wellington boots have 'Made in China' on them, as do 'Savile Row' suits (most are made in Shanghai nowadays).
Let's hope some overseas conglomerate doesn't buy up ARM Holdings and that it continues with its technological prowess. A British winner.
The Reg had something to say on this matter a few months back... http://www.theregister.co.uk/2010/02/22/manufacturing_figures/
It isn't worth making consumer goods anywhere but China, really. But they're not the only products in the world.
Back in the mid-90s the PowerPC CPU was apparently going to kill off x86 in the consumer PC market. Even Apple migrated the Mac to it. Didn't happen though, and eventually IBM lost interest and concentrated on high-end server development for the chip. ARM may scare Intel a bit in the Windows PC market, but I think far more likely is that x86 will continue on PCs while the PC platform itself becomes less and less relevant as handheld devices and netbooks take over from it.
When a CPU is called 64-bit, that number generally refers to the size of the data registers, not the size of the address or data bus. E.g. the 68000 was 32-bit internally but had a 24-bit address bus and a 16-bit data bus. Some referred to it as 16-bit, but really it was 32-bit, though it could just as accurately have been called 24-bit!
Yes, people have been confused about this since the days of '16-bit' (8088) and '8-bit' (8080) software, since both of those machines had an 8-bit data bus, and 8-bit and 16-bit registers respectively (though the 8086 family squeaked in a few more 16-bit arithmetic ops). And the 68K was 24A/16D on the pins.
So what's your point? That a convention which was barely meaningful when it started in 1982 still needs to be followed? Bear in mind you still don't need 64-bit arithmetic for much, but 64-bit addressing is of critical importance for getting more RAM online, and if you have 64-bit user addresses then you need 64-bit arithmetic to calculate addresses. If you have 64 bits only in the MMU, you can run a vast range of apps sharing terabytes of physical memory, as long as each one doesn't need more than 4GB of virtual memory - like everyone running all those x86 apps under Win64. So, at this point in history, focus has shifted from the size of the data registers to the size of the address. (I could also point out that a fair number of '32-bit' processors have had 64-bit data buses and 128-bit vector registers supporting 64-bit arithmetic.)
Bottom line: people will continue to be confused about what '64 bits' means with respect to CPUs unless it comes with some other info, e.g. AMD64, or IA-64, or '64-bit physical address'. Do you think it would be a good thing if 'Oh, it's got a 64-bit CPU' actually told you everything you needed to know? Confusion is OK when it forces clarification.
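You can see the ambiguity right from the compiler's point of view (a trivial sketch; I'm assuming a typical LP64 Unix toolchain, where pointers and longs are 8 bytes - Win64's LLP64 differs):

```c
#include <stddef.h>

/* None of these numbers says anything about the width of the
 * physical address bus or the data bus -- only about what the
 * compiler's data model calls "64-bit". */
static size_t pointer_bytes(void) { return sizeof(void *); }
static size_t long_bytes(void)    { return sizeof(long); }
static size_t int_bytes(void)     { return sizeof(int); }
```

Three different widths for "the size of" things on the very same "64-bit" chip, before you even get to the pins.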
"When a CPU is called 64 bit that number is generally referring to the size of the data registers, not the size of the address or data bus."
So, what's a 64-bit OS then, in this context?
You might find that a convenient definition is something that supports a nominal 64-bit address space (perhaps not fully implemented yet) on a 64-bit-addressable CPU.
You can have 32-bit registers in a 16-bit chip, and vice versa.
Care is indeed needed to avoid confusion.