Might make porting Android between phones easier too.
ARM lays down law to end Wild West of chip design: New standard for server SoCs touted
Brit processor core designer ARM has forged a specification to smash through a significant barrier to the widespread adoption of its highly customizable chip architecture in data centers. That barrier-smasher? A specification that aims to standardize how ARM system-on-chips (SoCs) interoperate with low-level software, and in …
-
-
Wednesday 29th January 2014 19:19 GMT Charles Manning
This particular spec is for servers, not phones etc. Undoubtedly they will make phone etc specs too at some stage. Canonical would want that if their Ubuntu-phone goes anywhere.
This is not the first such spec though. ARM have already done something similar for the Cortex-M0 and M3. It really helps portability as you don't have to fiddle so much with linker scripts, debug scripts and the like to move from one device to another.
The current Linux kernel uses a device tree to allow a single kernel to boot on multiple different platforms, which is handy for making packages that will run on a wide variety of hardware. Like a PC, it does blow out the code footprint, as you end up with drivers for stuff that isn't even present on the SoC you are using, but it suits the distro-oriented folks who want to generate a single OS that can be booted on, say, both an RPi and a BeagleBoard.
Very few of the real embedded systems running Linux will run a full-fat kernel though. Most will trim the kernel down to just what is needed for the task at hand. While this takes a bit more time to do, it reduces footprint dramatically and speeds up booting.
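To make the mechanism concrete: a device tree is just a data file describing the hardware, compiled and handed to the kernel at boot, and drivers bind to whichever nodes match their "compatible" strings. A minimal sketch — every node name, address and compatible string here is made up for illustration, not taken from any real SoC:

```dts
/dts-v1/;

/ {
    model = "example,widget-board";          /* hypothetical board */
    compatible = "example,widget-soc";

    /* A UART: the kernel's matching serial driver binds on "compatible" */
    serial@101f0000 {
        compatible = "example,widget-uart";
        reg = <0x101f0000 0x1000>;           /* base address, size */
        interrupts = <12>;
    };

    /* An Ethernet MAC described the same way */
    ethernet@10200000 {
        compatible = "example,widget-emac";
        reg = <0x10200000 0x10000>;
        interrupts = <24>;
    };
};
```

The same kernel image can then boot on a different board simply by being handed a different .dtb, which is exactly the single-kernel-many-platforms trick described above.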
-
Friday 31st January 2014 18:28 GMT Anonymous Coward
It doesn't take much time to trim a kernel
I compile my own kernel images for my x86/64 systems and it's not terribly onerous or time-consuming. They're typically about half the size of a stock kernel image and still include a lot of USB drivers for stuff I haven't actually got. While the reduction in size is nice, the main reason I roll my own is that less code = less potential to go wrong (I also always install a stock kernel as well, but that's really just in case).
-
-
-
-
Wednesday 29th January 2014 23:20 GMT FrankAlphaXII
Huh?
What does a UEFI feature have to do with a processor architecture? IIRC that kind of thing is up to the motherboard manufacturer, and I seriously doubt that many of them are going to keep producing BIOS-based motherboards for much longer, which is bad news for a number of Linux distributions, and potentially the BSDs, if they don't follow Red Hat's example and get signing keys.
-
-
Wednesday 29th January 2014 23:08 GMT FrankAlphaXII
Color me unconvinced
It may be cynicism, but I still think ARM in the data center is, at this point anyway, Itanium all over again, given the level of hype over an architecture that is nowhere near proven outside of the mobile and netbook use cases. It may not be a popular opinion but the parallels are certainly rather striking when you really think about it.
-
Thursday 30th January 2014 09:25 GMT Charlie Clark
Re: Color me unconvinced
Well, apart from both being "new" architectures, the two are very different. IA64 was completely new; ARM isn't - lots of software already exists for ARM-32, and moving it to ARM-64 won't be difficult. IA64 was only ever going to come from Intel, meaning they could dictate prices and roadmap; ARM for servers is going to come from at least four vendors (AMD, Nvidia, Samsung and Qualcomm). Price, competition and the small size of the chips (meaning higher yields from wafers) will keep prices at a fraction of Intel's. Together these make for a very different value proposition. Going IA64 or, before that, Alpha, was such a daunting prospect that HP effectively had to strong-arm customers (and software vendors) into adoption, and even then it remained a niche market. ARM boxes will fit neatly into existing infrastructure and, depending on the workload, allow a gradual migration as older boxes come up for retirement.
The market is different too: IA64 was left targeting large servers with custom installs, while ARM is commodity kit targeting IaaS data centres rather than telcos and banks. The big problem is going to be: are the margins sufficient for vendors to make it worth their while? Though, given the ubiquity of the architecture and, therefore, the ease of getting into the market, they may not have a choice, as customers will buy boxes that cost a tenth or less of equivalent Intel ones.
-
-
Friday 31st January 2014 03:29 GMT Jason Ozolins
Re: Color me unconvinced
The number of ARM processors shipped vastly outstrips the total number of x86 processors shipped in the same time. I guess that wasn't one of the many broken promises. It would help if you gave some detail on who promised what.
A RISC versus CISC debate, absent any engineering or business considerations, is about as deep and thrilling a dispute as hatchbacks versus sedans, without reference to any real cars. Most of the interesting differences are between particular models (ISAs), not the abstract classes of car (architectural style).
It happens that the Pentium Pro and its many evolutionary descendants decode the more complicated x86 opcodes into RISC-ish uops internally. Seems to work okay for Intel.
-
Friday 31st January 2014 12:21 GMT Anonymous Coward
Re: Color me unconvinced
"ARM/RISC has been making these kinds of promises since Intel introduced the 386 [...]Why should this time be any different?"
Back in the days of the 386, Windows was not ubiquitous and other architectures were relatively widely deployed. Not all of them survived the onslaught of Windows NT in any meaningful way.
Before too long, Windows and x86 will again not be ubiquitous, as non-Windows clients will have changed the technology and the economics of client computing. Non-x86 kit long since made x86 irrelevant in the embedded (non-IT-department) market anyway.
That's what's different.
-
Monday 3rd February 2014 14:43 GMT Tom 13
Re: Color me unconvinced
By the time of the 386, Microsoft was already ubiquitous in the consumer computer market (you need to stay focused on the company, not the OS, which at that time was MS-DOS, not Windows). Apple had already relegated itself to a niche market. True, at that point they weren't in the server market, but back then that was still specialized hardware. On what we now call the PC market, though, it was Microsoft all the way.
Not claiming this was a good thing. In fact I think it has stifled innovation. But it was the reality.
-
-
-
-
Thursday 30th January 2014 11:00 GMT P. Lee
Re: Color me unconvinced
The demand may not be end-user driven. HP might want to punt ARM and take Intel's slice of the profit. I'm sure Apple would like to drop back into proprietary hardware if they could. I'm sure Asus would rather sell an ARM tablet than an Atom one.
It isn't always about performance; sometimes it is about controlling the whole stack or saving a few dollars on millions of devices. Think how many ARM-based ADSL router + switch devices are out there. Now think how nice it would be if there were a standard architecture which allowed some SATA interfaces to turn them into NAS boxes too. Since these are SoCs you've got networking built in, and you could just have a socket for a NAS card, run by another ARM chip. Instant converged networking and storage for the home. You really want a standard Linux install for something like that, not every ADSL manufacturer trying to ship their own Linux build. It isn't DC, but it is server-based.
You may also have latency-sensitive applications such as voip which don't require much processing per user, but do require dealing with quickly.
Then there's the whole hypervisor thing. If your workload doesn't require a mega-server, might it be cheaper to dispense with the hypervisor costs and run smaller CPUs? If HP can take half of VMware's income and provide individual blades on its own hypervisor on custom (HP-UX?) hardware, it would probably be quite happy. TCO might be in its favour when cutting out Intel and VMware.
-
Thursday 30th January 2014 13:40 GMT Anonymous Coward
Re: Color me unconvinced
"an architecture that is nowhere near proven outside of the mobile and netbook use cases."
Are you serious?
If it's not IT department kit and it has a computer of some flavour in it, what do you think it has inside it, and has had inside it for the last few years? And not just the obvious "mobile computing" either - "smart" TVs and other consumer electronics, automotive electronics; anywhere you look, the odds are you will find ARM (probably with Linux too). Just because you've never noticed it doesn't mean it isn't there.
-
Thursday 30th January 2014 19:01 GMT Tom 13
Re: Are you serious?
Smart TVs and other consumer electronics are essentially sub-cases of the mobile and netbook cases. Not a lot of computing power required relative to the requested task. PCs have always been the other way around, which is the case not proven.
The only way ARM works is if the mean for its chip production exceeds the power needed for the consumer PC. Maybe we are at that point and the non-standard architecture is the only roadblock. But I'm doubtful on this point.
-
Friday 31st January 2014 14:07 GMT Anonymous Coward
Re: Are you serious?
"Not a lot of computing power required relative to the requested task. PCs have always been the other way around, which is the case not proven."
Maybe mass market PCs started life underpowered for typical Windows tasks. However, for the last few years, very little of what most people do on Windows PCs has really needed the power of a 2+GHz multicore x86 (frequently with a rarely-used massive 3D graphics setup).
"The only way ARM works is if the mean for its chip production exceeds the power needed for the consumer PC"
I'm having trouble understanding that, particularly 'the mean for its chip production'. So I'll guess.
The range of compute power available from today's ARM SoCs is massive, and the current higher-end stuff offers a choice of products more than powerful enough for the vast majority of activities historically and currently performed on x86 desktop Windows boxes, laptops, etc. (and on the very occasional mobile x86).
-
Monday 3rd February 2014 14:51 GMT Tom 13
Re: range of compute power available from today's ARM SoCs
This is the bit I'm not familiar with and where I hedged my statement.
I suppose more precisely what I am saying is that the mean of chip production across all suppliers has to:
1) meet or exceed the standard computing power requirements of a consumer PC
2) substantially exceed on the cost savings front
in order to disrupt the current market.
While meeting the cost savings front might seem more logical, the established distribution channels give a cost advantage to the CISC architecture for the consumer PC market. Note that I am saying consumer PC, not consumer electronics. I put iPads and Android tablets in the consumer electronics market, not the consumer PC market. The distinction is that the PC can produce things for the end user, whereas tablets are consumption devices.
-
-
-
-
-
Thursday 30th January 2014 08:36 GMT Anonymous Coward
What's with these managers?
MBAs, or business magazines? Whatever, something is killing off Bullshit Bingo by making it just too easy nowadays:
"If you think about the history of the ARM play... try to think about it as an appliance model ... As we've evolved in the 32-bit space at the time, as we've evolved towards 64-bit we see that standardization is going to help the rapid deployment of ARM-based solutions."
-
Thursday 30th January 2014 14:15 GMT -tim
Lock in the insecurity?
The ARM chips can switch modes, which is great for hackers. Current compilers only use one mode, so the others are nothing but a waste and a security risk. I loved that SPARC had a hardware stack that would never run code, and while that was a small thing, it protected my machines in the past, so I'm happy for small features that make hacking harder.
-
Thursday 30th January 2014 17:56 GMT ganymede io device
Re: Lock in the insecurity?
ARM architecture reference manual ISBN 0-201-73719-1 page A4-65 "Any writes to CPSR[23:0] in User mode are ignored (so that User mode programs cannot change to a privileged mode)."
You might want to read chapter B3 Memory Management Unit section on access permissions too.
How do you think Linux VM works?
-
Saturday 1st February 2014 13:24 GMT -tim
Re: Lock in the insecurity?
You know there are other ways to change flags[1]. Hackers have been using them for decades. If the hardware cannot do a function at all, you don't have to worry about what happens if the controls for some security bit can be bypassed some other way.
See talks at blackhat, breakpoint, CCC etc.
ARM is young enough that it could take the option of "set this bit and the feature is off until the chip is reset" and it wouldn't have a problem. Otherwise you might find something like BCD registers being given a brand new meaning decades after anyone used that instruction in a popular application.
-
-
Thursday 30th January 2014 18:38 GMT ganymede io device
Re: Lock in the insecurity?
I answered about changing modes already, using an Architecture Reference Manual from 2000 (so for 32-bit chips up to architecture v5TE, pre-Cortex).
As for "I loved that sparc had a hardware stack that would never run code":
the more recent ARM Architecture Reference Manual says of version 6 of the architecture that "APX and XN (execute never) bits have been added in VMSAv6 [Virtual Memory System Architecture]".
So setting the stack pages in the MMU to read-write-execute-never (with a no-access guard page for demand-driven growth, maybe) gives a hardware stack that will never run code, like your beloved SPARC.
-