Isn't it possible to license SPARC or the PowerPC for less money? They offer the same flexibility as ARM, I would have thought - and presumably the low power goodness mostly comes from the fab.
In the last days of 2013, Calxeda, the ambitious startup that hoped to design ARM processors for data-center servers, imploded. Now El Reg has sifted its ashes, and pulled out some advice for the silicon upstart's contemporaries. The demise of Calxeda caused many to ponder the viability of general-purpose ARM-powered …
The fab certainly helps (indeed, Intel's fabs are about the only reason they can compete in this space), but the ARM core has about a third fewer transistors than SPARC (roughly 500,000 as opposed to 855,000), and that has an impact as well. The design would also have an impact: in modern digital ICs, heat is mostly produced when transistors switch, so if you have many more transistors but only a few of them switching at any one time, that would reduce power too. Personally, I don't know about that one.
There is also a bigger base of design engineers familiar with the workings of ARM, and that is a factor as well.
In a perfect world you would be correct: transistors would only draw power when switching. In practice there is significant gate leakage, which gets worse as your process size decreases. This is the reason for technologies like high-k metal gate, and for power gating, where entire logic banks are powered down while not needed. Fewer transistors is definitely better.
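To put rough numbers on the switching-vs-leakage point, here is a back-of-envelope CMOS power model (every constant in it is my own illustrative assumption, not measured silicon data):

```python
# Toy CMOS power model (illustrative numbers, not real silicon data):
# dynamic power is paid only by the transistors that switch,
# static (leakage) power is paid by every transistor on the die.

def chip_power(n_transistors, activity, cap_f=1e-15, vdd=1.0,
               freq=1e9, leak_w=5e-9):
    """Estimate total power in watts.

    activity - fraction of transistors switching each cycle (0..1)
    cap_f    - switched capacitance per transistor (farads, assumed)
    leak_w   - leakage per transistor (watts, assumed)
    """
    dynamic = n_transistors * activity * cap_f * vdd**2 * freq
    static = n_transistors * leak_w
    return dynamic + static

# Fewer transistors shrinks both terms; a lower activity factor only
# shrinks the dynamic term, which is why leakage (and hence high-k
# gates / power gating) matters so much on small nodes.
arm_like = chip_power(500_000, activity=0.1)
sparc_like = chip_power(855_000, activity=0.1)
```

With the same activity factor, the smaller transistor budget wins on both dynamic and static power, and even a fully idle chip (activity 0) still burns the leakage term.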
These are very much different market segments: both POWER and SPARC compete with high-power chips like Xeon. Also, licensing POWER is only a very recent addition to the market. I hope IBM does well with this, since POWER8 has some amazing capabilities and it would be great to see it outside IBM gear.
MIPS is the only other mainstream design with a similar processor licensing arrangement that I'm aware of. If SPARC/POWER can be licensed, the costs are likely to be significantly higher than ARM - to the point where they are not likely to be economic options.
Regarding the low power goodness, it comes from a variety of choices:
- the design of the CPU and the target frequency you plan to run at. Compare a Pentium 4 to a Core based Intel processor to see what they had to do to try and get useful work out of the CPU as they increased the clock speed on the Pentium 4.
- the process node used: smaller nodes generally require less power. However, this is balanced by the cost of smaller process nodes, which usually leaves low-power CPUs being built on older processes. The choice of process node may dictate which fab has to be used (Intel has the smallest nodes, followed by Global Foundries, followed by TSMC, followed by pretty much everyone else).
- cache: cache helps you keep your processing units busy, but is relatively power hungry
- I/O: the more I/O you have and the faster it is (including memory buses), the more power you require.
- power saving tricks: can you offload tasks (i.e. video decoding) to a custom processor and power down other parts of your SoC?
I'm interested to see how 64-bit ARM performs from both a performance and a power-usage perspective. While Intel/AMD x86 processors are power hungry in comparison to current ARM processors, large chunks of that power budget go to I/O (i.e. PCIe), a fast memory bus and cache - things that ARM will need in order to compete in the server space.
" If SPARC/POWER can be licensed, the costs are likely to be significantly higher than ARM - to the point where they are not likely to be economic options."
You've not come across the freely available OpenSPARC then? (this is not a recommendation, btw, I just know of its existence).
"In March 2006, the complete design of Sun Microsystems' UltraSPARC T1 microprocessor was released in open-source form; it was named OpenSPARC T1. In early 2008, its successor, OpenSPARC T2, was also released in open-source form. These were the first (and still only) 64-bit microprocessors ever open-sourced. They were also the first (and still only) CMT (chip multithreaded) microprocessors ever open-sourced. Both designs are freely available to anyone under open-source licenses. These downloads include not only the processor design source code but also simulation tools, design verification suites, Hypervisor source code, and other helpful tools, plus variants that easily synthesize for FPGAs."
The difference is really between a few big multithreaded cores, in Intel's case, and lots of single-threaded cores on the same die, and experience says that multithreaded cores are much more effective until you can make all your memory work at CPU speeds - which is not just hard, but so hard that nobody has managed to pull it off yet.
And nothing stops Intel from packing a gazillion Atoms or Pentiums into one chip and positioning it for the server market; at the very least it won't need specially ported software.
Intel has a more sophisticated form of parallelism, using parallel functional units. So you get parallel operations, and even room to run hyper-threads, where two threads make interleaved use of the functional units. This seems inherently more efficient than duplicating whole single-threaded CPU cores.
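As a toy illustration of that interleaving argument (an idealised model of my own, not Intel's actual microarchitecture), a thread that spends most of its cycles waiting on memory leaves the functional units idle, and extra hardware threads soak up those idle slots:

```python
def core_utilisation(threads, compute_cycles, stall_cycles):
    """Fraction of time the shared functional units are busy,
    assuming each thread alternates compute_cycles of work with
    stall_cycles of memory waiting, and the threads interleave
    perfectly (an idealised SMT model)."""
    demand = threads * compute_cycles / (compute_cycles + stall_cycles)
    return min(1.0, demand)

# One thread doing 10 cycles of work per 90-cycle memory stall keeps
# the core only 10% busy; a second hyper-thread doubles that without
# duplicating the functional units.
one = core_utilisation(1, 10, 90)
two = core_utilisation(2, 10, 90)
```

The model also shows the limit of the argument: once enough threads are in flight to saturate the units, adding more buys nothing, which is where lots of small cores start to look attractive again.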
Intel's mistake was not having a low-wattage part for phones. But the economics of servers is about watts per operation.
"Expect to see cheap Xeons soon."
Expect to see Hell freeze over first.
Oh wait, that was in the paper yesterday.
Intel have got a lot of cash, certainly enough to cut Xeon prices significantly (though it's more likely to be hidden as Dell-style "marketing incentives" rather than an explicit admission that Xeons are going to be worth much less). But that would also cut Intel profits significantly, at least while they were in "eliminate the competition" mode.
Indeed - given that Intel have managed to mostly see off AMD with their tick-tocks after a nasty fright from Athlon/Opteron, I suspect their default approach will be the same with ARM. The difference is that the Venn diagram of use cases merely overlaps for Intel vs ARM, whereas AMD was a near-total subset of Intel's use cases.
If you look at it in cold revenue terms, the battle is not Intel ($12bn) vs ARM Holdings ($500m) but Intel vs ARM + fabs + OEMs, which is a very disparate target to hit/buy. I suspect plans for either buying ARM or starting/buying an XScale 2 get dusted off repeatedly at Intel, but in reality they aren't actually hurting that much as yet.
Time will tell whether they go the route of a declining Microsoft (desktop vs mobile) or whether they are ultimately shielded from the desktop-vs-mobile effect and just go on churning out chips and cash. If indeed Microsoft is declining: stupid Ballmer-inspired purchases notwithstanding, MS haven't actually done a Nokia-like implosion yet.
Sort of. The problem there is that Intel has such depth of cash, facilities, and technical knowledge that they can ALWAYS undercut on cost.
I was one of the people rooting for AMD when they went head to head with Intel on x86 CPUs. In the time I've worked with tech (and I cut my teeth on a TRS-80 model III), AMD are the only company that has ever really caught Intel with their pants down. At the time I had gone to a conference where their VP of Tech pooh-poohed the fast AMD chips because they wouldn't be able to dump the heat. He was correct up to a point, but AMD just put bigger coolers on the chips. Intel didn't really have anything ready to go, but within a year we were back to a horse race. Ever since then Intel has had at least two functional chip designs still with their developers so that if they need to, they can pull one fresh out of the oven. It'll cause some wiggles in their planned cash flow, but it won't seriously disrupt their market position and they will remain profitable.
Re cheap Xeons: AMD can sell a 438 mm², 28nm chip for about $100.
Small Ivy Bridge Xeons are about 160 mm².
Throw in the yield increase from a smaller die and you find that Intel could sell Xeons for about $20 without breaking any rules. Anyone hoping to break into this market has to show demonstrably lower energy use than Avoton/Rangeley and sell below $20.
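The shape of that argument can be sketched with the standard dies-per-wafer and Poisson-yield arithmetic; the wafer cost and defect density below are invented for illustration, not Intel's figures:

```python
import math

def dies_per_wafer(die_mm2, wafer_diameter_mm=300):
    """Approximate gross dies on a round wafer, with the usual
    edge-loss correction term."""
    r = wafer_diameter_mm / 2
    area = math.pi * r**2
    return int(area / die_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_mm2))

def yielded_cost(die_mm2, wafer_cost=5000, defects_per_mm2=0.001):
    """Cost per good die under a simple Poisson yield model
    (wafer cost and defect density are assumed numbers)."""
    yield_frac = math.exp(-defects_per_mm2 * die_mm2)
    good_dies = dies_per_wafer(die_mm2) * yield_frac
    return wafer_cost / good_dies

# A 160 mm^2 die is cheaper per good die than a 438 mm^2 one on two
# counts: more dies fit on the wafer AND a higher fraction survive.
big = yielded_cost(438)
small = yielded_cost(160)
```

Under these assumed numbers the small die comes out several times cheaper per good part, which is the headroom the comment is pointing at.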
As Clint says ... "Do you feel lucky, punk? .... Do you??"
Many of us have been saying for a while that it is ARM's customisation and low cost - you get chips that are customised to your workload, and they are cheap - that make it attractive. Getting stuff done directly in silicon rather than software automatically reduces the power draw. Intel has done great things getting power draw down in the Atom range, but the chips are still significantly more expensive than the comparable ARMs now becoming available, and you can't get custom builds of the Atoms. To compete, Intel will have to change its business model.
What is it that you need in the server farm that you can do with hardware integrated on the same slice of silicon as the CPU, but can't do with a separate piece of silicon attached to an Intel CPU's external bus?
I appreciate that at the consumer device end, there are serious economies to be reaped by integration of a system on one chip. (Serious economies means maybe a few tens of dollars per system). In the server room, I don't think a $50 cost advantage will win any arguments. It needs to be a technological advantage, or a price advantage at least one order greater.
Ultimately, I do expect Intel will be fabbing the world's ARM CPUs with their world-leading process technology, but in the first instance for mobiles, not for servers. ARM will conquer the server room last, if ever.
"ARM will conquer the server room last, if ever."
"In the server room, I don't think a $50 cost advantage will win any arguments" (and the rest)
Agreed in general. Now look at the economics based on current market realities, and look at what *might* happen in the next few years.
Desktops and laptops are undoubtedly at risk for x86, especially where there is no explicit requirement for Windows. Just how low sales volumes will go is a bit of an unknown.
The current volume x86 market revenue (and profit per chip*number of chips sold) helps spread the massive one-off costs of new chips and new fabs. The expensive to make and even more expensive to buy Xeons are effectively subsidised by the volume market.
If the volume segment stops being so huge, the development cost of new chips and new fabs doesn't change much, thus the same one-off costs are spread over a smaller number of chips sold, thus there is a disproportionate increase in cost per chip. Which either leads to an unpleasant increase in price (for customers) or an unpleasant decrease in profit (for Intel) or maybe both.
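The amortisation arithmetic behind this is straightforward; the figures below are made up purely to show the shape of the effect, not real Intel numbers:

```python
def cost_per_chip(fixed_costs, unit_cost, volume):
    """Spread one-off fab and design costs over the chips sold,
    on top of the marginal cost of making each chip."""
    return fixed_costs / volume + unit_cost

# Assume $5bn of fab + design cost and a $20 marginal cost per chip.
# Shrinking volume from 300m to 100m units raises the break-even
# cost per chip disproportionately.
healthy = cost_per_chip(5e9, 20, 300e6)   # about $36.67 per chip
shrunk = cost_per_chip(5e9, 20, 100e6)    # $70.00 per chip
```

The fixed-cost term dominates as volume falls, which is exactly the squeeze on price or profit the comment describes.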
There's perhaps more to this than initially meets the eye.
The problem is that demands change: today's custom silicon may be landfill a year after installation, whereas general-purpose servers can be loaded with new or updated software and reused profitably for a few more years. The minor savings in power consumption per task completed are going to be eaten by the extra development costs anyway, and the risk of having the investment written off (because Facebook takes a dive, or similar, and dedicated hardware has to be ripped out and skipped because it is optimised for one job and one job only) is probably a bit too much.
And further to that thought: if the custom silicon is on a card on the bus in a conventional server PC, you can yank it out and plug in this year's model. Rather more work than upgrading pure software(*), but not nearly as much work as replacing the entire server farm.
(*) that's after you've sorted out all the reconfiguration issues, and just have to repeat the tried-and-tested procedure over and over again.
I've been hearing that same line since Intel put out the 386. Like my fusion powered air car, it's still in the future.
The technical reasons for this seem to change every few years, but the fact remains that Intel is always the big name in computers and servers. I'd like to see it change, but that doesn't mean it will. And hope is not a plan.
Intel hasn't always been the big name in computers and servers. In the 80s there were minicomputer companies (does anyone remember names there other than DEC's VAX?) and a big bunch of mainframe peddlers (anyone remember names there other than IBM?). In the early 90s the various UNIX vendors replaced the minicomputer ones, and the mainframe bunch shrank. In the late 90s / early noughties, [LW]intel dug its way into the server space, and "UNIX proper" shrank. Intel today isn't more dominant in the server space than IBM was in the 70s (huge, but not without alternatives or competition), nor do they (Itanic, wink!) always succeed in the big-iron projects they start.
The memorial halls of the computer industry are littered with former "industry kingpins" that missed the next key trend, or invested too much money into the wrong projects.
"See my works, ye mighty, and despair !" - the last one is self-referential, in the end, always.
"nor do [Intel] always succeed in big-iron projects they start."
Intel, the x86 company.
Try finding a non-x86 product where Intel have had a significant success. It's not that easy.
IA64 is actually one of their more successful non-x86 efforts - at least lots of folk have heard of it, and quite a few have bought them.
"Canonical is all right, but where is Red Hat? We were too early."
Not RH per se, but a third-party build based on RH sources, RedSleeve, is probably close enough if you don't need a support contract, and it's been around for quite a while now.
"and will likely want some kind of software ecosystem to be present as well."
Pretty much all packages used by most distros today build reasonably cleanly on ARM these days.
>Not RH per se, but a third-party build based on RH sources, RedSleeve, is probably close enough if you don't need a support contract
The whole point of Red Hat is the support contract. I know you can get them elsewhere, but SUSE and Red Hat have acceptance levels among industry CTOs that Ubuntu could only dream of, let alone, who? RedSleeve? lmao.
BTW, watch the vid of Mark Shuttleworth presenting Ubuntu Mobile, it's like Linus meeting Walker Texas Ranger, I cannot believe anybody is taking him seriously ... ;-)
Debian, go, go, go!
If the whole point of Red Hat is the support contract, then how come CentOS, Scientific Linux and RedSleeve have thriving communities? Last time I checked, Facebook uses CentOS rather than Red Hat, as do many, many other very large companies with thousands of servers.
People run RH clones for reasons of familiarity and stability (as in, no chasing of continuously teleporting goalposts), and many companies want to do things that aren't supported by RH and/or have good enough sysadmins in-house that they don't need vendor support.
The problem with ARM is exactly what is being put forward as its strength here.
It's not really a single platform.
This is because there are no discoverable buses as standard, and device tree isn't stable and working properly. Things have got better since Linus's "ARM is a mess" rant, but it's still not there compared with x86.
I'm not the only one thinking ARM, or at least Linux on ARM, has a way to go yet.
A custom kernel per device isn't scalable or maintainable. It's only acceptable if you think "over the wall" products are acceptable - and if you do, you're missing the point that "over the wall" products don't stay over the wall at all: they get built upon, ending in a big pile of forked poo full of closed drivers.
This all makes me really sad because I grew up on ARM desktops! I would love to turn away from x86, but standardization is important.
"Only people who want to customise their own hardware would consider ARM."
Translation: "Unless you are google or friendface, you have no use for that ARM stuff!"
I suspect the lack of 64-bit is not the real problem, unless you have a low-power, high-memory workload. Often, memory and grunt requirements go together.
I suspect the issue is actually software availability for a large market. Licensing is set up for x86 and randomly powered small CPUs make for complicated SKUs. Until we get a few more contributors to FLOSS systems which are easily ported, it will be slow going.
Flexibility on ARM means that every single ARM computer model is completely different, operating-system-wise.
Today you have hundreds of different x86 server models out there in a hosting company's data centre. They can easily offer you a handful of operating systems for each of them, since on x86 the hardware is similar enough that you just need to worry about a handful of installation images.
On ARM we don't have portable install images yet. If you take two ARM SoCs and connect a CD-ROM drive to both of them, you won't be able to make a disc which boots on both and installs an operating system.
The problem is simply that ARM does not have a BIOS. On x86 (PC) you have a set of routines allowing you to access graphics, keyboard, disk and hardware enumeration in a consistent way across all computers. You can (and I've tried this myself) write a little boot sector which can display graphics on the screen without knowing what sort of graphics card you have. On ARM this is simply impossible.
So what you're saying is that because generic ARM is
. flexible enough to not require the same basic support chipset architecture everywhere whether it's a $5 box or a $5000 box
. flexible enough to run little endian or big endian code with little endian or big endian IO
. that'll do for now
Because of that flexibility, there's absolutely no scope for "datacenter ARM" vendors and the volume OS builders to decide on some kind of common standard amongst themselves and have a go at Intel, while the mass market sector-specialised ARMs continue to dominate the non-Windows world as they have done for many years?
Not convinced I agree.
Biting the hand that feeds IT © 1998–2019