If there were any lingering questions as to whether Intel would use its 22-nanometer Tri-Gate process technology to bake chips based on the ARM architecture, the company's CEO has put them to rest. "The short answer is 'No'," Paul Otellini told his audience at Intel's Investor Meeting 2011 in Santa Clara on Tuesday in response …
Otellini wants his hand back
Best in class processors, bwa ha ha ha.
"Best in class processors" is a good joke.
Methinks the real reason they won't use their advanced fab to make an ARM chip is that it would prove exactly that point: that x86 is a piss-poor underlying design.
Well it looks like ARM has a business plan?
According to Otellini, one server is required for every 600 smartphones or 120 tablets. And since Intel and its Xeon processors rule the server roost, as he put it, "the money is in the infrastructure."
Go ARM go!
Beholden to ARM?
Raspberry Pi reckons it will be able to put a working ARM based thingy out for $25 apiece. What's the going price of Intel's preferred range of silicon?
Look at that price - who is making money on it?
ARM is doing very well, but its overall profits and profits per worker are *much* lower than Intel's. nVidia, Qualcomm, TI, Samsung and the rest are all busy making ARM chips that they can't sell for very much: the hardware is becoming commodified. Which premium manufacturer wants in on that?
Intel will now be spending heavily to convince people that their software really needs those Intel cores. They might even succeed with notebooks et al., where aggressive power management and the lack of real computing show them in a reasonable light - Intel's process engineering is second to none. But if any data centre manages to get a reasonable TPC workload out of an ARM-based server, they will probably never look back.
Na Na Na Na Na I can't hear you...
Said the head of Intel
as the world moved to ARM Based End User devices.
Na Na Na Na I can't hear you
said the head of Intel when the first ARM Server System with 32 cores was released.
Na Na Na Na I can't hear you
said the now ex head of Intel
Mine's the one with a PDP-11/23 in the pocket
Not really surprised.
Question is will the *workload* on those servers *need* Intel architecture?
If they don't, and the software is *reasonably* well designed for portability, it'll just be a question of recompiling and rebuilding the app.
The critical bit seems to be: is Windows on ARM?
And if not can it be kept off it?
Intel know they could *never* charge x86 prices for ARMs.
Time will tell if this is a WIN for Intel or just a FAIL.
Speaking of ARM servers....
I'm about to start evaluating one. This could be interesting.
The problem with 32-bit ARM is... it is 32 bits, so you are stuck with 4 gigs of RAM.
But if you can live with that (compiled PHP, non-memory-intensive apps...), it sure beats Intel (and AMD) on watts per MIPS/price/space.
It would be even better to have A15 cores instead of A9s, as these are not only 40% faster but add large physical address extensions (LPAE), ThumbEE, etc. You could use it as a Java app server!!
You've got to wonder why AMD don't get in on this trick - they have a great mobile graphics core but lack a decent mobile processor. Put their GPU together with an ARM CPU and they're on to a sure-fire winner. And best of all, the open source drivers for both already exist.
Would this be the same Otellini who (with his colleagues) was telling the world that IA64/Itanic would lead the world in industry standard 64bit computing?
Who can name me any home-grown Intel success outside the x86/Windows marketplace in the last five to ten years? C'mon, there might be a prize.
where's your troll pic AC ?
let's see .. as of January 2011 .. 71% of web servers ran Linux
of all servers .. 2010 .. 7% AMD .. 93% Intel .. in 2006 that was almost 25% AMD - 75% Intel
what's the prize ? .. a purple hair dye job ?
granted AMD is up to 7.5% so far this year .. however the E7 10-core Xeons are fairly new, and Bulldozer is *expected* 3rd Q 2011 .. about the same time Intel goes 22nm tri-gate in the server market
we'll see who actually performs and gains cpu web server market share
those Intel Linux webserver boxes
they'll be running on Intel's clone of AMD64.
You may class AMD64 as home grown Intel.
Others may not, given that Intel denied for years that it could be done, then denied they were doing one, and then, when IA64 was stillborn, did the sensible thing and released an AMD64 clone.
Other suggestions please: An Intel success in a market where Windows is irrelevant (like it or not, Windows has some relevance in the generic x86/x86-64 server market). Y'know, something like maybe routers (consumer, big iron, in between), supercomputers, peripherals, whatever.
Remember, there might be a prize.
Re: those Intel Linux webserver boxes
"You may class AMD64 as home grown Intel. Others may not, given that Intel denied for years that it could be done, then denied they were doing one, and then, when IA64 was stillborn, did the sensible thing and released an AMD64 clone."
I doubt that Intel seriously meant that it couldn't be done. The IA32 stuff has had various extensions bolted on over the years, and even the 80386 supposedly had capabilities for greater than 32-bit addressing (according to Tanenbaum, whose book I don't carry around with me, so I can't check). It's more that it wouldn't be done: the future was VLIW and Itanium, remember? When no-one agreed, Intel had to change its corporate mind.
AMD's success in the server and HPC space in the middle of the 2000s apparently had a lot to do with HyperTransport and bandwidth. Intel were apparently really lagging until they brought their Core series stuff online.
Not quite the denial
We were looking for a denial that Intel would fab ARM for Apple. This was not that.
If the clients need matching servers we know what to do about that too.
"Intel won't build ARM chips..."
...again. No, seriously.
Of course, Apple was not moving out of PowerPC and into x86 or building a phone, nor was Google releasing an office suite, or an OS (or two). Until they did.
"it is 32 bits, so you are stuck with 4 gigs of RAM."
Wasn't true for some PDP-11s ("16 bits" and >>64KB of RAM, up to 4MB max, some caveats apply) back in the 1970s.
Wasn't true for some IA32 pre AMD64 ("32 bits" and >4GB of RAM, some caveats apply).
Won't be true for some next-gen ARMs ("32 bits" and >4GB of RAM, some caveats apply).
The main caveat was, and will be, that no individual process can concurrently address more than 32 bits' worth of memory address space. But the system as a whole can have more than 4GB (32 bits' worth), and an individual process may still reach >32 bits' worth (e.g. the whole 4GB and more) by remapping its address space, which is usually what matters (some caveats apply), at least until 64-bit virtual address spaces become worthwhile in general.
Those who ignore history are doomed. In particular they are doomed to look silly and/or repeat history.
I'm sure he is correct in the short and probably mid term but I suspect not in the longer term!
Also, will Intel let their foundries be used to make ARMs (I think they already use their process for other people's designs), or will someone else start offering a Tri-Gate process?