One thing's for sure
You're not going to run Windows on a 48-core chip with Microsoft's regressive per-core licensing costs.
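The per-core arithmetic is easy to sketch. The figure below is the widely quoted 2016 list price for a 16-core Windows Server Standard licence and should be treated as illustrative only, not authoritative:

```python
# Illustrative per-core licensing cost for a 48-core server.
# The $882 16-core Standard-edition figure is approximate 2016 list pricing;
# licences are sold in 2-core packs, so cost scales roughly linearly with cores.
CORES = 48
PRICE_PER_16_CORES = 882           # USD, treat as illustrative
cost = CORES / 16 * PRICE_PER_16_CORES
print(round(cost))                 # -> 2646, three times the 16-core baseline
```

Triple the licence bill before you've bought the first stick of RAM.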
Qualcomm says it has started shipping to customers samples of the Centriq 2400, its 10nm 64-bit ARMv8-A general-purpose server-grade system-on-chip. The mobile chip designer, based in San Diego, California, has recruited engineers from AMD, Intel and Broadcom, as well as tapped its internal pool of techies, to work on the …
> W.I.N.D.O.W. S._S. E. R. V. E. R.
Not even once.
Been on 2012R2 just this afternoon.
It's a playground of crazy. "Notepad"? "My videos"? "Windows Explorer"? Click Click Bing Bing???
As if a million consumer minds had suddenly clamored for a "professional" system, then been utterly silenced with Disney-level shit. I fear something terrible has happened.
"Been on 2012R2 just this afternoon.
It's a playground of crazy. "Notepad"? "My videos"? "Windows Explorer"? Click Click Bing Bing???"
If you beat it enough, you can get it into something that resembles a usable desktop. Add some missing shortcuts for things, and copy some EXEs (Notepad, Paint, HyperTerm, etc) from earlier versions of Windows, and you'll get something that will only make you sick to your stomach every other Thursday. The lack of a real Start Menu is still a PITA, no matter what the M$ apologists say.
OTOH - not sure why we have to go to these lengths to get something we like and can use in 2016. Hell, Windows Server was more malleable and user-friendly in 2000. Logic would imply it should be getting better, not worse...
I wonder what could these large companies be running that can change architecture easily at little cost?
I dunno, something like a free OS with a "mostly" free userland that anyone could adapt to their needs?
If only such a thing would exist... Now if you'll excuse me I have to feed the aquatic, flightless birds.
I wonder what could these large companies be running that can change architecture easily at little cost?
You can't. I've been marketing microprocessors and microcontrollers for most of my professional life and there is tremendous cost in major manufacturers changing processor architectures for complex products like servers.
You must convince every level of the company from the President of the company down to the guys doing QC. Quality Control needs at least a year to approve and fault grading can take that long. QC can take the form of "run three lots and test" which takes over a year and costs hundreds of thousands of dollars. By that time your competitor has their next gen on the market.
The overhead cost is staggering - new compilers, training, new documentation, new approvals and procedures to a degree that can affect the bottom line and stock price. There are always errata, and the more complex the device, the more incomplete the errata.
Then there are the people. If your present supplier always answers the phone and has a track record of working hard to quickly solve problems, it is almost impossible to dislodge them, because a history of reliability always comes first (it took me a long time to learn that). Sure, the new supplier's people are friendly, but how are they in a crisis situation? You'd be surprised how many supplier professionals turn into children when stumped with a stubborn problem.
The risk is enormous. Usually a company will pretend to evaluate a new architecture just to frighten their existing supplier to lower their pricing. There must be a compelling reason for changing architectures, such as product line survival or significant cost savings over some time. This is not a simple device change, it is a change in the company strategy.
A new device that is "better" is scary compared to the comfort and familiarity of staying with a known architecture.
"a history of reliability always comes first "
Lucky you. For many (most?) suppliers and purchasers, cheaper beats better.
The irony of someone called BillG talking about the value of reliability will not escape readers either.
"A new device that is "better" is scary compared to the comfort and familiarity of staying with a known architecture."
I don't know what your experience is, but there were processor chips before 8086, and some of them (actually, many of them) were arguably 'better' than 8086 (and its eventual successors). But the non-x86 kit didn't survive in the volume market, largely because 'cheap' generally (but not always) wins against 'good', especially when companies like Intel and Microsoft get involved.
"a history of reliability always comes first "
Lucky you. For many (most?) suppliers and purchasers, cheaper beats better.
An unreliable supplier can cost millions of dollars in cost overruns including extra engineering work. It's not just the cost of the device, it's the cost of everything.
Listen, things go wrong. Things always go wrong. The assumption that a server design can be built painlessly around these new ARM chips is wrong, because things always go wrong. Things I've seen: A redesign of the chip breaks something else. The chip locks up and it takes dozens of people and six months to figure out that four specific assembly language instructions in a row create a freak race condition. The chip fails at 120°F, not at 119°F or 121°F. The on-chip cache fails once every 30 hours of operation. The device overheats when in SLEEP mode. The guy who knows how to clear a particular fault condition is on his second honeymoon and has no cell phone or email for the duration. All things I've seen happen with new devices in new systems.
I don't know what your experience is, but there were processor chips before 8086, and some of them (actually, many of them) were arguably 'better'
Different world, my friend. Quality and complexity for a sub-10MHz device are significantly different from quality and support for cores that run at the GHz level. Also, sourcing procedures are more stringent now than they were 30+ years ago.
"Different world, my friend."
In the mid 1990s there were already lots of chips clocked way over 10 MHz, even some at over 100MHz, and they were the ones lots of non-Intel people were working with, even if you weren't aware of them. This at a time when 33MHz and 66MHz was "industry standard" for 486.
"[...] All things I've seen happen with new devices in new systems."
Indeed, I've seen similar too, with various vendors' kit. Your examples could equally happen in an x86 system, especially from one x86 implementation (or chipset implementation) to the next. Even Intel have hiccups, as their published errata make clear.
Proper engineering takes time and effort, and too much modern kit isn't based on proper engineering (even if kit has a seemingly reputable name on the badge). Fortunately Intel don't have an exclusive on decent engineers.
Buy cheap, buy twice (or more). Lots of people seem to have forgotten that, but why would it matter when the whole caboodle (be it IT "estate", consumer electronics, or whatever) is expected to be thrown away every two or three years.
In the mid 1990s there were already lots of chips clocked way over 10 MHz, even some at over 100MHz, and they were the ones lots of non-Intel people were working with, even if you weren't aware of them. This at a time when 33MHz and 66MHz was "industry standard" for 486.
Your timescale is way off. I bought a 100MHz Intel system in 1995 for my home and it wasn't even the fastest Pentium model.
DEC Alpha (if that's what you're referring to) had double the Hertz (and performance), but that was also reflected in the price tag. Mere mortals didn't even consider buying one for the home. Uni Compsci students extolled the virtues of the Alpha workstations and downplayed Intel, but none ever bought one with their own money... ;-)
"[...] All things I've seen happen with new devices in new systems."
Indeed, I've seen similar too, with various vendors' kit. Your examples could equally happen in an x86 system too
I think BillG wouldn't refute that, since Intel has had their plentiful share of errata, as you mentioned.
The question is just whether and how a "newcomer" with their whizz-bang CPU can work it all out when the shit hits the fan. The FDIV debacle didn't reflect well on Intel since at first they downplayed it and just offered poor workarounds until they gave up and started a replacement program.
"The question is just whether and how a "newcomer" with their whizz-bang CPU can work it all out when the shit hits the fan."
ARM SoC partners are more often than not licensing an ARM core which has not only been through ARM's own core design verification, but which some other chip/system vendor has usually already shown to cope with real work. The SoC vendor takes the known good core design and re-uses it "as is", just adding their own IO and stuff around it on the same chip. Re-use which rather reduces the risk of failure. Obviously someone has to be first to prove any particular core/SoC combo actually works right. It's a tradeoff, as most engineering decisions are. Most sensible engineering decisions are based around things other than faith.
... cheap ARM chips to undercut Intel. But this could easily turn into a cosy duopoly. What needs to happen is for the users, that is Apple, Google/Alphabet, Amazon etc., to get together and create an open source design. Then the chips could be turned out by all and sundry, pushing the costs down.
"this could easily turn into a cosy duopoly. "
Not that easily.
"the chips could be turned out by all-and-sundry pushing the costs down."
You mean like what has been happening with ARM and ARM licensees for the last couple of decades or so? Softbank as new owners may change that, but Softbank have promised to play nice, so what could possibly go wr
bus error (x86 core dumped)
They buy Intel because there's a platform around it. It doesn't matter if you have an x86 processor from Intel or AMD or Cyrix, and it doesn't matter if you have a PC from Dell, Supermicro or IBM. You can use the same OS image everywhere.
Unless there is a decent, stable common hardware platform, ARM will not get into the PC or server business. Nobody there can tolerate being limited in what OS you can use.
Everything Linux works on ARM. It's really no hassle to swap to ARM Linux for almost everything.
All Java, Python, Perl, Ruby and C/C++ code just needs recompiling, usually.
The platform is definitely there. And tested and running for years thanks to Android and rpi and domestic routers and switches.
The platform is not x86; for most code the platform is Linux.
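The point about interpreted and bytecode languages can be sketched in a few lines. This is just an illustration: the same script runs unmodified on either architecture, and `platform.machine()` simply reports which CPU family the interpreter itself was built for:

```python
import platform

# The very same Python (or Perl/Ruby/Java) source runs unmodified on x86_64
# and aarch64; only C/C++ programs and native extensions need a recompile.
arch = platform.machine()
print(arch)  # e.g. 'x86_64' on a PC, 'aarch64' on an ARM server
```

Which is the crux of the argument: if your workload is mostly scripts and JVM bytecode on Linux, the CPU architecture underneath is close to invisible.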
all Java, python, perl, ruby, C/C++ just needs recompiling usually
And there's the rub. Intel have spent a lot of effort on developing a compiler that will produce really good x86 code - it's better than GCC & CLANG/LLVM for a lot of things. An ARM server ecosystem really needs an equivalent highly optimising compiler.
"Everything Linux works on Arm. [...] The platform is definitely there. And tested and running for years thanks to Android and rpi and domestic routers and switches."
If you say so, but this guy reckons that's a huge steaming pile of crap: -> http://www.theregister.co.uk/2016/10/10/linus_torvalds_says_arm_just_doesnt_look_like_beating_intel/
I have to say that my own experience of running Linux is more in line with his. I have several ARM-based machines and my options for putting the latest Linus kernel in each of them are (i) ha ha ha ha, (ii) learn about this particular machine's boot-loader, grab the vendor's kernel from a git repository, figure out what kernel options are required, try some builds and debug the results.
On x86, the process is: download an ISO, burn the CD (or write the stick) and boot. Job done.
"On x86, the process is download an ISO, burn the CD (or write the stick) and boot. Job done."
That's the process *now*, on a PC. It's that way not because of x86 (remember the days when DOS applications on x86 frequently had to be hardware-specific?) but because PC vendors (hardware and software) worked together to define common specifications so that OSes could be somewhat hardware-independent. Think back to the days of (e.g.) the Lotus Intel Microsoft spec for additional memory beyond 640kB, to VESA graphics for anything better than 640x480, to PC98, generic sound cards, CD drives, and so on.
Absolutely nothing to do with x86, absolutely everything to do with wanting to sell more stuff that works with DOS and Windows and doesn't need to be platform-specific. ARM's people can do that too.
The equivalent standards for ARM are being quietly worked on, and when ARM-based kit looks like selling enough relative to x86-based kit (e.g. as it already has done for years where DOS and Windows aren't a necessity), there will be harder times ahead for Intel's x86 product line.
I don't care *why* is it that way for x86 now. I am merely pointing out that it *is* and that those equivalent standards for ARM are only "being quietly worked on" and not "working". As and when the ARM folks get their shit together I look forward to the "harder times ahead" for the x86.
"You can use the same OS image everywhere."
Really? Has something changed recently (e.g after Windows 7)?
My experience was always that you can use mostly the same reasonably current Windows applications everywhere, but unless something has recently changed, you *cannot* deploy an arbitrary Windows image on an arbitrary x86 desktop, laptop, server, etc. and expect the OS to work right. Frequently the OS will not work at all when moved from system to system without extra-special precautions in advance, and therefore the ability to use the same x86 *applications* is rather pointless.
Correction very welcome.
"you *cannot* deploy an arbitrary Windows image on an arbitrary x86 desktop laptop server etc and expect the OS to work right."
Well, that's actually a problem with recent (since 2000) versions of Windows. With other operating systems, or even Windows PE (the version the installer runs on), this is no problem at all.
"a problem with recent (since 2000) versions of Windows."
Are you serious?
To the best of my recollection, Windows 98 and Windows NT (3.1?), to name but two, both had the non-transportable installed OS image issue, which would be mid to late 1990s. I.e. the OS as installed generally reflected the hardware present at install time (apart from anything else, there usually wasn't enough disk space to keep *all* the available drivers on the system; users wanted that space for more important things in the 1990s).
Windows PE isn't an OS, it is (as you clearly recognise) little more than an extended boot loader, a grown up version of DOS that can run a subset of Windows apps and drivers, and (once again) all it needs is to support the hardware present at install time.
Or maybe I'm misunderstanding something?
Erm.
Cheap chips? Not Intel's market. Unless you mean useless cut-down crap like celerons or atom stuff. Not where they're banking on much market. They've gotten stomped in phones and comparable classes of stuff, but pretty much can name their price in the high end. (And they do, good lord. Look at some of the Broadwell-E and the like.... sheesh!)
"pretty much can name their price in the high end"
And Intel have to make sure that carries on, because when it stops being the case, there's nothing much (except a cash mountain) to fund the ongoing R+D and to keep the previous generation mass market x86 chips affordable. And that's a death spiral even with a cash mountain like Intel's, as IA64 demonstrated all too clearly.
10nm is frankly amazing!
At IEEE Spectrum ("Leading Chipmakers Eye EUV Lithography to Save Moore's Law"), on EUV lithography (which uses 13.5-nanometer light), we read:
Today, GlobalFoundries uses triple patterning when it makes its 14-nm chips, the most advanced ones currently created in Fab 8. This means that, for certain critical layers, a chip takes two extra passes through a scanner—and every other tool that is used to make those layers. And the company anticipates going to quadruple patterning at 7 nm, its next chip generation, says George Gomba, who is leading the company’s task of evaluating the technology at a SUNY Polytechnic Institute facility in Albany, along with colleagues from IBM.
For now, GlobalFoundries plans to roll out its 7-nm chips in 2018 without EUV, but it is reserving the option of pulling the technology in when it is ready. A key question for Gomba and his colleagues is when the cost of EUV will break even with multiple patterning. And it’s a very tricky question to answer because it depends on a number of unknown factors, including how bright EUV light sources will become and the uptime of an entire EUV lithographic system—the percentage of time it’s actually available to be used.
So, how good will the production capacity for 10nm be in mid-2017 (for a rollout in 2018)?
EUV is not yet fully ready. Maybe in 2018+.
(Compare with the optimistic prediction of "EUV Chipmaking Inches Forward: ASML says extreme-ultraviolet-light machines should be bright enough for commercial production by 2015", from July 2013.)
TSMC isn't using EUV for 10nm or even 7nm, so issues with it won't affect their rollout. They are ready to go full speed ahead with mass production of 10nm early next year. They will have risk production of 7nm next April, with full mass production expected in early '18.
I counted *about* 61 pins by 53 pins on the sides (rough). I'm quite scared to multiply these two numbers together, even accounting for the missing pins in the middle.
Ref: https://www.qualcomm.com/news/onq/2016/12/07/meet-qualcomm-centriq-2400-worlds-first-10-nanometer-server-processor
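For the brave, the multiplication is quick enough; the 20 x 20 empty patch in the middle is a pure guess, just to show how little the missing lands change the order of magnitude:

```python
# Back-of-envelope land count from the package photo: a ~61 x 53 grid.
rows, cols = 61, 53
full_grid = rows * cols
print(full_grid)                   # -> 3233 for a completely full grid
# Subtracting a hypothetical 20 x 20 empty patch in the middle still leaves:
print(full_grid - 20 * 20)         # -> 2833
```

Either way, it's comfortably into the thousands of lands.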
"This pressure squad lures and cajoles organizations that are considering non-x86 gear to forget such fanciful notions and just buy more Xeons"
yeah, IBM used to behave like that. Then the monopolies people came calling... then again, when was the last time a government monopolies regulator did anything useful in tech? MS still ties things together, and faced no meaningful sanction for years of market abuse...