This is great news...
The CPU market has long needed rebalancing away from Intel, and whilst AMD has made Epyc steps (see what I did there), this can only expedite the process.
Intel has issued an open letter apologizing to punters and partners alike for its inability to meet demand for processors. The missive [PDF], penned by Michelle Johnston Holthaus, executive VP and general manager of Chipzilla's sales, marketing and communications group, says it still is struggling to get enough microprocessors …
I somewhat recently got a Ryzen CPU (6 core) and motherboard to build a new workstation. All good so far, running FreeBSD 12-STABLE on it, an "all ZFS" system. SMT (AMD's take on hyperthreading) gives me 12 'cores' and makes compiles go very, very fast.
Intel can't compete with that price-wise. Looks like they actually stopped trying!
Where I am currently employed, I have decided to move to AMD for our servers. We are currently refreshing some of our older virtualisation servers and creating new AMD clusters; they currently offer far better value for money and lower power consumption.
Good that AMD has a chance again. They need to ensure they capitalise on this: get money flowing in, grow their installed base, and rebuild their reputation.
Then we can keep the competition going.
What hurt AMD last time was not corporate incompetence but illegal Intel tactics. I well remember, when the Athlon 64 was first released, how motherboard manufacturers would not put their name on an AMD motherboard for fear of Intel. At one time, Intel paid Dell billions just not to sell AMD. And there are more examples. Do not put it past Intel to play dirty again.
When Opterons were new, they were quite something. Keep in mind that Intel did not even have x64-compatible chips back then (quite natural, since this was an AMD innovation). I don't recall how they did in 32-bit benchmarks, but if memory serves me right, Intel didn't regain traction until they coughed up the Core 2 Duo architecture (I believe the Xeon version arrived later still). Intel's Core design completely ditched the power-hungry Pentium 4 architecture. Up until that point, Opteron and Athlon 64 were the best CPUs for most users.
It is quite likely that Intel will bounce back this time as well, but in the meantime: let's just sit back and enjoy the competition. :) AMD pulled us out of the 4-core rut that Intel put us in. It will be interesting to see what 16 cores on the desktop will bring us. (I'm still "stuck" with an 8-core 2700X and loving every minute of it.)
I have used AMD for ESXi for several years, as they enable the necessary CPU hardware virtualisation instructions on almost all their chips. If you wanted to, you could run ESXi on a Ryzen 2400G or even an FX-8350 Bulldozer; not saying you should, but it works absolutely fine.
Intel were disabling these features on a lot of consumer-grade CPUs, and you had to pay a lot more for the ones that supported them.
Anyway, it's Ryzen 2600s in my boxes now; I can afford three for the same price as the Intel offering and have spares ready to go. 32 GB RAM each and a reasonable NFS QNAP storage server that gets replaced every 2 years. Still not had one fail, though.
"Essentially, this confirms our previous reporting that Intel shortages are set to run into 2020, as the silicon slinger focuses on churning out high-margin, high-end Xeon server chips as a priority over low-margin desktop and small-server parts, and shifts assembly lines from 14nm processes to 10nm and 7nm."
Reading between the product announcements/products shipped, 10nm is capable of making low clock speed (sub-3.5GHz, maybe less), low power CPUs in low volume. It's not going to fundamentally address any shortages before Intel's 7nm process is available. So only very limited innovation in the desktop space until 7nm.
7nm is targeted for late 2021/early 2022 for CPUs, and supposedly they will lead with a GPU in 2021 rather than CPUs.
The parts Intel is making on 14nm+++ are now reminiscent of the P4 Extreme Edition era, where Intel's architecture hit a brick wall but their process technology still allowed them to innovate a little and stay within reaching distance of the competition. This time around, Intel's innovation has produced 14nm++ and 14nm+++, but there's no working process to get them to the next step yet.
I guess the excuses will continue to get more and more creative.
Meanwhile, in the world of AMD, TSMC's 7nm process is producing the speeds and feeds of Zen 3. If Intel do ever get their 7nm process working, they might have a chance to keep the market, but with ARM and RISC-V both producing for the enterprise, and making gains on the speed and power side, unless they sort it soon-ish they are going to have to resort to the dirty tricks box again.
It's about damn time AMD got its moment in the sun. Intel has cast a shadow over that very capable chip maker since forever. I am happy that AMD is finally going to be able to eat part of Intel's lunch. Intel will survive, but this is going to help AMD surge ahead like never before.
Competition for the win !
News for you: AMD used to lead Intel on private PCs; everyone had a K6 clocked to 2GHz!! Gaming on a Voodoo with "OC drivers". Then AMD really shined when on the Athlon you could solder 2 little pins to get SMP!!
All of that will sound foreign to younger people. Hey, did you know at one time the internet wasn't a nuke to privacy?
Yep, it wasn't until Intel hit their stride with the Core 2 Duo that they started to take back control of the desktop estate.
Historically, AMD operated as a "clone" chip manufacturer, competing with Intel's 586 (Pentium) chips and utilising the same socket as Intel. Intel started moving to a proprietary socket (Slot 1? The sideways one) to try to shut down AMD as a competitor, and that ironically forced AMD into their own architecture and chipset business, which went on to dominate for close to a decade.
When did they start the investment in new fabs and how long does it take from the start of the investment in a fab to producing chips in volume?
It could be well past 2021 until their supply problems ease.
By then the "buy Intel as it is the safe choice" thinking could be replaced by "buy AMD as it is cheaper, better and available".
Icon for Intel's senior management ====================>
Three years is a good estimate: about two years, plus whatever the lead time is for step-and-scan systems from ASML, which is 6+ months and usually around a year for Intel/TSMC/Samsung but varies as orders fluctuate. While you wait for the equipment, you can do the preparation work (building, AC, power etc.), but that is unlikely to affect the time frame.
You then have equipment installation, cleaning, testing and finally production. Installation and cleaning will take around a year; all going well, testing and production will take 6 months each, which is basically double the length of time it takes to get a wafer through from start to finish.
If you find significant issues (i.e. contamination, a process issue, or design rules that need to be addressed), the three-month wafer run starts again. There are many additional tests run while the wafers make their way through all of the processes to ensure success, but they are focussed on ensuring test/production runs are successful rather than necessarily adding time to the process.
Hugely simplified of course, and it glosses over many of the tests used to try to quickly identify issues before test runs discover a problem, but they rarely add to the timeline.
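To put rough numbers on the stages above (all durations are the ballpark estimates from this thread, not vendor figures), the arithmetic works out like this:

```python
# Rough fab ramp timeline, using the estimates given above.
# All figures are approximations from the discussion, not vendor data.
stages_months = {
    "ASML step-and-scan lead time": 12,  # "usually around a year"
    "installation and cleaning":    12,  # "around a year"
    "testing":                       6,
    "initial production ramp":       6,
}

total = sum(stages_months.values())
print(f"total: {total} months (~{total / 12:.0f} years)")  # total: 36 months (~3 years)
```

Building, AC and power work happens in parallel with the equipment lead time, so it doesn't add to the total.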
"By then the "buy Intel as it is the safe choice" thinking could be replaced by "buy AMD as it is cheaper, better and available"."
Don't forget "safer against Spectre, Meltdown, etc." At least, that's my understanding; didn't they basically say AMD processors don't have those particular problems?
Spectre affects both Intel and AMD (and some other) processors. Meltdown affects Intel (and some others) but apparently not AMD designs which use "privilege level protections within paging architecture" (Wikipedia).
...and both Intel and AMD force low-level, closed-source operating systems on us, running underneath the main CPU (Intel ME and AMD PSP). Both have a CVE record that is not very impressive, to put things mildly.
If you really want security, look into the open ISAs. OpenPOWER is even in the running now for BSC LOCA -- seems they've had a belly full of x86 and its high handed tactics.
IME and PSP are implemented differently: IME as a way of providing remote access AND a trusted enclave, while PSP only looks to provide a trusted enclave. PSP is an x86 equivalent to Samsung's Knox or Apple's SEP.
Intel's IME approach was aimed at addressing practical issues, but was not secure and is best avoided. For the others, you have two options:
- CPUs without a way of verifying trust depend on all the code they run being trustworthy, from the initial bootstrap to the OS to the application.
- CPUs with some form of secure or trusted enclave allow the CPU to verify the bootstrap code, and potentially the OS and applications, for things such as disk/memory encryption.
Both approaches are vulnerable depending on who you trust. Assuming the manufacturer is trying to protect you, secure enclaves should provide some additional security if you trust the hardware manufacturer and the OS provider, but they can also be used to lock you out of functionality you may wish to use (e.g. early UEFI restrictions).
PSP is not just a secure enclave -- it manages large parts of the boot process and has complete low level access underneath the OS, same as the ME. Just because it's advertised as more of an enclave doesn't mean it isn't implemented similarly to the ME.
Put another way, if it's just an enclave, why can't anyone strip the little bugger's firmware off their AMD system and still have the rest of the system boot (sans enclave)?
This is just Palladium Mk II, embedded edition. Given that neither Intel nor AMD will post any financial guarantee of customer data integrity or security for that firmware, and in fact like to hide behind firmware as being patchable therefore not guaranteed to be correct, I simply have no reason to blindly trust either of them or their megabytes of embedded, black box, unremovable operating systems.
Meltdown exploits a weakness in Intel's design that allows a small number of instructions to continue executing with no further protection checks in the event of certain access faults. It is an optimisation that has been found in very few other CPU designs.
Spectre covers any CPU that performs branch prediction. Protecting against all forms of this attack is difficult, as some of the ways information is disclosed are very subtle (e.g. timing attacks). The practical attacks have been patched, but the general case will likely remain an issue for the foreseeable future with classical CPU designs.
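For anyone curious how a cache-timing leak actually recovers data, here's a toy Python simulation of just the flush+reload probing step. A real Spectre attack needs speculative execution and a high-resolution timer, neither of which is modelled here, and all the names are made up for illustration:

```python
# Toy model of the flush+reload cache side channel used by Spectre PoCs.
# The "victim" touches one cache line whose index depends on a secret byte;
# the "attacker" never reads the secret, only observes which line got warm.

CACHE_LINES = 256          # one probe line per possible byte value
cache = set()              # simulated cache: indices that are currently "hot"

def flush():
    cache.clear()          # attacker flushes all probe lines

def victim_touch(secret_byte):
    cache.add(secret_byte) # victim's (speculative) access warms one line

def attacker_probe():
    # In a real attack this is a timing loop: cached lines load faster.
    for i in range(CACHE_LINES):
        if i in cache:     # stand-in for "access time below threshold"
            return i
    return None

secret = 0x2A
flush()
victim_touch(secret)
recovered = attacker_probe()
print(f"recovered byte: {recovered:#x}")  # prints: recovered byte: 0x2a
```

The subtlety in practice is that the timing difference is tiny and noisy, which is why mitigations focus on cutting off the speculative access rather than the probe.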
"Spectre affects both Intel and AMD (and some other) processors."
Spectre is not a bug. It is a class of bugs. Some Spectre bugs impact both AMD and Intel. Others affect Intel but not AMD.
On balance, Intel's exposure to the Spectre class of bugs is currently greater than AMD's, and the performance impact of the workarounds, across both Meltdown and Spectre, has so far at least been significantly higher on Intel than on AMD. Since Spectre is a class, and members of this class are still being found, it's possible that future members could impact AMD more - or not.
All content management systems are as bad as each other.
It's just sloppy work to let someone else do the work for you.
Drupal, Joomla, Magento, WordPress: they are all full of holes. If you compare the CVE counts, they're roughly proportional to popularity (i.e. how much effort is being put into breaking them).
"Its just sloppy work to let someone else do the work for you."
If you use anything more abstracted than machine language, you already do that.
"Drupal, Joomla, Magento, Wordpress, they are all full of holes, if you compare the cve count, its roughly relative to the popularity (i.e. how much efort is being put into breaking it)"
Fallacy of the false alternative. The alternative of you writing hole free code by yourself actually isn't available to you. You just think that it is.
The Platinum 92XX range is basically just 2 x 82XX glued together. Sure, it doubles performance, but it also doubles power/cooling requirements, so it's both expensive and fulfils a very niche requirement. On the mobile side, lowering clock speed and increasing the size of the GPU (which can easily have parts fused off to address defects) also gives Intel a premium product.
Intel would like to be able to produce mainstream mobile/desktop/server CPUs that were competitive, but they are at a significant disadvantage to AMD, and many of their planned performance upgrades require the density increases that a process shrink provides. And that, Intel don't have. What they do have is hand-picked parts that clock much higher than average and can be sold at a premium.
So Intel has the choice of sitting out the 10nm game (functionally equivalent to TSMC/Samsung 7nm) and waiting for their own 7nm parts (to compete with the functionally equivalent TSMC/Samsung 5nm), or producing "unicorns" in the meantime.
I can understand it being viewed as chasing the biggest profit margins, but look at reviews of 9th-generation CPUs vs 8th-generation CPUs. The only real difference is cores and a small clock speed bump, and nowhere near the core counts the competition is offering. All the surrounding I/O, memory speed, etc. are unchanged, because Intel can't do much more than they have already done. Until they have 7nm and can increase their component counts...
...the Intel alternatives in the same price range as the Ryzen 5 (1600 and 1600X then) were 10% less powerful and 10% cheaper, but their motherboards were 2x or 3x more expensive. I would spend a bit less on the chip, and several times more on an Intel motherboard. FROM THE SAME VENDOR.
The motherboards were either ASUS or Gigabyte, and in BOTH, the Intel mobo was absurdly more expensive, offsetting any cost advantage on the CPUs.
...as the silicon slinger focuses on churning out high-margin, high-end Xeon server chips as a priority over low-margin desktop and small-server parts, and shifts assembly lines from 14nm processes to 10nm and 7nm.
Hey, it at least benefits *both* of my brothers, who each work for companies that make chip-fab machines (that's right, two different companies).
In my 20+ years of building my own computers (and friends' and relatives' computers) I have always used AMD. I just couldn't justify the cost premium of Intel processors. The AMD processors have been rock solid. The only processor I was ever not fully satisfied with was the FX-8350, which did not outperform the Athlon II X4 it replaced. My latest build is a Ryzen 2600X, which benchmarks at about twice the CPU score of the FX-8350. And yes, I am staking my claim in the AMD FX processor settlement ...
The Ryzen architecture is a real game changer.
Yield is process dependent. Change the process node and you create a new process; you can compare yields between processes only by making some statistical assumptions, and having an 80+% yield on a mature 14nm node does not guarantee success on a smaller node.
In addition, Intel's problems at 10nm are the result of taking too many risks (easy to see in hindsight; it wasn't so obvious at the time). While this leads to low yields (reportedly single figures, climbing to ~30% after 2 years for chips with defects - defect-free chips are rare, although new equipment may address some of those issues), a similar process using newer equipment and fewer quad-patterning steps at TSMC is reportedly producing yields similar to their 12nm/16nm processes at a similar age (i.e. >65% and improving).
If I were to speculate, I would suggest that Intel are more likely to have a successful 7nm process than their competitors as they have already discovered the techniques necessary to produce smaller process nodes while attempting to get usable 10nm parts. That is just speculation...
For the first proper PC I built for myself, I used an Athlon XP 3200+. It worked out cheaper than the Intel equivalent at the time.
I outgrew it and ended up with a Core 2 Duo which later got upgraded to a Core 2 Quad equivalent Xeon. Now I'm running an i7-6700K as AMD weren't competitive 3 years ago.
For customer builds, I really want to buy more AMD, but a couple of niggles keep steering me back to Intel:
1. Stability. It's taken a whole year's worth of BIOS and AMD chipset driver updates to yield stable performance for serious work like AutoCAD. And I've still got my fingers crossed that we're out of the woods now.
2. Reliability. In my 35 years with computers, I've seen more completely dead AMD motherboards than I have Intel. And that's despite AMD having less market share over the years.
3. IPC. AMD still lags Intel when it comes to single core performance and raw clock rates. The gap is much narrower now with Ryzen so I'd be prepared to look past this were it not for the more serious points 1 & 2 above.
I also can't help but wonder why so many WiFi adapters don't maintain connections as stably on Ryzen boards as they do on Intel.
Don't just take my word for it, look at some reviews of Ryzen based laptops.
I know AMD have really done a lot to catch up with Intel when it comes to bang per buck and I really want AMD to do even better. I just hope they can improve on the areas I've mentioned above, because these really are crucial factors for long term success.
Thus always with AMD.
They usually produce really excellent hardware and then cripple it with firmware/software written by a drunken hobo smashing a keyboard with his empty wine jug.
It made the pairing with ATI seem more than appropriate as they also never produced a driver that didn't contain at least one major fault.
If the supporting software by some miracle actually proves viable, the only solution is to farm the chipset production and BIOS off to a different company so they can destroy whatever actually worked in AMD's reference build.
I'm also dropping Intel for AMD, for the first time since the Celeron 300 came out, but I'm waiting to see which motherboard maker comes out on top for reliability first. Also, what's with the pricing? X570 is about 20% higher than Z390 from the same maker, same model type... Is the shitty chipset fan really worth that much?
It isn't just the fan. X570 gives you PCIe 4.0 support. As I understand it, PCIe 4.0 dictates better-quality trace routing on the motherboard for the PCIe lanes.
I believe newer motherboards have made a stronger effort to support higher RAM speeds as well.
A little tip: if two memory DIMMs are sufficient for your memory needs, go for a smaller motherboard with only two sockets. It is cheaper, and the simpler board layout should help if you feel inclined to overclock your memory. Since the memory clock speed affects the Infinity Fabric clock speed, there are some gains to be had here. (https://www.techpowerup.com/review/amd-ryzen-memory-tweaking-overclocking-guide/ is the ultimate guide to Ryzen RAM timings. Granted, it applies to the earlier generations of Ryzen, but the core concepts are still useful.)
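On that last point: DDR transfers twice per clock, so the memory clock (MCLK) is half the DDR rating, and on Ryzen 3000 the Infinity Fabric clock (FCLK) runs 1:1 with MCLK up to roughly 1800 MHz, which is why memory overclocking pays off. A quick sketch of the arithmetic (the 1800 MHz 1:1 ceiling is the commonly reported practical figure, not an official spec):

```python
# Relationship between DDR4 rated speed, memory clock and Infinity Fabric
# clock on Ryzen 3000 (Zen 2). The ~1800 MHz 1:1 ceiling is the commonly
# reported practical limit, not an official specification.
FCLK_1TO1_CEILING_MHZ = 1800

def fabric_clocks(ddr_rate_mt_s):
    mclk = ddr_rate_mt_s / 2            # DDR: two transfers per clock
    if mclk <= FCLK_1TO1_CEILING_MHZ:
        fclk, mode = mclk, "1:1 (coupled)"
    else:
        fclk, mode = mclk / 2, "2:1 (decoupled, adds latency)"
    return mclk, fclk, mode

for rate in (3200, 3600, 4400):
    mclk, fclk, mode = fabric_clocks(rate)
    print(f"DDR4-{rate}: MCLK {mclk:.0f} MHz, FCLK {fclk:.0f} MHz, {mode}")
```

This is why DDR4-3600 is the usual sweet spot recommendation for Zen 2: it's the fastest rating that keeps the fabric coupled 1:1.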
"10nm is capable of making low clock speed (sub-3.5GHz - maybe less), low power CPUs in low volume. Its not going to fundamentally address any shortages before Intel's 7nm process is available. So only very limited innovations in the desktop space until 7nm."
This ~universally recited dogma is very wrong.
Intel have many problems, an important one is being at a process disadvantage to AMD, but they could all evaporate tomorrow, & their outlook would remain grim.
Their hoped-for 7nm "response" in 2022 does not compete with TODAY'S AMD CPUs on the inarguable metrics that decide sales: IO lanes, bandwidth, RAM capacity, cores, efficiency, heat dissipation, costs, ...
What Intel lack is far more fundamental than a mere fab process, and there is little hope of them having it in a realistic time frame to recover dominance, no matter what their R&D budget. Processor validation, for example, cannot be rushed and takes years.
Their problem is architecture. They have no response to AMD's Infinity Fabric and its very cost-effective family of compatible modular co-processors.
AMD have effectively changed the duopoly playing field to one where they are just getting into their stride as Intel begins to need expensive compromises to boast similar metrics, at and beyond 8 cores.
7nm, IF they execute OK this time in 2022, will not suddenly mean they have a 64-core answer to TODAY'S 7nm Rome Epyc, let alone AMD's 2022 lineup.
The only true solution is to completely demolish and rebuild, yet they barely even acknowledge the problem. They can't. Wall Street & others won't allow it. It is an admission that vast sums for IP & goodwill on the books are worthless.
The good news is it needs doing anyway - to fix their unfixable security holes.
To match the current AMD high end processors will require Intel to adopt the chiplet design of the AMD processors. There is a limit to how large a chip can be made before the manufacturing yields become unacceptably low. To match the current EPYC 7702 with its 64 cores and 128 PCIe lanes using the current monolithic Intel design would make a chip so large that it would be unprofitable to build (as so few working chips would be obtained per wafer). This will involve a complete redesign from the current CPU designs. I assume that they have already started this process - however it is likely to take years from start to working designs. When the new designs are ready depends on when they started and on how good their design teams are.
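A back-of-the-envelope illustration of that yield argument, using the standard Poisson defect model (yield = exp(-area x defect density)). The die sizes and defect density below are made-up but plausible numbers, not actual Intel or TSMC figures:

```python
import math

# Poisson yield model: fraction of defect-free dies = exp(-area * defect_density).
# All numbers here are illustrative assumptions, not published foundry data.
DEFECT_DENSITY = 0.1      # defects per cm^2 (assumed)

def poisson_yield(die_area_cm2, d0=DEFECT_DENSITY):
    return math.exp(-die_area_cm2 * d0)

monolithic_area = 8.0     # hypothetical 64-core monolithic die, cm^2
chiplet_area = 0.8        # hypothetical 8-core chiplet, cm^2

mono_yield = poisson_yield(monolithic_area)
chiplet_yield = poisson_yield(chiplet_area)
# A 64-core package needs 8 good chiplets, but defective chiplets are
# discarded individually, so cost tracks per-chiplet yield, not yield**8.
print(f"monolithic 64-core die yield: {mono_yield:.1%}")    # 44.9%
print(f"single 8-core chiplet yield:  {chiplet_yield:.1%}")  # 92.3%
```

Eight small chiplets binned individually waste far less silicon than one huge die, which is the economics behind EPYC's chiplet design.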
By the time that Intel has effective competition for the current EPYC range, the new EPYC Milan range should be out for over a year.
As AMD is not locking off features in its EPYC chips (with the exception of the P range being single-socket only), by the time Intel has effective competition it may find that the market will no longer accept its artificial product segmentation (e.g. ECC on Xeon CPUs only).
A little discussed aspect of this, is the potential for ructions in the AIB/OEM industry.
Those rats too slow to abandon the sinking Intel ship will suffer the financial pain of no 10nm as planned & repeatedly promised, and lose market share to more fleet-footed OEMs able to deliver what customers are demanding.
Biting the hand that feeds IT © 1998–2019