Re: Well well well
Spoof or otherwise, I guess we'd better clean up the language. :P
"Spoof or otherwise, I guess we'd better clean up the language. :P"
You beat me to it !
I really enjoy reading about Cocke's work and the machines he worked on - fascinating beasts. H&P's contributions have become so ubiquitous that I take them for granted. Nice to see H&P getting some recognition - they changed things for the better in a big way. Perhaps RISC-V will clear out the last vestiges of CISC. :)
The big expensive chips still come in socket format... Although I imagine Intel will be changing the socket again...
Fair play on running OpenBSD on an O2. :)
"by accusing someone who disapproves of Mrs. Clinton of being a 'couch misogynist', you are making a ridiculous accusation, and in so doing, are basically 'crying wolf' with the 'misogyny' label."
The record will show that I made a wisecrack in the form of an unpleasant leading question. The accusation was implicit; an answer could have been given showing that misogyny was not the root cause. So far we've had assertions that women don't get shot and that misogynist views & behavior are not a factor in these attacks.
Your essay seems to be a roundabout exercise in denying that there is a problem because (in your view) some folks cry wolf too much. Fair comment - but I think there is an equally strong case that folks have accepted public acts of misogyny for so long that they are desensitized to it or simply in denial.
"So Giffords, who was shot by a grammar nutter who was concerned about how she wouldn't answer his questions about English Language usage is somehow a specific attack on women. "
That's half the truth, leading to an unsupportable conclusion. The whole truth is that Bryce Tierney (the original source of that claim) also mentioned that Loughner asserted that women should not hold positions of power, and spent several years attacking her amongst his circle of friends in the run-up to the attack.
Glad you agree that female politicians get shot, and it seems you may agree to some degree that politicians get shot as a side-effect of the over-the-top partisan foul-mouthing that goes on.
"I'm sure your OK with that -"
FWIW I'm not OK with anyone getting shot for stuff they aren't responsible for or have no control over.
"And attacking Trump the way you and others do DOES get people shot - like Scalise for example."
That would not be a valid excuse for anyone to behave badly, I reckon we could make more progress by rising above this divide and conquer bollocks.
The stuff I have *personally* posted in relation to Trump has also been placed in the public record by medical and law enforcement professionals acting in their professional capacity. Their statements carry more weight and have more evidence to support them than the cheapshot one-liners targetting an unsuccessful presidential candidate from pseudo-Anonymous posters on El Reg.
"Did you notice the nick?
He's either trying to be ironical and not signalling very well, or describing himself."
In fairness neither of those came to mind when I looked at the history of posts. Probably just another lost soul like the rest of us.
"Literally everything you have said is a lie."
Plagiarizing Trump won't magically unshoot female politicians such as Jo Cox & Gabrielle Giffords. You don't have to look very far to find other examples.
"Do you need a safe space?"
Everyone needs a safe space in order to thrive. Case in point: folks who live in war zones are more likely to die or get wounded than folks who stay out of them. I am no different, and I reckon you are no different in that regard.
That was a nothingburger of a post served with a side of women hating.
A number of (female) politicians have been attacked and shot by nutjobs who cite the same bullshit and name-calling you are peddling as justification for their attacks on women (fatal and otherwise).
Will you be appearing on the News anytime soon - or are you a couch misogynist ?
"Basically, if there's a way for a human to interact with it, there's probably a way to pwn it, and that's true even of black boxes."
There are degrees of badness.
"Basically, if there's a way for a human to interact with it, there's probably a way to pwn it, and that's true even of black boxes."
I do share your assertion that no box will ever be secure, but I don't see security as an end goal. It's a continuous process, one where you will usually be one step behind, and it can appear that you are so far behind that all your efforts are futile... However I derive some satisfaction in finding a good fix for a vuln and making my patch a little bit better than it was yesterday. :)
I see the whole vuln-discovery + vuln-remediation cycle as an opportunity to broaden my knowledge of the systems I work with/against.
That said, I do admit that I do find the burden of supporting / using the crud I deal with on a daily basis pretty horrible...
I confess to feeling overwhelmed by the weight of despair that descends upon me when I find another JVM with wide open JMX sockets lurking in a dark corner, or some 9.7 CVSS score vuln in the 9000000 zillion .jars that Spring has pulled in because someone wanted to create an instance of an object using XML rather than a simple new.
Minding the C/C++ and Python stuff really is child's play by comparison.
"Seems like turtles all the way down, if you ask me."
Nice to have a choice of turtles better suited to the job at hand though. :)
I welcome the return of our LSI-11/03 console overlords...
It turns out that recent SPARCs are also vulnerable to Spectre attacks... Relatively "easily" solved in this age of multi-core dies... Shoehorn a slow but secure core onto the die and run sensitive code on that core alone. The question becomes whether the user has enough "non-sensitive" code running to make the performance hit acceptable.
Posted like a typical numpty who doesn't understand the words and labels written on their "distract the proles from the clusterfuck going on in the White House" script, handed down to them by folks who really couldn't give a stuff whether their shills live or die - only whether they'll get to keep a few % worth of tax, which would pay a shill's outgoings for a couple of million years.
"What part of Spectre being a hardware bug did you fail to understand? If a chip is vulnerable it doesn't matter what software you are running on it."
It appears to be theoretically possible to defeat those attacks with suitably crafted software, but that's a case of running new binaries - and likely some kind of hit in performance. The big.little boxes out there could run sensitive processes on an in-order processor - and the less sensitive workload on faster OOO cores.
Looks handy to me.
I dreamt of this kind of throughput when waiting for a compile to complete off a Fujitsu Eagle back in the day (shared with 30 other people). Kinda fun to see it happen even if it's not quite the way I predicted... The TaihuLight boxes are hooked up with PCI-Express 3.0, so presumably they have a way to integrate NVMe drives directly into their fabric. Could be a fun OCCAM platform. :)
The PCI-Express 3.0 fabrics remind me of the some of the ideas floated for IEEE1355 back in the day, but much quicker and ubiquitous. It's fun to see (some) things get a lot better despite everything else falling apart. :)
"That would mean everyone would have to get replacements for any and all legacy apps, which is nigh on impossible for many companies.
ditching X86-32 might be a a good solution, it's not a viable option I'm afraid."
Folks were running x86-32 apps on UNIX with SoftPC in the 80s.
Folks were running x86-32 apps on DEC Alphas with FX!32 in the 90s (I found that on a very low-end Alpha PC166 most apps were *quicker* than they were on a PPro-200 - and the stuff that wasn't was only 5-10% off).
There is no technical barrier to emulating x86 at decent speeds in 2018, the only blockers are ignorance, politics and lawyers (licensing).
"That's my understanding too. X86 is basically an emulation running on the RISC core."
I think that misrepresents what goes on. I'm not an authority on the topic, but here's my take on it:
The (CISC) instruction decode stage(s) break the commonly used instruction sequences down into "micro-ops".
Breaking down a multi-cycle 'CISC' instruction into lots of little u-ops and then executing it in parallel with lots of other multi-cycle 'CISC' instructions poses some problems in conveying the illusion to the kernel & user that the instructions are executed in an atomic way... That entire set of quite gnarly gotchas is simply not an issue for a true RISC-style design - by intent and design.
Some operations won't fit into that nice model - and for those we have microcode... Even 'RISC' chips can have microcode to handle the stuff that just doesn't fit. The Alpha had something slightly different called PALcode to handle those cases - where essentially the CPU was using a library of routines with access to implementation specific instructions... The ISA remained clean and it gave the DEC engineers a shot at implementing the machine specific crap in a RISC friendly way while keeping the details hidden from the users...
For a giggle I recommend tracking down all the volumes describing the current Intel x86-64 ISA and then comparing them to the equiv. DEC Alpha ISA reference manual (it's much shorter)... All available for free and locatable via Google... The page count gives you a measure of how much more 'challenging' it would be to validate an x86-64 derivative... If you actually have a crack at digesting both you'll probably give up long before you get through the x86-64 manuals, so I recommend starting with the Alpha first. ;)
"it's not exactly a simple and cheap task to build a high-performance high-security CPU."
Agreed, but folks following RISC design principles find it a lot cheaper and easier than building a fast x86... The design team sizes and benchmark results from the days when RISC vs x86 was a thing speak volumes for that.
"Corporate IT Managers will not order silicon with a known flaw (regardless of the patch) unless they absolutely have to, because people get fired over this kind of serious shit."
Few of the folks making the purchasing decisions read the errata, let alone wait long enough for the showstopper errata to be discovered. Errata such as ECC failures leading to undefined behaviour didn't stop or noticeably delay folks buying the last few gens of Xeon, for example...
"Seems unlikely - a modern hybrid microkernel has several advantages. More likely Linux when needed will run as a plugin to the Windows kernel. In fact you can already do that under Windows 10."
IMO your strengths lie in FUD and bullshitting, best to keep out of the OS design biz. :)
Microsoft have already failed to assimilate POSIX with a plugin approach repeatedly. Running the code on a real UNIX/Linux was always cheaper, faster and more reliable - and didn't require a "porting" effort... MS are best to stick with running Linux under their Hypervisor and be happy that they get an OS license for running workload under someone else's OS.
As for Windows having a "hybrid microkernel" architecture, that is just marketing shite. The point of a microkernel is that the subsystems are isolated. It is fraudulent to attribute the term 'Microkernel' to something that shipped with vulns that allowed TrueType Font rendering to pwn ring0.
I don't mind folks talking up the benefits of Windows, but I draw the line at them rendering useful terms and concepts such as "Microkernel" meaningless by association.
"Why in the world do you think Meltdown is something the NSA etc. would care about? It allows reading kernel data, big deal"
I reckon the NSA should care.
Meltdown can totally compromise the vast majority of desktop/server class Intel hardware out there, it's relatively awkward to fix, it has a very big exploitation window (22 years and counting if the P6 core really is vulnerable to it), it doesn't require much code to implement and it is relatively easy to hide from virus scanners. If they weren't interested they really should consider moving out of the spook biz.
Not really sure why you bothered with the asterisk, Apple don't get a pass because they still shipped vulnerable hardware just like everyone else... :)
"They advertise gluten free black pudding and haggis. Catering to the post-modern psychosomatic illness crowd is a sure way to let standards slip."
Sir, I think you are being somewhat churlish. Another way of looking at it is that the Butcher is making the joys of Black Pudding & Haggis available to all. :)
"do not mock the smoked Grützwurst"
Quite frankly this Black Pudding enthusiast is salivating rather than contemplating mockery...
However I might be tempted to indulge in a bit of mockery if I honestly believed I could convince someone to give up their Grutzwurst - allowing me to swoop in and scoff it before they realised their mistake. :)
"What the f* is "older organizations" supposed to mean? Basically +90% of the Fortune 100 use Oracle,"
The world doesn't owe Oracle a living and it is legacy gear now... The only folks who care enough are wannabe Greybeards tending the grave.
The Oracle fan boys get to know what it felt like for the VMS or OS/400 enthusiasts a couple of decades back - although in fairness at least those products were well engineered and well documented so their day jobs were more enjoyable.
"but the effects would be to significantly slow development"
I suspect Intel's "Tick/Tock" development model, with releases pegged to dates set years before the parts are even developed, contributes to the problem. Intel have been pushing stuff out of the door before it's fully baked to meet marketing deadlines for a while now.
"If it can be shown that intel manglement knew about the bug and yet kept on baking/selling chips regardless then I'd suspect they wont have a leg to stand on"
There are plenty of published show-stopper errata that show Intel doing exactly that over several decades. Customers typically decide that the expense of the lawsuit combined with the publicity that shows their products/services are impacted by it would do more damage than the errata...
"a real lawyer with IT knowledge would have known that there is practically NO SUCH thing as a CPU on the market these days that is not affected by Meltdown and/or Spectre"
A real commentard with CPU architecture expertise would know that there are CPUs on the market that are not affected by those bugs... :)
"A whole new architecture was already tried."
Indeed, many many many times over, and I suspect it'll continue for a while yet as the wheel of reincarnation makes another revolution... With respect to your close relative, they should be paying attention to the folks in the ocean-boiling business: the #1 HPC system uses a fairly unique CPU architecture - and it has been delivering better FLOPS/W (YMMV) than its competitors running state-of-the-art Intel + GPU combos for some years now...
Sometimes folks using different tools get better results...
As it turns out (and in fairness to Intel) I did actually find the Core 2 Duo errata Theo referred to back in 2007 after a bit more fiddling around with search criteria...
The closest issues to Meltdown that I found (maybe someone smarter can find more) were AI56, AI91 and AI99:
AI56 "Update of Read/Write (R/W) or User/Supervisor (U/S) or Present (P) Bits without TLB Shootdown May Cause Unexpected Processor Behavior"
AI91 "Update of Attribute Bits on Page Directories without Immediate TLB Shootdown May Cause Unexpected Processor Behavior"
AI99 "Updating Code Page Directory Attributes without TLB Invalidation May Result in Improper Handling of Code #PF"
"Seems Theo was looking at this a decade ago so I guess OpenBSD is already okay."
AFAICT those OpenBSD fixes related to an unpublished change w.r.t bits of page table being cached when previously they were not. I think it would be dangerous to assume those fixes also cover Meltdown.
The points Theo made about the errata preventing people from implementing secure software remain valid.
As I've said before folks really should look at the errata before purchasing a CPU - it is shocking just how broken some of them really are. That won't always help though - case in point try tracking down all the errata that Theo talked about (eg: AI90) 10 years ago... You may well struggle - because Intel's policy is to unpublish errata after they've made a fix/spec change... If anyone does find those errata - let me know. ;)
Your bafflement is entirely justified.
Intel are very lucky that their unique-to-them and trivially exploitable Meltdown bugs are being conflated with Spectre; they should be getting an extra roasting for that one.
In terms of Spectre that seems to be a very generic label for a bunch of quite different vulns when you dig into what info is leaked and how you would exploit them usefully.
"Lots of fundamental development process rethinking required in the semi-conductor world required...."
Broadly agreeing - but I don't see this as an industry wide problem. There are plenty of well established tools and techniques in place that would catch this kind of thinko - but they all require a precise, complete and self-consistent definition of how the chip is meant to work. The x86 doesn't have such a definition in the public domain, and given the nature of the errata over the years there is plenty of evidence that Intel doesn't have one (or make use of one) in their design process either.
If that is by design then they have intentionally broken backwards compatibility with their in-order CPUs... Well played Chipzuki.
Apparently early Atoms before they decided to bless them with OOO execution are OK.
"Ps AMD never copied Intel, the had tp do a "clean room" to do the microcode themselves."
That's flat out wrong. :)
Back in the day various entities such as the "Defence" contractors and big vendors required a chip to have a 'second source' vendor. AMD entered into a licensing agreement with Intel to be the second source for x86 parts - thereby enabling Intel to tender for those contracts. At one stage AMD were literally given a set of masks by Intel, and AMD used them to punch out identical parts - so strictly speaking they did in fact copy the Intel parts, but quite legally as per their second-sourcing agreements.
As time has gone by AMD did some tweaks (eg: faster 286s, 386s, 486s which inspired Intel to unleash the lawyers at various points). Eventually they rolled their own in house designs (K5,K6,Opteron et al) - on the back of those second source agreements. Intel & AMD have continued to spend money in court wrangling over those agreements - but I think that's been settled for a good few years now.
" It's a basic need to ensure caches are kept filled."
Speculative execution keeps pipelines filled, filling caches is down to the memory controllers... ;)
"We never worried about "security" in the old days of processor design"
How old is old ? MMUs have been around a long time now.
"We never worried about "security" in the old days of processor design, we were far more worried about incorrect access causing a crash and that took priority - with the result that modern security issues were mostly nonexistent."
Seems to depend on where you worked - some vendors never embraced KISS. The protection features of the DEC Alpha were far easier to understand, use, test and verify than the equiv plumbing on the much older i386 for example.
"Although Intel seemed to have turned a corner since Core 2 Duo came along, they've made loads of previous muck-ups."
"I think that you should say were plenty of non-x86 processors out there."
There are still plenty out there, not all of them will be a viable alternative for your application...
"There really aren't any more, with just AMD (which is an x86 derivative, but may not be affected),"
In my view AMD share the same problem as Intel: the x86 ISA (64 bits, extensions, warts and all) is simply too complex to test properly. It's a scalability limit in the design space - and this isn't a new problem - it goes back decades. We are seeing bugs span multiple steppings AND generations of product line as a matter of routine. The x86 vendors are physically unable to sell us a fully functional chip even if we pay top dollar for it.
As I see it, as customers, we have no alternative but to go to other ISAs over the long run - simply to get a working chip without the "feature-itis" imposed by 30+ years worth of workarounds.
"I think we need to return to PDP11, where you had an alternative set of memory management registers for program and supervisor (kernel) mode."
There are already plenty of non x86 derivatives out there that don't have this bug, all that's required is folks to make the move. :)
Would be nice if vendors updated their benchmark results in the light of a 30% performance hit, so we can get an apples-apples comparison against processors that don't suffer from this particular fault.
"Now the end user can be told what he can or cannot do with the hardware he purchased?"
In effect NVidia are informing their customers that the manufacturing tolerances are pushed beyond the edge, their gear is unreliable and unfit for purpose. Take note and adjust your purchasing decisions accordingly.
I imagine that the marketing dept. have been insisting that the driver & CUDA devs implement some kind of "datacentre" detection system to help enforce the licensing constraints too, so I'd give some consideration to moving away from CUDA while you are at it. ;)
"So I hold my head above the parapet with confidence and state that *we* are the slowest "broadband" ... unless someone knows better ..."
A relative's rural exchange runs at 512kbit max - but delivers somewhat less than 64kbit/sec (contention). When using webmail the pages frequently fail to load (timing out due to the weight of spamvertising) - POP3 for the win... Interestingly BT did get the gov cash to upgrade the exchange (closing the door to alternative providers), but no cable has been replaced and the exchange remains as it was.
No mobile reception either - welcome to "It's grim up North" Cumbria. :)
INMOS used formal methods to verify their IEEE754 implementation, sure they made mistakes but they made *far* fewer than the established players in the field, and consequently made less steppings/field changes to compensate. It was a reasonably quick FPU for the day too. :)
Hardware guys are light years ahead with formal methods, but their dev iterations are considerably more time consuming and costly so they have a greater incentive to weed out thinkos and bugs early. There is already software out in the wild that has had formal methods applied (rigorously in some cases) with varying degrees of success, as time goes by economics may justify/drive more software to apply formal methods. There's no reason to apply them willy-nilly, select the areas that can benefit the most to maximise the bang for buck.
"Like it or not AutoCad and its brethren remain a resolutely Windows-Only affair so any ideas of migrating a whole city council to Linux for the time being are in the realm of science fiction. "
Little nitpick - AutoCAD users would (normally) form a very small proportion of the total number of City Council users, it would be silly to build your entire infrastructure around it IMO. Give the AutoCAD victims some boxes to RDP into and be done with it already...
"What makes you think Qualcomm will be better than Intel with regards to buggy chips ?"
I think they have a better chance because the target ISA is so much simpler - better defined, peer-reviewed etc. Qualcomm could still screw it up of course, but the problem domain *should* be a lot smaller than verifying an x86-64 design - so they have a better chance of making a good fist of it.
I don't think it's actually possible to produce a formal model of the Intel ISA, and I feel safe throwing that out there because I very much doubt anyone will ever produce a complete formal model of it and prove me wrong. :)
"The Intel f00f bug was a bad one, as was the FDIV bug."
The current errata are somewhat worse in my view, but don't take my word for it, you should take a look yourself and make your own call - Intel do publish them.
"If Intel chips were so buggy there would be a lot of people complaining"
I'm complaining - but I clearly don't qualify as a lot of people. Few people look at the errata, when a box is a bit flakey folks tend to (naively) assume the CPU is OK, and look elsewhere at stuff like firmware, memory, PSUs, or OS bugs. They might even find problems in those areas too - but for whatever reason few people choose to look at the CPU errata - my guess is that many simply don't understand the language & concepts in the errata sheets and so ignore them outright.
I am no Qualcomm fanboy - I would rather someone else punted this gear. ;)
"Yes, nice technology, shame about how you licence IP."
"Icon, because I'm not sure why I'd want a 120W ARM CPU?"
I am hoping it's because it will pack more densely into racks, and deliver good enough aggregate throughput in production to allow you to squeeze more bang for buck out of your data centres. IMO Intel have dropped the ball on verifying and testing their designs - the errata sheets have been horrific for a few generations of Xeons now. There would be some advantage to having less buggy chips - firmware/hardware bugs & work-arounds get tiresome and very costly at scale... :)
I tend not to read as much into geometry these days, although in this case it does look like it's made a difference in the sheer amount of cache on the chip - which is a good thing. It also shaves a cycle of latency here and there in comparison to the competition in terms of cache/memory latencies and branching. The instruction issues/cycle look well balanced and they've made an interesting choice in pipeline lengths as well - superficially it looks like they've put a lot of effort into minimizing latency. Can't wait to see some SPEC & SPEC_rate results - I'm not expecting top marks but I reckon the Centriq has a fair chance of achieving respectable SPEC / cubic metre (and watt) figures - which would be exciting. :)
There really is no solid basis for comparison at first glance. The pin bandwidth seems to be in a different league and the "ring bus" does look quite different to what Intel were punting in Xeons, it looks a lot closer to a contemporary datacentre chip than Thunder-X & X-Gene.
"Unless of course it runs say Secure Boot with Bitlocker."
Plenty of locally exploitable priv escalation vulns once the box is up though. ;)
Biting the hand that feeds IT © 1998–2018