Here come the lawyers! Intel slapped with three Meltdown bug lawsuits

Just days after The Register revealed a serious security hole in its CPU designs, Intel is the target of three different class-action lawsuits in America. Complaints filed in US district courts in San Francisco, CA [PDF], Eugene, OR [PDF], and Indianapolis, IN [PDF] accuse the chip kingpin of, among other things, deceptive …

  1. Mark 85 Silver badge

    My... not that this was unexpected but the lawyers seem to be approaching lightspeed these days.

    I would hope that the chip engineers kept their copy of the spec and hardcopies of any emails with "directions".

    1. Nick Kew Silver badge
      Pirate

      My... not that this was unexpected but the lawyers seem to be approaching lightspeed these days.

      Well, of course.

      No matter how evil a bigco, the lawyers are worse. Even when it's Dilbert's crowd deliberately releasing a product so harmful it hospitalised its test subjects.

      1. Unicornpiss Silver badge

        Light speed

        "My... not that this was unexpected but the lawyers seem to be approaching lightspeed these days."

        And as we know, as you approach light speed, time slows down and mass becomes infinite.

    2. G2

      re: lightspeed lawyers

      Those are not lawyers; those are ambulance chasers.

      a real lawyer with IT knowledge would have known that there is practically NO SUCH thing as a CPU on the market these days that is not affected by Meltdown and/or Spectre, they all are, even ARM or Qualcomm. It's an industry-wide bug.

      Such a CPU has not been seen since speculative execution was introduced as an acceleration technique about 20 years ago. If they want a CPU without speculative/pipelined execution, they should go back to the 80286, or better yet the 8086, to be "safe".

      Either that, or they should wait for the industry to design and release new silicon that's safe; since silicon development, testing and release cycles take about two years, we should have the new CPUs by 2020, or 2019 if we're lucky.

      1. G2

        P.S.: in the above post, by CPUs I mean manufacturers that offer x86/x64-compatible CPUs, not special industrial / RISC CPUs... those are another kettle of fish.

      2. Tom Paine Silver badge

        a real lawyer with IT knowledge would have known that there is practically NO SUCH thing as a CPU on the market these days that is not affected by Meltdown and/or Spectre, they all are, even ARM or Qualcomm. It's an industry-wide bug.

        Oh, well THAT'S alright, then!

        A real lawyer... would be entirely happy to sue ARM, AMD and any other processor designer turning out substandard products as well as Intel.

      3. Roo
        Windows

        "a real lawyer with IT knowledge would have known that there is practically NO SUCH thing as a CPU on the market these days that is not affected by Meltdown and/or Spectre"

        A real commentard with CPU architecture expertise would know that there are CPUs on the market that are not affected by those bugs... :)

      4. MacroRodent Silver badge

        Pentium I

        No need to go all the way to the 286. The original Pentium and Pentium MMX did not speculate. They just executed two adjacent instructions at the same time, if the pair satisfied certain conditions. Fun for compiler writers.

      5. MarkSitkowski

        re: lightspeed lawyers

        At last! I knew that if I waited long enough, my Z80 and 8080 assembler skills would be in demand...

    3. Anonymous Coward
      Anonymous Coward

      OK, I'll bite

      How many of the claimed performance hits are estimates, and how many are based on real data? Most CPU cores these days are dynamically clocked, and not usually running at top (or turbo) speed. One would think that proper patches, as opposed to hastily hacked patches, could mitigate a lot of the performance hit by explicitly kicking the core clock into turbo when swapping between kernel & user modes.
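
      One way to get real data rather than estimates, incidentally, is to measure it yourself: the Meltdown fix taxes every kernel entry and exit, so timing the cheapest possible syscall in a tight loop before and after patching puts a number on the per-transition cost. A minimal Linux/glibc sketch (my own, not from any vendor):

          #define _GNU_SOURCE
          #include <stdio.h>
          #include <time.h>
          #include <unistd.h>
          #include <sys/syscall.h>

          /* Times a minimal syscall in a tight loop. getpid is invoked
             via syscall(2) so glibc cannot intercept it; run this before
             and after enabling the KPTI patches to see the
             per-kernel-entry cost on your own hardware. */
          int main(void)
          {
              enum { N = 1000000 };
              struct timespec t0, t1;

              clock_gettime(CLOCK_MONOTONIC, &t0);
              for (int i = 0; i < N; i++)
                  syscall(SYS_getpid);
              clock_gettime(CLOCK_MONOTONIC, &t1);

              double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                        + (t1.tv_nsec - t0.tv_nsec);
              printf("%.1f ns per syscall\n", ns / N);
              return 0;
          }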

      1. a_yank_lurker Silver badge

        Re: OK, I'll bite

        The only fixes currently available are in the OSes, so it is reasonable to expect some slowdown. The slowdown is not likely to be a problem for a home computer, phone, or office drone box, but on servers it could be noticeable, and potentially very severe for websites, database access, cloud-based applications, etc. Those are all areas that can affect business profitability, and that is a big issue for Chipzilla et al. Businesses will probably start suing once they have better metrics on actual costs and losses, and those numbers will probably be eye-popping.

        OS suppliers are telling users to expect a performance hit and explaining why. It is partly defensive (avoiding lawsuits) and partly giving a best estimate of what to expect, even if it is a bit vague for now.

        1. CommanderGalaxian
          FAIL

          Re: OK, I'll bite

          "The slowdown is not likely to be a problem for home computer...."

          More specifically, it has been stated that you won't experience slowdowns unless you are doing a lot of disk access or network access. So if you happen to be a freelance software developer working from home, expect your compile times to increase; if you happen to be an online games player, expect degraded performance, perhaps quite significantly so.

          1. Michael Wojcik Silver badge

            Re: OK, I'll bite

            it has been stated that you won't experience slowdowns unless you are doing a lot of disk access or network access

            That may have been stated (by whom?). That doesn't mean it's correct.

            The Meltdown remediations cause a performance hit for all kernel-to-user context switches. I/O is a major cause of such switches, but it certainly isn't the only one.

            Software-based Spectre remediations,[1] once they start appearing in software, will cause a performance hit every time they're encountered. As hardware assist for them is introduced in new CPUs (such as the new conditional branch instructions ARM describes in their Spectre whitepaper) the cost will drop, but for older CPUs the techniques that have been identified so far, such as memory fences and retpolines, are significantly expensive.

            [1] Aside from the initial stopgaps put into browsers, which were simply to defeat the two cache-timing mechanisms identified in the Spectre paper.
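
            To make the cost concrete, here is a minimal C sketch of the fence technique (my illustration, x86 only, not code from any vendor): an lfence placed after a bounds check, so the CPU cannot speculate past the check with an out-of-range index. It is one serializing instruction, but you pay it on every guarded access, which is why it adds up.

                #include <stdint.h>
                #include <stddef.h>
                #include <immintrin.h>   /* _mm_lfence(), x86 only */

                static uint8_t table[256];

                /* Returns table[idx] only when idx is in bounds. The lfence
                   keeps speculative execution from running ahead with an
                   out-of-bounds idx before the comparison has retired. */
                uint8_t read_checked(size_t idx, size_t len)
                {
                    if (idx < len) {
                        _mm_lfence();        /* speculation barrier */
                        return table[idx];
                    }
                    return 0;
                }

            A retpoline attacks the indirect-branch variant on the same principle, replacing the branch with a return sequence that traps mispredicted speculation in a harmless loop.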

        2. Claptrap314 Bronze badge

          Re: OK, I'll bite

          I spent 10 years in microprocessor validation--basically from the start of the speculative execution era. I've got some ideas about what might be done to mitigate this sort of thing in hardware. The obvious solution for Spectre would be to add some bits of the pointer to the head of the page table into the branch history table indices. Doing this, however, would require committing to an architectural feature which really, really is not something that you want to commit to.

          The next thing to consider would be to add cache state to the speculative state that gets rolled back on a branch mispredict. You create an orphan pool for the caches, and pull those back. This would be quite expensive, depending on how completely you want to block such an attack. It is FAR from clear to me how such an orphan pool should be treated to avoid a variant of such an attack that takes the orphan pool into account.

          If the papers are accurate, and modern CPUs really do have close to 200 instructions in flight, you would need at least 600 cache lines in your orphan buffers per level of cache--probably a lot more.
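
          For what it's worth, the first idea sketches easily as a toy model in C (every parameter here is hypothetical, nothing like real predictor geometry): fold a few bits of the page-table base into the branch-history-table index, so two address spaces stop aliasing into the same predictor slots.

              #include <stdint.h>

              /* Toy predictor geometry -- hypothetical, not any real CPU. */
              #define BHT_BITS 12u
              #define BHT_MASK ((1u << BHT_BITS) - 1u)

              /* Index the branch history table by branch PC, salted with
                 bits of the page-table base (CR3 on x86), so one address
                 space cannot deliberately train predictor entries that
                 another address space will then consume. */
              uint32_t bht_index(uint64_t pc, uint64_t pt_base)
              {
                  uint32_t slot = (uint32_t)(pc >> 2) & BHT_MASK;
                  uint32_t salt = (uint32_t)(pt_base >> 12) & BHT_MASK;
                  return (slot ^ salt) & BHT_MASK;
              }

          As noted above, though, once that salting is baked in, it has to survive every future implementation, which is exactly the architectural commitment problem.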

          1. Michael Wojcik Silver badge

            Re: OK, I'll bite

            The obvious solution for Spectre would be to add some bits of the pointer to the head of the page table into the branch history table indices

            That only helps with one of the two Spectre variants demonstrated in the original paper, and in that paper the authors point to other side channels which are probably also usable for Spectre attacks (e.g. ALU contention). Blinding the BTB would be a bandaid.[1]

            As anyone with a crypto background knows, it's really hard to reduce all your side channels to the point where they leak information too slowly to feasibly exploit. Paul Kocher showed us that more than 20 years ago, a mere 7 years after the DEC branch-prediction patent was granted.

            [1] Sticking plaster, for CPUs in the Commonwealth.

        3. CheesyTheClown Silver badge

          Re: OK, I'll bite

          Not only are the fixes coming through software; hardware fixes wouldn’t work anyway.

          So, here are the choices:

          1) Get security at the cost of performance by properly flushing the pipelines between task switches.

          2) Disable predictive branch execution, slowing stuff down MUCH more... as in making the cores as slow as the ARM cores in the Raspberry Pi (which is awesome, but SLOW).

          3) Implement something similar to an IPS in software to keep malicious code from running on the device. This is more than antivirus or anti-malware; it would need to be an integral component of web browsers, operating systems, etc. Compiled code is a struggle because finding patterns that exploit the pipeline would require something similar to recompiling the code to perform full analysis on it before it is run. Windows SmartScreen does something like this by blocking unknown or unverified code from running without explicit permission. JIT developers for web browsers can protect against these attacks by refusing to generate code which makes these types of attacks possible.

          The second option is a stupid idea and should be ignored. AMD’s solution, which is to encrypt memory between processes, is useless in a modern environment where threads are replacing processes in multitenancy. Hardware patches are not a reasonable option. Intel has actually not done anything wrong here.

          The first solution is necessary. But it will take time before OS developers do their jobs properly and maybe even implement ring 1 or ring 2 at last, to properly support multi-level memory and process protection as they should have 25 years ago. On the other hand, the system call interface is long overdue for modernization. Real-time operating systems (and microkernels generally) have always been slower than Windows or Linux... but they have all optimized the task switch for these purposes far better than other systems. It’s a hit in performance we should have taken in the late ’90s, before expectations became unrealistic.

          The third option is the best solution. All OS and browser vendors have gods of counting clock cycles on staff. I know a few of them, and even named my son after one, as I spent so much time with him and grew to like his name. These guys will alter their JITs to handle this properly. It will almost certainly improve their code as well.
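
          One published shape for that JIT change is index masking; here is a minimal C sketch of the pattern (the names are mine): rather than trusting a conditional branch to keep an array access in bounds, the emitted code clamps the index arithmetically, so even a mispredicted bounds check cannot produce a secret-dependent load.

              #include <stdint.h>
              #include <stddef.h>

              static uint8_t heap_object[4096];   /* power-of-two size */

              /* Branchless clamp of the kind JITs can emit around array
                 loads: even if the CPU mispredicts an earlier bounds
                 check, the AND forces any speculative access to stay
                 inside the object, so no secret-dependent cache line
                 gets touched. */
              uint8_t load_masked(size_t idx)
              {
                  return heap_object[idx & (sizeof(heap_object) - 1u)];
              }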

          I’m pretty sure Microsoft and Apple will also do an admirable job updating their prescreening systems. As for Linux... their lack of decent anti-malware will be an issue. And VMware is doomed as their kernel will not support proper fixes for these problems... they’ll simply have to flush the pipeline. Of course, if they ever implement paravirtualization like a company with a clue would do, they could probably mitigate the problems and also save their customers billions on RAM and CPU.

          1. bombastic bob Silver badge
            Devil

            Re: OK, I'll bite

            "Get security at the cost of performance by properly flushing the pipelines between task switches."

            I would think this should be done within the silicon whenever you switch 'rings'. If not, the OS should most definitely do this. Does the instruction pipeline (within the silicon) stop executing properly when you switch rings, like when servicing an ISR? If not, it may be part of the Meltdown problem as well: the CPU generates an interrupt which is serviced AFTER part of the pipeline executes. So reading memory triggers an ISR, but other instructions execute 'out of order' before the ISR is actually serviced...

            I guess these are the kinds of architecture questions that need to be asked by Intel (and others): what the safest way is to do a state change within the silicon, and how to preserve (or re-start) that state without impacting anything more than re-executing a few instructions...

            So I'm guessing that this would need to happen:

            a) pipeline has 'tentative' register values being stored/used by out-of-order instructions, branch predictions, etc.

            b) interrupt happens, including software interrupts (executing software interrupts should happen 'in order' in my opinion, but I don't know what the silicon actually does)

            c) ring switch from ISR flushes all of the 'tentative' register values, as if those instructions never executed

            If that's already happening, and the spectre vulnerabilities can STILL leverage reading memory across process and kernel boundaries, then I'm confused as to how it could be mitigated at ALL...

            The whole idea of instruction pipelining and branch prediction was to make it such that the software "shouldn't care" whether it exists or not. THAT also removes blame from the OS, really. But that also doesn't mean that the OS devs should sit by and let it happen [so a re-architecture is in order].

            But I wouldn't blame the OS makers at all. What we were told, early on, is that this would speed up the processors WITHOUT having to re-write software. THAT was "the promise" that was broken.

      2. CheesyTheClown Silver badge

        Re: OK, I'll bite

        I agree.

        The patches which have been released thus far are temporary solutions and in reality, the need for them is because the OS developers decided to begin with that it was worth the risk to gain extra performance by not flushing the pipeline. Of course, I haven’t read the specific design documents from Intel describing the task switch mechanism for the affected CPUs, but after reading the reports, it was insanely obvious in hindsight that this would be a problem.

        I also see some excellent opportunities to exploit AMD processors using similar techniques in real-world applications. AMD claims that their processors are not affected because within a process the memory is shielded, but this doesn’t consider multiple threads within a multitenant application running within the same process... which would definitely be affected. I can easily see the opportunity to hijack WordPress sites, for example, using this exploit on AMD systems.

        This is a problem in OS design in general. It is clear that mechanisms exist in the CPU to harden against this exploit, and it is clear that operating systems will have to be redesigned, possibly on a somewhat fundamental level, to operate properly on predictive out-of-order architectures. This is called evolution. Sometimes we have to take a step back to make a bigger step forward.

        I think Intel is handling this quite well. I believe Linux will see some much needed architectural changes that will make it a little more similar to a microkernel (long overdue) and so will other OSes.

        I’ll be digging this week in hopes of exploiting the VMXNET3 driver on Linux to gain root access to the Linux kernel. VMware has done such an impressively bad job designing that driver that I managed to identify over a dozen possible attack vectors within a few hours of research. I believe very strongly that over 90% of that specific driver should be moved to user mode, which would have a devastating performance impact on all Linux systems running on VMware. The goal is hopefully to demonstrate at a security conference how to hijack a Linux-based firewall running in transparent mode so that logging will be impossible. I don’t expect it to be a challenge.

        1. bombastic bob Silver badge
          Devil

          Re: OK, I'll bite

          "OS developers decided to begin with that it was worth the risk to gain extra performance by not flushing the pipeline."

          read: they used CPU features as-documented to avoid unnecessary bottlenecks

          The problem is NOT the OS. It's the CPU not functioning as documented, i.e. NOT accessing memory in which the page table says "do not access it", even if it does so only briefly. The existence of a side-channel method of detecting this successful access exposes the somewhat lazy way in which Intel's silicon checks the access flags while out-of-order execution is happening. Security checks should never have been done after the fact, and yet they were.

          (my point focuses mostly on meltdown; branch prediction is another animal entirely)
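
          The published Meltdown paper describes exactly that after-the-fact check. In outline it looks like this (a C sketch of the transient core only; the fault recovery and the cache-timing readout that turn it into a working attack are deliberately omitted):

              #include <stddef.h>

              static volatile unsigned char probe[256 * 4096]; /* 1 page per byte value */

              /* Architecturally, the privileged load faults and returns
                 nothing. Microarchitecturally, on affected parts, the load
                 and the dependent probe access can execute before the
                 permission check takes effect, leaving a secret-indexed
                 line in the cache for a later flush+reload timing pass. */
              void transient_read(const volatile unsigned char *kernel_addr)
              {
                  unsigned char secret = *kernel_addr;   /* faults... eventually */
                  (void)probe[(size_t)secret * 4096];    /* encodes byte into cache */
              }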

          In short, Intel's benchmarks could have been *slightly* faster (compared to AMD, which apparently doesn't have THAT bug) because they delayed the effect of security checking just a *little* bit too long...

          Fixing that in microcode may not even be possible without the CPU itself slowing down. If AMD's solution was to have more silicon involved with caching page tables, so that the out-of-order pipeline's memory access would throw an exception at the proper time, then Intel may have to do some major re-design.

          So you could argue that NOT doing these security checks "at the proper time" within the out-of-order execution pipeline may have given Intel a competitive advantage by making their CPUs just 'faster' enough to allow the benchmarks to show them as "faster than AMD".

          And it's NOT the fault of OS makers, not even a little. They were proceeding on the basis that the documentation represented what the silicon was really doing. And I bet that only a FEW people at Intel knew that the security checks on memory access were being 'delayed' a bit (to speed things up?).

          It's sort of like how only a FEW people at VW knew that their 'clean diesel' tech relied on fudging the smog checks: detecting that the car was hooked up to a machine and running a smog check, and altering the engine performance accordingly so it would pass. THAT gave VW a competitive advantage over other car makers. Same basic idea, as I see it.

          1. Michael Wojcik Silver badge

            Re: OK, I'll bite

            The problem is NOT the OS. It's the CPU not functioning as documented, i.e. NOT accessing memory in which the page table says "do not access it", even if it does so only briefly.

            While Meltdown does involve speculative access across privilege levels, Spectre does not. And if you believe either of the attacks violates something in the CPU specification, I'd like to see a citation. CPU specifications tend to be quite vague and leave a great deal of room for the implementation.

            In particular, memory-protection features are described in terms of their direct effects on registers and memory, not on microarchitectural features such as the caches. There's no magical guarantee that memory protection prevents ever loading anything from an unpermitted page into a CPU storage area that's not directly accessible by the executing program.

            What you wish CPUs would do, and what they're documented as doing, are two different things.

        2. smot

          Re: OK, I'll bite

          I'm quite affected by this post.

      3. Sonny Jim

        Re: OK, I'll bite

        > How many of the claimed performance hits are estimates, and how many are based on real data?

        Epic Games posted a graph showing the CPU usage increase on a 'real' server:

        https://www.epicgames.com/fortnite/forums/news/announcements/132642-epic-services-stability-update

    4. handleoclast Silver badge
      Coat

      My... not that this was unexpected but the lawyers seem to be approaching lightspeed these days.

      It's not so much that the lawyers are clocked any faster, but that they employ pipelining, branch prediction and speculative execution.

      I wonder when the equivalent of meltdown/spectre hacks will appear for lawyers.

    5. CommanderGalaxian

      Ambulance-chasing lawyers have their place in the scheme of things, especially when you've purchased premium kit and discover some way down the line that the only way it can run safely is by turning it into crippleware, performance-wise.

  2. Lorribot

    We have only ourselves to blame

    If we had all done 64 bit properly with Itanium like Intel told us to we would not be in this situation so really it is our own fault for following the cheap and simple AMD64 route. We made Intel fuck up.

    1. tim292stro

      Re: We have only ourselves to blame

      "...If we had all done 64 bit properly with Itanium like Intel told us to we would not be in this situation so really it is our own fault for following the cheap and simple AMD64 route. We made Intel f**k up..."

      The market had built itself around x86, and Itanium would have broken compatibility rather suddenly, leaving a CPU without any software. AMD's x64 extension to x86 was easier for software people to get on board with while they evaluated their life choices on code management. When PowerPC shortly thereafter stopped getting produced and ARM came along, a lot of software companies had a bit of a come-to-Jesus moment about how fragile the CPU sector could be, and realized that a bit of code-base agility was the way forwards.

      Of course this whole time, Intel learned the exact opposite lesson. Rather than continuing to pave the way forwards with new, clean and well thought-out ISAs, they reacted like a dog that got tasered and really dug into the trench of "Hey look! x86 is still compatible with all of your code!!! Don't think about any other ISA!!! EVER!!!" See Knights Corner/Knights Ferry, etc... They even dabbled in ARM for a bit with the XScale stuff, but never really wanted to impact their server/desktop market with that. Now Marvell has taken that business unit and run with it.

      1. Nano nano

        Re: We have only ourselves to blame

        Itanium had an x86-32 compatibility mode which allowed x86-32 code to run, albeit more slowly than might be expected at that clock speed. I had an Itanium desktop system in 2000 on which I ran IA64 and IA32 code and benchmarks ...

        1. DougS Silver badge

          Re: We have only ourselves to blame

          Only the first-generation Itanium included x86-32 hardware. Later versions used JIT translation to run x86 code in software.

    2. Updraft102 Silver badge

      Re: We have only ourselves to blame

      "If we had all done 64 bit properly with Itanium like Intel told us to we would not be in this situation so really it is our own fault for following the cheap and simple AMD64 route. We made Intel fuck up."

      Right. Intel good, AMD bad... even though somehow AMD managed not to be vulnerable to Meltdown using the same AMD64 instruction set, it's all AMD's fault, not Intel's, that Intel managed to mess it up so badly.

      1. Tom Paine Silver badge

        Re: We have only ourselves to blame

        It's not AMD's /fault/ they came up with a good tactic for attacking Intel after the launch of Itanic. It does rather imply we're stuck with x64 forever, now, though, and that it's no-one's fault. How do you make that puzzled / thoughtful face emoticon?

    3. Flocke Kroes Silver badge

      Re: We have only ourselves to blame

      Itanium's first success came before it was even a product: R&D on existing 64-bit designs stopped on the assumption that they would not be able to compete with Intel. Anyone know if any of the old 64-bit designs could later have become susceptible to Meltdown? Itanium took ages to get to market, either because it was a difficult design or because, with the competition gone, there was no reason to rush.

      Itanium was not built for speed. The primary design goal was to use so many transistors that no-one would be able to manufacture a compatible product. This goal was achieved by such a large margin that the first version used too much power to become a product. Even when Itanium became a real product its performance per watt stank. Software was either non-existent or priced higher than the SLS, so sales were crap, leading to poor performance/$. Itanium was never a competitor to x86, and was a zombie incapable of eating brains before AMD64 was available.

      The 68020 had separate tables for user and supervisor address translations. It was Meltdown-proof, and the same went for the 88110. I do not know if Itanium had a sane MMU design, but it was never an option for anyone without an unlimited budget, and it did kill a bunch of architectures, some of which were Meltdown-proof.

    4. Brian Miller Silver badge

      Re: We have only ourselves to blame

      No, we didn't cause Intel to "fuck up." Intel did not take any lessons learned from its experiences with other chip architectures and apply them to the x86_64. Intel has a lot of experience with RISC and non-x86 architectures. Choosing to ignore design deficiencies was its own decision.

      1. LOL123
        Facepalm

        Re: We have only ourselves to blame

        >>Intel did not take any lessons learned from its experiences with other chip architectures

        Which architecture, for example? This has nothing to do with processor ISAs. An architecture and the implementation of an architecture are two different things. AMD's implementation of the x86_64 architecture is better when evaluated on this specific criterion; it might be worse on others. The same goes for ARMv8. Better and worse.

        >>Intel did not take any lessons learned from its experiences with other chip architectures

        On which of their other implementations did they make the same mistake, learn from it and correct it there, but choose to ignore "the lesson" for their out-of-order CPUs?

        This is becoming an Intel witch hunt - lots of comments but little fact.

        "Intel knew all along I tell you.. all along! The CEO must be burned alive! Burn Intel.. off with it's head.."

        "I'm afraid to enter my password.. Am I alive? Is my money safe? Can I fly? My phone! Will the internet stop? Bitcoin wallets are insecure causing market crash.. It's all intel's fault I tell you!"

        "Everyone should have used <favourite vendor of the week != Intel>. That's what I've been telling to you all.. Told ya..."

        I mean, at what point does one state that The Register is spewing "fake news"? If it becomes obvious in a few years that the commentary didn't hold up, or that other CPUs have even more fatal flaws, does The Register become a fake news outlet for what it printed in good faith today?

        Prove Intel's bad faith...

        1. Patrician

          Re: We have only ourselves to blame

          "....Prove Intel's bad faith...". ...

          Intel knew about both issues back in June last year but carried on selling CPU's with that flaws since then; there is Intel's "bad faith".....

    5. Voland's right hand Silver badge

      Re: We have only ourselves to blame

      If we had all done 64 bit properly with Itanium

      Next time, use "joke alert" flags. Most of the el-reg readership speculatively executes a "turn off humor gland" branch the moment they see the Itanic name.

      1. DougS Silver badge

        Sorry, Itanium sucks

        It avoids these issues only because it is in-order, not because it is better designed. HP's engineers thought a smart compiler could make up the difference for an in-order processor, and sold Intel on the idea, so the two collaborated on what would have been PA-RISC 3.0 and it became the Itanium. Those engineers were wrong, which is why Itanium has never lived up to its performance promises.

        Not defending the turd that is x86-64, but its biggest problem is a refusal to drop backwards compatibility with old shit that goes back 40 years. Drop support for anything but 64 bit mode in hardware, handle 32 bit apps via JIT, and it would be a lot better. If you want a clean 64 bit ISA you should be looking at ARM64. It is not perfect but better by far than either x86-64 or Itanium!

        1. Joe Montana

          Re: Sorry, Itanium sucks

          ARM is not exactly a clean 64-bit architecture either; like amd64, it's an extension of a 32-bit architecture that was never intended to be extended. The only difference is that the 32-bit architecture was cleaner in the first place.

          There are much cleaner 64-bit implementations in the form of Alpha, POWER, MIPS and SPARC. Alpha was even a pure 64-bit design with no 32-bit mode at all.

          1. DougS Silver badge

            Re: Sorry, Itanium sucks

            Actually ARM64 is a totally independent ISA, unlike x86-64, so you can drop 32 bit mode entirely if you want. Which Apple did in the A11.

            Like I said, it's not perfect, but it is so much better than x86-64 or IA64 that it's like comparing brownies made with chocolate to brownies made with dirt substituted for the chocolate. They're both edible, but one of them you will only eat if you're really hungry.

    6. Anonymous Coward
      Coat

      Re: We have only ourselves to blame

      Isn't that an unnecessarily long-winded way to spell "race to the bottom"?

    7. Steve Channell
      Flame

      Re: We have only intel to blame

      When AMD introduced the AMD64 architecture they remapped the segment registers as general-purpose registers, because nobody was using them anymore... until Google came up with NaCl (which uses segment registers to provide a hardware sandbox). Intel had a chance (with x86-64) to keep one segment register for hardware security support, but they didn't.

      The fact that the market-leading chip designer chose not to support a kernel segment (in future we'll call that hardware support for operating systems) is down to politics... NSA politics.

    8. gnasher729 Silver badge

      Re: We have only ourselves to blame

      You must be joking.

      The Itanic processor was a monstrosity, the most complex beast ever created. If it had been successful, and if every laptop and desktop today were Itanic-based, we would have hit more than one iceberg already.

      1. tiggity Silver badge

        Re: We have only ourselves to blame

        With its low performance per watt, the heat generated if we had all been using Itanics would mean no icebergs were left.

    9. Destroy All Monsters Silver badge
      Trollface

      Re: We have only ourselves to blame

      If we had all done 64 bit properly with Itanium like Intel told us to we would not be in this situation so really it is our own fault for following the cheap and simple AMD64 route. We made Intel fuck up.

      Why are you applying Bái Zuǒ (white left) logic to this particular case of less-than-perfect engineering?

      Anyway, can we bring freedom to downtrodden Intel? Maybe an "IA-64 affirmative action" program and subsidies to increase the dominance of IA-64. I'm totally in favour of a "Free Itanium March" through Washington D.C.!

    10. Brewster's Angle Grinder Silver badge

      Expert trolling!

      @Lorribot Your comment made me laugh, anyway. And the more I read it, the funnier it gets.

    11. This post has been deleted by its author

    12. bombastic bob Silver badge
      Thumb Down

      Re: We have only ourselves to blame

      blame the victims. nice. job.

  3. KH

    It could turn out alright for Intel. Maybe they'll sell a whole bunch of new chips because of it. Equifax certainly picked up a lot of new credit-watching-service clients when it leaked millions of users' data. Financially speaking, it was probably the best thing they ever did for their bottom line. Some punishment. People are dumb, and keep going back to the idiots that burned them, despite their lousy performance records. (Think TSA, for example.)

    Data breeches and design flaws -- the sex tapes of the business world! ;)

    1. Anonymous Coward
      Anonymous Coward

      Data breeches

      I need to get myself a pair of those for swanning about the data centre.

      1. Anonymous Coward
        Anonymous Coward

        Re: Data breeches

        They would go nice with a pair of flip-floppies.

        1. Anonymous Coward
          Anonymous Coward

          Re: Data breeches

          They'd make a nice replacement for my unsigned shorts.
