* Posts by Cl9

4 posts • joined 2 Jan 2018

Here come the lawyers! Intel slapped with three Meltdown bug lawsuits


Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

But both AMD and ARM are also vulnerable to related (both to do with speculative execution) flaws, such as Spectre. I'm not sure what your point is.


Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

Food hygiene processes are relatively simple and there's a set list of guidelines and requirements that need to be met.

This is not the case with CPU design so you can't really compare the two. If a restaurant is breaching existing food safety standards, then of course they should be held liable. There are no such requirements or standards for CPU design.


Should Intel (and other chip makers) be held responsible for hardware flaws?

It's an interesting one, but I don't personally think that Intel should be held liable for this, as it's not an intentional bug. Modern CPUs are just so incredibly complex, containing billions of transistors, that I don't think it's feasible to create a 'flawless' CPU; there always have been (and always will be) bugs and flaws, discovered or not.

I'm also not sure if you could pin the potential performance loss on Intel either, as it's technically the operating system vendors who are implementing the slowdown.

Don't get me wrong, I've got an Intel CPU myself, and I can't say that I'm too happy about this either. But I can't really blame Intel for it either. And yes, Intel's PR release was absolute BS.


Kernel-memory-leaking Intel processor design flaw forces Linux, Windows redesign


Re: Hmmm...

Kernel memory is mapped into user-mode processes so that syscalls (requests for hardware/kernel services) can execute without having to switch to another virtual address space. Each process runs in its own virtual address space, and switching between them is quite expensive, as it involves flushing the CPU's Translation Lookaside Buffer (TLB, used to quickly find the physical location of virtual memory addresses), among other things.

This means that, with every single syscall, the CPU will need to switch virtual memory contexts, flushing the TLB and taking a relatively long amount of time. Accessing a memory page that isn't cached in the TLB takes roughly 200 CPU cycles or so, while accessing a cached entry usually takes less than a single cycle.

So different tasks will suffer to different extents. If a process does most of its work itself, without requiring much from the kernel, it won't suffer much of a performance hit. But if it makes lots of syscalls and does lots of uncached memory operations, it's going to take a much larger hit.

That's what I make of it from my understanding, which might not be 100% correct.

