"...the ball only had a near-field communication (NFC) chip and therefore was out of range of internet hackers."
INCLUDING those with a Yagi antenna and a signal amplifier?
This week we dealt with buggered bookies, trouble at Ticketmaster, and a compromised Linux build from Gentoo. Here's what else went down during the week.

Trustwave sued

Some breaking news as we were typing away: two insurance companies, Lexington Insurance Co and Beazley Insurance Co in the US, are suing infosec biz Trustwave …
Maybe NSA forgot to turn on the USA citizens filter in their slurp of the rest of the world's stuff? A simple mistake really, some PFY wandered past thinking "what's this switch do?". Too easy to miss hundreds of millions of extra people in a database of several BEEELLION.
It's likely that the current (mis)Administration of the USofA found out that the NSA was able to capture all these phone records. It finally dawned on them that calls between All The pResidents' men and the soviet, whoops russian, plutocrats were also ensnared.
I still want to hear the tapes from the Cheney Energy Task Force that were mysteriously wiped by that prior malevolent W administration. I'm sure they're somewhere.
Usually "delete" means remove the linking data that points to it, but don't actually overwrite the data itself, and especially don't overwrite it using the sorts of military-grade overwriting algorithms that the NSA might just be able to read through anyway. Oh, and don't bother deleting copies on the backups, or on old backup tapes that have been retired due to wear.
Hopefully they are not using that other definition of "delete". Hold on, there's a loud knocking at the door, and I hear lots of nearby helicopters.
"The Wireguard VPN service"
Er, it's not a service, it's software. Its own website says "WireGuard is not yet complete. You should not rely on this code. It has not undergone proper degrees of security auditing and the protocol is still subject to change." I wonder if US Senator Ron Wyden read that bit. I do hear good things about it though, and intend to try it out later.
Both suspects have confessed. Neither has been named. Their arrests took place last month but news of the case was only released this week.
Both suspects have confessed....................................
no surprise there then
Their arrests took place last month but news of the case was only released this week......................
probably a week after they were buried :o)
OK, I'm back, it was just the new neighbour wondering about all those helicopters. I live half a suburb away from six major hospitals, at least two of them have helipads on their roofs. I live in a house that is on stilts ("Queenslander" they are called, very common here in Queensland), on top of a hill, in a suburb with "High" and "Hill" in the name. I've watched helicopters flying past two blocks away, by looking down from my bedroom window.
Oh look, there goes another chopper. /me waves to the pilot. Cool, she waved back.
The author of WireGuard claims that the code is a lot simpler, thus easier to audit. It also works in a different way, which the author claims makes it easier to deal with by the people setting it up. As mentioned above, I've heard good things about it, but have not actually looked deeper yet, so I dunno how true these things are yet.
As you said, OpenVPN does what it claims to do - nothing wrong with that. But Wireguard does have some things going for it:
1) It doesn't rely on OpenSSL for encryption, so there is a whole lot less code to audit if you want to check for security problems
2) It is a kernel module implementation (at least on Linux), so the processing overhead is much smaller and it should be able to scale to wirespeed while handling multiple connections. It also means that it works like any other network interface, so the usual configuration files and network scripts will take care of running your VPN.
3) Authentication and setup are much simpler, since it uses a trust-on-first-use model, so there's no need to set up your own CA.
Have a look at it, it does work quite well.
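To give a flavour of that simpler setup: a point-to-point link is just one key pair per peer in a wg-quick style config. This is a made-up sketch; the keys, addresses and endpoint below are placeholders, not real values.

```ini
# /etc/wireguard/wg0.conf -- minimal point-to-point sketch
[Interface]
# this machine's private key, generated with `wg genkey`
PrivateKey = <this-host-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
# the other end's public key, derived with `wg pubkey`
PublicKey = <peer-public-key>
# which destination ranges get routed through this peer
AllowedIPs = 10.0.0.2/32
Endpoint = vpn.example.org:51820
```

Bring it up with `wg-quick up wg0` and it shows up as an ordinary network interface, as described in point 2 above.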
There is an interesting article in the July CACM (it may not yet have arrived in the uni print mag rag) by David Chisnall, titled "C Is Not a Low-Level Language". It lays the extreme complexity of the current microprocessor at the feet of an ancient dogma: that the CPU has to make C code run fast, whereas C was designed for a now obsolete abstract machine, the PDP-11, which does not map onto modern CPUs in any convenient way:
The root cause of the Spectre and Meltdown vulnerabilities was that processor architects were trying to build not just fast processors, but fast processors that expose the same abstract machine as a PDP-11. This is essential because it allows C programmers to continue in the belief that their language is close to the underlying hardware. C code provides a mostly serial abstract machine (until C11, an entirely serial machine if nonstandard vendor extensions were excluded). Creating a new thread is a library operation known to be expensive, so processors wishing to keep their execution units busy running C code rely on ILP (instruction-level parallelism). They inspect adjacent operations and issue independent ones in parallel. This adds a significant amount of complexity (and power consumption) to allow programmers to write mostly sequential code. In contrast, GPUs achieve very high performance without any of this logic, at the expense of requiring explicitly parallel programs. The quest for high ILP was the direct cause of Spectre and Meltdown. A modern Intel processor has up to 180 instructions in flight at a time (in stark contrast to a sequential C abstract machine, which expects each operation to complete before the next one begins).
... Unfortunately, simple translation providing fast code is not true for C. In spite of the heroic efforts that processor architects invest in trying to design chips that can run C code fast, the levels of performance expected by C programmers are achieved only as a result of incredibly complex compiler transforms. The Clang compiler, including the relevant parts of LLVM, is around two million lines of code. Even just counting the analysis and transform passes required to make C run quickly adds up to almost 200,000 lines (excluding comments and blank lines). For example, in C, processing a large amount of data means writing a loop that processes each element sequentially. To run this optimally on a modern CPU, the compiler must first determine that the loop iterations are independent. The C "restrict" keyword can help here. It guarantees that writes through one pointer do not interfere with reads via another (or if they do, that the programmer is happy for the program to give unexpected results). This information is far more limited than in a language such as Fortran, which is a big part of the reason that C has failed to displace Fortran in high-performance computing
... Optimizers at this point must fight the C memory layout guarantees. C guarantees that structures with the same prefix can be used interchangeably, and it exposes the offset of structure fields into the language. This means that a compiler is not free to reorder fields or insert padding to improve vectorization (for example, transforming a structure of arrays into an array of structures or vice versa). That is not necessarily a problem for a low-level language, where fine-grained control over data structure layout is a feature, but it does make it more difficult to make C fast.
Several other problems with the unfortunate approach of creating a CPU that shall run an inappropriately obsolete language fast are exposed.
Thanks for that explanation. As a hobbyist, I ask in all sincerity: what is a viable alternative to C? For instance, if a group was going to re-create the functionality of one of the BSDs, what would be an appropriate language to replace C? Is C++ as difficult in the same respects as C in an OS? I've read that as C++ is expanded it is becoming more difficult to use, so that would appear to be a disadvantage. I'm curious about this; if any of you have time to elucidate, that would be wonderful.
That guy sounds kinda full of it. There's a specific reason GPUs are structured the way they are, and yes, any other CPU-based algorithm operating on large sets of non-interrelated data could benefit from more massively parallel data processing, but that's an awfully specific condition. Just because every computer operates on data, it by no means follows that those operations can always - or even most of the time - be performed simultaneously, and speculative execution is the only thing that can help you there, kinda by definition. We can decide to give it up if we're okay with the performance hit it would cause, but we should stop pretending it's just a matter of "doing it differently" and we can have all that performance right back...
Biting the hand that feeds IT © 1998–2019