Re: That's not an erection...THIS is an erection...
He could have someone's eye out with that...
According to the VMware security advisory, fixing the bug completely requires guest OS patches as well as patches to the CPU microcode and hypervisor. I can't see anything in the blog post about whether these have been applied or are relevant. Anyone have any further details?
The growth in car ownership led to all sorts of grandiose post-war plans to "modernise" cities for the age of personal transportation. These included the London Ringways, the similar Manchester plan and the Newcastle Central Motorway(s). None of these plans was ever fully completed owing to the ever-growing protests about the destruction of both buildings and environment that resulted. The roads that were built are now little better than car parks at peak travel times.
In the unlikely event that autonomous vehicles ever become realistic and in the absence of any other change, most of them are going to be sitting in the same traffic jams. You can't fix that with more roads, only by changing lifestyles so there are fewer vehicles.
The point is that the usual suspects don't actually have the huge numbers of staff sitting around on the off chance that they'll win the contract for which they've bid - they build teams and consortia when they know the money is secure and not before.
And as we've seen with Carillion, their role as gatekeeper to major projects means that a ruthless primary contractor can pass all the "fronting" risk down to their subcontractors (accept our crap terms for late payment or get no work) and use the up-front payments from the government to prop up the dividends and bonuses.
What you're actually buying from the mega company may be very little more than its database of desperate suppliers and some contracted project managers.
Are these the names of the toys?
I suppose if you compare QUIC with (TLS + TCP) you're looking at a similar level of complexity overall.
Setting aside the cryptographic stuff, QUIC does fix a number of inherent security problems with TCP and finally deals with its window size limitations, though it curiously does not incorporate any message boundary delineation. I have a suspicion that the complexity of the stream multiplexing and the individual acknowledgment of small chunks of data in each stream is a bear trap - that in practice a lost packet will stall more streams and result in more data being held pending retransmission than if the streams were sent individually.
That might not matter if the main application is retrieving web pages - the improvement in connection overhead is likely to be more than adequate compensation, but it would be interesting to see data from other types of application.
The only performance figures I've seen compare QUIC and HTTP (which I presume means they're not using the cryptographic features) and in those circumstances QUIC doesn't seem to be a clear winner, nor any more robust in the face of packet loss than HTTP over TCP.
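The multiplexing worry can be illustrated with a toy model. The round-robin packing scheme below is invented for illustration and has nothing to do with real QUIC framing; the point is just that one lost packet touches every stream that had a chunk in it, where separate connections would stall exactly one.

```python
# Toy model (not real QUIC): chunks from several streams are packed
# round-robin into packets. Losing one packet stalls every stream that
# had a chunk in it; with one stream per connection, a loss stalls one.

def streams_stalled_by_loss(n_streams, chunks_per_packet, lost_packet):
    """Return the set of streams with data in the lost packet."""
    return {(lost_packet * chunks_per_packet + i) % n_streams
            for i in range(chunks_per_packet)}

# Multiplexed: one lost packet stalls four of the eight streams...
print(streams_stalled_by_loss(n_streams=8, chunks_per_packet=4, lost_packet=0))
# ...whereas with one stream per connection, only that stream stalls.
print(streams_stalled_by_loss(n_streams=1, chunks_per_packet=1, lost_packet=0))
```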
While the router shouldn't totally lock up under these conditions and ought eventually to recover, receiving a large number of short packets back to back will inevitably seriously impact the throughput for other devices on the network and could result in network devices being dropped temporarily, simply because the radio space is being hogged by the amount of traffic from one source.
Domestic access points are typically not equipped with particularly fast processors, so there is a dependency on the underlying network hardware being able to discard or ignore data that's arriving too quickly. It's possible in this case that, because the packets are multicast, they're being passed up the stack to the CPU but there's no limit on the number of buffers that can be allocated for that purpose; the CPU is getting behind and hence memory is being exhausted. A better thing to do might be to stop receiving altogether, but the AP would then effectively go deaf until it had caught up with the multicast traffic. However, unless you have a device that can process all potential traffic at the speed of the physical medium you're going to have problems of some sort - and that device wouldn't be competitive in a domestic market.
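A minimal sketch of the missing safeguard - capping the buffers queued for the slow (CPU) path and shedding the excess rather than allocating without limit. The MAX_BACKLOG limit is an invented tuning knob, not anything from a real driver:

```python
from collections import deque

MAX_BACKLOG = 4   # invented cap on frames queued for the CPU path

class BoundedReceiver:
    """Queue frames for slow-path processing, dropping once the cap is hit."""
    def __init__(self, limit=MAX_BACKLOG):
        self.queue = deque()
        self.limit = limit
        self.dropped = 0

    def receive(self, frame):
        if len(self.queue) >= self.limit:
            self.dropped += 1      # shed load rather than exhaust memory
        else:
            self.queue.append(frame)

rx = BoundedReceiver()
for i in range(10):                # burst of back-to-back multicast frames
    rx.receive(i)
print(len(rx.queue), rx.dropped)   # 4 6
```

Dropping multicast frames is harmless in a way that running out of memory is not - multicast offers no delivery guarantee in the first place.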
Even if it were, shared-media systems (like wired and wireless ethernet) depend to some extent on the connected devices behaving: if you have a device that jabbers constantly, there's not much you can do.
In short, it's OK to expect better, but you're not going to get perfect.
You don't even need to do the highlight trick on the PDF.
If you google "comug correspondence ofcom", the PDF file seen by The Register appears at or near the top of the search listings. If you click on the down arrow adjacent to the link and select "cached", Google will helpfully give you the entire text of the document, e-mail addresses included.
I hope the PR people have spent as much time hounding Ofcom...
If it isn't, it would be a first for legislation relating to data privacy.
There are plenty of applications that need high-precision timers: media synchronisation, in-process threading, etc. Taking them away from applications isn't really an option - though if you did, you'd also have to "fuzz" the lower-resolution timers because otherwise an application can simply execute, say, increment instructions to pad out the longer period and use the resulting number to work out how long the rest of the operation took.
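The counting trick can be sketched like this - a pure illustration in Python, where the GIL and interpreter overhead make the absolute numbers meaningless; a real attack would use a tight native counter thread:

```python
import threading

# With only a coarse timer available, an attacker can run a thread that
# increments a shared counter as fast as it can; reading the counter
# before and after an operation yields a duration in "increments",
# defeating any coarsening of the official timers.

class CounterClock:
    def __init__(self):
        self.count = 0
        self._run = True
        self._t = threading.Thread(target=self._spin, daemon=True)
        self._t.start()

    def _spin(self):
        while self._run:
            self.count += 1    # the "clock" ticks as fast as the CPU allows

    def stop(self):
        self._run = False
        self._t.join()

clock = CounterClock()
before = clock.count
sum(range(100000))             # the operation being timed
elapsed = clock.count - before # duration measured without any OS timer
clock.stop()
```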
The point of the processor is to run non-privileged, end-user applications. The operating system is just there to get multiple processes to play nicely together - it's not a repository for application code that processor bugs make unsafe to run.
Does this account for any potential partnership with the US or China
The point is that there's nothing stopping us having partnerships with the US or China right now - the EU doesn't have exclusivity over our research programmes. Our research partnerships with countries like Canada fall largely within their participation in EU programmes.
The EU has always made research funding a priority, partly because it sees the technical dominance of the US (in particular) as a threat not only to European industry, but also to European social policy. I would be very surprised if the UK had the same interest in continuing to fund research - it's been at best a grudging concession from the Treasury in the past - or to provide the freedom of movement for international scientists that has underpinned our research collaborations within Europe.
The WannaCry debacle suggests the opposite to me.
If the various outpatient clinics and operating theatres had printouts of their schedules for the next few days, most of them would - in conjunction with their paper medical records - have been able to continue to function, at least until the inability to make fresh appointments deprived them of work - probably about 3 months at the current rate of referrals...
Most of the work of the NHS is extremely mundane. There are cases of complex diseases or difficult-to-interpret diagnostics where AI may be of some benefit, but they're the exception rather than the rule.
The reason that isn't equivalent to wealth is that you're not "entitled" to it in any way under your control. There is no fund of equivalent value in which you have a share, all you have is a promise that a future government will require future taxpayers to pay you that sum out of their earnings.
If there's no-one to tax (because the winners are offshore and the losers are impoverished), it will quickly become apparent that "wealth" is of very little value.
I wouldn't be surprised if someone hasn't had a wizard wheeze over at the MoD and is even now explaining how much more credible our battle fleet would be if only we could persuade our potential naval enemies to get into a bathtub with the First Sea Lord...
Surly we should respect peoples choices.
I can't choose to work for Google, they would have to choose to employ me.
And don't call me surly.
get on the motorway and relax for a couple of hours
That is in principle achievable using existing technology because you have a segregated, pedestrian-free environment, limited exits and plenty of space to install roadside equipment, the downside being that you'd only be able to permit suitably equipped vehicles on the road.
Perhaps you could wall off the outside lane of the motorway, stick a cable down the middle and reserve it for cable-following vehicles that were aware of the position and speed of the vehicle ahead of them. That would solve the problem of road signs being obscured and other drivers breaking into the middle of convoys. It might also introduce Audi drivers to lanes of the motorway they have not hitherto explored.
The thing about truly autonomous vehicles is that a lot of the technology (the ability to recognise road markings, traffic signs, stop lights, etc) is only of any value while there are still human drivers around. Once you get rid of those, you'd likely go back to more traditional and reliable forms of vehicle control.
She would be unlikely to get legal aid and I imagine the lawyers' fees would mount up pretty quickly if you were up against a large corporation like Intel.
Having said that, you shouldn't represent yourself in court without some form of preparation - in particular on the kind of behaviour that might trigger the awarding of costs against you: this isn't likely to happen in an industrial tribunal, but rubbing the judge up the wrong way may lessen the odds...
they want to permanently compromise the privacy of around 200,000,000 of their fellow citizens
They've already done that through mass collection of data in transit - and most of their fellow citizens seem fine with that. They want to build on that precedent while they (think they) can.
Do you still need to install the HUNDREDS of patches that have accumulated over the ages?
Time to play your "nobody knows" card...
a wholescale redesign of the instruction set
There's a lot of cruft in the instruction set, but this bug has got nothing to do with it. Variants of BOTH Meltdown and Spectre are found in computer architectures that are otherwise unrelated except for their common use of speculative execution.
The common origin of these bugs is that CPU instructions execute a lot faster than memory accesses and ever more complex ways are being found to reduce the inevitable stalls in the instruction pipeline to a minimum. You'd need a very different kind of CPU - and very likely a very different kind of software and different kind of application domain - to make that go away.
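The resulting class of attack can be illustrated with a pure simulation - no real caches involved, and the HIT/MISS costs are invented - showing how a secret-dependent memory access leaves a timing footprint an attacker can read back:

```python
# Simulation of a flush+reload-style side channel: the victim's access
# pattern depends on a secret, leaving one "line" cached; the attacker
# recovers the secret purely by timing probes of every line.

class SimMemory:
    HIT, MISS = 1, 100           # invented access costs, arbitrary units

    def __init__(self, lines=16):
        self.cached = set()
        self.lines = lines

    def flush(self):
        self.cached.clear()

    def access(self, line):
        cost = self.HIT if line in self.cached else self.MISS
        self.cached.add(line)    # any access leaves the line cached
        return cost

def victim(mem, secret):
    mem.access(secret)           # access pattern depends on the secret

def attacker(mem):
    # Probe every line; the one that comes back fast was touched.
    return min(range(mem.lines), key=mem.access)

mem = SimMemory()
mem.flush()
victim(mem, secret=11)
print(attacker(mem))             # 11 - the secret, recovered from timing
```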
The Spectre paper goes into great detail, but there's a summary here.
In the same way, the eBPF JIT compiler can be used to inject known code into the kernel, if eBPF is enabled.
What does SJWing even mean?
SJW is like a generic version of the n-word that can be applied to any uppity minority.
Good luck with that. Microsoft's only real platform is the desktop now and all those "innovative" voice-activated and data-mining gadgets aren't going to be running on the desktop or even on a hybrid tablet. Their only market is the legacy market - but it's a big market and it's going to be around for a long time if they want to serve it.
Gradually turning Windows into a desktop version of Android sounds more like desperation than innovation.
Frankly, the description of Damian Green as a "right-hand man" was enough for me to set aside any thought of looking at the pictures...
Given that there have been previous court decisions confirming that employees both have a general right not to be monitored and recorded and a right to make and receive personal calls necessary for their family life, this could presumably create some confusion. Any restrictions would have to be necessary and proportionate and if they exceed those required by the EU directive, presumably that would bring their necessity and proportionality into question?
Having said that, we all managed to function perfectly well in the days before mobile phones...
The solution to rogue technology is not likely to be the immediate addition of more technology. He really needs to take charge of the technology he has first. If he can.
all this hush-hush work should come 100% open
Where do you draw the line? I don't see any reason in principle to trust the "real" CPU or its microcode any more than the "management" CPU and its software. I can see a valid argument in favour of 100% open hardware (though not one that would presently make much commercial sense), but assuming that one proprietary CPU is somehow more trustworthy than another does not seem logical, especially if they're both on the same die or closely coupled.
large companies are going to be doing their remote management using IPMI to the BMC
The Intel ME and AMD PSP are (among other things) alternatives to the BMC. Do you know what that proprietary BMC is doing? Given that using the BMC you can at least in principle rewrite the operating system before it boots, I'm not sure how much of a security difference there is in principle between not knowing what the ME/PSP is doing while an unmodified OS is running and not knowing what a modified OS might be doing.
People see less risk in that which is familiar and mistrust the unfamiliar, but that is a risk in itself: we become blind to the risks that are, with hindsight, staring us in the face and of which several examples have received a great deal of exposure this week. There may be reason to be paranoid, but at least be uniformly paranoid!
It may be needless complexity for the computer on your desktop, but it's a different matter when your computing is either under outsourced management or is in a cloud data centre.
You not only need the remote management capabilities if you happen to have a few hundred thousand machines to administer, but your customers are also likely to want some means of ensuring they don't have to trust you with their data. In the latter case, it's likely that more complexity is needed than is presently offered - as this week's news has demonstrated.
What might partly be at play here, though, is that chip design is a very specialist skill - there aren't that many people doing it and they have the choice of a small number of employers. And everyone is constrained by physics in the same way. And for the most part, it's an evolutionary process (tick-tock) not a revolutionary one.
Looked at in that light, it would actually be surprising if processor designs were radically different.
I'm not an expert, so please correct me as required, but I think this one is going to be with us for some time because:
1/ There is unlikely to be a better solution to the Meltdown problem until the silicon is redesigned.
2/ The mitigations for Spectre are still not all in and are unlikely to be comprehensive until the silicon is redesigned.
3/ Redesigning the silicon may in itself result in a loss of performance.
In the longer term, things like memory encryption and secure zones for storing critical secrets may offer a way out, but you can't retrofit them to hardware that's already out there.
The issue here is that if you take a traditional view of processor "correctness", there is no real bug here: the software runs as it should and returns the right results.
We are very much in a new world where we have to assume that having malicious software running on any system is a likely event and hence any observable side effects of "correct" operation that leak information are likely to be observed. I'd be surprised if there weren't a whole range of other attacks waiting to be discovered.
When you have as many niche (Windows only) applications as a local council
You may have some niche, platform-specific, applications - and not just for Windows - but very few people will actually be using them as local users. There will be widely-used applications - fleet management, for example - but you would expect those to be running on servers, not on desktops, and you would be asking serious questions of their developers if the client interface was not a web browser by now.
And councils already have a massive issue dealing with legacy document formats: they still have a mountain of stuff on paper. If they can integrate that into their workflow, the issues about electronic document incompatibilities are really just fluff.
Linux uses a different firewall
The Linux kernel has supported eBPF since version 3.18 and the exploit was demonstrated using a Debian distribution, though by default eBPF would not be a configured kernel option so it might not be so widely used.
It isn't necessarily an easy patch (for any architecture) and it applies to any JIT code (not just BPF), so I assume there may be some impact on nft for Linux. ARM recommend changing the code emitted by the JIT compiler using new conditional speculation barriers. I'm not sure what options are being proposed for other platforms, but they could well have performance implications of their own aside from those already being discussed.
Bad form, I know, to reply to my own post, but there was one other thing that occurred to me.
Computer architecture has historically assumed that you controlled your computer and the workload that ran on it. The protections in place were there largely to mitigate against mistakes - bugs in your software taking down other software or the computer itself. For the most part, your computer ran software whose behaviour was predictable. The improvements in memory capacity and CPU speed largely depend on that predictability.
The reason that Spectre, Meltdown and, previously, Rowhammer are issues is mostly because the assumption of ownership is no longer valid. Either you have consciously chosen to run your software on someone else's computer (cloud) or it's possible that ownership of what you believe is your own computer has been ceded to criminals (possibly with the backing of state resources).
If you control your own computer, it doesn't really matter if you can read the kernel memory from user space. If you don't, pretty much every statistically-based optimisation (whether it's DRAM stability or branch prediction) is up for exploitation by software that's designed to skew the statistics and either gain knowledge that it shouldn't have or deny service (for example by forcing cache flushes).
Computer architecture hasn't changed a great deal in principle from the days of co-operative time sharing - perhaps it's time it was reinvented for an explicitly hostile environment.
The extensive Google Blog post suggests that AMD processors are only "less" vulnerable - and I don't imagine the investigations have yet stopped. Specifically, they found they could use side-channel attacks to get memory contents from the same process on an AMD chip (hardly a big deal, but still a warning flag) and they could read the entire contents of kernel memory on an AMD chip IFF the Berkeley Packet Filter (BPF) Just-In-Time compiler is enabled in the kernel. Is that an AMD bug or a Linux bug? Can you actually assign "blame"?
Since the entire vulnerability is related to the relative execution time of cached and non-cached operations, it's difficult to believe that there are not other potential exploits to be discovered. The BPF issue is interesting because it means that the ability to inject any arbitrary code into the kernel, even code that is statically proven to be "safe" in traditional software terms, is in fact a potential vector for side-channel attacks for which there is no obvious mitigation.
That's a very big deal for a lot of Linux-based firewalls and probably many other applications.
Drives that were intended for paging sometimes had multiple sets of fixed heads and were frequently referred to as "drums" by analogy with their historical counterparts despite being platters rather than cylinders.
Have you considered a rockery? It could occupy your lonely days and cost less than $700, even with some exotic planting. Plenty of places to hide a key and if you can't find it again, you have everything necessary to effect entry via a broken window.
I think we need to return to PDP11
While the elegance of the PDP-11 design is beyond dispute, it wouldn't scale to the performance of current systems. Notably, its memory management was very simple: 8 base and length registers for each execution mode. As soon as you go much beyond a 64k address space it becomes infeasible to have a full set of mapping registers on the CPU and you're forced down the TLB road.
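The simplicity of that scheme can be sketched as follows - 8 KiB pages as on the later PDP-11s, with translation being just a register lookup and a bounds check (the register values below are illustrative):

```python
# Sketch of PDP-11-style mapping (simplified): each execution mode has 8
# page registers, each holding a physical base and a length; translation
# needs no TLB because all the mappings fit on the CPU.

PAGE_SHIFT = 13                    # 8 KiB pages -> 8 pages cover 64 KiB

class Segfault(Exception):
    pass

def translate(vaddr, page_regs):
    """page_regs: list of 8 (phys_base, length) pairs for the current mode."""
    page = vaddr >> PAGE_SHIFT
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)
    base, length = page_regs[page]
    if offset >= length:
        raise Segfault(hex(vaddr))
    return base + offset

regs = [(p * 0x20000, 0x2000) for p in range(8)]   # illustrative mapping
print(hex(translate(0x2004, regs)))                 # page 1 -> 0x20004
```

With a larger address space you'd need thousands of such registers per process, which is exactly why real CPUs cache a subset of translations in a TLB instead.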
What might well make sense is if it were possible to run the (key parts of the) kernel in a "protected real" mode (i.e. with no virtual address translation at all, but with some form of memory protection using keys or limit registers). If you don't have enough physical memory to contain a kernel, you're not going to make much progress anyway. And it's only one of many areas in which improving performance with caching of one kind or another leads to potential anomalies with specific execution patterns.
Not that any speculation (sorry!) of that kind helps with the current debacle - but it does illustrate how the growing complexity and unauditability of processor designs has largely been overlooked until recently while we've been principally worried about software security.
So when Facebook or Twitter decide you're a racist, who do you appeal to?
If Simon & Schuster, to take a current example, decide your book isn't worth publishing, who do you appeal to? [Hint: you don't, you try to find another publisher, if you can].
Regardless of what you are proposing to say, free speech does not mean you are entitled to a platform (either your own, or someone else's).
Presumably, the same people who are the arbiter of problem in other media: the courts.
There isn't a special law for content on the Internet - the same rules on content would apply to a book or newspaper and in that case a publisher takes an informed view before publication of whether the content is likely to be legal.
The only difference here is that the publisher's attention is being drawn to potentially illegal content ex post facto and they then get to make exactly the same judgment as a publisher in another medium. It's up to the publisher* to decide whether they want to make a free-speech argument for some specific content.
*Yes, Twitter, Facebook, et al, don't want to be regarded as publishers, precisely because of the liability that implies, but this law is pretty much pushing them down that road. Not before time, in my view, and not in enough places, but it's a start.
I have just moved and needed to change GPs. The new practice took my name and date of birth and on learning I am approaching retirement age asked "you're not on lots of medication, are you?" before confirming whether their list was open to new patients.
A former practice made the headlines for writing to its older patients asking them to move to other GPs as the practice had a large student population and therefore didn't have the "experience" to deal with older, and significantly more expensive, patients.
Nearly all GP practices are private businesses (not many are limited companies - yet - but they're still independent contractors): if the NHS controls their gross income and no-one controls demand, the GPs' only control is over their expenditure.
For as long as they maintain it and its compatibility with current generation Nest equipment.
Traditional alarms are just switches, wires and relay logic (emulated by a microcontroller) - they just work and go on working.
There is a difference between putting advertisements into different media - to ensure you cover a broad spectrum of potential recruits - and specifically limiting the target of your advertisements in a single medium, thus ensuring you reach a narrow spectrum of potential recruits.
Just as there is a difference between plain truth and sophistry.
yes, they will have something to do with it
Well, the Contracting States (which overlap with the EU states) appear to have been failing dismally to influence the EPO, never mind anyone else. If the European Commission had any influence, it presumably would have used it to resolve this ongoing impediment to "European harmonisation" before now.
It would seem that the aim of the EPO to reach beyond the mere confines of the EU is actually the source of the problem. Had it just been an agency of the EU, none of this could have arisen.
I presume you'd prefer a model in which every country has a separate Patent Office and businesses have to file their patents in every single one?
HMS Obvious Target
The ingenuity of the MOD seems to have eluded you. As long as the aircraft carrier carries no aircraft, it is not worth attacking - I'm sure this was their plan all along.
Which clown at facebook wrote that?
I presume the one that realised "if you believe this, you're as thick as mince" would have an adverse effect on Facebook's revenue.
I shall resist owning one of those
Good luck with that - if you want a larger screen size with a decent picture quality you're likely to be out of luck. The reason the "dumb" ones are cheaper is that they strip out everything - including the more expensive panels and advanced video processing.
I also suspect (though I have no actual knowledge) that the cost of "smart" TVs is to some extent subsidised (or the margin is maintained) by the potential to upsell commercial streaming services and/or user behavioural data, so the incentive to sell dumb devices is diminishing.
Biting the hand that feeds IT © 1998–2018