Re: Theory and in practice?
Hz needs an El Reg alternative to avoid all this confusion.
2545 publicly visible posts • joined 7 May 2012
> Me too, given the company paid nearly a quarter million for those two years of backups
That's a rather strange method of accounting. Is it a waste that I paid about 1.5K to insure my car last year, even though I didn't have an accident?
The cost of your restore* is the time taken by whatever person needed to locate the right tape and find the right file(s) plus the lost opportunity cost of whatever that person+equipment would have otherwise been doing.
*I would argue that the restore was free, the cost was on the unintended deletion or hardware failure.
Set up fake hotspot with believable name. Check (although you forgot the de-auth packet flood to disconnect everyone on those other APs).
Poison the responses from DNS. Check
Obtain an SSL certificate for natwest.com
Yeah, no. Obtaining a fake certificate isn't completely impossible, because CAs have made mistakes and probably will again. Some guy ended up with a GitHub certificate a few months back due to a CA stuff-up. And CAs have been distrusted for handing out fakes (Google "DigiNotar"). We have also seen the likes of Lenovo and Dell installing themselves as certificate authorities, and I believe in the Dell case this could have been used to sign a fake server certificate.
Far more likely is someone registering natvvest.com and getting a legitimate certificate for that domain. Of course, if NatWest used* HSTS then the redirect page wouldn't be trusted by your browser. (A 302 is needed because the browser is expecting a certificate owned by natwest.com, not natvvest.com. If the original request is plain HTTP, it can be intercepted and answered with a redirect that sends your browser to the new domain.)
The actual problem with HTTPS is that an observer can correlate who you are talking to with the response size, and infer what you are doing. The Facebook image on this article is 13282 bytes. How many other El Reg resources are exactly that size?
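To make the size-correlation point concrete, here is a minimal sketch of how a passive observer might fingerprint encrypted responses. Everything except the 13282-byte figure above is a hypothetical stand-in:

```python
# Passive traffic analysis: the observer cannot read the encrypted
# payload, but can see its size, and matches that against a table of
# publicly fetchable resources whose sizes were measured in advance.
KNOWN_SIZES = {
    13282: ["facebook_story_image.png"],  # figure from the article above
    48211: ["front_page.html"],           # hypothetical entry
}

def guess_resource(observed_size: int) -> list:
    """Return candidate resources whose size matches an observed response."""
    return KNOWN_SIZES.get(observed_size, [])

print(guess_resource(13282))  # ['facebook_story_image.png']
```

In practice, TLS padding, compression and shared sizes muddy the picture, which is why this yields an inference rather than a certainty.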
TL;DR - HTTPS doesn't give you perfect security, but it is inarguably better than HTTP.
*They may well. I didn't check.
(d) Having no* onsite x-ray machine means a shed load of paperwork** if someone needs to be hospitalised.
(e) the whole "not supposed to expose your prisoners to ionising radiation***" thing.
*At least none designed for a human.
**I am pretty confident in this guess.
***although I imagine that at least some of the cell's walls are made of bricks.
> There are two paths for smartphone peddlers these days.
@AC, not sure I agree with that dichotomy. I would agree that Apple can't really go #1 without cannibalizing their #2s (what a fortunate pun). We see this in other product categories too, where a carmaker uses different marques to sell something made from the same parts bin at substantially different prices.
But I see it as a scale rather than two camps. You just need profit per unit * units sold > $X
In the #2 world, profit per unit is just absurd. When I last replaced my phone (firmly from one of the vendors of #1s), it wasn't because I wanted the cheapest possible thing. I wanted a Nexus price/feature compromise, not a Pixel price/feature compromise. If the midrange isn't selling, it's because they don't want to take any bullet points off their flagship's feature list, so the midrange can't pull buyers away from the Chinese brands. You can't ask for another 200 quid if all you get for it is an extra 2MP on the camera.
I think that it should be considered on a case-by-case basis. There is nothing in this article that the wider security industry would take umbrage at. HTTPS becoming more common. Check. His survey has been running for years and the methods are generally quite sound. It may be off by a percentage point or two because of limited ability to sample subdomains, but this doesn't affect the trend. Does he have any conflict of interest? Well, I guess he will have more business opportunities if he can establish that HTTPS is inevitable, but I can't think of any subject matter expert who wouldn't equally benefit. And no one serious is questioning his expertise in this area.
The uBlock Origin thing is one where he does have a direct commercial interest, but I believe this was disclosed. It would have been impossible to cover that story without sourcing his views, because he was the one complaining. And for the record, there are legitimate arguments for both positions on the uBlock report-uri thing. There is a potential side-channel tracking capability if it is honoured, but it can also benefit *other* people by notifying the site owners if someone injects a Coinhive JavaScript into their site. You don't personally need it fixed because your browser has already protected you. On balance, I think the tracking protection is the better benefit (and I made this point at the time). But yes, that does affect his commercial Report URI service, so it needs to be disclosed.
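For context, the mechanism in question is CSP's reporting directive. A site opts in with a response header along these lines (the endpoint shown is a made-up example, not anyone's actual account):

```
Content-Security-Policy: script-src 'self' https://cdn.example.com; report-uri https://example-subdomain.report-uri.com/r/d/csp/enforce
```

If someone injects, say, a Coinhive script, the browser refuses to run it and POSTs a JSON violation report to that endpoint, which is how the site owner finds out. It is that outbound report, not the blocking itself, that an ad blocker might choose to suppress.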
Ultimately, that is a journalistic integrity decision. I personally find this particular red top acts reasonably responsibly.
A MitM* can easily modify the El Reg homepage to add a Coinhive JavaScript or any other tracking token they want. They can manipulate the stories you see, include content not in the original, or censor content they don't want you to know about. If there is a link to the forums login screen, they can point that to a phishing site.
*And let's be clear here, a WiFi Pineapple can be had for a few hundred local currency, and about 15 minutes of YouTube instructions will have your MitM up and running. This isn't a TLA-level hack.
I rather think you're missing the point of my reference to circuit switched networks.
So in Bazzaland, you visit www.google.com and some nice young woman with 1950s dress and hair grabs an RJ11 to plug you in? You then download the page, then realise that you need some resource from Google's CDN or analytics, so the 1950s operator disconnects your www.google.com circuit?
If your answer is circuit switching, you asked the wrong question. You also made MitM attacks a lot easier: there are a lot of 1950s exchange operators sitting in the middle, and any one of them can passively observe or actively change the communication.
Luckily nothing approaching even a 1990s internet would have been possible under circuit switching.
> My reference to circuit switched networks is that you know more about how one's traffic gets from A to B and exactly who the intervening switches belong to.
Not true. Unless you and your server are on networks run by the same operator, that isn't even technically possible, let alone feasible. Your network operator loses any ability to make such a promise at their interconnects; you are then relying on another network to finish the circuit. Your network knows that the traffic came in on the expected interconnect, but without encryption, or at least a digital signature, you cannot prove that what you received is what they sent, and vice versa.
> Look at https and the system of certificate authorities that "secures" it. It doesn't secure it at all. There's a market for certificates, and some of the vendors aren't particularly choosy who they sell certificates to.
Ironically, you seem to be highly concerned with corrupt or inept CAs granting certificates to third parties, yet entirely trusting of your network operators to do the diligence to connect you to the right endpoint. In both cases you rely upon a third party's diligence in verifying identity. If the network operators are so diligent, let's cut out the middleman and make the network operators the CAs. No, your argument doesn't hold up, not because CAs are perfect (hi there, WoSign), but because your suggested alternative has exactly the same problem, plus additional ones besides.
> Whilst many see encryption as being a tool with which to defend against baddies, one has to wonder whether we'd be better off without it.
And one doesn't have to wonder too hard to realise that the baddies will continue to use the existing strong encryption to communicate with each other or to lock up your files and demand a ransom. Meanwhile, your defences against this same scum are gone. You first.
Mosquitoes. The annoying nuisance at your BBQ dinner, sure. But also the animal responsible for more human deaths than any other in our long history. Relatively simple to deal with compared with other threats (netting, proper drainage, removing standing water, immunisations and, yes, in some circumstances spraying). The biggest problem is public awareness.
> Or does that sound too hard?
No, but if that solution didn't bring its own problems, it'd be a common design.
> Use a local proxy to cache all your remotely collected scripts. Have that proxy run a comparison check against the last known good version for all external scripts.
If you proxy locally then you have to serve all that traffic yourself. Troy Hunt's blog about this story mentions that for his site today this would be 500GB of minified HTTP data just for jQuery if he didn't offload it to the Cloudflare CDN. That's costly. You can make all sorts of arguments about how people shouldn't use all these libraries, that we should jump right into Notepad to develop our static pages, but that kinda misses the whole point.
Another downside of your local cache model is that your performance will be crap if I'm not geographically close to your server. Most of those problems disappear if you are delivering off Googakaflare. Even if you put your own site on a CDN, it's still a download overhead for your first-time visitors.
You haven't specified how you differentiate a new good version vs a new not-so-good version. Your technique tells you when a script changes, but so does SRI (Subresource Integrity), and you can set that up in 30 seconds.
The problem with local caches or SRI is that they solve this problem but block the solution to the counter-problem. Imagine that instead of being pwned, this library vendor was privately reported an XSS bug that exposed your customers' private data (and that of any other site using the same library). Without SRI, and with everyone referencing a common version, that XSS flaw could be immediately fixed across millions of websites. With SRI and private caches, you can't do patch management in that way. That is the crux of why it is a hard problem.
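For reference, a Subresource Integrity value (the hash-pinning mechanism at issue here) is nothing more exotic than a base64-encoded digest of the exact script bytes, which is why it can be set up in 30 seconds. A minimal sketch; the script contents below are a stand-in, not any real library:

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity value (sha384 is the common choice)."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# The page then carries integrity="sha384-..." on the <script> tag, and the
# browser refuses to execute the file if the served bytes no longer hash to
# that value -- which is exactly why a patched (or pwned) version stops
# loading until every referencing page updates its hash.
print(sri_hash(b"/* stand-in for jquery.min.js contents */"))
```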
Most folk who use NoScript do not block each and every JavaScript. Rather, they block by default, then whitelist either sites they trust, or specific scripts, or whatever. In this specific circumstance, a lot of the sites would have been trusted, and even going through the details, is a JavaScript for a known screen reader going to set off any alarm bells?
The two problems with hosting* a local copy of dependencies:
- You don't benefit from the browser cached version that almost certainly exists due to the user previously visiting a site with that plugin.
- You have to pay for that bandwidth**
At the end of the day, you have a trade-off: you need to decide whether to trust a third party to manage risk on your site or whether you want to vet everything. SRI would have prevented this specific hack, true, assuming you didn't have website authors who saw the error in the console, grabbed the new hash, then blindly applied it to put out the "our customers can't log in" fire. It also means that where there is a legitimate vulnerability in the framework, your site cannot be fixed automatically.
*Making a copy of any version that you deploy is important if only to deal with one of these vendors disappearing without notice.
**The irony isn't lost on me.
Pre-SNI, you could only have one HTTPS cert per IP address*. So a pretty trivial reverse DNS lookup would have revealed the site you were visiting**. HTTPS won't stop a MitM knowing that you went to en.wikipedia.org, but they cannot see the specific pages within Wikipedia that you visited.
*It was technically possible to do multiple subdomains with a wildcard (eg *.theregister.co.uk could multi-tenant forums and www and whatever else on the same IP address). It was technically possible to do a SAN cert (eg Google could have got a single certificate for google.com, youtube.com, gmail.com, etc.). But for the most part, if you wanted HTTPS (pre-SNI), you had to buy a dedicated IP address for it.
**Tor or a VPN through a trustworthy provider are your friends to that end.
> Just a nit... being a grammar Nazi... the word is regardless not irregardless. ;-)
> But I digress
Please. Know apologies necessary. Its nonacceptable for they're post their too contain sew many errors. If your just going to let it slide, its going to be dog's living with cat's.
Yet I have some trouble with the notion that the promising young defence barrister in that trial would, in 2018, use the phrase "If he is an honest man, then he appears rather like a well-educated mushroom" in this context. I could better believe "How high would you like me to jump?".
> Yes, yes, I know... the NSA, GCHQ, etc. may have intercepted the original e-mail and replied, spoofing Bob's e-mail address. I'll think I'm talking to Bob, but I'm really not. But most of us aren't up against the NSA
It's not only the TLAs that have superpowers over such a scheme. If your email is being sent over plain old SMTP, then it would be trivial to both intercept and modify it, and therefore change that header string. I haven't checked, but I'd put money on there being a pre-built module for the WiFi Pineapple to do this already.
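To underline how low the bar is, a sketch of the tampering step once the plaintext is in hand (the header name and message are invented for illustration):

```python
# An in-path attacker who can read plaintext SMTP traffic can rewrite any
# part of the message before forwarding it on, including a shared-secret
# header -- no TLA superpowers required.
original = (
    "From: bob@example.com\r\n"
    "X-Shared-Secret: correct horse battery staple\r\n"
    "\r\n"
    "Hi Alice,\r\n"
)

tampered = original.replace(
    "X-Shared-Secret: correct horse battery staple",
    "X-Shared-Secret: value of the attacker's choosing",
)

print(tampered.splitlines()[1])  # X-Shared-Secret: value of the attacker's choosing
```

The defence is end-to-end signing (PGP/S-MIME) or at least TLS on every hop, neither of which plain SMTP guarantees.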
> 'Better Off With Map And Nokia'.
> At least back then you could go more than 1/2 a day before the battery ran out!
And if it did run out, you could unclip the back and put your spare battery in. You didn't even need some weirdly shaped screwdriver and plastic lever and half a dozen highly carcinogenic chemicals on hand to unglue the old one.
> What is interesting about these attacks is that they require considerable physical access to the ATM itself, meaning that there is a high risk of getting caught,
High risk of who being caught? Some gang foot soldier who got in too deep and is "paying off" their debt? Paraphrasing Lord Farquaad: "Some of you may get caught, but that's a risk I'm willing to take."
I watched the Barnaby Jack video years ago. It's well worth your time if for no other reason than to appreciate the mindset of someone determined to get into one.
From memory*, he pointed out how the threat model was understood as "protect the cash safe", and not enough thought was given to protecting the PC itself, which was accessible with a pretty simple key. A bit of social engineering would make your farting about non-suspicious: have two of you there, wear something resembling a uniform, bring a lanyard, and call the manager of the store an hour before you get there telling them that there has been an alert which requires a technician. Ask the manager to call some number when they arrive and when they leave, "for security".
*at least I think it was that video, apologies if it was another.
If I were Intel, I would have a genuine concern that a disclosure to the NSA would be followed the next day by a secret court order preventing disclosure, nipping in the bud any chance of wide deployment of the microcode patches* that partially mitigate the attack vectors.
That said, I don't want to over defend their behaviour because I don't know the timeframes. If it were my call, I'd spread the news far enough that the genie is out of the bottle and not going back, then as early as possible work with various TLAs in their defense remit (the part of their job that they always seem to forget).
*Leaving aside the, er, quality assurance issues surrounding these patches
Not sure small fry is the best description. Its readership is no doubt smaller than the WSJ's, but that is because it covers only* IT news, while the other masthead is generic news and analysis. It is unsurprising that it takes some time for such news to be distilled down to a level where their readership actually gets the gist that it is going to affect them. I mean, how do you explain speculative branch prediction or kernel mode to someone with no understanding of computer architecture? You could reasonably explain a side-channel attack by analogy (eg a thief could check your water meter over a few days to determine whether you're on holidays), but this stuff is complex.
*almost
> if you can't understand how to use a computer mouse (which was designed for the non technical) after an hour and be able to click on a simple icon then frankly you're just thick
By all means have a chuckle at the slow speed at which many technically illiterate people learn. Just remember that, unlike you, they can probably hem their own trousers/build that retaining wall/change that spark plug/bake a cake without a packet mix, and do hundreds of other things that we need to pay a guy for these days. They can probably figure out how much change is due without using a calculator.
> Wrong order. You need to make the improvements first.
No, you need to do both at the same time. It isn't like one week we'll all toss away our ICE vehicles and start charging our EVs. It will be a decade+ before "most" new cars are EVs (or PHEVs). It's then another 5 years+ until most families have one.
Auctioning off slots just allows the distribution networks to optimise their loads. People can downvote me all they want, but I have shown my math. There is never going to be a night where the grid cannot top up every EV because across a 12 hour window, the draw during these times is relatively insignificant compared to peak times. The only reason you need to bid for a slot is if you need to leave again with a full charge before the 12 hours. You need to upgrade distribution networks for that. The amount you spend depends on how many people need it.
> Great. And if the grid is busy one night, I can't drive to work in the morning.
I really don't think you have thought that argument through. If you are plugging in every night, then you are "topping up" only, not doing a full charge. Average km/year in Australia is 15-20K; call it 50 km/day. That's going to be in the ballpark of 10-15 kWh between plug-in and unplug. If you charged that evenly from 6pm through 6am, that's an average draw of about 1 kW, or about the same as a fan heater on low. Still sounds scary? Didn't think so.
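The back-of-the-envelope numbers above, written out (the consumption figure is an assumed ~0.2 kWh/km, consistent with typical EVs):

```python
daily_km = 50                  # ~15-20K km/year in Australia, as above
consumption_kwh_per_km = 0.2   # assumed typical EV efficiency
charge_window_hours = 12       # plugged in 6pm through 6am

energy_needed_kwh = daily_km * consumption_kwh_per_km      # 10.0 kWh
average_draw_kw = energy_needed_kwh / charge_window_hours  # ~0.83 kW

print(f"{energy_needed_kwh} kWh overnight, {average_draw_kw:.2f} kW average")
# -> 10.0 kWh overnight, 0.83 kW average
```

About 0.83 kW sustained, ie less than a typical fan heater, which is the point: a nightly top-up barely registers against peak loads.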
Grids are provisioned to deliver the maximum expected draw, not a typical draw. Due to a regulation failure here, distributors were able to get a guaranteed profit simply by showing they had invested in the poles and wires. The more they spent, the more profit, so of course they carried out upgrades with the slimmest of justifications. This was largely responsible for a doubling of power bills over a five-year period. So what's that got to do with my point? Glad you asked. The figures published to justify the need for this gold-plating showed that it was literally needed for 20 hours a year. (Blame air conditioning during the 47°C day we had a few weeks back for a large number of those hours.) A typical nightly load does not stretch the distribution networks; certainly nothing happening at 2am comes close. You are never going to be without a full top-up over that 12-hour period.
In fact, it is beneficial to the grid to have these 5 kWh power reserves sitting at every other house. It reduces the load on the generator-to-local-grid connections, where many of these bottlenecks are.
> And, "auction off those slots to the highest bidder and earmark all profits from those auctions to distribution network improvements". Did you keep a straight face with that one?
We already have auctions for base load, backup, frequency stabilisation and load shedding, and we already have buy-backs for PV panel surpluses. Retailers already need to bid for this capacity. It really is just another two markets, for emergency load shedding and buy-back. Hardly impossible. Or are you pointing out the lack of foresight of our Muppets-in-charge? You are sadly probably right that they will want a cut. I hope they can see that taking a cut of such slots will result in higher electricity prices and leave consumers worse off than a direct tax of whatever amount would (the difference in that money-go-round is the lining of the generators' pockets). By earmarking the proceeds, you eventually kill the need for that market and drive the costs down to an optimal equilibrium.