I am personally thrilled with precocious Tex. It does exactly what I try to toe. You guys are just total webinars
I rather think you're missing the point of my reference to circuit switched networks.
So in Bazzaland, you visit www.google.com and some nice young woman in a 1950s dress and hairdo grabs an RJ11 to plug you in? You then download the page, then realise that you need some resource from Google's CDN or analytics, so the 1950s woman disconnects your www.google.com circuit?
If your answer is circuit switching, you asked the wrong question. You also made MitM attacks a lot easier: there are a lot of 1950s exchange operators sitting in the middle, and any one of them can passively observe or actively alter the communication.
Luckily nothing approaching even a 1990s internet would have been possible under circuit switching.
My reference to circuit switched networks is that you know more about how one's traffic gets from A to B and exactly who the intervening switches belong to.
Not true. Unless you and your server are on networks run by the same operator, that isn't even technically possible, let alone feasible. Your network operator loses any capability to make such a promise at their interconnects. You are then relying on another network to finish the circuit. Your network knows that the traffic came in on the expected interconnect, but without encryption, or at least a digital signature, you cannot prove that what you receive is what they sent, and vice versa.
Look at https and the system of certificate authorities that "secures" it. It doesn't secure it at all. There's a market for certificates, and some of the vendors aren't particularly choosy about who they sell certificates to.
Ironically, you seem to be highly concerned with corrupt or inept CAs granting certificates to third parties, yet entirely trusting of your network operators to do the diligence to connect you to the right endpoint. In both cases you rely upon a third party's diligence to have verified an identity. If the network operators are so diligent, then let's cut out the middleman here and make the network operators the CAs. No, your argument doesn't hold up, not because CAs are perfect (hi there, WoSign), but because your suggested alternative has exactly the same problem, plus additional ones.
A more cynical commentard may imagine that the government of the day is simply trolling to get the beetroot* tops and twitters to start discussing something else.
*Sorry, I'll grab my coat
> Whilst many see encryption as being a tool with which to defend against baddies, one has to wonder whether we'd be better off without it.
And one doesn't have to wonder too hard to realise that the baddies will continue to use the existing strong encryption to communicate with each other or to lock up your files and demand a ransom. Meanwhile, your defences against this same scum are gone. You first.
> Here's a salutary reminder why it pays to patch promptly
Shirley, in this context, that should be "pays to not patch promptly".
> Adblock Plus developer Eyeo, meanwhile, said ....
Hardly the high watermark of advertising ethics. Pay us or we'll block your ads.
Mosquitos. The annoying nuisance at your BBQ dinner, sure. But also the animal responsible for more human deaths than any other in our long history. Relatively simple to deal with compared with other threats (netting, proper drainage, removing standing water, immunisations, and yes, in some circumstances spraying). The biggest problem is public awareness.
> It's certainly not a unit of force.
Someone once told Chuck Norris that he could not be used as a unit of speed. He was henceforth known as smudge.
> strength of the Yarkovsky effect is ~ 0.05 AU/Myr
For those of us not familiar with such domain-specific units of measure, what is that in nano-Norris?
I heard that some of the patches were so effective that after applying them there would be no way to run this sort of exploit code.
A broken clock shows the correct time twice* a day.
*Unless it happens to be stopped between two region-dependent early-morning hours on a particular Spring morning, in which case it is only once.
Kind of. Except obviously the TSA baggage locks require the threat actor to be physically present, whereas the encryption backdoor can be remotely exploited from another continent.
@Vector, you are looking at this wrong. The investment opportunities are boundless, once you realise that it's the law firms' shares that you should be shoveling your hard earned into.
> and if he's [not] proven to be
not guilty, fine.
TFTFY. It's a subtle but significant difference.
> Or does that sound too hard?
No, but if that solution didn't bring its own problems, it'd be a common design.
> Use a local proxy to cache all your remotely collected scripts. Have that proxy run a comparison check against the last known good version for all external scripts.
If you locally proxy, then you need to serve all that traffic yourself. Troy Hunt's blog about this story mentions, for his site today, that this would be 500GB of minified jQuery traffic if he didn't offload it to the Cloudflare CDN. That's costly. You can make all sorts of arguments about how people shouldn't use all these libraries, that we should jump right into Notepad to develop our static pages, but that kinda misses the whole point.
Another downside of your local cache model is that your performance will be crap if I'm not geographically close to your server. Most of those problems disappear if you are delivering off googakaflare. Even if you put your own site on a CDN, it's still a download overhead for your first-time visitors.
You haven't specified how you differentiate a new good version vs a new not-so-good version. Your technique tells you when a script changes, but so does SRI, and you can set that up in 30 seconds.
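To show that really is a 30-second job, here's a minimal Python sketch that produces a Subresource Integrity value for a script's `integrity` attribute (the script content below is a made-up placeholder):

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Return an SRI value: sha384 digest of the script, base64 encoded."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Hash the exact bytes of the script version you intend to pin.
content = b"console.log('hello');"
print(sri_hash(content))
# The result goes into:
#   <script src="..." integrity="sha384-..." crossorigin="anonymous">
# The browser then refuses to run the script if the bytes ever change.
```

Any change to the fetched script, malicious or otherwise, makes the digest mismatch and the browser blocks it.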
The problem with local caches or SRI is that they solve this problem but block the solution to the converse problem. Imagine that instead of being pwnd, this library vendor was privately notified of an XSS bug that allowed your customers' private data (and that of any other site using the same library) to be exposed. Without SRI, and with everyone referencing a common version, that XSS flaw could be immediately fixed across millions of websites. With SRI and private caches, you can't do patch management in that way. That is the crux of why it is a hard problem.
The two problems with hosting* a local copy of dependencies:
- You don't benefit from the browser cached version that almost certainly exists due to the user previously visiting a site with that plugin.
- You have to pay for that bandwidth**
At the end of the day you have a trade-off, because you need to decide whether to trust a third party to manage risk on your site or whether you want to vet everything yourself. SRI would have prevented this specific hack, true, provided you didn't have website authors who saw the error in the console, grabbed the new hash, then blindly applied it to put out the "our customers can't log in" fire. It also means that where there is a legitimate vulnerability in the framework, your site cannot be fixed automatically.
*Making a copy of any version that you deploy is important if only to deal with one of these vendors disappearing without notice.
**The irony isn't lost on me.
You're only supposed to blow the bloody doors off!
Pre-SNI, you could only have one HTTPS cert per IP address*. So a pretty trivial reverse DNS lookup would have revealed the site you were visiting**. HTTPS won't stop a MitM knowing that you went to en.wikipedia.org, but they cannot see the specific pages within Wikipedia that you visited.
*It was technically possible to do multiple subdomains with a wildcard (eg *.theregister.co.uk could host forums and www and whatever else on the same IP address). It was technically possible to do a SAN cert (eg Google could have got a single certificate for google.com, youtube.com, gmail.com etc). But for the most part, if you wanted HTTPS (pre-SNI), you had to buy a dedicated IP address for it.
**Tor or a VPN through a trustworthy provider are your friends to that end.
> ISPs have been caught modifying and injecting ads.
MitM could even inject coin miners.
> However, it does remove/destroy all the capacity of centralised caching (e.g. in workplaces)
It removes it for BYOD stuff, but company-issued kit can have a MitM-trusted CA to sign your fake certs.
> Just a nit... being a grammar Nazi... the word is regardless not irregardless. ;-)
> But I digress
Please. Know apologies necessary. Its nonacceptable for they're post their too contain sew many errors. If your just going to let it slide, its going to be dog's living with cat's.
Yet I have some trouble with the notion that the promising young defence barrister in that trial would, in 2018, use the phrase "If he is an honest man, then he appears rather like a well-educated mushroom" in this context. I could better believe "How high would you like me to jump?".
> Yes, yes, I know... the NSA, GCHQ, etc. may have intercepted the original e-mail and replied, spoofing Bob's e-mail address. I'll think I'm talking to Bob, but I'm really not. But most of us aren't up against the NSA
It's not only the TLAs that have superpowers over such a scheme. If your email is being sent over plain ol' SMTP, then it would be trivial to both intercept and modify your email and therefore change that header string. I haven't checked, but I'd put money on there being a pre built module for the WiFi pineapple to do this already.
> - 71,500 lines is 1 WaP. Which is based on the length of Tolstoy's book "War and Peace" (Oxford World Classics edition).
I have seen a ViewModel with 1.1 WaPs. And not just a bunch of consts or enum declarations either. Yes, I wish I was exaggerating too.
> 'Better Off With Map And Nokia'.
> At least back then you could go more than 1/2 a day before the battery ran out!
And if it did run out, you could unclip the back and put your spare battery in. You didn't even need some weirdly shaped screwdriver and plastic lever and half a dozen highly carcinogenic chemicals on hand to unglue the old one.
Just give it half an hour to upload.
> What is interesting about these attacks is that they require considerable physical access to the ATM itself, meaning that there is a high risk of getting caught,
High risk of who being caught? Some gang foot soldier who got in too deep and is "paying off" their debt. Paraphrasing Lord Farquaad: "some of you may get caught, but that's a risk that I'm willing to take".
I watched the Barnaby Jack video years ago. It's well worth your time if for no other reason than to appreciate the mindset of someone determined to get into one.
From memory*, he pointed out how the threat model was understood to be a case of protecting the cash safe, and not enough thought was given to protecting the PC itself, which was accessible with a pretty simple key. A bit of social engineering would make your farting about look unsuspicious. Have two of you there, wear something resembling a uniform, bring a lanyard, and call the manager of the store an hour before you get there telling them that there has been an alert which requires a technician. Ask the manager to call some number when they arrive and when they leave, "for security".
*at least I think it was that video, apologies if it was another.
> that's another story line of closest spectacular near misses
@ElReg, make this new column happen!
If I were Intel, I would have a genuine concern that a disclosure to the NSA would be followed the next day by a secret court order preventing disclosure, and therefore nipping in the bud any chance of the microcode patches* that partially mitigate the attack vectors being widely deployed.
That said, I don't want to over-defend their behaviour because I don't know the timeframes. If it were my call, I'd spread the news far enough that the genie is out of the bottle and not going back, then as early as possible work with the various TLAs in their defence remit (the part of their job that they always seem to forget).
*Leaving aside the, er, quality assurance issues surrounding these patches
Not sure small fry is the best description. Its readership is no doubt smaller than the WSJ's, but that is because it covers only IT news while the other masthead is generic news and analysis. It is unsurprising that it takes some time for such news to be distilled down to a level where their readership actually gets the gist of the fact it is going to affect them. I mean, how do you explain speculative branch prediction or kernel mode to someone with no understanding of computer architecture? You could reasonably explain a side-channel attack by analogy (eg a thief could check your water meter over a few days to determine whether you're on holidays), but this stuff is complex.
> if you can't understand how to use a computer mouse (which was designed for the non technical) after an hour and be able to click on a simple icon then frankly you're just thick
By all means have a chuckle at the slow speed that many technically illiterate people learn at. Just remember that unlike you, they can probably hem their own trousers/build that retaining wall/change that spark plug/bake a cake without a packet and probably hundreds of other things that we need to pay a guy to do these days. They can probably figure out how much change is due without using a calculator.
> Wrong order. You need to make the improvements first.
No, you need to do both at the same time. It isn't like one week we'll all toss away our ICE vehicles and start charging our EVs. It will be a decade+ before "most" new cars are EVs (or PHEVs). It's then another 5 years+ until most families have one.
Auctioning off slots just allows the distribution networks to optimise their loads. People can downvote me all they want, but I have shown my math. There is never going to be a night where the grid cannot top up every EV, because across a 12-hour window the draw during these times is relatively insignificant compared to peak times. The only reason you need to bid for a slot is if you need to leave again with a full charge before the 12 hours are up. You need to upgrade distribution networks for that. The amount you spend depends on how many people need it.
> Great. And if the grid is busy one night, I can't drive to work in the morning.
I really don't think you have thought that argument through. If you are plugging in every night, then you are "topping up" only, not doing a full charge. Average km/year in Australia is 15-20K, call it 50 km/day. That's going to be in the ballpark of 10-15 kWh between plugin and unplug. If you charged that evenly from 6pm through 6am, that is going to be an average draw of 1 kW, or about the same as a fan heater on low. Still sounds scary? Didn't think so.
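Here's the arithmetic as a quick sanity check (the per-km consumption figure is my assumption; real-world EVs vary):

```python
# Back-of-the-envelope check on nightly EV top-up charging.
km_per_year = 18_000            # mid-point of the 15-20K km/year figure
km_per_day = km_per_year / 365
wh_per_km = 250                 # assumed real-world EV consumption, 0.25 kWh/km
daily_kwh = km_per_day * wh_per_km / 1000
charge_window_hours = 12        # 6pm through 6am
avg_draw_kw = daily_kwh / charge_window_hours

print(f"{km_per_day:.0f} km/day -> {daily_kwh:.1f} kWh/day "
      f"-> {avg_draw_kw:.2f} kW average draw")
```

Roughly 50 km/day works out to about 12 kWh, which spread over a 12-hour window is an average draw of about 1 kW, as above.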
Grids are provisioned to deliver the maximum expected draw, not a typical draw. Due to a regulation failure here, distributors were able to get a guaranteed profit simply by showing they had invested in the poles and wires. The more they spent, the more profit, so of course they carried out upgrades with the slimmest of justifications. This was largely responsible for a doubling of power bills over a five-year period. So what's that to do with my point? Glad you asked. The figures published to justify the need for this gold plating showed that it was literally needed for 20 hours a year. (Blame air conditioning during the 47°C day we had a few weeks back for a large number of those hours.) A typical nightly load does not stretch the distribution networks; certainly nothing happening at 2am comes close. You are never going to be without a full top-up over that 12-hour period.
In fact, it is beneficial to the grid to have these 5 kWh power reserves sitting at every other house. It reduces the load on the generator-to-local-grid connections, where many of these bottlenecks are.
> And, "auction off those slots to the highest bidder and earmark all profits from those auctions to distribution network improvements". Did you keep a straight face with that one?
We already have auctions for base load, backup, frequency stabilisation and load shedding, and already have buy-backs for PV panel surpluses. Retailers already need to bid for this capacity. It really is just another two markets, for emergency load shedding and buy-back. Hardly impossible. Or are you pointing out the lack of foresight held by our Muppets-in-charge? You are sadly probably right that they will want a cut. I hope they can see that taking a cut of such slots will result in higher electricity prices and leave consumers worse off than if it was just a direct tax of whatever amount (the difference in that money-go-round is the lining of the pockets of the generators). By earmarking the proceeds, you eventually kill the need for that market and drive the costs down to an optimal equilibrium.
Yes, a lot of people will get home at 6pm and plug in their EVs for the night. Yes, if those chargers start pumping as much energy as possible into the EV battery packs during the evening peak, the distribution networks are going to be seriously tested.
But who said that all these future EVs need dumb charging? The chargers themselves could have a 3G connection that negotiated charging times and rates with the grid operator in exchange for a small discounted rate during those times. You could have a website where, for a nominal fee, you could reserve immediate charging time slots if there was a reason you needed a quick top-up at volume 11. Better still, auction off those slots to the highest bidder and earmark all profits from those auctions to distribution network improvements. Finally, incentivise EV owners to let the grid take back a certain number of kWh over particular portions of time for grid-stability services. For example, let's use some numbers. Assume a 50 kWh battery pack. You could be paid by the grid for giving back the top 5 kWh (so your available capacity would be guaranteed to be at least 45 kWh). They could, for example, credit 10 kWh of free electricity for those 5 you gave back when the grid was struggling.
I'm sorry, but when a Torvalds rant comes across as a reasonable response to your buffoonery, it is probably time to take stock, admit there's a problem and start methodically working towards a sensible solution.
HTTPS isn't magic but it does cut out whole classes of vulnerabilities that can cause malware to be transported to you.
Or another way, with HTTPS, the site has to be compromised or otherwise be untrustworthy (or a combination of compromised DNS and compromised CA has tricked the browser). With HTTP, you only need to connect via a rogue free WiFi access point in order to introduce malware not actually sent by the source website. And before anyone comments on some l337 haxor skills required for such pwnage, Google WiFi pineapple and then watch the YouTube instructions whilst awaiting your kit to be delivered.
> about BAE experimental aircraft and projects that hadn't got off the ground?
Shirley it is only the ones that they got off the ground that are worth considering?
The problem is when someone thinks that it equates to "unplanned", or that changing requirements has no consequences.
When you boil it down, the claims it makes are hardly controversial. "Issues discovered early in a process are cheaper to remedy than those found later", so tasks are supposed to be self-contained and achievable in half a day. Sprints are similarly a week or two, so idea-to-usable-feature is much shorter. If it is found to be a dumb feature, it is not likely to be intricately linked to hundreds of other features and therefore ridiculously expensive to change.
Analysis and design is really a Goldilocks art form. Too little, and the inevitable feature request comes in that requires an entirely new implementation even though, in the customer's eyes, the request is simple. Too much, and projects get stuck in analysis paralysis, too scared to head down a direction because of unknown unknowns; or worse, the requirements change and developers become unwilling to throw away the code they have so much mentally invested in, even if a clean slate would be a better place to start.
Of course, it is the unit tests and continuous integration that make it possible to deeply refactor code without risking breakage, and these are often what companies won't invest in. They are always seen as an overhead, and the future efficiencies they create are never credited to them.
> That set of emails becomes part of the project documentation.
I would caution against using emails as project documentation. In five years' time, when some intended behaviour is being called a bug, the email from <insert old broom from customer who was frog-marched out two years ago> to <insert old manager who jumped before they were pushed six months back> is not going to help you if those mailboxes are long dead.
I would suggest using something like Confluence to capture the requirements. If you think that the customer (which could be an internal customer) is likely to try something on, then by all means attach an email as a PDF as a sign-off to wave around later, but don't rely on Outlook as a knowledge repository. Please.
There is no problem with the way the software responds to a failing battery. Slowing things down is certainly preferable to having a phone that reboots continuously. No one is complaining about that. The complaints all stem from the fact that this throttling is done silently, making your "old" phone feel slow, and without telling people of the relatively cheap resolution to the speed issues it causes them.
that if cars were invented in the past 5 years, the engine would be completely encased with no way to change the oil.
Why would a removable battery make it heavier? Larger, well maybe a mm thicker to allow for a clip, but that is really clutching at straws.
The issue is exactly that the phones got an update to slow them down without consent, notification, or any way to opt out.
You can do broad-stroke heuristics: how many connections are attempted, where they are destined, how big the payloads are, how long between connects, what ports are used, and the sorts of DNS queries these things make.
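As a toy illustration of that kind of scoring (the field names, ports and thresholds here are all invented for the example, not from any real product):

```python
# A toy heuristic scorer: each host's connection records are scored on a
# few coarse signals, and high totals would get flagged for a closer look.
from collections import Counter

SUSPICIOUS_PORTS = {23, 4444, 6667}   # telnet, common reverse-shell, IRC

def score_host(connections):
    """connections: list of dicts with 'dst', 'port', 'bytes', 'interval_s'."""
    score = 0
    if len(connections) > 100:                       # unusually chatty
        score += 2
    if any(c["port"] in SUSPICIOUS_PORTS for c in connections):
        score += 3
    dests = Counter(c["dst"] for c in connections)
    if len(dests) > 50:                              # fan-out to many hosts
        score += 2
    intervals = [c["interval_s"] for c in connections]
    if intervals and max(intervals) - min(intervals) < 1:
        score += 3                                   # metronomic beaconing
    return score

# Ten identical connections to a reverse-shell port at exact 60s intervals.
conns = [{"dst": "198.51.100.7", "port": 4444, "bytes": 512, "interval_s": 60}] * 10
print(score_host(conns))   # beaconing to a suspicious port scores high
```

None of these signals is conclusive on its own; the point is that cheap, coarse features in aggregate separate IoT tat phoning home from normal browsing.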
> No word on demo bird's range or speed
African or European spec?
If this threat is to teach us anything, clearly we must give encryption a backdoor.
/Logic brought to you by the numpties who run the show.
> Since your fingerprint (or face, or (presumably) DNA) is stored as a salted hash in the Secure Enclave of the phone
Disclaimer, it has been a few years since I last looked into facial recognition (wasn't quite up to snuff back then), but I work on systems with deep integration of fingerprint and vein scan as well as regular password authentication.
Hashed authentication for passwords/passcodes works because you can* store Hash(secret + salt) and later test whether Hash(guess + salt) == stored value, without storing the secret itself. You don't need that secret, just statistical proof that it is nigh impossible for the guess not to be the actual secret**.
Biometric templates are different because you are not able to get an identical scan for verification. Even two photos taken on the same camera, on a tripod, in a studio, seconds apart, will have subtle differences. If you were to perform a subtraction operation on the bitmaps, the result would not be pure black. Because of this, templates are more like a series of measurements of angles and ratios of various features. It can be thought of as a template in the sense that you can't take those numbers and reconstruct the original scan/photo, but the verification logic needs to have those numbers to determine whether the candidate finger/face is "close enough" to the template. (This is why we can meaningfully talk about false accept rate and false reject rate for biometrics.) My point is that you can encrypt the template but you cannot hash it.
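To make that concrete, here's a toy sketch of distance-based verification (the feature vectors and the threshold are made up for illustration). Hash either vector and the "close enough" comparison becomes impossible, which is exactly the point:

```python
import math

def distance(template, candidate):
    """Euclidean distance between two feature vectors (angles/ratios)."""
    return math.sqrt(sum((t - c) ** 2 for t, c in zip(template, candidate)))

def verify(template, candidate, threshold=0.5):
    """Accept if the candidate scan is 'close enough' to the enrolled template.
    Raising the threshold lowers the false reject rate but raises the
    false accept rate, and vice versa."""
    return distance(template, candidate) <= threshold

enrolled = [0.31, 1.42, 0.87, 2.05]      # made-up enrolled measurements
same_finger = [0.33, 1.40, 0.88, 2.02]   # a rescan: close, but never identical
other_finger = [0.90, 1.10, 1.50, 1.20]

print(verify(enrolled, same_finger))     # True:  small distance, accepted
print(verify(enrolled, other_finger))    # False: large distance, rejected
```

A hash of `same_finger` bears no resemblance to a hash of `enrolled`, so a hashed template would reject every genuine rescan.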
*But please don't. Google scrypt or bcrypt and use one of them.
**Aka a collision
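A minimal sketch of the salted-hash scheme described above, using Python's stdlib `hashlib.scrypt` as the footnote suggests (the cost parameters here are illustrative, not a production recommendation):

```python
import hashlib
import hmac
import os

def store(secret):
    """Return (salt, hash) to store; the secret itself is never kept."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(secret.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def check(guess, salt, stored):
    """Recompute Hash(guess + salt) and compare in constant time."""
    digest = hashlib.scrypt(guess.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(digest, stored)

salt, stored = store("correct horse battery staple")
print(check("correct horse battery staple", salt, stored))  # True
print(check("Tr0ub4dor&3", salt, stored))                   # False
```

The per-user random salt means identical passwords hash differently, and the scrypt cost factors make brute-forcing the stored values expensive.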
Biting the hand that feeds IT © 1998–2018