I am struggling to resist making an extremely defamatory comment here.
No, the difficulty of protecting people from other people is not some trivial monotonic function of the number of people you have, if the number is small. When it's one, then there is no difficulty. You can't have a total of two people, because one of them must be the protecting person who then serves no purpose. With two people you need a third protecting person which is very expensive, or you take a risk. Up from there the cost comes down and tends to some fixed proportion (n protecting people per m people being protected), but since you can't supply partial people it comes down rather jerkily and perhaps quite slowly (if you need one person per 20, say, but you only have five people, you're paying way over the odds and may decide not to have anyone at all, meaning there is no protection). So yes, small numbers of people are hard to protect, where 'small numbers' means 'more than one'.
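A minimal sketch of the step-function argument above, assuming (as the post does, purely for illustration) one protector per 20 protected people; the ratio and function names are mine:

```python
import math

GUARD_RATIO = 20  # assumption from the post: one protector per 20 people

def guards_needed(m: int) -> int:
    """Whole protectors needed for m people: you can't hire fractions."""
    if m <= 1:
        return 0  # one person alone needs no protecting person
    return math.ceil(m / GUARD_RATIO)

def overhead(m: int) -> float:
    """Protectors per protected person: jerky, and worst for small m."""
    return guards_needed(m) / m

for m in (1, 2, 5, 20, 21, 100):
    print(f"{m:4d} people: {guards_needed(m)} guard(s), "
          f"overhead {overhead(m):.3f}")
```

With 2 people the overhead is 0.5 guards per person; at 20 it drops to 0.05, and it jumps again at 21. That's the "paying way over the odds" region.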
What really happened was probably: they knew someone at the place was downloading something nasty using its internet connection. But they didn't know who: they just knew it was coming from an address in the place, and they certainly did not know enough to nick whoever it was. So they closed it down and sent everyone home. Now they can (a) quietly pore over the machines there to try and work out who it was, and (b) keep watching for the same patterns, this time coming from someone's private address.
But of course it wasn't that, it was aliens, or possibly liberals: liberal aliens?
I'm entranced by your naivety: I keep expecting you to start talking about formal proofs (although that's probably a different branch of the cult).
Yes, approval to back out. The technical person or people implementing the change are absolutely not in a position to make decisions which could influence the functioning of the organisation, especially where the functioning or otherwise of the organisation is going to be in the papers. That's why there are elaborate governance structures in banks.
Quite so. I spend my entire life taking large systems and carefully dividing them down into smaller parts with controlled interactions. Yet still my programs have unexpected bugs: how strange!
Between (7) and (8) there is 'get approval to back it out': that's what takes the time, especially when (7) passed (but the tests were not adequate).
Well, assume that the monitors do go off, and that they go off promptly. If they do, then you don't just reverse whatever change you made: you have to fill in a great mass of forms which describe what you're going to do, apply for the access which lets you do what you're going to do, get approval from a bunch of very cautious people many of whom don't understand what you did to break it, how your proposed fix is going to fix it, or indeed any of the technical details of the thing at all, but who have burned into their brains the memory of a previous instance where someone 'backed out a change' which then took the bank down for days and are really concerned that this should not happen again. This takes time. It would be much quicker if you could just apply the fix you know will solve the problem, but the entire system is designed to make that hard.
Yes, it would be easier, and much quicker, if all this laborious process did not get in the way. But no-one really knows how to do that: the laborious process makes it hard to do bad things as it's intended to do, but it also makes it hard to do anything at all as a side effect. It's like chemotherapy: it kills the bad stuff, but it nearly kills the good stuff too. I think this is an open problem in running large systems: people like Googlebook claim to have solved it ('we can deploy and run millions of machines') but they do that by accepting levels of technical risk which a financial institution would find terrifying (and there's a big implied comment here about people (like banks) moving services into the clown and hence implicitly accepting these much higher levels of technical risk which this margin is not large enough to contain).
It's hard to get right because we're dealing with systems which are at or beyond the ability of humans to understand them.
And 'the companies making mega profits' are companies like Google, Intel, Facebook & Apple: not, for instance, RBS. Of course, those highly profitable companies never ever make mistakes. No company ever shipped several generations of processors with catastrophic security flaws, for instance.
That is the common model for enterprise platforms already.
You probably can do that. But then you'll have no credit record at all and will have an entertaining time if you want a mortgage, say.
It's being held on machines accessible from the internet so you can make credit reference enquiries over the internet.
There's pretty much no evidence for this. For instance look at computer science: the proportion of female CS graduates has fallen dramatically in less than a generation. CS is something women are, in fact, interested in, but they are being driven away.
No-one is assuming that. What they are assuming, because it is true, is that women are much more likely to be sexually harassed than men are.
So, why do you think most of the interested people would be men? That couldn't be, for instance, because you make it so unpleasant for women? No, of course not, it must be because men are just innately better, right?
Although I'm not sure about this, I believe that some groups to which this term might apply use it about themselves. If so, it's an endonym as well as an exonym, and I think it's fine to use it (it's not always safe to use endonyms if you're not in the group they apply to though: there's a famous example where it's very offensive).
I think the benefits of leaving are, in fact 'less foreigners', although no-one is allowed to say that, quite.
I think you're assuming that people who wanted to leave the EU wanted any coherent thing at all: it's clear they didn't, because if they did they'd have had a plan and they didn't. Instead they wanted 'to leave the EU' with no detailed idea what that involved at all: it was just some words that sounded kind of good. It's like 'wanting unicorns': everyone 'wants unicorns' even though we don't really know what unicorns would be like, until it turns out they're the unicorns of Equoid and we really didn't want them at all.
I certainly think it would be a good thing if all the major parties stopped refighting battles from the 1970s and before, and if that takes old people to die off, well, OK.
I think that's OK though. Given the demographics of the vote, any sufficiently long chain of referendums (referenda?) will probably show an increasing trend towards remain as elderly people die off. (I realise not everyone who voted leave was elderly but, well, look at the demographics of the vote.)
Of course there's always the option that as people become elderly they turn from remain to leave, but I suspect that's not the case, as things like explicit jingoism in the education system have become less acceptable since the 1940s & 1950s. I also kind of hope that even elderly people are becoming aware that their health care, and hence quality of life (since, well, they are elderly and are going to need health care), kind of depends on lots of EU people being willing & able to work here, although anecdotal evidence (my mother (in her 80s, remain) says her leave-voting friends have not changed their minds) says otherwise.
You can indeed vote the media out: you stop consuming their products and they go bankrupt.
If the private sector can design websites that track our every move, that come up with suggestions for goods before we realise we want them ... and, every once in a while, puke all our private data all over the internet.
Somehow she missed that last bit: perhaps she doesn't read very well.
Yes, the reignition events are similar to type I supernovae so, yes, they're pretty big bangs. Type Ia supernovae are also related to events that happen with white dwarfs: in that case it's when they're in a binary of some kind and accrete enough matter from their partner to reignite fusion.
This past year we explored some design changes and heard from customers that we overcomplicated some of our core scenarios. Calling became harder to execute and Highlights didn’t resonate with a majority of users.
Was that written by a human being or some kind of half-working natural-language system?
The problem with that is that unless you make fees mandatory, then any bank which starts charging will, the next month, discover that half its customers have walked, because people are much more interested in 'free' than they are in 'good'. And the party which has in its manifesto 'we will make banking fees of <x> mandatory' loses every election, I find.
There are fairly well-known terms that are acceptable when used by the groups they refer to but very much not when used by people not in those groups. I won't give the canonical example...
Chrome wasn't ahead 'for a while': Chrome is now dominant in terms of number of users: it has something like 60% of the market. It may not be technically ahead (I don't know: I'm a Firefox user and I've barely tried Chrome), but that's not what is at stake here.
I think you can add at least hens, cattle, sheep, probably horses, cats & dogs to that list.
(Not arguing we're not a disaster, note.)
Something I'm never sure about with this event: it lasted a million years or so. Did things get bad quickly and then stay bad for a million years, or did it take a million years to do its work? Because if it's the latter thing this is actually a rather slow event: a few hundred times as long as there has been any real civilisation, and about three times as long as Homo sapiens has existed for. That doesn't mean it wasn't a cataclysm of course, but it might be one that was hard to recognise while it was happening.
(And I won't add a comment about how the timescale compares with what we're now doing because the Great Orange Prune has informed me that all that inconvenient fake science news media is made up by librals who are, for some reason, against fa (are they OK with do, re, mi, sol, la & ti? I don't know).)
Unfortunately the JS interpreter, no matter how carefully it's written, up to and including being formally proved correct (which is not anything like plausible, but let's assume it is), relies on the hardware on which it is running to not leak information. And the hardware does, in fact, leak information.
So either you fix the hardware, or you check and sign every bit of JS to warrant that the JS you run does not exploit the leaks in the HW. That's why I said that.
And we're back where we started.
It would be interesting to think how this could work for, say, a machine running a web browser. You'd need (say) all the JS that you ever ran from anywhere to be signed, or you'd want formal proofs of non-maliciousness of the JS. The second is not possible, the first is merely impractical.
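At its very crudest, the 'sign every bit of JS' requirement could be an allow-list of script hashes checked before execution. A sketch of that idea only; the allow-list contents and all names here are mine, and a real scheme would need actual signatures and a trust chain, not a hard-coded set:

```python
import hashlib

# Hypothetical allow-list: hashes of scripts someone has vouched for.
APPROVED = {
    hashlib.sha256(b"console.log('hello');").hexdigest(),
}

def may_run(js_source: bytes) -> bool:
    """Refuse any script whose hash nobody has signed off on."""
    return hashlib.sha256(js_source).hexdigest() in APPROVED

print(may_run(b"console.log('hello');"))  # vouched for
print(may_run(b"exfiltrate(secrets);"))   # not vouched for
```

Which immediately shows why it's impractical: every ad network, every A/B-tested script variant, every CDN-minified bundle needs a fresh entry.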
They didn't 'immediately pour scorn': they found something extremely odd-looking in the data, and then started doubting.
If what they need is xattrs then it sounds like there's an easy way out. They could, rather than saying that 'Dropbox will only work on filesystems x, y & z' say 'Dropbox needs xattrs to work. If the filesystem you are using it on has xattrs and supports them in a way compatible with ext4, then Dropbox should work. However we can't test all the Linux filesystems: the ones we test are x, y & z, so those are the only ones that we will support Dropbox on: if you have a problem with it running on some other filesystem then, at our option, we may choose to reject support requests'.
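The 'has xattrs and supports them compatibly' check is something a client could probe for at startup rather than pinning filesystem names. A sketch, not Dropbox's actual behaviour (the function name is mine; `os.setxattr` is Linux-only):

```python
import os
import tempfile

def xattrs_supported(path: str) -> bool:
    """Probe whether the filesystem backing `path` accepts user xattrs,
    by setting and reading one back on a scratch file."""
    with tempfile.NamedTemporaryFile(dir=path) as f:
        try:
            os.setxattr(f.name, "user.probe", b"1")
            return os.getxattr(f.name, "user.probe") == b"1"
        except OSError:  # e.g. ENOTSUP on filesystems without xattrs
            return False

print("xattrs:", "supported" if xattrs_supported(".") else "unsupported")
```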
However one of x, y & z should be an encrypted filesystem, for sure.
[...] no one other than Trump really wants it [...]
Does he want it, or does he just want another mechanism to funnel money to his
I think the mistake is that companies often don't understand what they do, and end up outsourcing everything that makes them interesting. If your business is part of what people call 'the service industry' and if the date is after about 1990 then there's a very high chance that what your business is about is shuffling data around on computers. If you outsource that then there's not much left of your company but a shell which will live on for a while until people realise that it no longer serves any purpose.
And the 'something else' is going to be something thought up[*] by Boris 'fuck business' Johnson and Jacob 'fuck everything since the 18th century' Rees-Mogg. This is going to work so well.
[*] Please don't assume that my use of the term 'thought up' implies these people can actually think, or at least about anything other than their own aggrandisement.
There's no doubt that trade is completely possible under WTO rules (no-one economically literate seems to think we'll do as well as we do under the current rules, but we would not starve).
But that's not the problem: the problem is the transition from the current system to the new one which, in the case of a no-deal brexit, will be abrupt and largely unprepared for. That kind of chaos is moderately likely to be pretty bad:
Ah, that makes sense. Live in France, get paid in Euros, vote to fuck the UK economy, then when you retire you can buy, I dunno, a small town in the UK.
Clever. Evil, but clever.
So, wait, you're living IN FRANCE but voting that the UK should leave the EU? because, I suppose, you think there are enough horrible British people in France already and you want to make sure that no more of them get in to disturb you, or what? I mean, seriously, what the fuck?
I think they want a return to a time slightly before they were born when everything, of course, was wonderful. When that is depends on how old the person is.
Even if you don't know the language you can start looking for patterns which look like language based on the statistics of letters and n-grams of letters, which are very non-random in natural languages.
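A minimal version of that test, assuming only single-letter statistics (real detectors would add digram/trigram frequencies): natural-language text has markedly lower byte entropy than random data, so even without knowing the language you can flag the low-entropy regions. The sample strings are mine:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: English text scores well below uniform random's 8."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

english = b"the quick brown fox jumps over the lazy dog " * 20
noise = os.urandom(len(english))

print(f"english-like: {shannon_entropy(english):.2f} bits/byte")
print(f"random:       {shannon_entropy(noise):.2f} bits/byte")
```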
This manifestly is not true, unless you choose the bits to xor together randomly in which case you are, in fact, adding more random information to the stream.
You really, really do not want to use a website to generate passwords unless you are extremely confident in the code it runs, the hardware it runs on, and the security of the connection between you and it.
Well, the system is (you hope) storing only hashes of the passwords, so when changing password it can know, at most, the current and new plain texts and the hashes of the previous n passwords. So the very best it can do is ensure that the new password is sufficiently different from the current one and that it is different in some way (but not how different) from the previous n.
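A sketch of exactly that limit, under the stated assumption that only salted hashes of previous passwords are kept (all names mine, and PBKDF2 stands in for whatever the real system uses): against the history the server can test only exact equality, not similarity.

```python
import hashlib
import os
from typing import Optional

def hash_pw(pw: str, salt: bytes) -> bytes:
    """Salted, slow hash: all the server retains of old passwords."""
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 100_000)

history: list[tuple[bytes, bytes]] = []  # (salt, hash) of previous passwords

def change_password(new_pw: str, current_pw: Optional[str]) -> bool:
    # Only here are both plaintexts available, so only the *current*
    # password can be checked for similarity; we check mere equality.
    if current_pw is not None and new_pw == current_pw:
        return False
    # Against the history, hashes admit only an exact-reuse test.
    for salt, digest in history:
        if hash_pw(new_pw, salt) == digest:
            return False
    salt = os.urandom(16)
    history.append((salt, hash_pw(new_pw, salt)))
    return True
```

So 'hunter2' followed by 'hunter3' sails through the history check: the hashes share nothing.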
Well, yes, of course. I didn't specify what /usr/share/dict/words on my machine contains, or exactly what LANG is set to, and perhaps I should not do that.
I have found an interesting thing regarding this: encryption is not enough. Even looking too closely at the encrypted contents of the disk is enough to cause quite nasty things to happen to potential eavesdroppers. The results are usually fatal, and I imagine the eavesdroppers are glad of that, at least until their minds go.
I had an account on photo.net which (a) changed at some point so it would not let me use my account-with-+-in-it and (b) kept on sending me junk mail to that address and ignored my requests to delete it or make it work again. If I had more than one suitcase nuke I think I would have used one of them to deal with this cretinism.
There are, I believe, RFC-822 parsers out there (as in: there are hundreds): why can't these fuckwits just use one to tell whether email addresses are valid, rather than some half-baked regexp of their own devising which doesn't actually work?
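To illustrate the failure mode (the regexp here is my own straw man, of the shape these sites typically use): it rejects a '+'-tagged address of exactly the kind photo.net choked on, while even the lenient stdlib parser handles it fine. (`parseaddr` is a parser, not a full RFC validator; the only really authoritative check is delivering mail.)

```python
import re
from email.utils import parseaddr

# A typical home-grown pattern: no '+' in the character class.
naive = re.compile(r"^[A-Za-z0-9_.]+@[A-Za-z0-9.]+\.[A-Za-z]+$")

addr = "someone+tag@example.com"  # '+' is perfectly legal in the local part

print(bool(naive.match(addr)))        # the half-baked regexp says no
print(parseaddr(addr)[1] == addr)     # a real parser takes it in stride
```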
Based on my experience (so, OK, sample of one, self-selected), passphrases made from random words are much easier to remember, yes. I think this is because we have specialised machinery in our heads for dealing with natural language, and while we don't have specialised machinery in our heads for dealing with written language (too recent, evolutionarily) the more general-purpose machinery we've trained to deal with it turns out to work really well. So if you see a string of words in a natural language you speak then you're remarkably good at remembering them even if they are randomly chosen.
This works, surprisingly, even if you have never seen the words before: I just ran my generator for a three-word passphrase and it came up with 'cinephotomicrography franchisal lineation': I don't think I've ever used any of those words, or probably even seen them before, but I typed all but the first without looking back at the window I'd covered.
One argument for biometrics is that they are harder to shoulder-surf, especially compared with something you're likely to be able to reliably type on a phone. I'm not sure how good an argument that is, but it's not obviously silly.
I use random (and I mean random: generated from proper randomness) strings of dictionary (/usr/share/dict/words / /usr/dict/words) words as passwords (well, passphrases). It's easy to show that these, if they are long enough, are harder to guess than normal line-noise passwords (the alphabet the symbols are chosen from is much bigger, the symbols are randomly chosen). But I still have to add a little bit of line-noise to the end of them to keep the stupid 'must be line noise' checker happy.
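A sketch of that scheme and the entropy comparison, assuming the usual Linux word-list location (which varies by system) and roughly 100,000 words in it; the function names are mine:

```python
import math
import secrets

WORDLIST = "/usr/share/dict/words"  # location varies by system

def passphrase(n_words: int = 4) -> str:
    """Randomly chosen dictionary words, using a proper CSPRNG."""
    with open(WORDLIST) as f:
        words = [w.strip() for w in f if w.strip()]
    return " ".join(secrets.choice(words) for _ in range(n_words))

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Guessing entropy of `length` symbols drawn uniformly from the alphabet."""
    return length * math.log2(alphabet_size)

# Four words from ~100k beat eight line-noise characters from ~94 printables.
print(f"4 words:      {entropy_bits(100_000, 4):.1f} bits")
print(f"8 characters: {entropy_bits(94, 8):.1f} bits")
```

The point being that the 'alphabet' of a word-based passphrase is the whole dictionary, so fewer, more memorable symbols buy more entropy, as long as the words really are chosen at random.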
Biting the hand that feeds IT © 1998–2018