there's just one tiny problem... (Edmund Blackadder)
'carton' (картон) doesn't mean carton (a box) in Russian - it means 'cardboard'.
Why could it be that two letters of mine this year relating to important issues, sent directly to the ministers responsible, have elicited zero response but the govt is dead keen to find out at our expense what some of us have tweeted about it?
Tell all that to the guy who did the forensics in Bookout v. Toyota - several really basic systems design and programming errors. But we're not just blaming coders here, nor just the auto industry - there have recently been some quite spectacular aeronautical software design snafus.
The bottom line is that the whole software development process still fails to meet the standards expected of all other branches of engineering. And falling back on 'testing' is not an appropriate solution. We don't build bridges without doing the math and then just test them by running trucks across. (We used to: there's a famous 19th century verse about Crystal Palace, London that goes "... the sappers and miners who marched and who ran ... To test the girders to Paxton's plan..." - but we've advanced beyond that by now.)
So the reality is that software engineering is not yet a mature enough discipline to apply with confidence to safety-critical systems. With luck and persistence it may become so, but presently it's too damned dangerous to trust your life to software.
"One of the purported benefits of public cloud is you no longer need to buy and maintain your own servers – they become the responsibility of somebody else."
Oh no they don't - they get to be _managed_ by somebody else, but the responsibility remains firmly in your corporate lap. That actually increases your exposure, as you can't control the screw-ups of your providers.
The simplest one is that GDS has conclusively demonstrated via a succession of projects that they couldn't design their way out of a wet paper bag. Any other explanation needed?
but still clearly incapable of maintaining a coherent train of thought or coping with basic grammar:
"not sell our personal information and preferences for money, and will make it clearer if the company/website intends to do so."
Actually it's worse for you than that. By the pigeonhole principle, a hashing algorithm can only remain collision-free while the total length of the clear text does not exceed the length of the hash (in bits). If it does, there _will_ be (not just may be) collisions. So very long plaintexts (regardless of their make-up) actually make the attacker's job more rewarding, as brute forcing a given hash may yield more than one plaintext. Thus the attacker can potentially obtain more credentials from the same number of captured hashes.
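The pigeonhole argument is easy to demonstrate with a toy: take a real digest and truncate it to 16 bits, so the effect becomes visible quickly (the truncation, input labels and counts here are mine, purely for illustration):

```python
import hashlib

def tiny_hash(s: str) -> int:
    # Truncate SHA-256 to 16 bits so the pigeonhole effect is visible.
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:2], "big")

seen = {}
collision = None
# 2**16 + 1 distinct inputs cannot fit into 2**16 possible hash values,
# so a collision is guaranteed (in practice one appears far sooner).
for i in range(2**16 + 1):
    h = tiny_hash(f"plaintext-{i}")
    if h in seen:
        collision = (seen[h], f"plaintext-{i}", h)
        break
    seen[h] = f"plaintext-{i}"

print(collision)
```

With a full-width hash the same logic applies as soon as the plaintext space exceeds the digest space - the collisions are merely harder to exhibit.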
However your '50% probability' depends on the hashing algorithm's transfer function having a uniform distribution. I'm not sure whether it does, but I'd be surprised if it did considering the principle of how it works.
And that excuse too. "An attack in this class..." - what class? We don't seem to have any details yet, but as a security professional I'm regularly less than amazed when the latest "sophisticated attack" eventually turns out to have been a total push-over that circumvents deficient or degraded controls. Our biggest problem is that the "defenders" only defend reactively, but the attackers are proactive. If we managed our systems (and our business processes) robustly, a lot of these attacks would bounce off without doing much (or any) harm. But we just skirmish defensively in a guerrilla war in the enemy's territory, so we keep losing.
A nicely conducted piece of statistical research, telling us what we've actually known for years. The entire "character set + template" approach to authentication credential creation is well recognised by both experts in systems and psychologists to be flawed, but we're stuck with it because the people defining login requirements currently have no understanding of either.
The silliest recommendation after "character set + template" is the supposedly random character string. This is grounded in a misunderstanding (and misapplication) of Shannon entropy, and fundamentally fails because (even if generated by a true random process) no-one (OK, maybe one in a million) can remember it. It's actually impossible for a human to create because the mind can't wrap round true randomness - what looks like a "random string" to a human is usually biased to emphasise a small subset of the possible code space.
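To put rough numbers on that (the figures below are my own illustrative choices, not drawn from any study): even taking a "random" 8-character string at face value, its nominal keyspace is smaller than that of a memorable four-word phrase - and a human-generated "random" string never achieves the nominal figure anyway, because of the bias described above.

```python
import math

# Nominal keyspace comparison (illustrative numbers only):
# an 8-character string over the 95 printable ASCII symbols, assumed
# truly random, versus a 4-word phrase from a 20,000-word vocabulary.
random_string_bits = 8 * math.log2(95)    # nominal bits, if truly random
word_phrase_bits = 4 * math.log2(20_000)  # nominal bits for the phrase

print(round(random_string_bits, 1), round(word_phrase_bits, 1))
```

The human-biased subset of the 95-symbol space would push the real figure for the "random" string lower still.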
Even the random word sequence advocates ("horse staple ...") have it wrong. The essence of a robust authentication credential subsists in three requirements:
 it must be long enough to make brute forcing hard - the required length will change with time and the criticality of what is being protected;
 it must be memorable to its creator - so in principle it must mean something to him or her;
 it must not be readily guessable by anyone else - so a problem arises for folks who are not very original ;-)
Within the string space fulfilling these three requirements, the strongest strings against guessing attacks will be the ones that conform least well to a common template. So the best rule set will contain the fewest, simplest rules. Here's my take with commentary in square brackets:
"A logon credential [note that we intentionally don't say 'password'] is not to allow you access to our systems - it's to prevent anyone else gaining access by pretending to be you. It must therefore be easy for you to remember but difficult for anyone else to guess. To achieve this, here are some basic guidelines:
 think up a memorable but not well known phrase or sentence of at least four words totalling at least 15 characters [reasonable length at time of writing, but may need to increase]. This phrase should mean something to you to make it easy to remember, so be imaginative, consider using humour and/or your native language.
 certain obvious words are blocked and therefore cannot be used, including [e.g.] your user name, the company name or date words (month and day names) [but keep the excluded words list to a minimum to avoid user frustration].
 you may, but are not obliged to, separate the words in your phrase with non-alpha symbols."
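A minimal sketch of how those guidelines might be checked in code - the function name, limits and blocked-word list are my own placeholders, not a tested policy:

```python
MIN_CHARS = 15   # reasonable at time of writing; expect this to grow
MIN_WORDS = 4
BLOCKED = {"acme", "january", "monday"}  # keep this list minimal

def check_passphrase(phrase: str, user_name: str) -> list:
    """Return a list of problems; an empty list means the phrase passes."""
    problems = []
    words = phrase.split()
    if len(phrase) < MIN_CHARS:
        problems.append("needs at least %d characters" % MIN_CHARS)
    if len(words) < MIN_WORDS:
        problems.append("needs at least %d words" % MIN_WORDS)
    # Compare case-insensitively, ignoring optional separator symbols.
    lowered = {w.lower().strip(".,!?-_") for w in words}
    if lowered & (BLOCKED | {user_name.lower()}):
        problems.append("contains a blocked word")
    return problems
```

Note what is absent: no character-class template, no dictionary-wide ban - just the few simple rules argued for above.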
Not the ultimate maybe, but probably a better start than the standard rules that render all words in any dictionary illegal (rather a challenge for a literate user) but permit 'Pa55w0rd!'. I've written about this elsewhere (http://intinfosec.com/library/policies/2011-Instant_Compliance_for_a_Grand.pdf).
The item on Haskell included mention of its use to create a statistical analysis tool for assessing drug rehab clients, during which 'Dr. K' made the statement that it was a surprising use of such a mathematically oriented language. I've seldom heard such a silly statement from a supposed expert - a mathematical approach is essential to solving statistical problems, so a mathematically oriented language would in principle be the ideal choice.
This device (although less abstracted and obscure than the Raspberry Pi) is still too complex to really impart the fundamental concepts of computer technology. Kids would be vastly better served by a simple board carrying an 8-pin or 14-pin PIC, plus the device data sheet. The skills we are primarily short of (even among developers) are much nearer the metal than current programming practice encourages or imparts. A PIC solution would offer two key advantages: it would probably be cheaper, and the device architecture and instruction set are so simple that a child could grasp them in a few days, leading to a basic understanding of machine architecture, Boolean logic and the electronics of interfacing - little or none of which is acquired by high level coding practice, particularly at school level.
The offending phish email (on the netcraft site) is not actually very convincing. I'm not going into details as I don't want to assist the perps, but there are several tell-tale signs that anyone who was paying attention would immediately spot. If you're not paying that much attention you'd get stung by anything!
Auntie is never wrong - even about trivial things. There is one presenter of the morning shipping forecast (Louise Lear) who, whenever the same conditions pertain in both the Forth and Tyne regions, always merges them into the non-existent "Forthtyne" with the stress on Forth. I've complained to Auntie several times over a period of several years, and all the responses I've received have simply stated that I'm mistaken.
If Auntie can deny such simple checkable matters of fact, she can deny anything.
I wish I knew where these knowledgeable programmers are hiding. Most "programmers" I've met can't even create bug-free code using "flat pack assembly" dev tools.
The (obviously) prevalent idea that patched=secure is spherical and plural, and always has been. It makes no more sense than "what you don't know can't hurt you" - indeed it's grounded in that false premise.
It's about time we stopped relying on reactive fixes based on blacklisting and got round to creating some real resilience - starting with the ability to write software that isn't littered with exploitable bugs.
It would have been generous to link to the report so we could read it for ourselves!
This is a classic example of the exact opposite of what is really needed. The prevalent technocentric approach to infosec has got us where we are, so doing more of it will not improve our state of security.
What is really needed (and in my experience as a security consultant is almost universally missing) is a robust security management framework consisting of:
 a strategy that defines the security priorities of the organisation in terms of risk;
 tactics for addressing the priorities;
 operational processes that fulfil the requirements defined by the tactics and strategy.
The framework essentially needs to include monitoring and feedback to ensure that [a] perceived risk continues to accurately represent reality as things change, [b] control objectives have a realistic chance of protecting against threats, and [c] controls actually work.
Appointing techie "hackers" to oversee the security of a vast corporate (or indeed a government, as we seem to be doing here in the UK) is about as useful as appointing a bricklayer (however skilled) to oversee the building of a city.
We need to wake up to the reality that information security is primarily a problem of business process management. Yes - we can be attacked via technologies and we use technologies extensively to protect ourselves, but as in the case of JP Morgan http://www.theregister.co.uk/2014/12/23/jpmorgan_breach_probe_latest/ it's in BAU management that the weaknesses mostly manifest themselves.
"the much-scaled back pilot programme" - clearly an attempt by airlines to save on salaries by employing fish.
Just highlights the level of competence of our web developer community - don't write code with care, just copy and paste from demos without attention. Explains a lot about the deluge of breaches.
Accumulating these acronyms does not mean they're intellectuals (although being one is not necessarily a bad thing in a sphere where unconsidered rote learning and rule of thumb still dominate) - it means they've put up the money to take a bunch of computer marked multiple choice pub quizzes. Expertise cannot be evaluated that way, but it does free those who select practitioners from the burden of knowing the subject. It also creates multiple closed shop cliques that can capitalise on the "mysteries" of narrow subsets of infosec - witness PCI DSS, which is in reality little more than basic good practice in infrastructure security and information management - things you should be doing as a matter of course across your whole estate - but has spawned a huge and very lucrative specialist consultancy and conference industry.
BTW, I recently saw a UK advert for a PCI security contractor at 450 quid a day (that's over US$170k per year) that specified "at least two years' IT security experience", and a recent survey of the security knowledge of software developers incidentally found that almost 50% of respondents in key fields including banking and systems software development had less than two years' experience. It appears therefore that the pub quizzes are a fast track for the inexperienced into lucrative security-related roles where they can earn a lot while perpetuating the insecurity of our infrastructure.
'men have evolved a greater spatial ability to "benefit reproductively ...'
Supposing this is a direct quote, it's pretty sad that scientists (even if only anthropologists) continue to promulgate the fallacy that evolution is directed to defined purposes. If it's not a direct quote, shame on el Reg for doing likewise.
It says something about the threat intelligence service that (according to the graph in the image) it's failed to identify 40% of threat actors. Presumably the comment "Advanced Threat actors are getting smarter" is based on the assumption that the "unknown" 40% are smarter than the analysts.
"We glue the wings on airplanes with evostick and they keep falling off, so let's abandon airplanes" - that's no sillier than this commonly repeated argument about passwords. We define them poorly and manage them worse (just for example, the last time I asked el Reg for a password refresh I was emailed my existing password in plain text), so they must be intrinsically crap.
They don't have to be, were we to get our act together, but we're stuck in a sloppy mind set that will actually make any alternative authentication method pretty much equally open to abuse.
Those who implement password controls must stop thoughtlessly repeating mantras ("special symbols and squirrel noises") and take notice of a vast and growing body of rigorous scientific research on both the psychology and technologies of authentication and breaches. The problems are actually much simpler than we have been led to believe, but require more effort and imagination than we have brought to them so far to solve.
So no, passwords are not dead - they just need to be created and used intelligently with reference to the real world. Then they are just as good as any other authentication method in their own context.
Is 30cm the resolution limit, the pixel size or the size of an arbitrary object that can be recognised in the image?
Resolution limit means the ability to resolve a pair of high-contrast lines not less than that wide; pixel size is typically a quarter to a ninth of the area of the minimum resolvable dot. Neither means that objects of this size could be recognised from the images. I would guess that the size of a minimum recognisable object is more likely to be of the order of 1.5-3 metres.
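As a rough sanity check of that guess, here's the arithmetic under the Johnson criteria (my assumption - we don't know how the 30 cm figure was derived), which say detection needs roughly 2 pixels across an object's critical dimension and recognition roughly 8:

```python
pixel_size_m = 0.30        # reading the 30 cm as ground sample (pixel) size
detection_pixels = 2       # Johnson criteria: ~2 px across merely to detect
recognition_pixels = 8     # ~8 px across to actually recognise the object

min_detectable_m = pixel_size_m * detection_pixels
min_recognisable_m = pixel_size_m * recognition_pixels
print(min_detectable_m, min_recognisable_m)  # 0.6 m to detect, 2.4 m to recognise
```

2.4 m lands squarely in the 1.5-3 metre range guessed above.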
"...I don't see it as any of my lightbulb's business if my electric car has paired with my washing machine..."
The light bulb, being quite bright, may be worried by the potential nature of the offspring...
I spent a few years working on this (blind denoising of tree ring width series) in the '90s. The only moderately reliable first order separation was between signal components common to multiple concurrent series from a specific site and signals uniquely present in individual series. The assumption on which my work was based was that individual variation is less likely to be driven by a common influence, so removal of individual variation should leave a better approximation to the common signal indicating the common influence.
Admittedly this is a fairly loose argument, but my work did show fairly conclusively that high frequency components tend to be local to individual series and low frequency components have a better chance of being common to all the series. Unfortunately, the then (and I believe still) common practice of "detrending" by normalising each individual series to its own low frequency spline before any analysis tends to mask the lower frequency components that might be some of the most interesting in terms of climate change.
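The first-order separation described above can be sketched in a few lines - the synthetic data, series counts and noise levels are entirely my own invention, not the original study:

```python
import math
import random

random.seed(0)
N_TREES, YEARS = 20, 200

# A shared low-frequency 'site' signal plus per-tree high-frequency noise.
common = [math.sin(2 * math.pi * yr / 100) for yr in range(YEARS)]
series = [[c + random.gauss(0, 0.5) for c in common] for _ in range(N_TREES)]

# Averaging the concurrent series damps individual variation by roughly
# sqrt(N) while leaving the common component intact.
chronology = [sum(s[yr] for s in series) / N_TREES for yr in range(YEARS)]

residual_sd = (sum((chronology[yr] - common[yr]) ** 2 for yr in range(YEARS))
               / YEARS) ** 0.5
print(residual_sd)  # far below the 0.5 per-tree noise
```

Note that if each series were first "detrended" against its own low-frequency spline, the sine component above would be largely removed before the averaging ever happened - which is exactly the masking problem described.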
However tree rings are not alone in providing rather tenuous and noisy signals. All currently used climate proxies suffer from this, each in their own way, so using them has to be done with a great deal of caution.
"...the possibility we're no longer alone in the universe..."
reminiscent of Columbus "discovering" America - the native populations didn't even know it existed until he turned up and told them...
"...the secret signal sauce that allowed location to be determined..."
An interesting culinary sidelight on what would otherwise seem to be a pure DSP problem.
The most serious culprit in all this is the EULA. As soon as you "license" rather than sell the product (or the software component of the product even if you just paid for the hardware) all the established legal protections relating to safety, functionality and even fitness for purpose suddenly cease to apply. Consequently there's absolutely no incentive to make the software secure or even robust against failure. In short - the EULA is a perfect get out clause and that's very unlikely to change due to the pressure of the vendor community on legislation.
It's ironic (and self-defeating) that you can, just for an obvious example, buy a car (a potentially lethal machine) the mechanics of which must meet increasingly stringent safety standards in order to prevent fatalities, but the software that controls many of its safety critical functions can be complete garbage and there's very little comeback. Indeed someone usually has to die before any action is taken, and even then there's financial penalty, but no guarantee the next piece of software will be any better.
If you aren't convinced yet, read http://www.safetyresearch.net/Library/Bookout_v_Toyota_Barr_REDACTED.pdf then extrapolate its findings to the entire hypothetical IoT. That's not unrealistic - the flaws Barr described are _seriously_ basic stuff - the kind of mistakes a student would be marked down for on any adequate programming course, and furthermore protecting the vendor's IP seems from that report to have taken precedence over the level of facilities provided to a court-appointed expert examiner. Does that not shout volumes about the way forward?
Then of course there's the rather funnier (in hindsight) incident of the Satanic Renault http://www.theregister.co.uk/2013/02/15/satanic_renault/
How about spending all that money and ingenuity on teaching people to write code that isn't a bug-ridden load of excrement in the first place...
'"Another possible application of this principle may be for trapping radiation inside a shell of plasma rather than excluding it" said Toohie.'
Larry Niven and Jerry Pournelle invented this (the Langston Field) in 1974.
It would be a nice gesture to identify (or even link to) the original study. We should not have to do our own legwork to find Incapsula and then locate the study in question (which has proved impossible anyway).
The Register is apparently joining the ranks of "parasites" - sites that merely rehash other people's content without any value-add or proper referencing to sources.
"... and don't call me Shirley..."
The most obvious fatal error in this whole "cunning plan" is the assumption that "coding" should be the objective. Coding is merely the manipulative mechanical skill used to realise programming (an instance of intellectual and creative problem solving).
If we just teach our kids "coding" we will finish up with echelons of unemployable incompetents, whereas teaching programming can result in expanding their mental capacities (just as chess or Latin do) for those with an appropriate mind set to start with, which should make life more interesting for them, quite apart from any direct benefit for employment as software developers. The undeniably abysmal quality of software today (monthly "patches" to fix silly mistakes, security breaches &c.) is a direct result of too many people already just learning to code rather than to program.
The second, and equally egregious, error is the assumption that teachers can be taught to teach "coding" in "a day" (http://politicalscrapbook.net/2014/02/tory-boss-of-government-coding-education-initiative-cant-code-lottie-dexter/), or indeed any other short period without a preparatory grasp of both the first principles that underlie the technologies and a grounding in analytical and logical thinking.
Oh dear, did I say "thinking" - how absurd...
"there are two tiny problems with that theory..."
The two major misconceptions here are:
 that the problem is primarily "weak passwords". Yes, the passwords exposed by major offline cracking attacks are generally weak, but before offline cracking can be carried out the authentication server has to be breached so the password database can be stolen. That is the real root problem we have to solve, and it remains regardless of the authentication mechanism in use. There must always be, somewhere, a record of legitimate credentials in some form or other to compare authentication attempts with. It may be made more difficult to abuse it, but the threat cannot be eliminated.
 that biometrics should be used for authentication. Biometrics are validly used for identification, as the identity of a supplicant is not expected to change. But using a biometric for authentication (i.e. validating that the supplicant has presented their legitimate identity) is fundamentally flawed. The reason is simple - how do you change the credential when it gets compromised? Eye and fingerprint replacements are still the stuff of Hollywood, and will remain so.
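As the first point above says, the stored record can't be eliminated - but it can be made expensive to abuse. A minimal sketch using salted, iterated hashing (the iteration count and salt size are illustrative choices, not a recommendation):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # tune upward as hardware gets faster

def store_credential(password: str) -> tuple:
    # A per-user random salt defeats precomputed (rainbow-table) attacks;
    # the high iteration count slows every offline guess.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking how much of the digest matched.
    return hmac.compare_digest(candidate, digest)
```

The threat remains if the database is stolen - the attacker just pays orders of magnitude more per guess, and identical passwords no longer share a hash.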
The greatest real problem we face in corporate information security is the over-emphasis of technocentric attack skills and countermeasures at the expense of adequate preparedness and basic "digital hygiene".
Contrary to popular report, well over 80% of all successful attacks do not need highly sophisticated skills to accomplish, but are push-overs due to mismanagement - e.g. systems being left wide open - by the victim.
For adequate defence, we need people who can take a holistic view of business processes, data processing and infrastructures, identify weaknesses and cover for them in advance much more than we need people who can find and exploit the individual holes that are merely symptoms of mismanagement.
not everyone has a mobile either.
0300 numbers are not included in the "all in" packages for BT land lines, so they cost extra, but 0845 and 0800 numbers are. So whether or not you get a free call depends on the service you sign up to - a very strong argument for publishing all the alternative numbers - geographic as a fundamental prerequisite, 0845 or 0800 and 0300.
Provision of a geographical number should be the minimum obligation (particularly for govt. and essential services), as it not only provides a means of contact but also offers a validation of the authenticity of the agency - geographical numbers are by definition tied to addresses, whereas non-geographical numbers are not.
"Your fully autonomous vehicle can see them coming via radar a few hundred yards off..." Yes, that'll work while only one or two cars are using radar in the same space. However, when the idea catches on and every one of hundreds of cars is pinging constantly, the system is going to break down. Even if it's not based on "radar", the sheer number of independent comms channels required will exceed practicability if channel separation is maintained, or alternatively there will be loss of signal integrity and consequent errors (aka accidents).
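The channel-separation scaling can be put in birthday-problem terms - the car and channel counts below are purely illustrative, not taken from any real system:

```python
def collision_probability(cars: int, channels: int) -> float:
    # P(at least two transmitters pick the same channel), assuming each
    # independently picks one of `channels` at random - birthday-problem style.
    p_all_distinct = 1.0
    for i in range(cars):
        p_all_distinct *= (channels - i) / channels
    return 1.0 - p_all_distinct

# Even with 1000 cleanly separable channels, a few dozen cars within
# radio range make a clash likely at any given moment.
print(collision_probability(50, 1000))
```

And once the number of simultaneous transmitters exceeds the number of channels, a clash is a certainty, not a probability.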
The very simple alternative solution would be to teach folks to drive properly - it's not that hard.
Strictly "by the big blue egg [singular, familiar form] (or testicle [singular]) of St. Cyril..."
The validity of this kind of research depends entirely on the nature of the population from which the samples are drawn. If for example, we draw both the experimental sample and the control sample from a population of extremely unperceptive people who generally just mooch around with their eyes half shut (aka students), those who play video games might well score higher than those who don't. Were the samples to be drawn from a society of hunter-gatherers or orchestral musicians, a different outcome might be expected.
A damn sight too much social and behavioural science research ignores this key issue. It's entirely wrong to assume that the whole human population behaves identically - however delightful that assumption is to politicians and marketeers.
You might like to take note of this yourselves, starting with your annoying grey bar...
Let me correct that for you:
HTML is not a file format - it's a markup language.
"...the gravity of foreground objects warps and magnifies the light from background objects"
How do you magnify light? Images can be magnified by deflecting light. Light can be amplified, diffused or concentrated, but magnified? Surely "magnified" means "made bigger", which could only be accomplished by adding more photons from somewhere (thus effectively meaning the same as "amplified"). Or have I missed something?
A temperature may be high or low - not "hot" or "cold". Temperature is a description or measure of hotness or coldness. You can't have a "fast speed" or an "expensive price" for similar reasons.
It would be nice if the link actually pointed to the relevant paper, rather than to an index page where it can't be found. As we don't know the title of the paper and a search for "Bayne" on the linked page yields nothing relevant, it's impossible to find it.
In UK copyright law at least, the issue isn't attribution - it's permission. So retweeting might indeed constitute copyright infringement if the original tweeter had not given permission and were to decide to be a literalist. Courtesy of the media moguls who are powerful enough to buy the law and just want our money any which way, it's technically almost impossible to avoid copyright infringement unless we stay silent and never write anything in public.
But in practical terms, due to the cost of litigation, it all comes down to money (like just about everything else). Unless it were a recognised catch phrase (e.g. a movie or pop song quote or a commercial strap line), it would take a very wealthy copyright owner to complain about the use of half a dozen words on Twitter. But it's worth reviewing the restrictions imposed recently in relation to a major sporting event in London, see http://www.keystonelaw.co.uk/other/keynotes/2012/june/restrictions-of-olympic-proportions
Although these particular restrictions rest largely on trade mark law, it's clear that the whole IP system is getting out of hand - becoming a source of revenue rather than serving its original purpose of protecting creative works against debasement.
Well this is only "anecdotal", but I had a wide variety of small songbirds visiting my garden bird feeders for several years - I usually had to refill them daily. Then two neighbours introduced three young cats around Christmas time 2011. This year I have only recorded two visits to my bird feeders since January, and the untouched seed goes mouldy in the feeders.
A huge amount of the comment here and elsewhere on this research anthropomorphises cats - accusing them of "murder", "torture" &c. &c. All this misses the point entirely. Cats are much more hard-wired than many of us would like to believe. They are to a large extent stimulus-driven automata, pre-programmed to pounce on small animals that move in their field of awareness.
That means it's the responsibility of the "owner" (although nobody really 'owns' a cat - it simply occupies a territory that you may also occupy) to minimise the damage a cat can do - particularly in densely populated urban environments. The simplest fix is a collar with a bell on it, but it has to be a sensible bell, not the tiny token gesture fitted as standard to most commercial cat collars.
A bell does not so much alert the prey as distract the cat by spoiling its stealth as it springs - provided the cat can hear the bell and it is fitted when the cat is young enough. If it works, operant conditioning eventually sets in, reducing the incidence of the predatory behaviour.
Nevertheless, the biggest problem for prey species is not the behaviour of the individual cat but excessive predator density. Where I live, nine or ten cats have "homes" within an area of one acre (18 residences). This is at least 20 times the natural predator density, and is only sustainable for the predators because the cats are artificially fed. It is, however, completely unsustainable for many of the prey species.
I wonder how many who have commented here have actually read the full paper. Minor point maybe, but...