"...these results confirm what programmers should already know..."
I wish I knew where these knowledgeable programmers are hiding. Most "programmers" I've met can't even create bug-free code using "flat pack assembly" dev tools.
The (obviously) prevalent idea that patched=secure is spherical and plural, and always has been. It makes no more sense than "what you don't know can't hurt you" - indeed it's grounded in that false premise.
It's about time we stopped relying on reactive fixes based on blacklisting and got round to creating some real resilience - starting with the ability to write software that isn't littered with exploitable bugs.
It would have been generous to link to the report so we could read it for ourselves!
This is a classic example of the exact opposite of what is really needed. The prevalent technocentric approach to infosec has got us where we are, so doing more of it will not improve our state of security.
What is really needed (and in my experience as a security consultant is almost universally missing) is a robust security management framework consisting of:
- a strategy that defines the security priorities of the organisation in terms of risk,
- tactics for addressing the priorities, and
- operational processes that fulfil the requirements defined by the tactics and strategy.
The framework essentially needs to include monitoring and feedback to ensure that [a] perceived risk continues to accurately represent reality as things change, [b] control objectives have a realistic chance of protecting against threats, and [c] controls actually work.
Appointing techie "hackers" to oversee the security of a vast corporate (or indeed a government, as we seem to be doing here in the UK) is about as useful as appointing a bricklayer (however skilled) to oversee the building of a city.
We need to wake up to the reality that information security is primarily a problem of business process management. Yes - we can be attacked via technologies and we use technologies extensively to protect ourselves, but as in the case of JP Morgan http://www.theregister.co.uk/2014/12/23/jpmorgan_breach_probe_latest/ it's in BAU management that the weaknesses mostly manifest themselves.
"the much-scaled back pilot programme" - clearly an attempt by airlines to save on salaries by employing fish.
Just highlights the level of competence of our web developer community - they don't write code with care, they copy and paste from demos without attention. Explains a lot about the deluge of breaches.
Accumulating these acronyms does not mean they're intellectuals (although being one is not necessarily a bad thing in a sphere where unconsidered rote learning and rule of thumb still dominate) - it means they've put up the money to take a bunch of computer marked multiple choice pub quizzes. Expertise cannot be evaluated that way, but it does free those who select practitioners from the burden of knowing the subject. It also creates multiple closed shop cliques that can capitalise on the "mysteries" of narrow subsets of infosec - witness PCI DSS, which is in reality little more than basic good practice in infrastructure security and information management - things you should be doing as a matter of course across your whole estate - but has spawned a huge and very lucrative specialist consultancy and conference industry.
BTW, I recently saw a UK advert for a PCI security contractor at 450 quid a day (that's over US$170k per year) that specified "at least two years IT security experience", and a recent survey of the security knowledge of software developers incidentally found that almost 50% of respondents in key fields including banking and systems software development had less than two years experience. It appears therefore that the pub quizzes are a fast track for the inexperienced into lucrative security-related roles where they can earn a lot while perpetuating the insecurity of our infrastructure.
'men have evolved a greater spatial ability to "benefit reproductively ...'
Supposing this is a direct quote, it's pretty sad that scientists (even if only anthropologists) continue to promulgate the fallacy that evolution is directed to defined purposes. If it's not a direct quote, shame on el Reg for doing likewise.
It says something about the threat intelligence service that (according to the graph in the image) it's failed to identify 40% of threat actors. Presumably the comment "Advanced Threat actors are getting smarter" is based on the assumption that the "unknown" 40% are smarter than the analysts.
"We glue the wings on airplanes with evostick and they keep falling off, so let's abandon airplanes" - that's no sillier than this commonly repeated argument about passwords. We define them poorly and manage them worse (just for example, the last time I asked el Reg for a password refresh I was emailed my existing password in plain text), so they must be intrinsically crap.
They don't have to be, were we to get our act together, but we're stuck in a sloppy mindset that will actually make any alternative authentication method pretty much equally open to abuse.
Those who implement password controls must stop thoughtlessly repeating mantras ("special symbols and squirrel noises") and take notice of a vast and growing body of rigorous scientific research on both the psychology and technologies of authentication and breaches. The problems are actually much simpler than we have been led to believe, but solving them requires more effort and imagination than we have brought to bear so far.
So no, passwords are not dead - they just need to be created and used intelligently with reference to the real world. Then they are just as good as any other authentication method in their own context.
Is 30cm the resolution limit, the pixel size or the size of an arbitrary object that can be recognised in the image?
Resolution limit means the ability to resolve a pair of high contrast lines not less than that wide; pixel size is typically a quarter to a ninth of the area of the minimum resolvable dot. Neither means that objects of this size could be recognised from the images. I would guess that the size of a minimum recognisable object is more likely to be of the order of 1.5-3 metres.
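To make the distinction concrete, here's a rough back-of-envelope sketch of the argument above. The 30 cm figure is taken as the line-pair resolution limit; the pixel-size fraction and the "several line pairs to recognise an object" multiplier are my own illustrative assumptions, not figures from the article.

```python
# Back-of-envelope: relating a stated "resolution" to recognisable object size.
# All multipliers below are illustrative assumptions.

resolution_m = 0.30  # claimed ground resolution: 30 cm line-pair limit

# A pixel typically covers a quarter to a ninth of the minimum resolvable
# dot's AREA, i.e. roughly a half to a third of its linear size.
pixel_min_m = resolution_m / 3
pixel_max_m = resolution_m / 2

# Recognising (rather than merely detecting) an object needs several
# resolved line pairs across it; assume 5-10x the resolution limit.
recognisable_min_m = 5 * resolution_m   # 1.5 m
recognisable_max_m = 10 * resolution_m  # 3.0 m

print(f"pixel size: {pixel_min_m:.2f}-{pixel_max_m:.2f} m")
print(f"minimum recognisable object: {recognisable_min_m:.1f}-{recognisable_max_m:.1f} m")
```

Which is where the 1.5-3 metre guess comes from.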
"...I don't see it as any of my lightbulb's business if my electric car has paired with my washing machine..."
The light bulb, being quite bright, may be worried by the potential nature of the offspring...
I spent a few years working on this (blind denoising of tree ring width series) in the '90s. The only moderately reliable first order separation was between signal components common to multiple concurrent series from a specific site and signals uniquely present in individual series. The assumption on which my work was based was that individual variation is less likely to be driven by a common influence, so removal of individual variation should leave a better approximation to the common signal indicating the common influence.
Admittedly this is a fairly loose argument, but my work did show fairly conclusively that high frequency components tend to be local to individual series and low frequency components have a better chance of being common to all the series. Unfortunately, the then (and I believe still) common practice of "detrending" by normalising each individual series to its own low frequency spline before any analysis tends to mask the lower frequency components that might be some of the most interesting in terms of climate change.
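The first-order separation described above can be sketched very simply: treat the year-by-year mean across concurrent series as the estimate of the common (site-wide) signal, and each series' residual about that mean as its individual variation. This is a minimal sketch with made-up numbers, assuming plain averaging stands in for whatever weighting the original work actually used.

```python
# Minimal sketch: separate common signal from individual variation
# across concurrent ring-width series. Data are synthetic.
from statistics import mean

# Three concurrent series (same years), illustrative values.
series = [
    [1.2, 1.5, 0.9, 1.1, 1.4],
    [1.0, 1.6, 0.8, 1.2, 1.3],
    [1.1, 1.4, 1.0, 1.0, 1.5],
]

# Common signal: year-by-year mean across all series.
common = [mean(vals) for vals in zip(*series)]

# Individual variation: each series' residual about the common signal.
residuals = [[v - c for v, c in zip(s, common)] for s in series]
```

By construction the residuals sum to zero across series in each year, so whatever structure remains in `common` is the candidate common influence - which is exactly why detrending each series against its own spline first can destroy the low-frequency part of it.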
However tree rings are not alone in providing rather tenuous and noisy signals. All currently used climate proxies suffer from this, each in their own way, so using them has to be done with a great deal of caution.
"...the possibility we're no longer alone in the universe..."
reminiscent of Columbus "discovering" America - the native populations didn't even know it existed until he turned up and told them...
"...the secret signal sauce that allowed location to be determined..."
An interesting culinary sidelight on what would otherwise seem to be a pure DSP problem.
The most serious culprit in all this is the EULA. As soon as you "license" rather than sell the product (or the software component of the product even if you just paid for the hardware) all the established legal protections relating to safety, functionality and even fitness for purpose suddenly cease to apply. Consequently there's absolutely no incentive to make the software secure or even robust against failure. In short - the EULA is a perfect get-out clause, and that's very unlikely to change due to the pressure of the vendor community on legislation.
It's ironic (and self-defeating) that you can, just for an obvious example, buy a car (a potentially lethal machine) the mechanics of which must meet increasingly stringent safety standards in order to prevent fatalities, but the software that controls many of its safety critical functions can be complete garbage and there's very little comeback. Indeed someone usually has to die before any action is taken, and even then there's a financial penalty but no guarantee the next piece of software will be any better.
If you aren't convinced yet, read http://www.safetyresearch.net/Library/Bookout_v_Toyota_Barr_REDACTED.pdf then extrapolate its findings to the entire hypothetical IoT. That's not unrealistic - the flaws Barr described are _seriously_ basic stuff - the kind of mistakes a student would be marked down for on any adequate programming course, and furthermore protecting the vendor's IP seems from that report to have taken precedence over the level of facilities provided to a court-appointed expert examiner. Does that not shout volumes about the way forward?
Then of course there's the rather funnier (in hindsight) incident of the Satanic Renault http://www.theregister.co.uk/2013/02/15/satanic_renault/
How about spending all that money and ingenuity on teaching people to write code that isn't a bug-ridden load of excrement in the first place...
'"Another possible application of this principle may be for trapping radiation inside a shell of plasma rather than excluding it" said Toohie.'
Larry Niven and Jerry Pournelle invented this (the Langston Field) in 1974.
It would be a nice gesture to identify (or even link to) the original study. We should not have to do our own legwork to find Incapsula and then locate the study in question (which has proved impossible anyway).
The Register is apparently joining the ranks of "parasites" - sites that merely rehash other people's content without any value-add or proper referencing to sources.
"... and don't call me Shirley..."
The most obvious fatal error in this whole "cunning plan" is the assumption that "coding" should be the objective. Coding is merely the manipulative mechanical skill used to realise programming (an instance of intellectual and creative problem solving).
If we just teach our kids "coding" we will finish up with echelons of unemployable incompetents, whereas teaching programming can result in expanding their mental capacities (just as chess or Latin do) for those with an appropriate mind set to start with, which should make life more interesting for them, quite apart from any direct benefit for employment as software developers. The undeniably abysmal quality of software today (monthly "patches" to fix silly mistakes, security breaches &c.) is a direct result of too many people already just learning to code rather than to program.
The second, and equally egregious, error is the assumption that teachers can be taught to teach "coding" in "a day" (http://politicalscrapbook.net/2014/02/tory-boss-of-government-coding-education-initiative-cant-code-lottie-dexter/), or indeed any other short period without a preparatory grasp of both the first principles that underlie the technologies and a grounding in analytical and logical thinking.
Oh dear, did I say "thinking" - how absurd...
"there are two tiny problems with that theory..."
The two major misconceptions here are:
- that the problem is primarily "weak passwords". Yes, the passwords exposed by major offline cracking attacks are generally weak, but before offline cracking can be carried out the authentication server has to be breached so the password database can be stolen. That is the real root problem we have to solve, and it remains regardless of the authentication mechanism in use. There must always be, somewhere, a record of legitimate credentials in some form or other to compare authentication attempts with. It may be made more difficult to abuse, but the threat cannot be eliminated.
- that biometrics should be used for authentication. Biometrics are validly used for identification, as the identity of a supplicant is not expected to change. But using a biometric for authentication (i.e. validating that the supplicant has presented their legitimate identity) is fundamentally flawed. The reason is simple - how do you change the credential when it gets compromised? Eye and fingerprint replacements are still the stuff of Hollywood, and will remain so.
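The first point - that some credential record must always exist server-side - can be illustrated with standard salted, iterated hashing. This is a minimal sketch using Python's stdlib PBKDF2; it shows that good practice makes a stolen record expensive to crack offline, but cannot make the record go away.

```python
# Sketch: the server must store *something* derived from the password to
# verify against. Salted, iterated hashing (PBKDF2 here) raises the cost
# of offline cracking, but the stored record itself cannot be eliminated.
import hashlib
import hmac
import os

def make_record(password: str) -> dict:
    """Create the credential record the server must keep."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return {"salt": salt, "digest": digest}

def verify(password: str, record: dict) -> bool:
    """Re-derive and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    record["salt"], 200_000)
    return hmac.compare_digest(candidate, record["digest"])

record = make_record("correct horse battery staple")
```

If the database holding `record` is breached, the attacker gets everything needed to mount an offline guessing attack - strong or weak passwords alike - which is why protecting the server, not just the password, is the root problem.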
The greatest real problem we face in corporate information security is the over-emphasis of technocentric attack skills and countermeasures at the expense of adequate preparedness and basic "digital hygiene".
Contrary to popular report, well over 80% of all successful attacks do not need highly sophisticated skills to accomplish, but are push-overs due to mismanagement - e.g. systems being left wide open - by the victim.
For adequate defence, we need people who can take a holistic view of business processes, data processing and infrastructures, identify weaknesses and cover for them in advance much more than we need people who can find and exploit the individual holes that are merely symptoms of mismanagement.
Not everyone has a mobile either.
0300 numbers are not included in the "all in" packages for BT land lines, so they cost extra, but 0845 and 0800 numbers are. So whether or not you get a free call depends on the service you sign up to - a very strong argument for publishing all the alternative numbers - geographic as a fundamental prerequisite, 0845 or 0800 and 0300.
Provision of a geographical number should be the minimum obligation (particularly for govt. and essential services), as it not only provides a means of contact but also offers a validation of the authenticity of the agency - geographical numbers are by definition tied to addresses, whereas non-geographical numbers are not.
"Your fully autonomous vehicle can see them coming via radar a few hundred yards off..." Yes, that'll work while only one or two cars are using radar in the same space. However, when the idea catches on and every one of hundreds of cars is pinging constantly, the system is going to break down. Even if it's not based on "radar", the sheer number of independent comms channels required will exceed practicability if channel separation is maintained, or alternatively there will be loss of signal integrity and consequent errors (aka accidents).
The very simple alternative solution would be to teach folks to drive properly - it's not that hard.
Strictly "by the big blue egg [singular, familiar form] (or testicle [singular]) of St. Cyril..."
The validity of this kind of research depends entirely on the nature of the population from which the samples are drawn. If for example, we draw both the experimental sample and the control sample from a population of extremely unperceptive people who generally just mooch around with their eyes half shut (aka students), those who play video games might well score higher than those who don't. Were the samples to be drawn from a society of hunter-gatherers or orchestral musicians, a different outcome might be expected.
A damn sight too much social and behavioural science research ignores this key issue. It's entirely wrong to assume that the whole human population behaves identically - however delightful that assumption is to politicians and marketeers.
You might like to take note of this yourselves, starting with your annoying grey bar...
Let me correct that for you:
HTML is not a file format - it's a markup language.
"...the gravity of foreground objects warps and magnifies the light from background objects"
How do you magnify light? Images can be magnified by deflecting light. Light can be amplified, diffused or concentrated, but magnified? Surely "magnified" means "made bigger", which could only be accomplished by adding more photons from somewhere (thus effectively meaning the same as "amplified"). Or have I missed something?
A temperature may be high or low - not "hot" or "cold". Temperature is a description or measure of hotness or coldness. You can't have a "fast speed" or an "expensive price" for similar reasons.
It would be nice if the link actually pointed to the relevant paper, rather than to an index page where it can't be found. As we don't know the title of the paper and a search for "Bayne" on the linked page yields nothing relevant, it's impossible to find it.
In UK copyright law at least, the issue isn't attribution - it's permission. So retweeting might indeed constitute copyright infringement if the original tweeter had not given permission and were to decide to be a literalist. Courtesy of the media moguls who are powerful enough to buy the law and just want our money any which way, it's technically almost impossible to avoid copyright infringement unless we stay silent and never write anything in public.
But in practical terms, due to the cost of litigation, it all comes down to money (like just about everything else). Unless it were a recognised catch phrase (e.g. a movie or pop song quote or a commercial strap line) it would take a very determined copyright owner to complain about the use of half a dozen words on Twitter. But it's worth reviewing the restrictions imposed recently in relation to a major sporting event in London, see http://www.keystonelaw.co.uk/other/keynotes/2012/june/restrictions-of-olympic-proportions
Although these particular restrictions rest largely on trade mark law, it's clear that the whole IP system is getting out of hand - becoming a source of revenue rather than serving its original purpose of protecting creative works against debasement.
Well this is only "anecdotal", but I had a wide variety of small songbirds visiting my garden bird feeders for several years - I usually had to refill them daily. Then two neighbours introduced three young cats around Christmas time 2011. This year I have only recorded two visits to my bird feeders since January, and the untouched seed goes mouldy in the feeders.
A huge amount of the comment here and elsewhere on this research anthropomorphises cats - accusing them of "murder", "torture" &c. &c. All this misses the point entirely. Cats are much more hard-wired than many of us would like to believe. They are to a large extent stimulus-driven automata, pre-programmed to pounce on small animals that move in their field of awareness.
That means it's the responsibility of the "owner" (although nobody really 'owns' a cat - it simply occupies a territory that you may also occupy) to minimise the damage a cat can do - particularly in densely populated urban environments. The simplest fix is a collar with a bell on it, but it has to be a sensible bell, not the tiny token gesture fitted as standard to most commercial cat collars.
A bell does not so much alert the prey as distract the cat by spoiling its stealth as it springs - provided the cat can hear the bell and it is fitted when the cat is young enough. If it works, operant conditioning eventually sets in, reducing the incidence of the predatory behaviour.
Nevertheless, the biggest problem for prey species is not the behaviour of the individual cat but excessive predator density. Where I live, nine or ten cats have "homes" within an area of one acre (18 residences). This is at least 20 times the natural predator density, and is only sustainable for the predators because the cats are artificially fed. It is however, completely unsustainable for many of the prey species.
I wonder how many who have commented here have actually read the full paper. Minor point maybe, but...
Despite the prevalent myth of the Superhacker, there's plenty of solid evidence that most breaches are total pushovers. Just for example, Verizon's 2012 report (on 2011 data) concluded that 96% (4% more than the previous year) of attacks were "not highly difficult" and that "97% of breaches were avoidable through simple or intermediate controls".
So what we really need is not a few expensive cyber whiz kids on short term assignment for the duration of the London jamboree, but for ordinary IT staff at all levels to be competent in basic security housekeeping. It would be much safer and vastly more cost-effective, and would also release the real experts to protect us against the occasional attacks that are not so trivial.
However, it's not in the interest of the attackers, the defenders or indeed many security researchers to point out how easy cyber attacks currently are to accomplish, as they would all lose face (and, in many cases, huge revenue streams or big salaries). So we are kept in ignorance by an informal (albeit uncomfortable) collaboration of deception on the part of pretty much all those who know the real situation. It would be incredibly difficult for government to justify proposed levels of expenditure on "cyber defence" if it were well known that the vast majority of their appallingly frequent security problems stem from the incompetence and slackness of the implementers and defenders of their systems. But we are up against a very determined adversary, so we have only one real choice - face facts or lose.
Maybe this explains why so many web applications are security nightmares? See http://www.infosecurity-magazine.com/blog/2011/12/14/software-insecurity-thrives/474.aspx
"...rising carbon precedes accelerating warming"
Perhaps someone could explain what "rising carbon" is? This kind of sloppy thinking contributes to the huge confusion that surrounds "global warming" in the public mind. Please let's get our fundamental terminology right - shouldn't be too hard.
Yes, it'd be rather nice if someone (anyone) could write software without crass errors in it after all these years.
I fail to see how this device will help many school kids get to real grips with microprocessor technologies. In the late '60s-early '80s we bought chips, obtained the support manuals, learned the device architectures and instruction sets and built ourselves (admittedly idiosyncratic and limited) microsystems from scratch, and we learned to do all this without formal instruction by trial and error. We could do this solely because the devices were simple and transparently documented.
This device, neat as it is, is extremely complex and non-transparent in terms of hardware and, by virtue of using a high level OS, presents to the user such an abstracted view of the machine that very little more can be learned than would be possible using a conventional PC running Linux.
A much better starting point for imparting fundamental principles to school kids would be based on a simple device such as mid-range PIC (for which many affordable demonstrators are already available), coupled with programming in C and assembler. The essential task is not to take the current "hacker kids" to a higher level (they're already self-motivated enough) but to bring a basic understanding of systems principles to as wide a sector of the population as possible - so we must start simple. This offering sits half way home, rather than at the starting line.
Useful place to report service outage - on the web-based status page!
I'm concerned about the idea that posting on a social network is in any way comparable to pinning a photo on an internal noticeboard. See http://threatpost.com/en_us/blogs/twenty-something-asks-facebook-his-file-and-gets-it-all-1200-pages-121311 to find out one of the big differences
Not only is making the link a potential invasion of privacy, the mere envisaged use of the means to do so brings it squarely within the provisions of the UK Data Protection Act. This covers information that identifies a living person and information that would do so if combined with other information that might come into the Data Controller's possession. If an established mechanism is in place for obtaining the further information, it's a cut and dried case - except if there are exemptions for law enforcement. There might be via other legislation.
"may in turn trigger a governance and compliance risk." How do you "trigger" something that's an intrinsic attribute of an object or process? "Incident" may be what is meant here. The word "risk" is so widely misused in the IT community I'm surprised it still means anything at all.
Reminds me of the production lines in the movie THX 1138
"1.3m litres" - did you mean "1.3M litres"? if so, you're only a factor of 10^7 out.
This will brand recipients of social care as a card-holding sub-class. Has this been considered?
"There can be ... 120TB of data written to the 240GB Ultra"
That's only 500 full-volume write cycles. Which makes such a device very poor for large volume writes such as disk backups, and, given the way SSD handles erasure, only moderate for small volume dynamically updated data. So what's it best for? Storing your photos and MP3s I guess.
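The endurance arithmetic behind the 500-cycle figure, spelled out (using decimal TB and GB as the quoted specs presumably do):

```python
# 120 TB total write endurance against a 240 GB capacity gives the
# number of times the whole drive could be filled before wear-out.
endurance_bytes = 120 * 10**12   # 120 TB claimed total writes
capacity_bytes = 240 * 10**9     # 240 GB drive capacity

full_volume_cycles = endurance_bytes / capacity_bytes
print(full_volume_cycles)  # 500.0
```

A nightly full-volume backup at that rate would exhaust the rated endurance in well under two years, which is the point being made.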