The biggest real problem
The greatest real problem we face in corporate information security is the over-emphasis on technocentric attack skills and countermeasures at the expense of adequate preparedness and basic "digital hygiene".
Contrary to popular report, well over 80% of all successful attacks need no highly sophisticated skills to accomplish, but are pushovers due to mismanagement by the victim - e.g. systems being left wide open.
For adequate defence, we need people who can take a holistic view of business processes, data processing and infrastructure, identify weaknesses and cover for them in advance, much more than we need people who can find and exploit the individual holes that are merely symptoms of mismanagement.
Re: @Benjamin 4
Not everyone has a mobile either.
0300 numbers are not included in the "all in" packages for BT land lines, so they cost extra, but 0845 and 0800 numbers are. So whether or not you get a free call depends on the service you sign up to - a very strong argument for publishing all the alternative numbers - geographic as a fundamental prerequisite, 0845 or 0800 and 0300.
Provision of a geographic number should be the minimum obligation (particularly for govt. and essential services), as it not only provides a means of contact but also validates the authenticity of the agency - geographic numbers are by definition tied to addresses, whereas non-geographic numbers are not.
Re: All at once or none at all - or maybe only a few?
"Your fully autonomous vehicle can see them coming via radar a few hundred yards off..." Yes, that'll work while only one or two cars are using radar in the same space. However, when the idea catches on and every one of hundreds of cars is pinging constantly, the system is going to break down. Even if it's not based on "radar", the sheer number of independent comms channels required will exceed practicability if channel separation is maintained, or alternatively there will be loss of signal integrity and consequent errors (aka accidents).
The very simple alternative solution would be to teach folks to drive properly - it's not that hard.
Re: Big Bang Expletives
Strictly "by the big blue egg [singular, familiar form] (or testicle [singular]) of St. Cyril..."
The validity of this kind of research depends entirely on the nature of the population from which the samples are drawn. If, for example, we draw both the experimental sample and the control sample from a population of extremely unperceptive people who generally just mooch around with their eyes half shut (aka students), those who play video games might well score higher than those who don't. Were the samples to be drawn from a society of hunter-gatherers or orchestral musicians, a different outcome might be expected.
A damn sight too much social and behavioural science research ignores this key issue. It's entirely wrong to assume that the whole human population behaves identically - regardless of how delightful that assumption is to politicians and marketeers.
El Reg please note
You might like to take note of this yourselves, starting with your annoying grey bar...
Re: But I am an Englishman after all.
Let me correct that for you:
HTML is not a file format - it's a markup language.
"...the gravity of foreground objects warps and magnifies the light from background objects"
How do you magnify light? Images can be magnified by deflecting light. Light can be amplified, diffused or concentrated, but magnified? Surely "magnified" means "made bigger", which could only be accomplished by adding more photons from somewhere (thus effectively meaning the same as "amplified"). Or have I missed something?
A temperature may be high or low - not "hot" or "cold". Temperature is a description or measure of hotness or coldness. You can't have a "fast speed" or an "expensive price" for similar reasons.
where's the paper please
It would be nice if the link actually pointed to the relevant paper, rather than to an index page where it can't be found. As we don't know the title of the paper and a search for "Bayne" on the linked page yields nothing relevant, it's impossible to find it.
Re: Copied Tweets?
In UK copyright law at least, the issue isn't attribution - it's permission. So retweeting might indeed constitute copyright infringement if the original tweeter had not given permission and were to decide to be a literalist. Courtesy of the media moguls who are powerful enough to buy the law and just want our money any which way, it's technically almost impossible to avoid copyright infringement unless we stay silent and never write anything in public.
But in practical terms, due to the cost of litigation, it all comes down to money (like just about everything else). Unless it were a recognised catch phrase (e.g. a movie or pop song quote or a commercial strap line), it would take a very wealthy copyright owner to complain about the use of half a dozen words on Twitter. But it's worth reviewing the restrictions imposed recently in relation to a major sporting event in London, see http://www.keystonelaw.co.uk/other/keynotes/2012/june/restrictions-of-olympic-proportions
Although these particular restrictions rest largely on trade mark law, it's clear that the whole IP system is getting out of hand - becoming a source of revenue rather than serving its original purpose of protecting creative works against debasement.
Re: Bird numbers
Well this is only "anecdotal", but I had a wide variety of small songbirds visiting my garden bird feeders for several years - I usually had to refill them daily. Then two neighbours introduced three young cats around Christmas time 2011. This year I have only recorded two visits to my bird feeders since January, and the untouched seed goes mouldy in the feeders.
A huge amount of the comment here and elsewhere on this research anthropomorphises cats - accusing them of "murder", "torture" &c. &c. All this misses the point entirely. Cats are much more hard-wired than many of us would like to believe. They are to a large extent stimulus-driven automata, pre-programmed to pounce on small animals that move in their field of awareness.
That means it's the responsibility of the "owner" (although nobody really 'owns' a cat - it simply occupies a territory that you may also occupy) to minimise the damage a cat can do - particularly in densely populated urban environments. The simplest fix is a collar with a bell on it, but it has to be a sensible bell, not the tiny token gesture fitted as standard to most commercial cat collars.
A bell does not so much alert the prey as distract the cat by spoiling its stealth as it springs - provided the cat can hear the bell and it is fitted when the cat is young enough. If it works, operant conditioning eventually sets in, reducing the incidence of the predatory behaviour.
Nevertheless, the biggest problem for prey species is not the behaviour of the individual cat but excessive predator density. Where I live, nine or ten cats have "homes" within an area of one acre (18 residences). This is at least 20 times the natural predator density, and is only sustainable for the predators because the cats are artificially fed. It is, however, completely unsustainable for many of the prey species.
not quite the right approach
Despite the prevalent myth of the Superhacker, there's plenty of solid evidence that most breaches are total pushovers. Just for example, Verizon's 2012 report (on 2011 data) concluded that 96% of attacks (up four points on the previous year) were "not highly difficult" and that "97% of breaches were avoidable through simple or intermediate controls".
So what we really need is not a few expensive cyber whiz kids on short term assignment for the duration of the London jamboree, but for ordinary IT staff at all levels to be competent in basic security housekeeping. It would be much safer and vastly more cost-effective, and would also release the real experts to protect us against the occasional attacks that are not so trivial.
However, it's not in the interest of the attackers, the defenders or indeed many security researchers to point out how easy cyber attacks currently are to accomplish, as they would all lose face (and, in many cases, huge revenue streams or big salaries). So we are kept in ignorance by an informal (albeit uncomfortable) collaboration in deception on the part of pretty much all those who know the real situation. It would be incredibly difficult for government to justify proposed levels of expenditure on "cyber defence" if it were well known that the vast majority of their appallingly frequent security problems stem from the incompetence and slackness of the implementers and defenders of their systems. But we are up against a very determined adversary, so we have only one real choice - face facts or lose.
Maybe this explains why so many web applications are security nightmares? See http://www.infosecurity-magazine.com/blog/2011/12/14/software-insecurity-thrives/474.aspx
What is "rising carbon"?
"...rising carbon precedes accelerating warming"
Perhaps someone could explain what "rising carbon" is? This kind of sloppy thinking contributes to the huge confusion that surrounds "global warming" in the public mind. Please let's get our fundamental terminology right - it shouldn't be too hard.
Re: Major version 11 introduces auto update
Yes, it'd be rather nice if someone (anyone) could write software without crass errors in it after all these years.
It's a small single board computer - so?
I fail to see how this device will help many school kids get to real grips with microprocessor technologies. From the late '60s to the early '80s we bought chips, obtained the support manuals, learned the device architectures and instruction sets, and built ourselves (admittedly idiosyncratic and limited) microsystems from scratch - all without formal instruction, by trial and error. We could do this solely because the devices were simple and transparently documented.
This device, neat as it is, is extremely complex and non-transparent in terms of hardware and, by virtue of using a high-level OS, presents to the user such an abstracted view of the machine that very little more can be learned than would be possible using a conventional PC running Linux.
A much better starting point for imparting fundamental principles to school kids would be based on a simple device such as a mid-range PIC (for which many affordable demonstrators are already available), coupled with programming in C and assembler. The essential task is not to take the current "hacker kids" to a higher level (they're already self-motivated enough) but to bring a basic understanding of systems principles to as wide a sector of the population as possible - so we must start simple. This offering sits halfway home, rather than at the starting line.
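By way of illustration, here's roughly the level I mean - a minimal sketch assuming a mid-range PIC and Microchip's XC8 compiler (device choice, configuration bits and the 4 MHz clock are all assumptions, and vary by part):

    /* Minimal LED blinker - the classic first PIC program.
       Assumes an LED on RB0 and a 4 MHz oscillator; configuration
       bits omitted for brevity. */
    #include <xc.h>
    #define _XTAL_FREQ 4000000   /* tells __delay_ms() the clock rate */

    void main(void) {
        TRISB = 0x00;            /* all of port B set as outputs */
        while (1) {
            PORTB ^= 0x01;       /* toggle the LED on RB0 */
            __delay_ms(500);     /* half a second on, half a second off */
        }
    }

A dozen lines, every one of which maps directly onto a register or a clock cycle - which is precisely the transparency that matters.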
Providing useful information
A useful place to report a service outage - on the web-based status page!
not "as seriously" but more seriously
I'm concerned about the idea that posting on a social network is in any way comparable to pinning a photo on an internal noticeboard. See http://threatpost.com/en_us/blogs/twenty-something-asks-facebook-his-file-and-gets-it-all-1200-pages-121311 to find out one of the big differences
Not only is making the link a potential invasion of privacy, the mere envisaged use of the means to do so brings it squarely within the provisions of the UK Data Protection Act. This covers information that identifies a living person, and information that would do so if combined with other information that might come into the Data Controller's possession. If an established mechanism is in place for obtaining the further information, it's a cut and dried case - unless there are exemptions for law enforcement; there might be, via other legislation.
trigger a risk?
"may in turn trigger a governance and compliance risk." How do you "trigger" something that's an intrinsic attribute of an object or process? "Incident" may be what is meant here. The word "risk" is so widely misused in the IT community I'm surprised it still means anything at all.
"don't worry - be happy..."
Reminds me of the production lines in the movie THX 1138
"There can be ... 120TB of data written to the 240GB Ultra"
That's only 500 full-volume write cycles (120 TB ÷ 240 GB = 500), which makes such a device very poor for large-volume writes such as disk backups and, given the way SSDs handle erasure, only moderate for small-volume dynamically updated data. So what's it best for? Storing your photos and MP3s, I guess.
"...staging a public hanging of whoever set up robots.txt on its Website..."
Anyone who thinks robots.txt is a security feature that would have protected against this kind of leak needs their head examined. The real problem here is a wide-open database.
Which one per cent?
"...only going to be able to simulate about 1 per cent of the complexity inherent in the human brain"
So which one per cent are they going to choose? The classic error of artificial intelligence gurus that keeps the intelligence truly artificial is to consider the brain as a single entity with intellect as its prime purpose. The brain is actually an integrated assemblage of several organs - evolved independently and performing multiple separate functions (admittedly with many local overlaps). But fundamentally, as Robt. Ornstein pointed out some 30 years ago, the brain is a body controller. The rest is extra. So an arbitrary replica of one per cent of the number and interconnections of synapses in an average brain dedicated to intellectual processing is not a model of the brain - nor would a similar replica of 100% of the number be.
So this is all good fun, but really has nothing to do with a decent quality human brain. The Californian definition of artificial intelligence draws from the Californian definition of intelligence, and that of Southampton similarly - but they may not be identical, and neither may be all that representative of the real thing. Mr. Spock was a non-feasible fantasy too.
Plus, think of the energy consumption. I can work till lunchtime on a couple of slices of toast and some coffee. A million CPUs (even ARM CPUs) are going to clock up some serious electricity bills. Then you have maintenance - we'll be back to the reliability problems of the early vacuum tube computers. So why bother, Steve?
More than just "cookies"
Having participated in a UK forum on this legislation, I feel I should point out that Mr. Roper is mistaken on an important count. The European legislation does _not_ just relate to cookies in the strict technical sense - it relates to all tracking methods, and the exemption for functionality is being very narrowly interpreted.
The underlying aim of the legislation is self-management of personal privacy, so that makes perfect sense. I have actually raised the issue of server-side session-to-session state with the ICO and have been told it does come within the remit of the legislation unless it is strictly and solely used for direct benefit to the user.
A Clear and Present Threat
There are so many ways connectivity can fail -
The benefits of a fully redundant cloud data centre are only any good if you can reach it. But there's more at stake than just access to the cloud. We're rapidly replacing a wide range of well-proven infrastructure with faster, more convenient but much less reliable alternatives - and also converging all our communications onto fewer and fewer of those less reliable channels. There'll be a tipping point very soon.
Are these your only assets?
Not a bad notion - but it doesn't cover the whole ground by a long chalk. Your most important assets are your business information, and that needs to appear on the inventory too, otherwise you're just talking about empty boxes.
ISO 27001/2 requires you to identify your "most important information assets", but how can you do that unless you know about all of them? For much too long we have concentrated on the technologies at the expense of their business purpose. Information is what matters to your business - your IT is just a tool for making use of it.
What was the project really?
"Amalgamation of four separate authorities, 270 offices ... reduced into four hubs..." Was this primarily a Windows 7 rollout, or a business restructuring with Windows 7 thrown in? I think the distinction is critical to the argument for its results. Can it really be said that moving to Windows 7 per se will save "some £85 million over 25 years" as mfraz has queried, or is it the reduction of an excessively complex business structure to something more practicable that might achieve this? Why should Microsoft get the credit for a business rationalisation programme undertaken by Wiltshire County Council?
Wots "electrical culture" then?
What a horrendous mess
The whole concept of TLDs was to have a small - repeat small - forest of strictly hierarchical trees that could be easily parsed and verified. Instead we'll end up with a massive jungle of inconsistencies that will be unverifiable, opening the doors to confusion, malicious abuse and plain old-fashioned error.
Even the basic task of parsing a user-supplied URL for validity on a web form will be impossible, as there will be no predictable extent or content for the TLD.
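To make that concrete, here's a hedged sketch (in C, hypothetical helper) of the kind of naive check that was perfectly workable while the TLD set was small and fixed:

    /* Naive TLD whitelist check - viable only while the TLD set is
       small and stable. With open-ended gTLDs there is no finite
       list to test against, so this approach simply collapses. */
    #include <string.h>

    static const char *known_tlds[] = { "com", "org", "net", "uk", "de", NULL };

    int tld_looks_valid(const char *host) {
        const char *dot = strrchr(host, '.');   /* find the last label */
        if (!dot) return 0;
        for (int i = 0; known_tlds[i]; i++)
            if (strcmp(dot + 1, known_tlds[i]) == 0)
                return 1;
        return 0;   /* fails for every new gTLD - the list can't keep up */
    }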
But what the heck? We've had the same problem with RFC 5322-compliant email addresses right from the start. The entire infrastructure is creaking to ruin as we bow low to the almighty buck.
Nice to be free of that responsibility
The Cabinet Office's interim executive director for digital government - quote: "I don't sit around calculating how much we will save"
Isn't that part of his remit then? How comforting that must be.
"...to be stolen by an attacker by reading unitialised data from graphics memory". First, I'm wondering what "unitialised data" are. Supposing they mean uninitialised memory, it's impossible to "steal" data from uninitialised memory - by definition it doesn't contain data as you haven't put any there. If they mean residual data from previous memory writes, they should say so. This looks like sloppy thinking. But most of the problems we face in IT security result from sloppy thinking - this is just another example of it.
It's all the fault of the stupid user - of course
This is the same stupid argument that has been around since Noah - that the user is responsible for covering the provider against attacks on the infrastructure.
It should be obvious that once the systems are breached, passwords are moot. It's the responsibility of the provider to [a] hash the passwords, [b] salt them before hashing, and [c] harden the hash database server against attack.
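For [a] and [b], a hedged sketch using libsodium's password-hashing API (assumed available; [c] is an operational matter no snippet can fix). crypto_pwhash_str generates a fresh random salt per password and embeds it in the output string:

    /* Server-side salted password hashing with libsodium. */
    #include <sodium.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        if (sodium_init() < 0) return 1;   /* library must initialise */

        const char *password = "correct horse battery staple";
        char hash[crypto_pwhash_STRBYTES];

        /* Hash with a random salt baked into 'hash'. */
        if (crypto_pwhash_str(hash, password, strlen(password),
                              crypto_pwhash_OPSLIMIT_INTERACTIVE,
                              crypto_pwhash_MEMLIMIT_INTERACTIVE) != 0)
            return 1;                      /* ran out of memory */

        /* Verify later - the plaintext is never stored anywhere. */
        if (crypto_pwhash_str_verify(hash, password, strlen(password)) == 0)
            puts("password accepted");
        return 0;
    }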
Passing the buck to the end user by insisting on unmanageable "strong" passwords while leaving the infrastructure wide open [a] doesn't work, [b] is an unwarrantable imposition on the user and [c] is a poor excuse for incompetent systems management. We have to stop doing it.
"It would cost far less to perform thorough penetration tests than to suffer the loss of trust, fines, disclosure costs and loss of reputation these incidents have resulted in." - Chester Wisniewski at Sophos.
Since SQL injection is solely the result of failing to validate user input - the most elementary newbie programming error on Earth - maybe a better way to reduce incidents (and thus costs, loss of reputation &c. &c.) would be to employ people to write your web applications who are at least minimally competent.
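And the standard fix is embarrassingly small. A sketch using SQLite's C API purely as an example - parameterised queries keep user input as data, so it can never execute as SQL (validating the input on top remains wise):

    /* Parameterised lookup: the '?' placeholder is bound, never
       concatenated, so hostile input stays inert. */
    #include <sqlite3.h>

    int find_user_id(sqlite3 *db, const char *name_from_form) {
        sqlite3_stmt *stmt;
        int id = -1;
        if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?;",
                               -1, &stmt, NULL) != SQLITE_OK)
            return -1;
        sqlite3_bind_text(stmt, 1, name_from_form, -1, SQLITE_TRANSIENT);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            id = sqlite3_column_int(stmt, 0);
        sqlite3_finalize(stmt);
        return id;
    }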
While we allow inattentive fools to write bug-ridden code, relying exclusively on post-coding checks to find the foul-ups, we'll never raise the abysmal quality of software development.
A much more serious aspect of von Neumann Architecture
A fundamental attribute of the von Neumann architecture this paper doesn't mention is that a common memory array contains both instruction codes and data. The decision as to whether a word fetched by the processor is to be interpreted as an instruction or as data depends entirely on the previous state of the machine - if the last fetch was the parameter of an instruction, this fetch is an instruction and so on. This represents a huge security vulnerability that has been systematically exploited in many ways for many years - "buffer overflow" and "stack overflow" attacks that cause maliciously injected data to be interpreted as machine instructions dominate the professional attack space. But even accidental loss of instruction pointer integrity can be extremely damaging - causing uncontrolled execution of arbitrary instructions, and it does happen, as in "hey, my machine locked up!".
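The canonical demonstration is depressingly small - a deliberately unsafe sketch (modern compilers will warn, and the plasters mentioned below will usually interfere, which is rather the point):

    /* The textbook stack smash. On a shared von Neumann stack,
       input longer than 'buf' overruns into the saved frame
       pointer and return address, so attacker-supplied bytes end
       up steering instruction fetch. */
    #include <string.h>

    void vulnerable(const char *attacker_input) {
        char buf[16];
        strcpy(buf, attacker_input);   /* no bounds check at all */
    }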
The major contender architecture - Harvard - has separate instruction and data memories, and is widely used in industrial controllers, for the very reason that they have to be robust. Harvard architecture didn't take off in the office computer space due to the initial high cost of memory, but that's not been a major consideration for some time. I've been waiting for years for a Harvard architecture PC CPU, but in vain. Even a dual-stack operating system that segregated function call and return addresses from function parameters would be a huge step forward, even if it ran on a vN CPU. But nothing's being done. Instead we have numerous questionable sticking plasters such as random memory allocation, stack validation et al, which regularly prove their ineffectiveness due to the extent of the underlying festering wound - an almost unsecurable architecture. von Neumann was not considering security when he came up with his computing model.
Some data is almost impossible for the average user to back up, thanks to the delightful MS Registry and the idiosyncrasies of applications. For example, my ftp client must store site credentials somewhere (probably in the registry), but I've never been able to find them. A user-space backup won't include this critical data. Oh for a return to .ini files - they were easy to manage and maintain.
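For younger readers, a sketch of why .ini files were so manageable - one plain-text file per application, trivially covered by a simple file copy, and still readable via Win32's legacy profile API (path and key names here are invented for illustration):

    /* Reads one value from a plain-text .ini file - a file that any
       user-space backup catches perfectly. */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        char host[128];
        GetPrivateProfileStringA("site1", "host", "(not set)",
                                 host, sizeof host,
                                 "C:\\apps\\myftp\\sites.ini");
        printf("site1 host = %s\n", host);
        return 0;
    }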
Quote: "the watchdog would far rather work with organisations towards this than resort to enforcement"
Surely, working with organisations to get things right _is_ enforcement?
Enforcement is taking steps to make something happen - in this case, to prevent data breaches. The alternative (fines) discussed here is not enforcement - it's punishment. They are not the same thing. However the distinction seems to be lost on almost everyone these days. Real enforcement reduces the need for punishment, but punishment does not serve effectively as enforcement - we have centuries of evidence for this. Extreme punishments have never deterred people in general from offending. And it's a matter of externalities in this case. A person who loses a laptop may get their employer fined, and that might lead to their own dismissal, but the next person in line will not be permanently scared by that into being more careful.
The real reason for this and similar foul-ups from all vendors is the appallingly low level of expertise among developers/programmers. Sure they can _code_ but they clearly can't see the functional implications of the code they create.
Until software development becomes a genuine engineering discipline performed according to sound core principles by attentive, thoughtful and competent people, things will never change for the better, and we'll go on having to apply streams of patches. Would you fly in a plane that required "patching" every few days? Not bloody likely! So why do we tolerate it in software? Probably only because the vendors already have us by the balls, so we hand them our hearts and minds.
perpetuating the fundamental error
This initiative is likely to do little more than perpetuate the error of considering "cyber security" a technological issue. It isn't - it's a conceptual issue. Its current state of weakness is a function of the same appalling quality of risk judgement that is increasingly evident in national policy decision-making (Katrina, Homeland Security, banking &c.). We have become so dependent on rule-based systems (both technological and social/legislative) that we have effectively ceased to be able to think flexibly and holistically. As a result we race behind the bad guys fixing a cascade of symptoms, unable to recognise, let alone address, the fundamental disease.
Contrary to popular opinion, software development is not such an overwhelmingly complex activity that it's impossible to create error-free code. You just have to pay attention, really understand what you're doing, and, most importantly, actually care about what you're delivering. It seems the majority of developers/programmers don't, don't and don't - not because they use abstracted high-level development tools but because they rely on such tools to absolve them from taking personal responsibility for getting it right. It's an attitude problem before all else, and is no different from the almost universal desire of our student population to get the degree without having to make the effort required to actually learn the subject.
We need people in charge of our security (and that includes not only "security specialists" but also application and service designers, programmers, testers, deployers, service managers and users) who actively seek to bear the requisite responsibility for fulfilling that task. Such people will make sure of their own accord that they are sufficiently competent to do so. Absent that attitude, no training programme will help.
Sure, but you won't find many of these running Windows on the office PC, will you? That's where the biggest target is. Quite apart from which, the conventional OS stack (still pretty much as derived from the K&R C stack) is a huge contributor to the problem, and that's a higher-level issue than the processor choice.
Hold on! We're solving the wrong problem
We have an intrinsically insecure architecture right down to chip level, wherein data and instructions are only distinguished from each other by context at runtime, and we've replicated the problem at OS level via the ludicrous stack definition that allows data, parameters and return addresses of functions to reside adjacent to each other.
We've used this architecture in mainstream commercial microsystems since the year dot and it lets us down more and more often as time passes. Most successful exploits rely on abusing this single weakness in one way or another. Fancy tricks like ASLR and DEP are merely plasters that cover an increasingly festering ancient wound.
But there has been an alternative for ages - Harvard architecture, which segregates code and data in separate and completely independent memories. It's widely used in embedded controllers such as the PIC family, and would make practically all exploits of the type discussed here impossible. Why on earth don't we create a mainstream Harvard processor? Why in the interim don't we create a virtual Harvard OS?
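As an interim, software-only illustration of the dual-stack idea (a conceptual toy only - real shadow stacks need compiler or hardware support, e.g. Intel's CET):

    /* Toy shadow stack: call sites mirror the expected return
       address on a separate array and check it on the way out, so
       a data overflow that rewrites the real return address no
       longer goes unnoticed. */
    #include <assert.h>
    #include <stdint.h>

    static uintptr_t shadow[1024];
    static int top = 0;

    void shadow_push(uintptr_t ret)      { shadow[top++] = ret; }

    void shadow_pop_check(uintptr_t ret) {
        /* A mismatch means the data stack's copy was tampered with. */
        assert(shadow[--top] == ret);
    }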
This is a fundamental conceptual flaw - not a specific of Flash, IE or anything else at the application level. It's time we dealt with the real problem, not just went on tampering with its symptoms.