56 posts • joined Thursday 10th September 2009 07:16 GMT
El Reg please note
You might like to take note of this yourselves, starting with your annoying grey bar...
Re: But I am an Englishman after all.
Let me correct that for you:
HTML is not a file format - it's a markup language.
"...the gravity of foreground objects warps and magnifies the light from background objects"
How do you magnify light? Images can be magnified by deflecting light. Light can be amplified, diffused or concentrated, but magnified? Surely "magnified" means "made bigger", which could only be accomplished by adding more photons from somewhere (thus effectively meaning the same as "amplified"). Or have I missed something?
A temperature may be high or low - not "hot" or "cold". Temperature is a description or measure of hotness or coldness. You can't have a "fast speed" or an "expensive price" for similar reasons.
where's the paper please
It would be nice if the link actually pointed to the relevant paper, rather than to an index page where it can't be found. As we don't know the title of the paper and a search for "Bayne" on the linked page yields nothing relevant, it's impossible to find it.
Re: Copied Tweets?
In UK copyright law at least, the issue isn't attribution - it's permission. So retweeting might indeed constitute copyright infringement if the original tweeter had not given permission and were to decide to be a literalist. Courtesy of the media moguls who are powerful enough to buy the law and just want our money any which way, it's technically almost impossible to avoid copyright infringement unless we stay silent and never write anything in public.
But in practical terms, due to the cost of litigation, it all comes down to money (like just about everything else). Unless it were a recognised catch phrase (e.g. a movie or pop song quote or a commercial strap line) it would take a very wealthy copyright owner indeed to complain about the use of half a dozen words on Twitter. But it's worth reviewing the restrictions imposed recently in relation to a major sporting event in London, see http://www.keystonelaw.co.uk/other/keynotes/2012/june/restrictions-of-olympic-proportions
Although these particular restrictions rest largely on trade mark law, it's clear that the whole IP system is getting out of hand - becoming a source of revenue rather than its original purpose of protecting creative works against debasement.
Re: Bird numbers
Well this is only "anecdotal", but I had a wide variety of small songbirds visiting my garden bird feeders for several years - I usually had to refill them daily. Then two neighbours introduced three young cats around Christmas time 2011. This year I have only recorded two visits to my bird feeders since January, and the untouched seed goes mouldy in the feeders.
A huge amount of the comment here and elsewhere on this research anthropomorphises cats - accusing them of "murder", "torture" &c. &c. All this misses the point entirely. Cats are much more hard-wired than many of us would like to believe. They are to a large extent stimulus-driven automata, pre-programmed to pounce on small animals that move in their field of awareness.
That means it's the responsibility of the "owner" (although nobody really 'owns' a cat - it simply occupies a territory that you may also occupy) to minimise the damage a cat can do - particularly in densely populated urban environments. The simplest fix is a collar with a bell on it, but it has to be a sensible bell, not the tiny token gesture fitted as standard to most commercial cat collars.
A bell does not so much alert the prey as distract the cat by spoiling its stealth as it springs - provided the cat can hear the bell and it is fitted when the cat is young enough. If it works, operant conditioning eventually sets in, reducing the incidence of the predatory behaviour.
Nevertheless, the biggest problem for prey species is not the behaviour of the individual cat but excessive predator density. Where I live, nine or ten cats have "homes" within an area of one acre (18 residences). This is at least 20 times the natural predator density, and is only sustainable for the predators because the cats are artificially fed. It is, however, completely unsustainable for many of the prey species.
not quite the right approach
Despite the prevalent myth of the Superhacker, there's plenty of solid evidence that most breaches are total pushovers. Just for example, Verizon's 2012 report (on 2011 data) concluded that 96% (4% more than the previous year) of attacks were "not highly difficult" and that "97% of breaches were avoidable through simple or intermediate controls".
So what we really need is not a few expensive cyber whiz kids on short term assignment for the duration of the London jamboree, but for ordinary IT staff at all levels to be competent in basic security housekeeping. It would be much safer and vastly more cost-effective, and would also release the real experts to protect us against the occasional attacks that are not so trivial.
However, it's not in the interest of the attackers, the defenders or indeed many security researchers to point out how easy cyber attacks currently are to accomplish, as they would all lose face (and, in many cases, huge revenue streams or big salaries). So we are kept in ignorance by an informal (albeit uncomfortable) collaboration of deception on the part of pretty much all those who know the real situation. It would be incredibly difficult for government to justify proposed levels of expenditure on "cyber defence" if it were well known that the vast majority of their appallingly frequent security problems stem from the incompetence and slackness of the implementers and defenders of their systems. But we are up against a very determined adversary, so we have only one real choice - face facts or lose.
Maybe this explains why so many web applications are security nightmares? See http://www.infosecurity-magazine.com/blog/2011/12/14/software-insecurity-thrives/474.aspx
What is "rising carbon"?
"...rising carbon precedes accelerating warming"
Perhaps someone could explain what "rising carbon" is? This kind of sloppy thinking contributes to the huge confusion that surrounds "global warming" in the public mind. Please let's get our fundamental terminology right - shouldn't be too hard.
Re: Major version 11 introduces auto update
Yes, it'd be rather nice if someone (anyone) could write software without crass errors in it after all these years.
It's a small single board computer - so?
I fail to see how this device will help many school kids get to real grips with microprocessor technologies. In the late '60s-early '80s we bought chips, obtained the support manuals, learned the device architectures and instruction sets and built ourselves (admittedly idiosyncratic and limited) microsystems from scratch, and we learned to do all this without formal instruction by trial and error. We could do this solely because the devices were simple and transparently documented.
This device, neat as it is, is extremely complex and non-transparent in terms of hardware and, by virtue of using a high level OS, presents to the user such an abstracted view of the machine that very little more can be learned than would be possible using a conventional PC running Linux.
A much better starting point for imparting fundamental principles to school kids would be based on a simple device such as a mid-range PIC (for which many affordable demonstrators are already available), coupled with programming in C and assembler. The essential task is not to take the current "hacker kids" to a higher level (they're already self-motivated enough) but to bring a basic understanding of systems principles to as wide a sector of the population as possible - so we must start simple. This offering sits halfway home, rather than at the starting line.
Providing useful information
Useful place to report service outage - on the web-based status page!
not "as seriously" but more seriously
I'm concerned about the idea that posting on a social network is in any way comparable to pinning a photo on an internal noticeboard. See http://threatpost.com/en_us/blogs/twenty-something-asks-facebook-his-file-and-gets-it-all-1200-pages-121311 to find out one of the big differences
Not only is making the link a potential invasion of privacy, the mere envisaged use of the means to do so brings it squarely within the provisions of the UK Data Protection Act. This covers information that identifies a living person and information that would do so if combined with other information that might come into the Data Controller's possession. If an established mechanism is in place for obtaining the further information, it's a cut and dried case - unless there are exemptions for law enforcement. There might be, via other legislation.
trigger a risk?
"may in turn trigger a governance and compliance risk." How do you "trigger" something that's an intrinsic attribute of an object or process? "Incident" may be what is meant here. The word "risk" is so widely misused in the IT community I'm surprised it still means anything at all.
"don't worry - be happy..."
Reminds me of the production lines in the movie THX 1138
"There can be ... 120TB of data written to the 240GB Ultra"
That's only 500 full-volume write cycles. Which makes such a device very poor for large volume writes such as disk backups, and, given the way SSD handles erasure, only moderate for small volume dynamically updated data. So what's it best for? Storing your photos and MP3s I guess.
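The arithmetic behind that figure takes two lines to check (values taken from the endurance claim quoted above):

```python
# Endurance figures from the quote: 120 TB total writes, 240 GB capacity.
tbw_bytes = 120 * 10**12        # total bytes written over the drive's life
capacity_bytes = 240 * 10**9    # drive capacity

full_volume_cycles = tbw_bytes // capacity_bytes
print(full_volume_cycles)       # 500 full-volume write cycles
```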
"...staging a public hanging of whoever set up robots.txt on its Website..."
Anyone who thinks robots.txt is a security feature that would have protected against this kind of leak needs their head examined. The real problem here is a wide-open database.
Which one per cent?
"...only going to be able to simulate about 1 per cent of the complexity inherent in the human brain"
So which one per cent are they going to choose? The classic error of artificial intelligence gurus that keeps the intelligence truly artificial is to consider the brain as a single entity with intellect as its prime purpose. The brain is actually an integrated assemblage of several organs - evolved independently and performing multiple separate functions (admittedly with many local overlaps). But fundamentally, as Robt. Ornstein pointed out some 30 years ago, the brain is a body controller. The rest is extra. So an arbitrary replica of one per cent of the number and interconnections of synapses in an average brain dedicated to intellectual processing is not a model of the brain - nor would a similar replica of 100% of the number be.
So this is all good fun, but really has nothing to do with a decent quality human brain. The Californian definition of artificial intelligence draws from the Californian definition of intelligence, and that of Southampton similarly - but they may not be identical, and neither may be all that representative of the real thing. Mr. Spock was a non-feasible fantasy too.
Plus, think of the energy consumption. I can work till lunchtime on a couple of slices of toast and some coffee. A million CPUs (even ARM CPUs) are going to clock up some serious electricity bills. Then you have maintenance - we'll be back to the reliability problems of the early vacuum tube computers. So why bother Steve?
More than just "cookies"
Having participated in a UK forum on this legislation I feel I should point out that Mr. Roper is mistaken on an important count. The European legislation does _not_ just relate to cookies in the strict technical sense - it relates to all tracking methods, and the exemption for functionality is being very narrowly interpreted.
The underlying aim of the legislation is self-management of personal privacy, so that makes perfect sense. I have actually raised the issue of server-side session-to-session state with the ICO and have been told it does come within the remit of the legislation unless it is strictly and solely used for direct benefit to the user.
A Clear and Present Threat
There are so many ways connectivity can fail -
The benefits of a fully redundant cloud data centre are only any good if you can reach it. But there's more at stake than just access to the cloud. We're rapidly replacing a wide range of well-proven infrastructure with faster, more convenient but much less reliable alternatives - and also converging all our communications onto fewer and fewer of those less reliable channels. There'll be a tipping point very soon.
Are these your only assets?
Not a bad notion - but it doesn't cover the whole ground by a long chalk. Your most important assets are your business information. These need to appear on the inventory too, otherwise you're just talking about empty boxes.
ISO 27001/2 requires you to identify your "most important information assets", but how can you do that unless you know about all of them? For much too long we have concentrated on the technologies at the expense of their business purpose. Information is what matters to your business - your IT is just a tool for making use of it.
What was the project really?
"Amalgamation of four separate authorities, 270 offices ... reduced into four hubs..." Was this primarily a Windows 7 rollout, or a business restructuring with Windows 7 thrown in? I think the distinction is critical to the argument for its results. Can it really be said that moving to Windows 7 per se will save "some £85 million over 25 years" as mfraz has queried, or is it the reduction of an excessively complex business structure to something more practicable that might achieve this? Why should Microsoft get the credit for a business rationalisation programme undertaken by Wiltshire County Council?
Wots "electrical culture" then?
What a horrendous mess
The whole concept of TLDs was to have a small - repeat small - forest of strictly hierarchical trees that could be easily parsed and verified. Instead we'll end up with a massive jungle of inconsistencies that will be unverifiable, opening the doors to confusion, malicious abuse and plain old-fashioned error.
Even the basic task of parsing a user-supplied URL for validity on a web form will be impossible, as there will be no predictable extent or content for the TLD.
But what the heck? We already have the same problem with RFC 5322-compliant email addresses - right from the start. The entire infrastructure is creaking to ruin as we bow low to the almighty buck.
Nice to be free of that responsibility
The Cabinet Office's interim executive director for digital government - quote: "I don't sit around calculating how much we will save"
Isn't that part of his remit then? How comforting that must be.
"...to be stolen by an attacker by reading unitialised data from graphics memory". First, I'm wondering what "unitialised data" are. Supposing they mean uninitialised memory, it's impossible to "steal" data from uninitialised memory - by definition it doesn't contain data as you haven't put any there. If they mean residual data from previous memory writes, they should say so. This looks like sloppy thinking. But most of the problems we face in IT security result from sloppy thinking - this is just another example of it.
It's all the fault of the stupid user - of course
This is the same stupid argument that has been around since Noah - that the user is responsible for covering the provider against attacks on the infrastructure.
It should be obvious that once the systems are breached, passwords are moot. It's the responsibility of the provider to [a] hash the passwords, [b] salt them before hashing, and [c] harden the hash database server against attack.
Passing the buck to the end user by insisting on unmanageable "strong" passwords while leaving the infrastructure wide open [a] doesn't work, [b] is an unwarrantable imposition on the user and [c] is a poor excuse for incompetent systems management. We have to stop doing it.
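Points [a] and [b] amount to a few lines of standard-library code - this is a minimal sketch, with an illustrative salt size and iteration count rather than a recommendation for any particular system:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Hash a password with a per-user random salt and a slow KDF."""
    if salt is None:
        salt = os.urandom(16)                       # [b] unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256",          # [a] never store the plaintext
                                 password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)    # constant-time comparison
```

With salting, two users who choose the same password still get different stored hashes, so a leaked table can't be attacked with one precomputed lookup. Point [c] - hardening the server itself - is outside the scope of a code sketch.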
"It would cost far less to perform thorough penetration tests than to suffer the loss of trust, fines, disclosure costs and loss of reputation these incidents have resulted in." - Chester Wisniewski at Sophos.
Since SQL injection is solely the result of failing to validate user input - which is the most elementary newbie programming error on Earth - maybe a better way to reduce incidents (and thus costs, loss of reputation &c. &c.) would be to employ people to write your web applications who are actually minimally competent.
While we allow inattentive fools to write bug-ridden code, relying exclusively on post-coding checks to find the foul-ups, we'll never raise the abysmal quality of software development.
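And the competent fix really is elementary - a parameterised query treats user input as data, never as executable SQL. A minimal sketch (table and values invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"   # a classic injection payload

# Vulnerable: string formatting splices the payload into the SQL itself,
# so the WHERE clause becomes  name = 'alice' OR '1'='1'  (always true).
# query = "SELECT * FROM users WHERE name = '%s'" % user_input

# Safe: the placeholder keeps the payload as a plain string value.
rows = conn.execute("SELECT * FROM users WHERE name = ?",
                    (user_input,)).fetchall()
print(rows)   # [] - the payload matches no user instead of dumping the table
```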
A much more serious aspect of von Neumann Architecture
A fundamental attribute of the von Neumann architecture this paper doesn't mention is that a common memory array contains both instruction codes and data. The decision as to whether a word fetched by the processor is to be interpreted as an instruction or as data depends entirely on the previous state of the machine - if the last fetch was the parameter of an instruction, this fetch is an instruction and so on. This represents a huge security vulnerability that has been systematically exploited in many ways for many years - "buffer overflow" and "stack overflow" attacks that cause maliciously injected data to be interpreted as machine instructions dominate the professional attack space. But even accidental loss of instruction pointer integrity can be extremely damaging - causing uncontrolled execution of arbitrary instructions, and it does happen, as in "hey, my machine locked up!".
The major contender architecture - Harvard - has separate instruction and data memories, and is widely used in industrial controllers, for the very reason that they have to be robust. Harvard architecture didn't take off in the office computer space due to the initial high cost of memory, but that's not been a major consideration for some time. I've been waiting for years for a Harvard architecture PC CPU, but in vain. Even a dual-stack operating system that segregated function call and return addresses from function parameters would be a huge step forward, even if it ran on a vN CPU. But nothing's being done. Instead we have numerous questionable sticking plasters such as random memory allocation, stack validation et al, which regularly prove their ineffectiveness due to the extent of the underlying festering wound - an almost unsecurable architecture. von Neumann was not considering security when he came up with his computing model.
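The dual-stack idea is easy to illustrate. In this toy model (names and structure invented for the sketch, not any real ABI), return addresses live on a control stack that no data write can reach, so even an unbounded overflow of the data stack cannot redirect execution:

```python
control_stack = []   # return addresses only - pushed and popped, never indexed
data_stack = []      # function parameters and locals only

def call(return_addr, *params):
    """Enter a function: control and data go to separate stacks."""
    control_stack.append(return_addr)
    data_stack.extend(params)

def sloppy_copy(junk):
    # an unbounded write - the classic overflow - lands only in data space
    data_stack.extend(junk)

call(0x4000, "arg1", "arg2")
sloppy_copy(["A"] * 10_000)          # overflow the data stack at will...
assert control_stack == [0x4000]     # ...the return address is untouched
```

On a conventional single stack, those 10,000 junk entries would sit directly on top of the return address, which is exactly what smashing attacks exploit.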
Some data is almost impossible for the average user to back up, thanks to the delightful MS Registry and the idiosyncrasies of applications. For example, my ftp client must store site credentials somewhere (probably in the registry), but I've never been able to find them. A user space backup won't include this critical data. Oh for a return to .ini files - they were easy to manage and maintain.
Quote: "the watchdog would far rather work with organisations towards this than resort to enforcement"
Surely, working with organisations to get things right _is_ enforcement?
Enforcement is taking steps to make something happen - in this case, to prevent data breaches. The alternative (fines) discussed here is not enforcement - it's punishment. They are not the same thing. However the distinction seems to be lost on almost everyone these days. Real enforcement reduces the need for punishment, but punishment does not serve effectively as enforcement - we have centuries of evidence for this. Extreme punishments have never deterred people in general from offending. And it's a matter of externalities in this case. A person who loses a laptop may get their employer fined, and that might lead to their own dismissal, but the next person in line will not be permanently scared by that into being more careful.
The real reason for this and similar foul-ups from all vendors is the appallingly low level of expertise among developers/programmers. Sure they can _code_ but they clearly can't see the functional implications of the code they create.
Until software development becomes a genuine engineering discipline performed according to sound core principles by attentive, thoughtful and competent people, things will never change for the better, and we'll go on having to apply streams of patches. Would you fly in a plane that required "patching" every few days? Not bloody likely! So why do we tolerate it in software? Probably only because the vendors already have us by the balls, so we hand them our hearts and minds.
perpetuating the fundamental error
This initiative is likely to do little more than perpetuate the error of considering "cyber security" as a technological issue. It isn't - it's a conceptual issue. Its current state of weakness is a function of the same appalling quality of risk judgement that is increasingly evident in national policy decision-making (Katrina, Homeland Security, banking &c.). We have become so dependent on rule-based systems (both technological and social/legislative) that we have effectively ceased to be able to think flexibly and holistically. As a result we race behind the bad guys fixing a cascade of symptoms, unable to recognise, let alone address, the fundamental disease.
Contrary to popular opinion, software development is not such an overwhelmingly complex activity that it's impossible to create error-free code. You just have to pay attention, really understand what you're doing, and, most importantly, actually care about what you're delivering. It seems the majority of developers/programmers don't, don't and don't - not because they use abstracted high-level development tools but because they rely on such tools to absolve them from taking the personal responsibility for getting it right. It's an attitude problem before all, and is no different from the almost universal desire of our student population to get the degree without having to make the effort required to actually learn the subject.
We need people in charge of our security (and that includes not only "security specialists" but also application and service designers, programmers, testers, deployers, service managers and users) who actively seek to bear the requisite responsibility for fulfilling that task. Such people will make sure of their own accord that they are sufficiently competent to do so. Absent that attitude, no training programme will help.
Sure, but you won't find many of these running Windows on the office PC, will you. That's where the biggest target is. Quite apart from which, the conventional OS stack (still pretty much as derived from the K&R C stack) is a huge contributor to the problem, and that's a higher level issue than the processor choice.
Hold on! We're solving the wrong problem
We have an intrinsically insecure architecture right down to chip level, wherein data and instructions are only distinguished from each other by context at runtime, and we've replicated the problem at OS level via the ludicrous stack definition that allows data, parameters and return addresses of functions to reside adjacent to each other.
We've used this architecture in mainstream commercial microsystems since the year dot and it lets us down more and more often as time passes. Most successful exploits rely on abusing this single weakness in one way or another. Fancy tricks like ASLR and DEP are merely plasters that cover an increasingly festering ancient wound.
But there has been an alternative for ages - Harvard architecture, which segregates code and data in separate and completely independent memories. It's widely used in embedded controllers such as the PIC family, and would make practically all exploits of the type discussed here impossible. Why on earth don't we create a mainstream Harvard processor? Why in the interim don't we create a virtual Harvard OS?
This is a fundamental conceptual flaw - not a specific of Flash, IE or anything else at the application level. It's time we dealt with the real problem, not just went on tampering with its symptoms.
password protected but not encrypted
Has anyone stopped to consider how the legitimate user accesses an encrypted drive? Using a password maybe? If so, although the encryption protects against reading the raw drive if removed from the system, it does little more than the password to protect the entire running system.
The strongest protection for an entire system against casual or brute force attack at the login interface is a limitation on password retries, and although this can be specified in system policies it's hardly ever done. Other attack scenarios (and they're numerous) require different approaches. Encryption solves some of them but leaves others untouched.
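The retry limitation described above takes only a few lines to express. This sketch (parameter values arbitrary, in-memory state for illustration) is the sort of policy that's "hardly ever done":

```python
import time

MAX_RETRIES = 3
WINDOW = 15 * 60          # seconds: three failures per 15 minutes locks you out

_failures = {}            # username -> timestamps of recent failed attempts

def login_allowed(user, now=None):
    """Permit an attempt only while recent failures stay under the limit."""
    now = time.time() if now is None else now
    recent = [t for t in _failures.get(user, []) if now - t < WINDOW]
    _failures[user] = recent          # forget failures older than the window
    return len(recent) < MAX_RETRIES

def record_failure(user, now=None):
    now = time.time() if now is None else now
    _failures.setdefault(user, []).append(now)
```

After the third failure the attacker must wait out the window, which turns an online brute force attack from millions of guesses per second into a dozen per hour.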
When will we stop insisting on limited pseudo-panaceas for security without undertaking proper analysis of the realities of the problems?
Serious potential impact
In the days of analogue phones we just talked to each other. Now, we're being driven to use our phones for authentication and financial transactions. The pickings are massively rich, so it's going to be well worth a few thousand dollars to a bunch of "agencies" who will rent out their services to the underworld. Goodbye secure login, goodbye bank balance. Cheques are just so robust by comparison.
The real reasons behind this idiocy
It's easy to accuse people (particularly politicians) of evil motives, but the reality is much sadder than that. These idiotic attempts to centrally control and administer human relationships are invented with the best possible intentions by narrow minded inhuman zombies drawn from the same population as the rest of us.
We have a culture in which competition and aggressiveness dominate and are lauded - in which everyone is expected to exploit or be exploited (in many cases, both). As a result we have poor social cohesion (i.e. nobody much cares about anyone else) and trust is difficult to establish.
The optimum solution to this would be to encourage people to take responsibility for each other within the community (as indeed was done in the not too distant past). That would help people learn again to care and trust, but it's a slow process that takes much longer than the life of a government. It would also entail some dismantling of "consumer culture" that would be resisted hand over fist by the "marketplace". For proof, just look at the attitude of bankers to bonus limitation.
Because government always seeks "quick wins", the alternative that comes most readily to mind is to legislate for control. It doesn't achieve the ostensible objective, but that doesn't matter. It does achieve another objective - a sense of having taken decisive action.
The really sad part is that impersonal rule-based attempts at regulation of social and personal interactions (all the way from "political correctness" to this prospective debacle) undermine the opportunities of ordinary people to learn to fend for themselves by making sound individual judgements based on caring. Just as when pocket calculators became commonplace the skill of mental arithmetic was largely abandoned, people are already relying on database-derived rubber stamps of personal probity rather than paying attention and using their intuition to decide for themselves who is OK and who is creepy. This legislation will only exacerbate the trend.
To anticipate the counter-argument - yes, in the absence of the legislation there would be the occasional accident. That's very sad but unavoidable in life, and there's absolutely no warranty that in its presence their number will be fewer. The regulation of trust is itself no guarantee of safety. Quite the opposite - it will in the long term make people more vulnerable to the abuses it is attempting to control. The worst possible condition (to which we are being inexorably driven by increasing centralised rule-based administration of human relations) is to be physically alive and still at risk but emotionally dead, believing our safety lies in responding as automata to standard stimuli, whether commercial triggers to empty our wallets or social triggers to trust or not trust because a database entry tells us to. To quote Benjamin Franklin "Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety and will lose both".
No surprise there
The fundamental problem is that the people defining "password strength" can't do arithmetic and are stuck in the past. They don't understand what contribution symbol space and field size actually make to the equation so they just go for what "looks complicated", and their assumptions about brute forcing are based on decades-old histories of offline cracking of UNIX password files, which is not the main current threat.
The two greatest single strength factors against brute force at a user interface are limited retries and backoff time. After that, non-obvious password choice (e.g. not "password"). I always recommend an acronym of a private but memorable phrase at least eight words long. The user doesn't have to remember a complex string of arbitrary characters (something our brains are generally bad at). Instead she remembers the phrase (something our brains are quite good at) and reconstructs the password each time she needs it by repeating the phrase to herself as she enters the password.
Assuming nothing but lower case letters, that yields roughly 2x10^11 (2 followed by eleven zeros) possible passwords, and the vast majority will not be dictionary words (unless you intentionally choose a phrase that has a dictionary word as an acronym). So let's arbitrarily and pessimistically throw away half of them to allow for bad choices. It's still 10^11. So statistically a brute forcer will need to make around 5x10^10 attempts. Limit the login interface to three failed attempts per, say, 15 minute interval or 12 per hour, and it will take about 490 thousand years on average to break in. By then you should have had some kind of admin alert from the system.
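Those numbers check out - here is the same arithmetic spelled out, using the rate and the "throw away half" fudge from the paragraph above:

```python
# Acronym of an eight-word phrase, lowercase letters only.
space = 26 ** 8                  # ~2.09e11 possible passwords
usable = space // 2              # pessimistically discard half as bad choices
mean_attempts = usable // 2      # on average the attacker searches half the space
hours = mean_attempts / 12       # 3 retries per 15 minutes = 12 per hour
years = hours / (24 * 365.25)
print(f"{years:,.0f}")           # about 496,000 years
```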
"IT Governance" is an unfortunate term - it presupposes that business process owners should actually be interested in the technologies for their own sake. They shouldn't be primarily interested in technologies - they should concentrate on making their business processes cheaper/faster/smoother/more robust. The technologies are just means to those ends.
What should matter to business is information, not technologies, so what we really need is Information Governance, not IT Governance. Concentrating on the "IT" causes the technologies to become both the driving force and the bone of contention while the business processes are often left to languish. There is indeed such a thing as "the Business" - we've just got into the habit of ignoring it in favour of a techie-led culture of change for the sake of change. The results are all around us - gross inefficiency, poor returns on IT investment and a landscape littered with failed IT projects.