"the variance in security controls"
'variance' does not mean 'variability'.
the new fivers are already developing permanent sharp creases, as the material seems unable to relax after being folded. This could well shorten their effective life. Also, more than one shopkeeper, and a bank teller, have told me they're difficult to count quickly because they don't pick up on the fingers like paper.
All he needs now is a fluffy white cat and a volcano to live in
"While biometrics are just another kind of shared secret,..."
Oh no they're not. Any biometric can serve only as an identifier, not an authenticator. An identifier is permitted to be public (e.g. your name); an authenticator must be private to the legitimate parties (a shared secret).
Two fundamental and essential characteristics of an authenticator are that it can be changed and revoked. As a biometric can be neither changed nor revoked, and in many cases cannot even be kept private (e.g. fingerprints and DNA are left behind everywhere you go), it cannot legitimately be used as an authenticator.
It would be so nice if this basic principle would finally sink in...
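The change/revoke distinction can be made concrete with a toy sketch (Python; all names here are illustrative, not any real API): the username is a public, fixed identifier, while the password hash is a private authenticator that can be rotated or revoked at will - operations a fingerprint simply doesn't offer.

```python
import hashlib
import secrets

class Account:
    """Toy model of identifier vs authenticator (illustrative only)."""

    def __init__(self, username: str):
        self.username = username      # identifier: public and fixed for life
        self._secret_hash = None      # authenticator: private, changeable

    def set_password(self, password: str) -> None:
        # Rotating the authenticator is trivial; a biometric has no
        # equivalent operation once it leaks.
        self._secret_hash = hashlib.sha256(password.encode()).hexdigest()

    def revoke(self) -> None:
        self._secret_hash = None      # locked until a new secret is set

    def authenticate(self, password: str) -> bool:
        if self._secret_hash is None:
            return False
        candidate = hashlib.sha256(password.encode()).hexdigest()
        return secrets.compare_digest(self._secret_hash, candidate)

acct = Account("alice")
acct.set_password("my cat juggles teaspoons")
assert acct.authenticate("my cat juggles teaspoons")
acct.set_password("new phrase after a suspected leak")   # changed at will
assert not acct.authenticate("my cat juggles teaspoons")
acct.revoke()                                            # revoked at will
assert not acct.authenticate("new phrase after a suspected leak")
```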
"...over a quarter (26 per cent) believed the data encrypted wasn’t valuable or confidential, and hence was not worth paying for."
Why keep it then? If it isn't an asset it's automatically a liability.
You're right there's no simplistic explanation, but this 2008 paper
provides some very interesting insights.
Where does this "ram packed" and "rammed" come from? The train would have been rammed if another train had run into it. This train was just jam packed.
So once again, as almost everywhere we have pseudo-education: folks just off crash courses passing on what they think they remember to those who cannot judge. But I forgot - Teaching is the real skill, subjects are just bags of facts. I obviously wasted decades studying systems engineering.
As a matter of fact I may have, given the culture. I have taught in various regions here in Blighty, and on every occasion bar one I have been handed a "Tutor Pack" containing everything necessary for the course - including crib sheets of acceptable answers to all the test questions. Almost anyone could "deliver" a course from that, without any subject matter expertise. Indeed one of my students complimented me on my ability to answer his questions, stating that my predecessor always reached for and thumbed through the textbook when asked anything.
So our problem is not the quality or content of this or that syllabus - they are merely symptoms of shatteringly low expectations of both students and teachers. While we continue to impart very little, very little will result. This may explain to some extent the already abysmal and declining quality of engineering products - particularly in the software driven space. As long ago as the 1920s Owen Barfield coined the term "dashboard knowledge" for the capacity to do things by manipulating knobs and levers without any understanding of how they work, and this is what is primarily being "taught" - "How-tos" rather than the understanding of principles. This directly contributed to the Chernobyl nuclear incident, and is clearly implicated in a huge and proliferating number of broken systems from office software that needs monthly repairs to hackable "internet things" and military drones that think they've landed when they're still in the air.
"using Windows when they can -- most of the holes have been discovered" - if that's the case why do we still have monthly Update Tuesdays? Some of the holes have been discovered, but it's unreasonable to assume "most" as we just can't tell how many more there are. There's never been an OS or a major application from any vendor that has ceased to need patching before it was superseded by a "new version".
Before we assume, as this consultation seems to do, that autonomous vehicles are "the way forward" and all we have to consider are a few procedural and regulatory issues, I'd like to see the following test performed at least once (preferably more than once):
take around 200 autonomous vehicles and set them off in the rush hour, alongside other traffic, down the six roads that enter the Hemel Hempstead Plough Roundabout (National Grid ref: TL0549706394), with the aim of crossing the roundabout and exiting on various different roads. Then see what happens.
This roundabout consists of six mini-roundabouts surrounding a bidirectional central roundabout, and is quite a challenge for human drivers when it's busy. Any fool computer can drive down a motorway in steady traffic, but this would test its capacities realistically.
Embrace Zombie - the new innovative development framework that allows you to generate terabytes of code without thinking at all. We plan to train 10 million Zombie developers worldwide by this time next year.
Oooops - too late, we already have them...
Here is a sample standard letter for restriction of medical records sharing, created when this scheme was first proposed. It might still be of use.
"I absolutely prohibit in perpetuity any sharing of my medical records with any person, other legal entity or agency, except in the specific cases of access to my records with my explicit consent, or exclusively for therapeutic purposes in support of treatment of a medical condition with which I present, or where required without the option by statute or order of the Court.
For avoidance of doubt, this prohibition applies to any current or proposed scheme of medical records sharing envisaged or planned at the date of this letter and equally to any plan or scheme of medical records sharing to be conceived, invented or proposed at any time in the future."
"The paper concludes that a new approach is needed where policy making should lead technology; not vice versa." - from an organisation promoting this concept in an online paper entitled "Fulltext.pdf"
Might "doing things wronger" include posting a paper for download with a filename of Fulltext.pdf? Relying on the file path to define the nature of the document seems very similar to the sort of thing that is being castigated. Minor example maybe, but it highlights the fundamental problem - failure to think before acting.
The most fundamental principle of the world wide web from the very start was endpoint agnosticism - the ability of any browser to get to the content, independent of presentation. I have no objection to bells and whistles - provided they are optional and don't prevent basic access to the content.
Web developers who create sites like this do their clients a huge disservice - they deny intelligent and informed users access to the content.
it would be nice to be able to read the original
It'd be really nice if the author were to mention what country he's talking about. I didn't realise this was about Oz until I saw the graph caption near the end. This is not by any means a unique instance.
"You can prioritise blocking attacks.
You can develop processes that let you respond to attacks.
Or you can put most effort into cleaning up after an attack."
Put that way it sounds really stupid - they are not mutually exclusive. You need to do all three in just proportion all the time. Getting the balance right is the key to success, and it may not be a static balance - the priorities can change depending on what's happening right now, so you need to be continuously attuned to the threat space. Ergo, being aware of the changing threat space is always your highest priority.
Nothing discussed here is really infosec - it's ITsec. ITsec is a small part (maybe 30%) of infosec. Conflating the two is the error that almost everyone makes and it results in a technocentric view that fails to deliver real security however much you spend. Infosec is about management of risk - ITsec is about choosing and deploying defensive technologies. Unless this is done with reference to business risk, it will be at best very expensive and at worst both very expensive and a failure.
"24/7/365 support". Talk about over-working the team - 24 hours a day, seven days a week, 365 weeks a ... oh, hang on! Something's not quite right here. In the real support world, you provide either 24/365 or 24/7/52.
As always there has to be a happy medium (something nobody seems to have ever managed to achieve sustainably).
However, what has been happening for some time is that against an objective standard of best available performance, median performance has been declining so we're all becoming "rejects". Here, maybe, is a reason. Instead of, as in the past, creating technologies primarily to enhance innate capacities, for some time we've been creating them to supplant those capacities, so the innate capacities are allowed to atrophy. It's even beginning to show in the quality of the supplanting technologies, as people with atrophied capacities have entered the roles of creator, designer and QA inspector. Evidence of this is readily to hand - witness the appalling quality of software, even in mission- and life-critical systems.
who's running a flat network then? I see this all the time - exclusive reliance on Active Directory for control over access to resources over an otherwise exhaustively interconnected user network. Apparently nobody's heard of network segregation.
"When a web site is able to 'remind' you of your password by emailing it back, that's a symptom of very poor security practices."
Ironically the Register did this the last time I forgot my password. I still have the email containing my password in clear in the body of the message.
"distrust turned to uncertainty; uncertainty to excitement; excitement to disappointment; disappointment to acceptance; acceptance to affection."
Exactly the process of hostage conversion - right up to Stockholm Syndrome
Actually, the distinction seems to be whether this device (the smart phone and/or app) is actually calculating the charge. It has been decided that it is only reporting the charge, which is calculated elsewhere - hence it's not a taximeter.
No it doesn't - it serves a good chunk of the world wide web. The web is not the internet.
"...Cyber Essentials – the UK government-backed scheme which protects businesses against the most common threats on the internet."
Cyber Essentials Basic just requires an attestation that specific minimal security technologies (e.g. antivirus and a firewall) and practices (e.g. patching) are in place - not even that they're actually working. Cyber Essentials Plus adds an annual, point-in-time penetration test, which of course does not actually prove they are working properly, only that they haven't absolutely failed at the time of the test. Furthermore, the originators of Cyber Essentials explicitly limited its scope to the most elementary low grade threats, and even there it's only the equivalent of an MOT ("annual vehicle safety test" for those of you in foreign parts).
I actually recommended that the Cyber Essentials Basic attestation should include a CMM-based self-assessment of the level to which these minimal technologies and processes are managed, but the suggestion was ignored. Consequently Cyber Essentials does not really protect against much at all.
Really? It's often a good idea to read the original report before summarising it.
Actually, the malicious code is hidden in image LINKS.
The very first sentence of the original report states this clearly: "Yesterday a vulnerability was discovered that made it possible to inject malicious code into an image link on Imgur."
Come on Reg - you're not a red top!
'carton' (картон) doesn't mean carton (a box) in Russian - it means 'cardboard'.
Why could it be that two letters of mine this year relating to important issues, sent directly to the ministers responsible, have elicited zero response but the govt is dead keen to find out at our expense what some of us have tweeted about it?
Tell all that to the guy who did the forensics in Bookout v. Toyota - several really basic systems design and programming errors. But we're not just blaming coders here, nor just the auto industry - there have recently been some quite spectacular aeronautical software design snafus.
The bottom line is that the whole software development process still fails to meet the standards expected of all other branches of engineering. And falling back on 'testing' is not an appropriate solution. We don't build bridges without doing the math and then just test them by running trucks across (we used to: there's a famous 19th century verse about Crystal Palace, London that goes "... the sappers and miners who marched and who ran ... To test the girders to Paxton's plan ..." but we've advanced beyond that by now).
So the reality is that software engineering is not yet a mature enough discipline to apply with confidence to safety-critical systems. With luck and persistence it may become so, but presently it's too damned dangerous to trust your life to software.
"One of the purported benefits of public cloud is you no longer need to buy and maintain your own servers – they become the responsibility of somebody else."
Oh no they don't - they get to be _managed_ by somebody else, but the responsibility remains firmly in your corporate lap. That actually increases your exposure, as you can't control the screw-ups of your providers.
The simplest one is that GDS has conclusively demonstrated via a succession of projects that they couldn't design their way out of a wet paper bag. Any other explanation needed?
but still clearly incapable of maintaining a coherent train of thought or coping with basic grammar:
"not sell our personal information and preferences for money, and will make it clearer if the company/website intends to do so."
Actually worse for you than that. No hash can be collision-free once the plaintext space exceeds the hash space: if the total length of the clear text exceeds the length of the hash (in bits), there _will_ be (not just may be) collisions - that's the pigeonhole principle. So very long plaintexts (regardless of their make-up) actually make the attacker's job more rewarding, as brute forcing a given hash may yield more than one plaintext. Thus the attacker can potentially obtain more credentials from the same number of captured hashes.
However your '50% probability' depends on the hashing algorithm's transfer function having a uniform distribution. I'm not sure whether it does, but I'd be surprised if it did considering the principle of how it works.
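The pigeonhole argument is easy to demonstrate (Python sketch; SHA-256 is truncated to 12 bits purely so the collision appears quickly - this is not a real credential store):

```python
import hashlib

BITS = 12   # illustrative only: small enough to find a collision fast

def tiny_hash(text: str) -> int:
    """SHA-256 truncated to BITS bits - a stand-in for any n-bit hash."""
    digest = hashlib.sha256(text.encode()).digest()
    return int.from_bytes(digest, "big") & ((1 << BITS) - 1)

# An n-bit hash has only 2**n possible outputs, so feeding it 2**n + 1
# distinct plaintexts guarantees at least one collision (pigeonhole).
seen = {}
collision = None
for i in range(2 ** BITS + 1):
    text = f"plaintext-{i}"
    h = tiny_hash(text)
    if h in seen:
        collision = (seen[h], text)
        break
    seen[h] = text

# Guaranteed to exist; in practice one turns up around the birthday bound
# (roughly 2**(BITS/2), i.e. about 64 inputs here).
assert collision is not None
assert tiny_hash(collision[0]) == tiny_hash(collision[1])
```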
And that excuse too. "An attack in this class..." - what class? We don't seem to have any details yet, but as a security professional I'm regularly less than amazed when the latest "sophisticated attack" eventually turns out to have been a total push-over that circumvents deficient or degraded controls. Our biggest problem is that the "defenders" only defend reactively, but the attackers are proactive. If we managed our systems (and our business processes) robustly, a lot of these attacks would bounce off without doing much (or any) harm. But we just skirmish defensively in a guerrilla war in the enemy's territory, so we keep losing.
A nicely conducted piece of statistical research, telling us what we've actually known for years. The entire "character set + template" approach to authentication credential creation is well recognised by both experts in systems and psychologists to be flawed, but we're stuck with it because the people defining login requirements currently have no understanding of either.
The silliest recommendation after "character set + template" is the supposedly random character string. This is grounded in a misunderstanding (and misapplication) of Shannon entropy, and fundamentally fails because (even if generated by a true random process) no-one (OK, maybe one in a million) can remember it. It's actually impossible for a human to create because the mind can't wrap round true randomness - what looks like a "random string" to a human is usually biased to emphasise a small subset of the possible code space.
Even the random word sequence advocates ("horse staple ...") have it wrong. The essence of a robust authentication credential subsists in three requirements:
 it must be long enough to make brute forcing hard - the required length will change with time and the criticality of what is being protected;
 it must be memorable to its creator - so in principle it must mean something to him or her;
 it must not be readily guessable by anyone else - so a problem arises for folks who are not very original ;-)
Within the string space fulfilling these three requirements, the strongest strings against guessing attacks will be the ones that conform least well to a common template. So the best rule set will contain the fewest, simplest rules. Here's my take with commentary in square brackets:
"A logon credential [note that we intentionally don't say 'password'] is not to allow you access to our systems - it's to prevent anyone else gaining access by pretending to be you. It must therefore be easy for you to remember but difficult for anyone else to guess. To achieve this, here are some basic guidelines:
 think up a memorable but not well known phrase or sentence of at least four words totalling at least 15 characters [reasonable length at time of writing, but may need to increase]. This phrase should mean something to you to make it easy to remember, so be imaginative, consider using humour and/or your native language.
certain obvious words are blocked and therefore cannot be used, including [e.g.] your user name, the company name or date words (month and day names) [but keep the excluded words list to a minimum to avoid user frustration].
 you may, but are not obliged to, separate the words in your phrase with non-alpha symbols."
Not the ultimate maybe, but probably a better start than the standard rules that render all words in any dictionary illegal (rather a challenge for a literate user) but permit 'Pa55w0rd!'. I've written about this elsewhere (http://intinfosec.com/library/policies/2011-Instant_Compliance_for_a_Grand.pdf)
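As an illustration only, the guidelines above could be checked mechanically along these lines (Python; the function name and the blocked-word list are invented for this sketch, not a production policy):

```python
import re

# Keep the blocked-word list short to avoid user frustration (illustrative).
BLOCKED = {"password", "acme", "january", "monday"}

def check_passphrase(phrase: str, username: str) -> list:
    """Return a list of problems; an empty list means the phrase passes."""
    problems = []
    # Words are runs of letters; anything else is an optional separator.
    words = re.findall(r"[a-z]+", phrase.lower())
    if len(words) < 4:
        problems.append("use a phrase of at least four words")
    if len(phrase) < 15:
        problems.append("use at least 15 characters in total")
    if any(w in BLOCKED | {username.lower()} for w in words):
        problems.append("avoid obvious words (user name, blocked list)")
    return problems

assert check_passphrase("my cat juggles teaspoons", "alice") == []
assert check_passphrase("Pa55w0rd!", "alice") != []   # too short, too few words
```

Note that the template-style 'Pa55w0rd!' fails here on length and word count alone, while a memorable four-word phrase sails through with no character-class gymnastics required.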
The item on Haskell included mention of its use to create a statistical analysis tool for assessing drug rehab clients, during which 'Dr. K' made the statement that it was a surprising use of such a mathematically oriented language. I've seldom heard such a silly statement from a supposed expert - a mathematical approach is essential to solving statistical problems, so a mathematically oriented language would in principle be the ideal choice.
This device (although less abstracted and obscure than the Raspberry Pi) is still too complex to really impart the fundamental concepts of computer technology. Kids would be vastly better served by a simple board carrying an 8-pin or 14-pin PIC, plus the device data sheet. The skills we are primarily short of (even among developers) are much nearer the metal than current programming practice encourages or imparts. A PIC solution would offer two key advantages: it would probably be cheaper, and the device architecture and instruction set are so simple that a child could grasp them in a few days, leading to basic understanding of machine architecture, Boolean logic and the electronics of interfacing, little or none of which is acquired by high level coding practice, particularly at school level.
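For a flavour of how little there is to grasp, here's a toy accumulator machine in the spirit of a small PIC (Python; the mnemonics are invented for this sketch and are not real PIC instructions):

```python
def run(program, port_in=0):
    """Toy single-register machine: a W-like accumulator plus one output port."""
    acc = 0                      # the working register, like PIC's W
    out = 0                      # a one-byte output "port"
    for op, arg in program:
        if op == "LOAD":         # load a literal into the accumulator
            acc = arg & 0xFF
        elif op == "ANDP":       # Boolean AND with the input port
            acc &= port_in
        elif op == "XOR":        # Boolean XOR with a literal
            acc ^= arg & 0xFF
        elif op == "OUT":        # latch the accumulator onto the output port
            out = acc
    return out

# Mask the low nibble of the input port, invert bit 0, drive the output:
prog = [("LOAD", 0x0F), ("ANDP", 0), ("XOR", 0x01), ("OUT", 0)]
assert run(prog, 0b10110110) == 0x07   # 0xB6 & 0x0F = 0x06; 0x06 ^ 0x01 = 0x07
```

Four instructions, one register, one port - yet it already exercises machine architecture, Boolean logic and I/O, which is the whole pedagogical point.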
The offending phish email (on the netcraft site) is not actually very convincing. I'm not going into details as I don't want to assist the perps, but there are several tell-tale signs that anyone who was paying attention would immediately spot. If you're not paying that much attention you'd get stung by anything!
Auntie is never wrong - even about trivial things. There is one presenter of the morning shipping forecast (Louise Lear) who, whenever the same conditions pertain in both the Forth and Tyne regions, always merges them into the non-existent "Forthtyne" with the stress on Forth. I've complained to Auntie several times over a period of several years, and all the responses I've received have simply stated that I'm mistaken.
If Auntie can deny such simple checkable matters of fact, she can deny anything.
I wish I knew where these knowledgeable programmers are hiding. Most "programmers" I've met can't even create bug-free code using "flat pack assembly" dev tools.
The (obviously) prevalent idea that patched=secure is spherical and plural, and always has been. It makes no more sense than "what you don't know can't hurt you" - indeed it's grounded in that false premise.
It's about time we stopped relying on reactive fixes based on blacklisting and got round to creating some real resilience - starting with the ability to write software that isn't littered with exploitable bugs.
It would have been generous to link to the report so we could read it for ourselves!
This is a classic example of the exact opposite of what is really needed. The prevalent technocentric approach to infosec has got us where we are, so doing more of it will not improve our state of security.
What is really needed (and in my experience as a security consultant is almost universally missing) is a robust security management framework consisting of a strategy that defines the security priorities of the organisation in terms of risk, tactics for addressing those priorities, and operational processes that fulfil the requirements defined by the tactics and strategy. The framework essentially needs to include monitoring and feedback to ensure that [a] perceived risk continues to accurately represent reality as things change, [b] control objectives have a realistic chance of protecting against threats, and [c] controls actually work.
Appointing techie "hackers" to oversee the security of a vast corporate (or indeed a government, as we seem to be doing here in the UK) is about as useful as appointing a bricklayer (however skilled) to oversee the building of a city.
We need to wake up to the reality that information security is primarily a problem of business process management. Yes - we can be attacked via technologies and we use technologies extensively to protect ourselves, but as in the case of JP Morgan http://www.theregister.co.uk/2014/12/23/jpmorgan_breach_probe_latest/ it's in BAU management that the weaknesses mostly manifest themselves.
"the much-scaled back pilot programme" - clearly an attempt by airlines to save on salaries by employing fish.
Just highlights the level of competence of our web developer community - they don't write code with care; they copy and paste from demos without attention. Explains a lot about the deluge of breaches.
Accumulating these acronyms does not mean they're intellectuals (although being one is not necessarily a bad thing in a sphere where unconsidered rote learning and rule of thumb still dominate) - it means they've put up the money to take a bunch of computer marked multiple choice pub quizzes. Expertise cannot be evaluated that way, but it does free those who select practitioners from the burden of knowing the subject. It also creates multiple closed shop cliques that can capitalise on the "mysteries" of narrow subsets of infosec - witness PCI DSS, which is in reality little more than basic good practice in infrastructure security and information management - things you should be doing as a matter of course across your whole estate - but has spawned a huge and very lucrative specialist consultancy and conference industry.
BTW, I recently saw a UK advert for a PCI security contractor at 450 quid a day (that's over US$170k per year) that specified "at least two years' IT security experience", and a recent survey of the security knowledge of software developers incidentally found that almost 50% of respondents in key fields including banking and systems software development had less than two years' experience. It appears therefore that the pub quizzes are a fast track for the inexperienced into lucrative security-related roles where they can earn a lot while perpetuating the insecurity of our infrastructure.
'men have evolved a greater spatial ability to "benefit reproductively ...'
Supposing this is a direct quote, it's pretty sad that scientists (even if only anthropologists) continue to promulgate the fallacy that evolution is directed to defined purposes. If it's not a direct quote, shame on el Reg for doing likewise.