Re: Not quite sure what the patent is for?
There was a DOS database (at least as early as 1987) called Q&A that accepted plain language queries.
Not everyone has a "smart phone" (even if that were to provide secure communication, which it doesn't), and for almost everyone email is the equivalent of a postcard written in pencil - it can be read and modified in transit. Indeed those who use "free" webmail may have their messages trawled for advertisement targeting as a matter of course.
The challenge is not "how to keep the 5% who can't or won't fit the model", it's to ensure that the "model" is fit for purpose and protects the vulnerable, not just created for the convenience of the least affected party regardless of consequences should the transaction miscarry.
a bus timetable?
the date and time of a rock concert?
the address of a retailer of underpants?
Does this not make seeking almost any information for any purpose potentially a criminal act?
It seems insufficient for there to be a "reasonable excuse defence" as this presupposes guilt, which is contrary to both the tradition of English law and the European Convention on Human Rights.
What we need more than anything else in those who legislate are the ability to think clearly and dispassionately and the ability to foresee unintended consequences. Neither ability currently seems much in evidence.
"I would rather use a traditional desktop version"
Every time they break O365 during an "update" or your broadband goes down you get stranded without your data.
On top of which, W2019 (or whatever desktop version) and your files are yours for life, whereas if your O365 subscription payment fails (e.g. you bank with TSB) you lose all your data.
On top of which, using a desktop version you're not sharing your files with a potentially interested third party with more legal clout than yourself to do as it pleases.
On top of which, promotion of continuous update as a benefit merely emphasises the need for constant fixes to a flawed product.
Although admittedly the Kasa encryption algorithm is rubbish, it's clearly not the Caesar cipher. The code illustrated shows that a starting key is XORed with the first data byte, the result is XORed with the next byte, and this is repeated iteratively to the end of the message. Consequently the key changes for each byte of the message. Not that this helps much, but it would be a little bit harder to crack unless the code were known.
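For anyone curious, the scheme as described boils down to a few lines of Python. This is a sketch reconstructed from the description above; the starting key of 171 is the value commonly reported for Kasa kit, so treat that constant as an assumption:

```python
def kasa_encrypt(data: bytes, key: int = 171) -> bytes:
    # Autokey XOR chain: the running key is XORed with each plaintext
    # byte, and the resulting ciphertext byte becomes the key for the
    # next byte - so the key changes at every position.
    out = bytearray()
    for b in data:
        key = key ^ b
        out.append(key)
    return bytes(out)

def kasa_decrypt(data: bytes, key: int = 171) -> bytes:
    # Reverse the chain: each plaintext byte is the previous
    # ciphertext byte XORed with the current one.
    out = bytearray()
    for b in data:
        out.append(key ^ b)
        key = b
    return bytes(out)
```

Note that anyone who captures the traffic and guesses (or reads) the starting key can decrypt everything, which is why it's rubbish as encryption - it's obfuscation at best.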
For reference, the Caesar cipher implements a fixed shift throughout the message, so every character is substituted by the character n places distant. The only non-obvious element is the keyword (if used) which scrambles the letter order of part of the template alphabet.
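For comparison, the plain fixed-shift Caesar cipher (without the keyword-scrambled alphabet) is trivial:

```python
def caesar(text: str, shift: int) -> str:
    # Every letter is replaced by the letter `shift` places along,
    # wrapping around the alphabet; non-letters pass through unchanged.
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)
```

The keyword variant merely substitutes through a scrambled alphabet instead of a rotated one - it changes the mapping but not the fixed-substitution weakness, so frequency analysis breaks both.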
As the prospect of 'no deal' looms, it's surprising to me how many small to medium businesses have not taken any steps to even investigate what they need to do to prepare for the contingency in respect of Data Protection. Although there's quite a lot to consider and time is now terribly short, it's not rocket surgery. It'll only be really difficult if your business has so far merely made token gestures to Data Protection compliance. A simple twelve point action plan here sums up the main requirements. They're not hard to understand, so now we all just have to get on with it.
It's an interesting question whether the 'personalisation' of adverts might make their delivery to the targeted individual 'direct marketing' in law. If so, 'personalised' adverts on a web site could require explicit informed consent provided to each advertiser - not just to an intermediary such as Google, which would merely be acting as a service provider to advertisers (and thus as their data processor under the GDPR).
Contrary to popular impressions, the right to erasure or 'right to be forgotten' (Article 17 of the GDPR) does not exclusively or even primarily apply to search engines. It applies to data controllers in general. Consequently, should a data subject wish to exercise the right, their first recourse should be to the data controller of the document that the search engine spidered, not to the search engine. If the original document is erased or no longer made publicly accessible, it will become unavailable via the search engine and even that listing should eventually expire as a dead link.
Indeed the controller of the personal data would be obliged to inform the search engine. Article 17 paragraph 2 clearly indicates the chain of responsibility: "Where the controller has made the personal data public and is obliged pursuant to paragraph 1 to erase the personal data, the controller, taking account of available technology and the cost of implementation, shall take reasonable steps, including technical measures, to inform controllers which are processing the personal data that the data subject has requested the erasure by such controllers of any links to, or copy or replication of, those personal data."
The assumption that a search engine provider is responsible for moderating and censoring its results is fortunately impractical, but would be deeply disturbing if it were to become the norm.
"The GDPR requires you to alert the relevant regulatory authorities of any data breach"
Unfortunately, it doesn't. There's an element of discretion available to the data controller: "... unless the controller is able to demonstrate, in accordance with the accountability principle, that the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons" (Recital 85, Article 33.1). The lack of clear criteria for assessing relevant natures and levels of risk is one of the Regulation's major weaknesses.
The heartbleed bug was not fundamentally a software error, but intrinsic to the specification in the RFC. There was no need to allow a variable length response, and no need to make the requester (the vulnerable party) specify either the response string or its length. Having made these poor decisions the RFC author actually advised that the hazard they created be covered for by the potential victim in their implementation. However, as the required function was merely a keep-alive, a predictable fixed length response generated by the responding party would have sufficed (e.g. its IP address).
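To illustrate the point - this is a toy model in Python, emphatically not OpenSSL's actual code - the moment the responder trusts a requester-supplied length, adjacent memory leaks:

```python
# Toy model of the heartbeat over-read. The vulnerable responder copies
# as many bytes as the requester *claims* the payload contains, with no
# check against the payload's actual length - so whatever sits beyond
# the real payload in memory leaks back to the requester.
def heartbeat_response(memory: bytes, claimed_len: int) -> bytes:
    return memory[:claimed_len]               # vulnerable: claim taken on trust

def heartbeat_response_fixed(memory: bytes, actual_len: int,
                             claimed_len: int) -> bytes:
    # The safe behaviour: discard any request whose claimed length
    # does not match the actual payload length.
    if claimed_len != actual_len:
        return b""
    return memory[:actual_len]

# The requester's 4-byte payload happens to sit next to a secret:
memory = b"PING" + b"secret-key=hunter2"
leak = heartbeat_response(memory, 22)         # claims 22 bytes, payload is 4
```

A fixed-length response generated entirely by the responder, as suggested above, would have removed the hazard altogether rather than papering over it with a length check.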
This, and the Boeing 737 PK-LQP accident (and the multiple Watchkeeper drone crashes, and the 2015 A400M crash in Seville and the huge number of reported behavioural discrepancies in "autonomous" vehicles and much else) are not software problems, but failures of the engineering mind set.
The fact that software mediated all these incidents is incidental - the flaws were all flaws in thinking. In particular, they were failures of foresight and breadth of vision as to side effects and consequences. In practice this differs little from the decision to site a commercial data centre a couple of hundred metres from the Buncefield gasoline storage depot which exploded in 2005, completely gutting the data centre.
If that really means no staff with security expertise, then I'm genuinely worried, but in my experience security qualifications do not necessarily equate to security expertise. Practically all "security qualifications" I have investigated in detail consist of cramming sessions followed by multiple choice tests.
This has become the norm, presumably because it's cheap to deliver, and pitifully low expectations of "expertise" have resulted. As the author of a course that includes a variety of exam questions, I was saddened recently by feedback from a testing centre that candidates struggle with short answer and essay questions despite scoring highly in multiple choice.
In the real world we need rather a different kind of expertise - to be able first to work out what the question is and then to come up with an appropriate solution without prompting, and to do both reliably under pressure in emergency. The multiple choice "exam" tests the exact opposite - merely the ability to recognise on demand some pre-defined statement you were told no more than a week ago.
This simulacrum of training and expertise is not restricted to infosec - it has infected the whole domain of risk and compliance. You can, for example, become a "certified EU GDPR practitioner" in five days including 2.5 hours of multiple choice testing, thereby, according to at least one training company, becoming equipped to serve as a corporate Data Protection Officer with the authority to render your employer liable to multi-million euro fines.
So let's please have more people with expertise, but let's stop selecting them on the basis of bogus qualifications that signify nothing of value.
I entirely agree with your position, and would add - why is anyone going to control an oscilloscope remotely over any distance greater than the length of a bench and so by implication on a single network segment? You've got to be able to see the device under test and the oscilloscope screen!
Consequently the fact that the interface is not barricaded against cyber attack is pretty trivial. All my Ethernet connected test gear has "insecure" TCP/IP connectivity, but I have an isolated segment in the lab and nothing there touches the outside world.
There's a strong possibility that folks who report bugs like these are having a quiet day. They'd be better off sticking to the huge array of ludicrously insecure devices that perforce connect to the web - aka IoT.
Actually not. One recognised definition of ethics is 'the rules of conduct recognised by certain limited departments of human life' (1789) [Shorter OED 1933]. So in fact nothing much has changed since at least the late 18th century. What we should be talking about is morality.
Considering the idiotically obvious vulnerabilities on most of this kit (hard coded hidden account passwords, XSS in the web interface etc. ad nauseam) I agree entirely that liability (even on a purely civil basis) is the only way to raise standards. The UK BSI kite mark did this successfully for electric plugs despite not being mandatory - it just rapidly became impossible by popular demand to sell a plug that hadn't passed the standard.
The fundamental problem being negligence and incompetence at the product development stage, a bunch of technical target feature specifications will achieve very little, as everything is down to the quality of their implementation. Years ago, when SIP was just a twinkle in our eyes, I co-authored a paper pointing out that the real security issue was not so much potential weaknesses in the protocol, but potentially ubiquitous poor quality implementations of it, and our view has been borne out by subsequent experience.
What nobody here (or apparently anyone in government circles) has noted yet is that loss of access to these national databases is only a part of the data processing debacle.
The EU position that an adequacy decision may be made during the transition period automatically implies that for quite some time between March 30th and the making of a decision British businesses will face serious complications in, or indeed may be barred from, transferring personal data out of Europe and quite possibly also from processing any such data received if the processing involves onward transfers to other third countries.
In the worst case, the European party to such transfers will be at liberty to terminate the arrangement, and at best UK businesses will have to enter unfamiliar negotiations with the European side and undertake to fulfil novel responsibilities that nobody has yet explained authoritatively to them - and this with a mere four months to go before Brexit.
I've been pressing the authorities for quite some time to provide explicit reliable guidance before it's too late on what UK businesses have to do to ensure continued lawful transfers of personal data from the EU. But it could already be too late for many, particularly in the SMB sector where my professional experience shows that the entirety of data protection is still much of a mystery. So the inertia of the government in this could quite easily put small businesses out of business or force them to operate unlawfully in order to stay in business.
While facebook et al can indeed be perceived as having problematic business models in respect of data slurping, there's a much bigger problem on the back of that. One could argue that sign up to these services is voluntary, but there's a massive and growing body of web sites that include trackers that (often silently) inform these slurpers of visits to their pages whether or not the visitor has signed up to them.
Consequently fb et al gather masses of profile information about those who have chosen not to subscribe to them, but the origin of this potentially abusive practice is not them alone but the designers and owners of the sites that carry the trackers.
Another abuse is the automatic creation of "accounts" by services that act as intermediaries, a classic example being Eventbrite, which creates an Eventbrite account for anyone who registers for an event using the service and apparently maintains records of all events booked for thereafter by that account holder. This is potentially sensitive profiling on the part of what is in strict terms merely a data processor on behalf of event organisers, and is hard to justify objectively as necessary to the principal parties (the event organiser and the potential attender of an event).
The basic assumption that a business is free to do whatever they want with any information you give them is fundamentally in opposition to the principles of the GDPR, but unfortunately the legislation is not explicit enough to prevent it happening, due to the vagueness of the "legitimate interest" lawful basis which is becoming a catch all for whatever anyone wants to do but can't justify to the data subject.
The fundamental problem here is that site user security is an externality to the web developer, who is often blindly using high level abstracted development tools that generate script-ridden pages, and who may never even examine the page source that has been created.
@ Robert Forsyth Well said!
Computer science is not actually being taught
In my teaching experience, beyond the level of "it has a hard disk and memory", I found that nobody (not even the definers of syllabuses) was remotely interested in how computers work, only in what you can do with them, and this attitude persists today despite the trendy label 'computer science'.
The real test of knowledge is not what you can regurgitate, but what you can apply successfully in a novel situation and defend logically if challenged. That's why the PhD still (for the moment) has a viva. Consequently, if exams are failing, maybe it's because they're the wrong exams, rather than due to an intrinsic failing in them in principle.
Exam questions that require the candidate to provide a justification for their answer can work very well, but depend on those marking the exams being able to evaluate the students' justifications. Given the short and shallow training of school-level 'computer science' teachers this is not realistically possible, so it's not just a matter of exams or practicals - the entire school education system is broken, and we have a simulacrum of learning.
"You are handing over ... responsibility for your data ... to an outsider."
Whoever performs the physical actions on your databases or infrastructure, you'll find that in law you remain responsible for your data in pretty much every jurisdiction - e.g. under the GDPR your outsource becomes a data processor for you and can only act under your instructions.
"Fire and forget" outsourcing is a commonplace based on complete failure to understand the nature of responsibility.
The normal process of obtaining an adequacy decision can, from past experience, take a year or more, and Barnier is correct in stating that the EU rules only allow a third country (not a prospective third country) to apply, so we will have to wait until Brexit day before we can do so. There's no good reason why we should get special favours from the EU just because we're British - indeed Barnier has already stated publicly that this will not happen.
Whether or not we ultimately get an adequacy decision is important of course, but what we should be worried about immediately is this hiatus period during which we have to operate as if we had no comparability with Europe on data protection.
I've been actively pressing at the highest levels for specific guidance on what contingency plans and action UK business should take to ensure continued lawful processing of the personal data of subjects in the EU, but apart from a general reply from the Secretary of State to the effect that the existing standard contractual clauses (SCCs) will suffice for controller to controller and EU controller to UK processor transfers, I've so far obtained little other than optimistic superficialities or continued silence, and the clock is ticking.
What everyone seems to have overlooked is that even if the SCCs can be used in all cases (which I rather doubt), it won't be sufficient just to include them in EU/UK contract documents and walk away. The obligations imposed by the SCCs will force many businesses to modify their data protection management regimes (for example, to support the right of the EU party to audit, or to provide for the financial resources evidence requirement of Set 2, II(f)).
Data Protection is one of the few 'compliances' that can't be conducted using a perfunctory annual audit approach, all the more as once we leave the EU any tacit expectation of compliance by virtue of operating under a common regime will cease to be available to us, so scrutiny of relevant data processing by UK organisations will become much more stringent.
As there are only five months left for businesses to get everything in place including renegotiation of contracts, liaison with sub-processors, resourcing, implementing new procedures and the rest, it would be a really kind gesture on the part of the officials responsible if they could document some specific detailed guidance that businesses can implement. We actually needed this guidance months ago, but it still seems to be on the back burner despite being at least as important as, and likely to affect many more British businesses than, the Irish border question.
If the US (the founder and technological driver of the internet) is in this parlous position, it does not bode well for others, such as the UK, whose governments are driving hard to put all services, including routes for citizens' fulfilment of statutory obligations, exclusively online.
The problem is further exacerbated by a noticeable growing habit of service providers of 'securing their customers' by limiting access to only the latest client side tools. A 'your browser is unsupported' message is little use when you're trying to file your tax return to avoid a fine for being late.
All of 'online' business from hardware infrastructure to apps has lost touch with what it's actually for - to allow folks to do their own stuff. It thinks the only people worth serving are the sub-population of geeks for whom keeping up with the bleeding edge is a primary motivation. That's actually a tiny proportion of the global population of users, the rest of whom are increasingly being failed.
Ultimately, this attitude will be the death of online services.
Actually, we don't want either of these. We want an OS that just works silently in the background without interfering with the jobs we're trying to get done.
Just like we want applications that are easy and intuitive to use and don't keep intruding themselves on our work.
The entire IT industry has forgotten that it only exists to serve the user - not to educate, indoctrinate, convert, 'drive change', or even to 'secure'. That last one is the biggest joke: any vendor that has to provide fixes for exploitable bugs every month for the entire life of an OS or application has no business taking it upon itself to implement 'security' on our behalf without our involvement or consent.
Only de facto monopolies can get away with this.
On Twitter. Now anyone who wants to be a nuisance - from bored kids upwards - knows how to do it and will probably try.
Responsible disclosure is making sure those who can fix the problem find out about it while avoiding irresponsible and malicious actors finding out about it until it's been fixed.
'Erm', 'like', 'sort of', 'yerno' and several partners are usually introduced when the brain has run a bit behind the delivery - they're essentially fillers to avoid a pause that might cause an interlocutor to think one had finished speaking. The primary cause of their increasing prevalence is the cultural demand for immediate response without preparation, resulting in the need to formulate the idea while it's being expounded.
I've known of a few excellent lucid speakers who, when asked a question, habitually paused perceptibly to formulate their response before speaking.
As one who speaks quite often in public, I've developed a technique for eliminating these interjections from one's delivery - record yourself live. When you play the recording back, every time you hear one of them, repeat it aloud immediately. Eventually, you'll condition yourself to pre-empt them mentally.
In order to create robust, secure and usable products, what real software engineers need is a stable development environment, the idiosyncrasies of which they can get to know intimately, not constant churn that keeps them always behind the vendor in knowledge and understanding. Nor do they need a regular changing feed of 'broken things'.
'Keeping up with the competition' has clearly become more important for vendors than serving the users of their products, but in the realm of computer languages (sorry - "platforms"), we should be asking the question 'what the hell's this thing for anyway?'
"... 6 months at Her Majesty's Pleasure ..."
There's actually no such thing. Six months is six months. "At Her Majesty's pleasure" would mean "for an indefinite time, until the Monarch decides to release you" - and it's fortunately not been available as a sentence for ages - the last time it was used it would have been "At His Majesty's pleasure".
"with rights for non-US organisations and citizens to pursue offending US organisations"
Although possibly not with the same convenience, EU citizens (under the GDPR) and UK citizens (under the provisions of the GDPR included in the UK Data Protection Act 2018) can already challenge US businesses (and indeed any other third country business) on their data protection behaviours. If the matter is considered of sufficient import, it can be taken up by the relevant supervisory authority and ultimately even made a matter for the courts (as in the case of Cambridge Analytica and its various subsidiary sharing partners).
The stumbling block for individuals, as always, is the inevitable disparity of resources that can be brought to bear by the two sides, and no legislation currently in existence moderates this. The winner is always the party that can hang on and continue paying its lawyers longest.
Thanks for pointing out the 'under the radar' tracking - this is much more serious than overt tracking widgets, as it's present on almost every page out there and occurs without even the dubious quid pro quo of being served annoying adverts. I've been trying to get the authorities to consider this for quite some time, but to no effect.
However the whotracksme recommendation, apparently to automate regulatory compliance management ('This would enable browsers to assume the role they should have had baked in: a unified control center for the user to manage consent'), just moves the responsibility from trackers to browser providers. The essence of the GDPR is control by the data subject over the processing of their personal data, and that goes way beyond consent alone - even as far as objecting in principle to a kind of processing on the basis of a philosophical position.
You've described the problem very well. The cause is actually quite simple but is likely to be unpalatable.
Software development is the only branch of engineering where the practitioner is entitled (indeed generally expected) to be entirely self taught and not validated to a ratified standard for professional competence. Such training as is available largely concentrates on a language, an 'environment' (i.e. a bunch of proprietary libraries) or a dev system, but the basic principles of sound engineering practice are not likely to be on the syllabus.
When I was practising just over 20 years back as a systems engineer with sole responsibility for automation and data acquisition on projects with multi-million budgets, it was automatically part of the culture to test and verify everything before releasing it into the production environment, just the same as was the case with the mechanical and electrical components of the systems. But the 'webbification' of things has led to the rise of more than one echelon of back bedroom 'developers' who uncritically mash up code from fragments found online, with no consideration of security, robustness, or, in many cases, even ergonomics.
One only has to examine the questions and answers on online 'developer' communities to recognise the shallowness of the general level of understanding, typified by instant launch into example code without any explanation or discussion of objectives or mechanisms. So Dunning and Kruger rule. Ignorance can not identify itself, and all the more when the general standard of ignorance makes it a societal norm.
Can anyone explain why a document titled WEAPON SYSTEMS CYBERSECURITY has the snappy and informative file name 694913.pdf?
I may be old fashioned, but I was under the impression that a file name was supposed to help the potential reader identify the content of the file...
The ultimate futility of point fixes that Kieren McCarthy rightly identifies was highlighted forcefully two years ago to the US Commission on Enhancing National Cybersecurity
Still no change in thinking though ...
Article 27 requires any organisation not established in the Union that processes the personal data of subjects in the Union in order to monitor their behaviour, and particularly in respect of the special categories, to appoint a representative in the Union.
Performing data analytics in political campaigning falls into both these categories (monitoring and special category data). Consequently the ICO could in principle enforce the notice on the representative.
Should there not be a representative, that might in its own right constitute a breach of Article 27.
"IP67 machinery should be able to stand a dunking for 30 minutes in one metre of water, whereas IP68 devices can go down to two metres."
Not quite correct. IP68 protection must equal or exceed IP67 but to what extent depends on passing a manufacturer specified test. For example Bulgin Buccaneer Standard electrical connectors are rated IP68 and certified by the manufacturer to withstand 10m depth for 2 weeks and 100m depth for 12 hours.
A lot of consumer kit that is listed as IP68 only just exceeds IP67 and is thus effectively hype.
This is an extraordinary piece of guidance - fundamentally flawed because:
- Standard contractual clauses are standard precisely because they are not drawn up by individual companies but are nationally defined by the Supervisory Authorities as a standard to be used by everyone.
- Only countries in the EU can draw up standard contractual clauses, so unless it does this right now (before Brexit) the UK will not be able to do so.
- As the UK will become a third country, any standard contractual clauses it draws up even now will have to be ratified by all the remaining EU countries as acceptable to them. This will take time!
I have been campaigning for ages to have this hugely serious problem addressed by Government, in the face of fairy godmother optimism that not only will everything 'turn out OK' but that we might even get special concessions from Europe on leaving because we're British.
My latest urgent call to action to the responsible parties in Government can be seen at http://businessinforisk.co.uk/library/BiR-transfers_and_Brexit.html
However I'm not holding my breath for action if the best the powers that be can come up with so far is 'do it yourself sunshine' despite the recommended course of action being completely useless as it will not be lawful under the GDPR.
What our negotiators have failed to understand is that it's the EU, not the UK, that has the whip hand in this. EU businesses can simply refuse to exchange personal data with us after March 2019 unless this problem is fixed prior to Brexit.
Actually, the content does not have to be illegal in order to have it taken down at source.
The right to erasure (GDPR Article 17) can be exercised by the data subject (and only by the data subject or their legal guardian or representative if they are under age or incapable) on either the search provider or on the sourcing data controller, but only in respect of the processing they themselves perform.
So exercise against Google would, if successful, require prevention of a search result appearing, but exercise against the sourcing data controller (again if successful) would require deletion of the original material.
There is a provision (Article 17, paragraph 2), albeit limited by 'practicability', that requires a data controller to inform any parties with which it has shared the data of the exercise of the right. So supposing that spidering and listing by Google constitutes sharing, exercising the right against the sourcing data controller might be more effective than exercising it against Google, as the sourcing controller would be expected to inform Google of the exercise of the right. If Google were so informed and did not follow suit, it would have to explain why if challenged.
The primary real question is therefore whether a sourcing data controller is 'sharing' with Google, as there is not necessarily a contractual sharing relationship. Indeed the sourcing data controller may not even be aware of Google spidering and recording their site, as Google operates autonomously without reference to the sourcing data controller and will spider and record any site it sees fit. There is, however, an expectation - indeed a presumption that a web site made public will be added to Google's site database. Whether this constitutes data sharing in law is the point I feel should be clarified first and foremost, as it would help to define the obligations of the different parties to the data subject.
".. you end up with even dumber people in the populace over time."
And the real joke is that each successive generation of 'smart' people is drawn from an increasingly dumb population.
This might explain some of the problem, but as is frequently demonstrated in both physics and biology, everything that persists is ultimately self limiting, and if it's not it doesn't persist, so maybe there's still hope for the future.
To steal (i.e. to commit theft) in English law is to 'permanently deprive' the victim of what is stolen. Unless the data were deleted at source by the perps after they got hold of it, it's exfiltration not theft. If the perps go on to take funds using the exfiltrated card data that will be theft.
Also consider the random interaction of ultimately dozens of lidars, radars and whatever pinging away independently. There's either going to be a huge bandwidth need or we're going to get interference, and that could do a lot of interesting things to the vehicle's appreciation of what's happening round it.
So far we seem only to have run these vehicles individually one at a time among dumb vehicles. Put a hundred or so 'autonomous' cars on the six roads leading into the Plough roundabout in Hemel Hempstead UK among conventional traffic in the rush hour and let's see the outcome.
If you want evidence of what is required for confidence in this technology being safe, see the US Consumer Watchdog report. I quote: 'Google/Waymo claims that its computer-controlled vehicles have logged 300 years of human driving experience. But the testing that would be required in order to match the safety tolerance of commercial airplanes is estimated at over one hundred millennia. A lower level of safety – “a level of 80 percent confidence that the robotic vehicle is 90 percent safer than human drivers on the road,” would still require 11 billion miles of testing (or about 5,000 years), according to researchers at the University of Michigan'.
"The real challenge to making them applicable in the real world lies in the scaling up the manufacturing process."
The real challenge will actually be maintaining predictable behaviour in the face of the incredible amount of interference now present pretty much everywhere on Earth. Any control system that operates at the quantum scale intrinsically has very little if any noise immunity, so its reliable operation is problematic in real-world environments.
Some years back researchers (at Cambridge if I remember correctly) demonstrated a transistor that could be switched by a single electron. The essential question of course would be "which electron - the one we provided, or a randomly passing one?"