Re: SMART = Smart Meters Are Real Threat?
Particularly in the light of today's serious revelations about personal data being sold "for 5p".
133 posts • joined 13 May 2009
I really don't think anyone cares about their privacy, except in very generalized terms.
There is some truth in that. Somehow we need to convince the man in the street that he needs to care about other people's privacy. He may not be particularly worried about himself being monitored -- the issue is that there are people for whom it does matter a great deal. And many of those could be him under some slightly different circumstances in the future.
As well as the obvious cases like lawyers and journalists, there are the abused wife trying to avoid being tracked down by her policeman husband, the fracking/animal rights/anti-abortion/insert-favourite-left-or-right-wing-cause-here protester trying to prevent the police "randomly" stopping and searching them, the social pot smoker trying to avoid being refused a job.
But there are also cases of complete innocents who could be our punter tomorrow: the person who uses a local plumber who also does business with someone who turns out to be a criminal, the school governor who serves on a committee with a local cleric who turns out to be a radical, the jogger who regularly runs near where someone was attacked, etc. All these innocent people should be protected from harassment and suspicion (and demands to "prove" their innocence) unless there is some actual reason to suspect them.
I don't know how to do it, but we need the man in the street to realize that the issue is not his own privacy but that a free society needs other people's privacy protected.
That doesn't work with the metadata.
It does if you use something like Bitmessage. Bitmessage is clunky today, and there are potential concerns about both its security and scalability, but if governments press on with this approach (unreasonable retention, access without warrants, pressure on commercial operators to decrypt) then the open source world will create really secure end-to-end solutions. Access to information about criminals will go DOWN.
On the other hand, if governments pull back from disproportionate actions, then Bitmessage will, like most open source projects, remain clunky, hard to use, possibly insecure, poorly maintained and used by a tiny group of people. I understand why politicians have such short term thinking but not why the spooks let them get away with it.
"... we don't need to copy the methods of those we spent most of the 20th Century fighting against. You cannot save freedom by destroying it."
Yes, that is the crux of the matter. I was a small child in the 1960's in East Anglia, surrounded by US Air Force bases, and I really used to lie awake at night worrying about a nuclear attack. I even drew plans for a nuclear shelter we could build in our garden!
When my parents became aware of my worries, they didn't tell me not to worry because they would stop it happening. They explained why we had to oppose the Communist regimes: these governments were repressive, you couldn't walk down the street without carrying papers to prove who you were, they were spying on their own citizens. All this to a child under 10!
I have never forgotten that. I went on to read Solzhenitsyn as a teenager and I have been committed to freedom of thought, writing and speech ever since. I wish Cameron had had the same explanation.
So, the question of who to vote for to oppose these policies boils down to... which small party, if it finds itself in coalition, is most likely to choose privacy as the one policy they insist on?
UKIP? Don't make me laugh.
Greens? Strong views on privacy but they will always be more concerned about the environment -- I can see them dropping the privacy issue if they can get some of their environmental policies in.
Lib Dems? Opposing snooping is about the only thing they have been consistently firm on. They seem like the best bet. Unfortunately, tainted as they are, will they hold the balance of power next time round? Despite that, they seem to be the only choice for anyone who considers this the most important issue at the next election.
Apparently Cameron has said he will not allow "safe spaces" for people to communicate with each other without monitoring. So, where do I go to talk to my friend about how we are going to vote? Where do I go to talk to my MP about raising a government abuse in parliament? Where do I go to talk to my lawyer? Apparently all those conversations should be monitored.
Maybe we should ask him where he plans to go to talk to his friends in the City about his directorship for when he stops being PM? Where will he go to discuss secret trade treaties?
Fortunately, in this case, he is being completely stupid and nothing can come of it. Presumably, however, he is saying this so he can later present the real proposals as "we listened to the objections and have reduced our demands" - back to the original snooper's charter proposals.
At least this should mean the end of the speculation about Theresa May as a future Prime Minister. What was needed, on this day of remembrance and outrage, was something statesmanlike and thoughtful. Instead we got a grubby political land-grab. End of Mrs. May's campaign.
How can they, when these lives have been lost defending free speech, possibly be asking internet companies to monitor and report what we are all saying online? It is just sickening!
Will Charlie Hebdo's next cartoons be reported to the government by their email company as subversive?
I am getting very bored with this whining from the securocrats. I know your job has got quite hard. No amount of whining will change that. Come back when you have:
1) Cleaned up. Owned up about the out-of-control years. Heads have rolled. Some people are in jail (yes, really -- there are no excuses for what has been going on and justice needs to be seen to be done).
2) Changed. Stopped untargeted surveillance. Reduced data retention to 30 days. Got warrants. Put in place an oversight regime we can actually trust (yes, that is hard to do -- work out how to do it).
3) Come back with a realistic plan for how you will do your jobs to protect us given that technology means that the bad guys will have access to perfect encryption, high performance dark nets, etc (even if you make them illegal). Note: "I believe in fairies" is not a plan.
According to the speech there are 600 returned jihadists. Even if all of them managed to radicalise 100 other people, those 60,000 would be less than 0.1% of the population. That is not a justification for snooping on 64 million people. At those odds I am happy to take the risk of becoming the victim of a terrorist and just close down GCHQ.
I don't think defeating such fingerprinting is that hard. HSTS seems to be mainly an optimisation (although it has some security benefit if users are in the habit of typing in URLs with http: prefixes). Three things that I think should happen:
1) Browsers should only retain HSTS info for relatively short times (I would choose less than 1 day, others might choose several days; a browser might default to a longer period -- say 1 month). This is a bit harder to make work than you might think, because you need to prevent the tracker from just "refreshing" the setting each time you connect to the site.
2) Users should be able to turn HSTS off completely.
3) Plugins like HTTPS Everywhere should turn HSTS off completely and rely on their own capabilities.
That would mitigate the issue for normal users and allow the most privacy conscious to eliminate it completely.
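The first point above can be sketched in a few lines: a toy HSTS store that measures retention from the *first* time a host set the header, so a tracker cannot keep its entry alive by re-sending the header on every visit. The class name, API and one-day cap are all illustrative assumptions, not any real browser's implementation.

```python
import time

RETENTION_CAP = 24 * 3600  # illustrative hard cap: 1 day

class HSTSStore:
    """Toy HSTS store: lifetime runs from the FIRST header, not the last."""

    def __init__(self):
        self._first_seen = {}  # host -> timestamp of first HSTS header

    def record_header(self, host, now=None):
        now = time.time() if now is None else now
        # Only remember when we first saw the header; later
        # "refreshes" by the site do not extend the lifetime.
        self._first_seen.setdefault(host, now)

    def should_upgrade(self, host, now=None):
        now = time.time() if now is None else now
        first = self._first_seen.get(host)
        if first is None:
            return False
        if now - first > RETENTION_CAP:
            del self._first_seen[host]  # expired: forget the host entirely
            return False
        return True
```

With this scheme a tracker that re-sends the header every hour still loses its fingerprint bits after a day, while a normal HTTPS site merely gets re-pinned on the next visit.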
Now is not the time for lofty disengagement or disinterest. Car manufacturers who provide self-driven – and therefore secret – transport are, albeit unwittingly, helping terrorists to co-ordinate genocide and foster fear and instability around the world.
The article seems to be about MDM, not BYOD. If the company needs the level of control over devices that is described as the goal of MDM in the article, then they will need to provide the devices. No one will bring their own device and accept that level of control.
BYOD is about companies being willing to trade off control in exchange for reduced cost and more satisfied employees. You can't have all three at once.
It would have been more useful if the article had been about what level of control is actually feasible with BYOD, not with company-owned devices.
Personally (others may differ), the issue is less about who has the data than what they are allowed to use it for. I don't care about the NHS (or even associated companies) using my data for caring for me, for planning and monitoring its own operations, for research into providing better care, etc. I don't even mind commercial companies using the data for research into new drugs/treatments or even for insurance companies to better plan their costs and become more profitable.
What I care about is that there has to be a legal requirement, with very strong penalties (fines of about 1% of global revenues and/or criminal prosecution of individuals, financial compensation to all individuals involved of several times any extra costs they incurred due to the offence), that the data cannot be used to discriminate between individuals, families, geographic areas, genetic makeup, etc in the provision or cost of products or services (such as insurance), nor for any purpose to do with marketing (such as targeting a person or ethnic group). The same thing goes for other data they may be able to find out (such as the rugby team membership an earlier commentator pointed out).
The purpose of both health insurance, and a national health service, is that the service is available to everyone, equally. Costs are shared, between the healthy and the sick, the old and the young, etc.
Ewan Robson's post talks about legal protections under the DPA, but it is not clear and unambiguous that all the things I mention above are prohibited. Also, fines are extremely limited (the limits are small and in reality fines are usually non-existent), especially for a global drug company or insurer. If there is such protection, then I suggest they shout it from the rooftops before restarting this process. If not, abandon it.
Sorry, Bronek, I think you are wrong. Whether "heavily encrypted data" is useless depends a lot on what other information the attacker has. In particular, if the attacker knows some of the plaintext then they may be able to break the encryption much more easily.
For example, a password database might be very securely encrypted. But if the attacker knows (or guesses, and can verify) some usernames and passwords that might lead to easier decryption of the whole thing. And inside information could also be very useful even if the keys themselves are not available.
In other words, the problem is not about how well encrypted the data is, it is about the whole circumstances of the breach. Most of that is not known (and certainly should not be evaluated by the company losing the data). The only reasonable behaviour is to notify everyone involved on every loss of personal data, no matter how well the data is encrypted.
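To make the known-plaintext point concrete, here is a toy demonstration using a deliberately weak repeating-XOR "cipher" (chosen only to keep the example short -- real breaches involve subtler weaknesses, but the principle is the same): knowing one record's plaintext recovers the key, and with it every other record. All names, keys and data are made up.

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Encrypt/decrypt with a repeating XOR key (deliberately weak)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def recover_key(ciphertext: bytes, known_plaintext: bytes, key_len: int) -> bytes:
    # XOR of ciphertext with its known plaintext yields the keystream;
    # the first key_len bytes are the whole repeating key.
    stream = bytes(c ^ p for c, p in zip(ciphertext, known_plaintext))
    return stream[:key_len]

key = b"s3cret!"
records = [b"alice:hunter2", b"bob:correcthorse", b"carol:tr0ub4dor"]
encrypted = [xor_crypt(r, key) for r in records]

# The attacker knows alice's credentials (say, from another breach)...
stolen_key = recover_key(encrypted[0], b"alice:hunter2", len(key))
# ...and can now read every other record in the "encrypted" database.
assert xor_crypt(encrypted[1], stolen_key) == b"bob:correcthorse"
```

The ciphertext alone told the attacker nothing; one known record gave them everything. That is why "it was encrypted" is not, by itself, a reason to skip breach notification.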
Basic things like "what programs are installed" and "what is the hardware configuration of your PC" are generally collected as part of operating system updates and/or automated troubleshooting systems because they provide clear technical benefits in solving technical issues. It would be pretty insane to say "don't collect this info, because NSA".
We will have to agree to differ. There is absolutely no excuse for sending any information about my computer, what I have installed on it, how often I use it, or what I use it for unless I have asked for help and explicitly understand that this information is needed (in which case I will carefully consider who I ask, like I would carefully consider who I take my PC to for servicing).
In this case, it isn't the NSA I am worried about -- why should Microsoft know? I don't tell Google what software I have installed, I don't tell Amazon where else I shop or what I buy there, I don't tell my car insurance company who I choose to use for home insurance or how many bedrooms it has. Why on earth would I provide any personal information to Microsoft just because I buy their product? This isn't Facebook offering me something for "free" in exchange for personal information.
My (work supplied) iPad2 running iOS 7.1.1 is vulnerable (according to poodletest.com). I can't upgrade to iOS 8 (not only would I need to check all my demos still worked, but general opinion seems to be that upgrading an iPad2 is not advisable), yet presumably any fix will only be issued for iOS 8.
Must remember not to use it to access any important personal stuff (not just banking, but things like airline check-in).
Hmm, that raises an interesting question... will apps (like the BA app) inherit any fixes that may be supplied for iOS or Android or do the apps themselves need to be updated?
There have been many comments talking about building your own box and running your own software (from Windows, through special NAS distributions to generic linux). Very useful, thank you.
But what about running your own software (I would want a Linux variant) on any of these boxes? I can't help thinking that the hardware is probably chosen well for the NAS job, and I particularly want to reduce power consumption (my current file server is my previous desktop which pumps out a lot of heat -- I don't need graphics, or probably even the powerful CPU/memory it has). But I am not interested in their software -- I need to be able to run a modern Linux.
Can any of these boxes have the installed software replaced by open software? Is there a community around them which does that? I realise it will be more expensive than DIY but, given my background, I will find it easier just installing/using software.
Bristow is clever and comes over as very reasonable. He is also very wrong. We need to be very firm and public about not letting him get away with assertions about losing capability.
As I mentioned last time he spoke, he is claiming he needs the power to put an electronic police tail on EVERY person, all the time, in advance, just in case someone does something bad. Collecting Communication Data is exactly the same as placing a police tail on you: the tail can't hear what you are saying but they track exactly where you go, who else is nearby, who you talk to (and for how long), what posters you stop and read, what shops and other buildings you go into. If the Snooper's Charter were in effect, the tail could follow you into the buildings and video everything you do there.
This is not some power they used to have which they lost due to modern technology. Previously they might have been able to put a tail on one or two people per county at any one time. So, they had to make actual decisions, allocate actual resources, get actual permission to do it.
Why does anyone let him get away with the claim that this is about "losing capability and coverage"? It is a complete transformation into a police state!
I am planning to use this spyware as the basis of a complaint to the Secretary of State requesting that he gives permission for stripping Adobe DRM.
Clearly this is unacceptable behaviour by ADE, and ADE is the only (legal) way to read books I have purchased which are infected with Adobe DRM. There is nothing in the purchasing of the books which involves me agreeing to spyware. Also, it is well known that there is software easily available to remove Adobe DRM. So, the SoS clearly must give permission for that software to be used so that people can safely exercise their rights to read the books they have purchased.
This is exactly the sort of case of unacceptable TPMs for which the law gives the SoS the ability to grant permission to circumvent a particular TPM.
Peter G is right that systemd is about weighing the advantages of the capabilities it provides vs the disadvantages of its design and implementation. Debian struggled with this, with a public and acrimonious debate, and decided to go with systemd. Not because it was well liked but because it was useful.
There is no realistic prospect of anyone else implementing a (FOSS) alternative to systemd that is as useful. Not while the systemd team continues to exist. So, individuals may decide that they don't want to take on board the horrible design decisions made, but the large distributions are moving to it. Personally, I don't like it but I agree that it is the only viable option.
You are not alone.
Sony is not only on a last-resort list: they are on my never do business with them again list, for the same reason. I have not bought anything from any part of the Sony organisation since the rootkit.
HP are also on the same list, for their DMCA abuse (also over 10 years ago). The list is absolute for my personal purchases but I also do my best not to do business with HP in my professional life as well, as long as I am not damaging the interests of my employer, of course.
The respondents were much more receptive to the idea of mandatory data breach reporting, with 89 per cent in favour of such a regime
That is very depressing. Not because I don't support mandatory breach reporting -- I do, very much. But because if 89% of businesses support it that means that they think it is something that will hurt their competitors and not them. Which means they don't understand anything about protecting their data. There is no way 89% of Oz businesses have even adequate data protection, let alone excellent protection.
And while management don't understand that they are massively at risk, they won't invest.
Still, maybe the first few mandatory breach reports will help them understand.
Unfortunately this piece misses the point. PCs are not the important concern any more. It isn't even tablets and phones. The area to be concerned about is the Internet of Things.
The first reason is scale. PCs are well below one per person. Phones come in at around one per person. IoT devices will be tens per person or more. If you are worried about "Unfortunately, it seems that it is only after such an event that something gets done" then it is these devices which will have the most opportunity to cause chaos.
The second reason is that many (not all, of course) IoT devices are going to be in either safety-critical or, at least, seriously-inconvenience-causing environments. They may be controlling important household functions (locks, heating, lighting). More importantly, they will be working in offices, factories, railway stations, etc. Putting threatening messages up on the departure boards at Waterloo station in the rush hour may cause more loss of life than causing a car to crash.
The third, and most important, reason is that these devices need to be cheap. Really cheap. Designed and built down to a cost. And those which are not truly safety-critical (nuclear power station controllers) will not be regulated at all. Their hardware may be simple, their RTOS may not be designed for security, their interfaces will be wide open to simplify (make cheaper) integration, and their software will probably be crap -- more concerned about whether it is selecting and displaying ads correctly than whether it is functioning.
We already see serious security issues in SCADA controllers. We already see serious issues in vehicle engine management systems. Both of those might get targeted by regulators. But will non-safety-critical IoT devices ever be safe to use?
I have a Jolla with a physical keyboard Other Half, which I use as my daily phone. A bit bulky but nice to have.
Unfortunately the keyboard was a limited run project by a community developer and hasn't been taken up by Jolla.
No, it isn't necessary to use SSL on sites which do not have logins. But it is the friendly thing to do. Part of the point is not just to protect your traffic, but to move to having most Internet traffic routinely encrypted to make the job of hoovering up all data by tapping backbone links harder. It also reduces the chance for the spooks to say "ah, his data is encrypted -- he must be a terrorist".
And, it makes it safer when you later decide to add a hidden page to Foofie's web site to make the Anarchist's Cookbook available -- no one can watch your traffic to see whether people are reading the poodle's page or the secret page.
Sorry, Martijn Otto, you are fantasising. I am a strong proponent of privacy (see my other posts) and I am a strong supporter of both cash and bitcoin, but I am realistic about bitcoin.
1. The government most certainly can regulate it. As you said, they can regulate exchange. They can also require people to declare bitcoin usage -- and most legitimate users will comply. Bitcoin is much more traceable than cash (although it is easier to anonymise than other financial networks). Money laundering is a serious concern for governments and if large amounts of money are laundered through bitcoin, governments will get very heavy handed with it. You certainly can block a bitcoin transaction: put the participants in jail.
2. Financial institutions have plenty of opportunity to make money from bitcoin. Do you think that the needs for loans and savings accounts go away with bitcoin? Do you think that bitcoin credit cards will have transaction fees any lower than today's credit cards? Banks make a lot of money helping people and businesses handle cash today and they will make just as much money from bitcoin. The only people who could be disintermediated by bitcoin are money transfer networks: but they will find plenty of value to add in making transactions easy, handling taxation and reporting, providing escrow and insurance, etc.
3. You certainly can seize bitcoin. Most users' bitcoins are held in third-party wallets, which are easy to seize. Even if they aren't, it is easy enough to order the holder to transfer the bitcoin to a government-controlled wallet.
Bitcoin is certainly very useful, but it doesn't undermine either governments or financial institutions!
According to the article, the cabinet document says "Removing barriers to sharing or linking datasets can help Government to design and implement evidence-based policy – for example to tackle social mobility, assist economic growth and prevent crime".
Those seem like reasonable goals. However, the document then moves on to talk about the real goals... "checking if bus pass claimants are still alive, tackling illegal immigration or sharing information about teenagers involved in gangs". None of those are reasons to ask everyone in the country to sacrifice the right to privacy. None of those are at levels where they are causing the country serious problems, and there appears to be no evidence that they would be reduced by data sharing.
So much for "evidence-based policy".
I'm very sorry to hear about your friend's serious assault. However, why should social workers be treated specially? It could have been the milkman, or a neighbour, who was beaten up.
If a violent criminal is living there, and is likely to assault people, that is a matter for the police to deal with. Unfortunately, some people commit violent acts -- taking away everyone else's human rights is not the solution to that. Of course, we could virtually prevent that sort of thing by keeping everyone locked up under house arrest all the time, but we wouldn't have a functioning society if we did that. The same would be true if every local government official had access to everyone's criminal, health, social care and tax records!
It's a bit harder than it might appear. Adding and removing keyboards to laptops is very common in a corporate environment -- I am always plugging and unplugging keyboards as I move my laptop between desk, conference rooms, carrying it over to someone else's desk to show them something, etc. Several times each day (particularly now that we work in a full hot-desk, open plan environment where you can't even have a phone call without disturbing people, so have to go to a "phone booth" room each time you get a call). And I typically have the laptop closed while I am doing it and I wouldn't want to open it just to acknowledge a pop-up (and presumably acknowledging it from the new keyboard would defeat the point).
I think there may be more success for a popup if the keyboard seems to be combined with another function -- although plugging in hubs with keyboard, mouse, external disk pre-connected is also common so that has to still be allowed.
I certainly hope Microsoft are working on a way to counter this, but it is not as easy as it may seem.
I think you are missing the point. As an earlier commentator said, what this does is turn today's USB sticks into the equivalent of the old infected floppy.
In the business world today, USB sticks are routinely exchanged between people (in the same company, or between companies). When I meet a customer, it is very common that we will exchange documents on a USB stick (they may want a copy of the presentation I have just given, or I may want a copy of the RFP that his purchasing dept will send me in a few days time). If the customer's PC has been infected, this attack allows them to infect my PC as well, even if I use my own USB stick and without actually opening any documents from the stick.
As for those who mention non-Admin accounts, VMs, or keeping assets separate: I am talking about the business environment. That is completely geared up for doing business -- not for security. I have been in sales/marketing for many years now and have NEVER worked for a company (big or small) where my normal work account on my laptop does not have local admin rights -- locking down the PCs, particularly for home and travelling users, is just too hard (i.e. expensive in support resources and expensive in lost time for the user). Despite best intentions, the company ALWAYS ends up making the tradeoff that all field people accounts have admin access on their own laptop.
That may or may not be a good idea, but it is the way of the world. This attack is very serious in the world of business users in the field.
I note your claims that the industry didn't ask for this, but I find that hard to believe. Of course, I don't think they asked for it to make life easier for consumers, or to help them save money. What the industry wants is remote control. That is the single biggest benefit to the suppliers.
There is no need for smart meters to include remote control: it increases cost, decreases reliability of the meter and massively decreases the reliability of the electrical supply when billing and admin mistakes are included. When I last switched suppliers, the new supplier forgot to take the direct debit and also forgot to send me any bills or even any letters saying I owed them any money. The first I heard of the problem was a phone call at 7AM from a debt collector accusing me of owing money! The supplier accepted full responsibility for their mistake, and paid me compensation for my trouble. But if I had had a smart meter, the first I would have heard was a power cut of, presumably, several days' duration as I arranged for them to use the direct debit they had on file.
I replied to the government consultation saying that the "remote control" feature should be able to be overridden with a physical (purely mechanical) bypass by the consumer, unless they were on a pre-payment tariff, and that under no circumstances should the supplier be able to cut anyone off without sending someone on-site (as well as all the other protections required today). The supplier could offer me cheaper tariffs if I was willing to leave the remote control available, but I would always have the choice of bypassing the remote control (possibly automatically switching to a higher tariff).
I live in a rural area and my electricity supply is unreliable enough already without introducing additional points of failure (physical and administrative).
Only two posts here about privacy? And both with 0 upvotes and one downvote (they both have one upvote now)?
I do not work in London but go there occasionally. I have an anonymous (unregistered) Oyster card. I top up with cash and, if I could be bothered, I could swap unregistered Oyster cards regularly with my friends.
There is no way I am ever going to pay for travel around London with a traceable instrument like a credit or debit card. Freedom to travel is a right, and it must be available anonymously in order to protect the basic human rights of freedom of expression, freedom of peaceful assembly and right to privacy.
I would object very strongly if anonymous travel cost more money than tracked travel -- that is why Oyster provides anonymous cards. Does anyone know how many unregistered cards are in use?
It's not insane -- although it can be very confusing if you don't think in the way that data controllers (under EU legislation) are supposed to think.
1. Facts are facts. In general, you won't get a fact removed from somewhere like a newspaper (if it is true). There is no right to be forgotten.
However, Google is not a data store (as you point out). Google is processing data to provide a service: you type in a search term and Google collects information about that subject and tells you. That is where the legislation kicks in. If the data relates to a person, there are laws about processing it. They certainly aren't perfect but they protect us every day from people abusing information about us.
Amongst those laws are restrictions on using data processing to create a profile of someone. As search engines were not envisaged in the laws, it has taken legal arguments to decide what the restrictions are on a profile created by searching the web (and it is perfectly reasonable to believe the decision is wrong -- but it has been made). The decision is that it is similar to other commercial companies which create profiles, like credit reference agencies. They are not allowed to include irrelevant or obsolete data, except in cases where dropping that data would be against the public interest (for example, someone standing for parliament is not likely to be able to get data about criminal offences dropped, even though the offences have expired -- they may be able to get them removed once they stop being a politician, however).
2. Your point only matters for well known people, where it is more likely that there is a public interest argument for retaining the data anyway.
3. Yes, old data can be useful. Lots of things could be useful which are not allowed. In the case of data about people, data protection overrides utility. Get over it.
4. Yes they are. Once Google stop moaning, they will put in place a process, using advice from data protection experts. A few more cases may go to court to get some grey areas sorted out. Then the process will just work.
This issue isn't publishing data, it is processing the published data and creating profiles of people. Google searches do it automatically. What I don't know is whether the same rules would apply to manual data processing. For example, if you were to look at all my postings on El Reg, gather some personal data from those (maybe I have said where I live or how old I am or something), create a Wikipedia page for me and publish that information, would I have the right to get old or irrelevant data permanently removed from that page? I don't know.
Yes, that is exactly what they will do. Just like every other business who handles personal data has to.
Robert & Donn, you may not agree with laws about censorship and about having to remove factually correct information from dossiers, but that is the law in the EU. I realise it is not the US way, but in the EU personal data is strictly regulated and being "fair" to people trumps freedom of speech -- not the other way around. For example, it is a true fact that a person who lived previously in my house went bankrupt. However, as that person is in no way related to me, credit reference agencies are not permitted to record that information, even though it is true, as that might adversely affect my credit score.
If that information was on the internet, and someone did a Google search of my name and used that information when making a decision to give me credit, shouldn't I be able to prevent Google making that visible? If not, wouldn't that allow Google to compete unfairly against regulated credit agencies?
Today, the credit reference agencies know the rules and apply them: they make the decision, not a court, unless you disagree and sue them. Google will need to set up a similar process -- after a few borderline cases are decided by the courts, it will all settle down.
There are businesses in the EU who create or hold information dossiers on people (credit reference agencies, headhunters, etc). Those businesses are subject to strict laws about personal data processing (including rights to have old or incorrect information removed from the dossier), which create non-trivial costs for them.
The decision seems to be based on the interpretation that a Google search of a name creates, in real-time, a similar sort of dossier on a person. You can argue whether that is a sensible interpretation, but I do have some sympathy with it: I can see a future (with some smarter Google algorithms) where a Google search could replace a credit reference check.
If that interpretation is valid, then clearly Google need to be subject to the same data protection laws and processes that the other dossier-makers are subject to. Including the right to have false or old information left out of the dossier. And they should clearly have to bear the cost of that, just like the credit reference companies do. Just because their process constructs the dossiers in real-time instead of cumulatively over the years, doesn't change the rights of the subject of the dossier.
If I have set Do Not Track, and I disable or regularly delete cookies, then I am making an unambiguous statement that I do not permit tracking. Any company trying to work around that (whether using canvas, or flash cookies, or anything else) is then abusing their access to my computer. I have not given permission for that. The deliberate action is illegal, whatever the technology. They are, of course, welcome to deny me access to their website if they wish -- but they are not permitted to hack me.
Many companies claim that creating URLs which are not published links and which leak information is illegal hacking of their website by users. If that is the case, then mis-using browser features to track me when I have explicitly refused permission is also illegal hacking.
Why haven't the data protection authorities made a clear statement that any sort of web tracking not based on cookies is illegal, and that companies will be prosecuted under data protection laws?
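For anyone wondering how the cookie-less tracking mentioned above actually works, here's a toy sketch (hypothetical names, not any real tracker's code): hash enough quasi-stable browser attributes together and you get an identifier that recurs on every visit without anything being stored on your machine -- so deleting cookies achieves nothing.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Derive a stable identifier by hashing browser attributes.

    Nothing is stored client-side: as long as the attribute set stays
    the same, the same identifier is recomputed on every visit.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Illustrative visitor attributes (made up for this sketch).
visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "Europe/London",
    "fonts": "Arial,DejaVu Sans,Liberation Serif",
    "canvas_hash": "a3f9c2",  # stands in for GPU/driver/font rendering quirks
}

# The same visitor yields the same ID on every request...
assert fingerprint(visitor) == fingerprint(dict(visitor))
# ...and a cookie purge changes nothing, because no cookie was ever set.
```

The "canvas" trick referred to in the post is just one source of such attributes: the pixels produced by drawing to a hidden canvas vary with GPU, driver and fonts, and hashing them gives a value like `canvas_hash` above.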
I just tried creating an account. It still says "Are you sure you entered your name correctly?".
It also still wants a date of birth, and a gender. Neither of which am I willing to supply to any sort of social networking.
In my view, data retention is the modern equivalent of putting a tail on someone: the tail can't hear what you say but they record everywhere you go, how long you spend there, who you talk to, which shop windows you look in, which buildings you enter. 64 million police tails. 24 hours a day.
One newspaper report said MI5 are expecting 500 returning jihadists from Syria. 500. Apparently that makes it proportional to tail 64 million people, 24 hours a day, because of 500 potential terrorists. Even if all of them managed to radicalise 100 other people, those 50,000 would be less than 0.1% of the population.
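The back-of-envelope sum above is easy to verify (figures as quoted in the post):

```python
population = 64_000_000        # "64 million police tails"
returning = 500                # MI5's reported figure for Syria returnees
radicalised = returning * 100  # the post's deliberately generous multiplier

fraction = radicalised / population
print(f"{radicalised:,} people = {fraction:.3%} of the population")
# prints: 50,000 people = 0.078% of the population
```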
There is no way that jihadists (or even all terrorists) can be any sort of justification for blanket data retention.
Of course, the spooks and police know this. So what is the real reason? Apparently 10% of the population (6.5M) are trade union members -- maybe it is them who the government really want to track?
May is hopeless -- and her merging of snoops and police access doesn't help her or anyone else wanting a sensible debate on this subject.
NCA's Bristow, on the other hand, is much more concerning. He seems to be a sensible man, and the arguments in his speech are well made and effective.
Those of us who disagree with him need to be equally good at arguing against his vision of a police state. In my view, the public don't understand what using Communications Data means. Collecting Communications Data is exactly the same as placing a police tail on you: the tail can't hear what you are saying but they track exactly where you go, who else is nearby, who you talk to (and for how long), what posters you stop and read, what shops and other buildings you go into. If the Snooper's Charter were in effect, the tail could follow you into the buildings and video everything you do there.
Unlike a real police tail, this is not reserved for criminals or even suspects. The tail is put on EVERYONE. Even children. 24 hours a day. At home, work, out and about. Just in case you turn out later to have been a paedophile.
Having a permanent tail on everyone seems like the clearest example of a police state that I have seen.
I, for one, am very willing to sacrifice some protection to avoid living in that police state.
@AC -- you are quite right about people not understanding why freedom from surveillance is critical. My "road to Damascus" moment came when looking around the Stasi museum in Leipzig and realising just how close the Stasi came to being able to stop the "Monday demonstrations" (which led to the fall of the Berlin wall, https://en.wikipedia.org/wiki/Monday_demonstrations_in_East_Germany) due to their mass surveillance -- and they were using manual processes, not computerised processing and tracking. The people at those demonstrations were not rebels or activists -- they were ordinary people whose emails "no one would be interested in".
Imagine if a small party (like UKIP, or the Greens -- whichever is your particular demon) was able to hold the balance of power after the next election, formed a coalition, acquired a strong, charismatic leader and started forcing through policies "for the country's good". All very sensible, honest and decent, no doubt. But isn't there a risk that real debate and substantial protest would not be allowed once they had got the national security apparatus to believe they were doing the right thing for the country?
I suspect this really means, "If we receive a demand we can issue a quote and make sure we get paid for granting access"
That is a start. After all, the closest thing to democratic oversight of security agencies is budget control. Making sure that excessive surveillance costs considerable money would help to limit it.
Just exactly what do you expect our intelligence services to do? How do you expect them to do it?
I expect them to stop mass and untargeted surveillance. Surveillance within the UK should require a warrant, issued by a court not a politician, and be limited to a named target person. Surveillance of our allies should be exceptional -- it should require authorisation from the Prime Minister (who would bear responsibility for authorising it when it eventually came out, as all secrets do). Surveillance of non-allies would be more routine and would not require warrants, but it should still be limited and focused on specific targets or purposes: there should be a robust and effective programme for making sure that non-relevant data is destroyed, not archived and certainly not shared with others (so that, for example, the CIA cannot use this to get round their own government's restrictions). All of the above policies should be publicly debated and published, with oversight from parliament.
Unfortunately, it is unlikely we can directly enforce these restrictions. They should be in place, with very visible punishments for senior management when they are inevitably ignored (on the basis that whistleblowers will expose some proportion of abuse). However, the only real lever we, the people through parliament, have is money: GCHQ and MI5/6 budgets need to be cut substantially as a public response to the Snowden revelations, and there needs to be continuous, effective oversight of their budgets. BT & Vodafone will not work for free, and other MoD agencies will be reluctant to hide spy budgets within their own, so there is an opportunity to limit their activities at least in some way through money.
The budget, and the activities, of the intelligence services should be proportionate to the real threat and very focused on the most critical threats to public safety. It certainly doesn't include "serious financial fraud"!
Do you seriously think that anyone at GCHQ has the time, or interest, to look into the average El Reg commentard's extra-marital philanderings?
Are you being deliberately difficult or do you really not realise what the issue is with allowing untargeted data collection?
Of course GCHQ is not interested in your, or my, email or our personal failings. Not unless we become a "person of interest". For example, write an exposé article for El Reg, or get our MP to ask an embarrassing question, or investigate corruption, or campaign for or against abortion, or animal rights, or organise a national strike. At that point, it would be very convenient for the government if they could look back at everything we (and our friends and family) ever did or wrote and try to find some way to discredit us.
I am not worried for myself, I am worried for investigative journalists, campaigning lawyers, radical politicians, or anyone else who should be being given the full protection of the law but instead are being shafted by it. Government ministers are the last people who should be able to authorise wide surveillance powers -- that should be an emergency power, only used in time of overwhelming national need, authorised by parliament and made in public.
I was talking to someone at a conference the other day who is selling facial recognition (and other things like gait recognition for when the system can't get a clear view of your face) to supermarkets, to add to the ubiquitous cameras they have in the shops and feed information into their already massive big data business intelligence systems. The supermarkets plan to not only link it to their loyalty card databases but also track you as you walk around the shop to see what route you take, which displays you stop at, etc. And not just statistically -- you.
This isn't the future -- the cameras are here now, the recognition software is here now, and the SI companies are looking forward to big contracts connecting it all together.
Haven't you missed the point -- or am I confused? [Or maybe both]
What the policeman seems to be saying is that the fraud amounts may be tiny in financial terms, and well within the budget the banks have planned, but that it is causing an increase in very visible crimes (burglary, mugging, etc), particularly among children. I can certainly imagine that if kids have worked out that they can often use these cards for small purchases (say £10), then they may have become very popular, even though the banks are perfectly happy to cover their customers' losses because it is only a very small total amount of fraud.
That seems to be an unexpected impact on society that could be quite important.
If Google just punted total bollocks stats to all their customers, how many would actually notice?
My company tracks how many visitors come from various Google adwords, so we would notice. As far as I know, we have no idea how many ads are served (if we are told that we don't use the information) but we do look at how many visits happen, month-by-month (and sometimes, for specific campaigns, day-by-day). We then decide if what we are being charged is worth continuing with (and, by the way, it generally is -- when compared with other methods of getting visitors such as email marketing or newsletter advertising).
Ken, despite your scepticism, this is indeed a real technique.
In my experience it is not used for the big phone number that appears at the top of the page (and which you might remember or write down and call later) but for specific applications. It is routinely used for "click-to-call", where you click on a button and your phone dials a number -- in that case the number can be allocated knowing that the call is happening immediately.
It is also used for some other cases where numbers are likely to be either called soon or not at all -- things like customer service. The re-use times are measured in minutes, and a pool of 1000 numbers are likely to be plenty.
In all cases, the caller is queried to make sure the details automatically appearing on the agent's screen along with the call are correct -- so it isn't the end of the world if the matching doesn't work properly sometimes.
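For the curious, the pool-with-reuse scheme described above can be sketched in a few lines of Python. This is illustrative only -- the class name, the five-minute TTL and the API are my assumptions, not any real call-tracking vendor's implementation:

```python
import time

class NumberPool:
    """Tracking-number pool: each caller is lent a number for a short
    TTL; once the lease expires, the number can be re-issued to someone
    else. A pool of ~1000 numbers with minute-scale reuse times (as in
    the post) can cover a large volume of click-to-call sessions.
    """

    def __init__(self, numbers, ttl_seconds=300, clock=time.time):
        self.free = list(numbers)
        self.leases = {}  # number -> (caller_id, expiry timestamp)
        self.ttl = ttl_seconds
        self.clock = clock

    def allocate(self, caller_id):
        """Hand out a number for this caller's click-to-call session."""
        now = self.clock()
        # Reclaim expired leases so numbers cycle back into the pool.
        for number, (_, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[number]
                self.free.append(number)
        if not self.free:
            raise RuntimeError("pool exhausted")
        number = self.free.pop()
        self.leases[number] = (caller_id, now + self.ttl)
        return number

    def lookup(self, number):
        """Which caller does an incoming call on this number belong to?"""
        lease = self.leases.get(number)
        return lease[0] if lease and lease[1] > self.clock() else None
```

Note that `lookup` can return nothing (lease expired) or, worse, a stale match just before expiry -- which is exactly why, as the post says, the agent confirms the details that pop up on screen rather than trusting the match blindly.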
@imanidiot: THAT is why they need to change the word. Of course you don't want to give up your car. I am not sure I do either. But there are so many benefits to society that governments will make it MUCH more favourable for you to use your "pod" for more and more things (commuting, going on holiday, ...) that eventually you will find you haven't driven your real car for three months. At that point you might decide you don't need your car any more.
But to get to that point, they need to first sell you a "pod" as a supplement to your car, not as a replacement. Maybe first of all for commuting, where a 25MPH speed limit is fine because most of the commute is spent in traffic jams, so you aren't worried about safety because the speeds are low, and it is great to be able to drink a cup of coffee and look at the sports pages on the way into work.