The first time
I've ever felt justified, or even motivated, to use the phrase "Political Correctness Gone Mad"
investigations like this are all part of the ongoing Accountability Theatre
Until all such intel and data gathering entities are legally required to make their data auditable with digital immutability, reviewed, on demand, by impartial juries (not the State and its poodles), the routine civil abuses and steady growth of authoritarian Police States will continue apace...
First line of defence for me and most of my clients.
This particular issue is trivial for SB users. If you've set the relevant sandbox to delete on exit, whatever google et al have dumped onto your machine (cookie caches, profiles, unwanted updates, plugins etc etc) all evaporate on exit.
More important than that, in the ten years or so since I started bullying my clients into using it, it has caught and prevented at least a dozen ransomware attacks and several dozen other malicious attempts to infect users. Typically, the ransomware will exhibit its normal behaviour (eg lock screen with warning that your hard drive has been encrypted and you need to pay bitcoins to this address to recover blah blah) and my client calls me in a panic. The usual fix is "right click on the Sandboxie icon and choose terminate all programs". Threat and sweat eliminated instantly.
It's also particularly good for testing out software that you're not sure you can trust. Install it into its own sandbox (which you set NOT to delete on exit) then run as normal. If it does anything suspicious, it can't cause harm outside the box.
Unscrupulous users have suggested that it's also a good way to run "30 day trial" software forever (delete on expiry, rinse and repeat) but you didn't hear that from me.
The only downside is that it is so good at preventing change that you have to remember to disable the Sandbox to permit those changes you actually want (like browser updates, adding plugins etc)
I would say it has prevented far more damage than all my other routine defences put together (firewalls, av, anti-keyloggers, etc)
it's not open source and I can't imagine Microsoft permitting a formal security audit.
Given their close connection with the TLAs I'd place a reasonable bet that there's a backdoor in the code, but that's just my paranoia. More importantly, unlike open source alternatives like Veracrypt, there is no way to prove the absence of a back door.
I really don't get it. Anyone using bitlocker clearly has some desire for security and/or privacy, which implies a little bit more awareness of the issues than the common herd. How can they not be aware of that fundamental trust problem?
The only thing I can think of is that they're concerned about script kiddies or thieves or family members getting access to their data but don't mind if it's Microsoft or the Government. Weird!
Not that I use Chrome for anything but the occasional test. When I need the chrome engine, I use SRWare Iron which studiously strips out the standard Chrome poison.
But I have clients who use Chrome and I have managed to persuade some of them to use Keepass.
If Chrome is able to log those users in without consent, it implies they're keeping our passwords in plaintext (or, possibly, encrypted with a key of their own) rather than as the usual salted hash.
Anyone know the score on that?
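For anyone unfamiliar with the distinction, here's a minimal stdlib-only Python sketch of the salted-hash storage a password service is normally expected to use (function names and the iteration count are illustrative, not any vendor's actual scheme):

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash) -- only these go on file, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

The point being: with only a salt and a hash on file, the service itself cannot recover your password, so it couldn't silently reuse it to log you in elsewhere. If they can, they're storing something stronger than a hash.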
the upside to the Snowden revelations was the massive uptake of End to End encryption. Still only a small percentage, but we're now seeing millions of Whatsapp/Telegram/Signal etc users, instead of the few tens of thousands who were using it.
The upside of crass authoritarianism like this proposed childish version of Age Verification will be, as you suggest, a massive uptake in VPN technology.
These assaults on basic liberties are training citizens in the vital art of subverting and bypassing government. Not yet quite at critical mass, but every little helps. Hopefully we can get there before it's too late and you've got a generation of nanny-state raised kids who don't know any better
can someone please explain what is SUPPOSED to happen as a result of google's "delisting"?
I came across this BBC Page a few months back, in a similar context. It lists all the pages google has allegedly delisted.
I tried a dozen or so of the links. You get to the BBC story. It's usually fairly obvious who would have an interest in suppressing the story. So then I went to google and pasted in their name to see what would come up. In all but one case the BBC story itself came up in the first page of results. In ALL cases, some other equally damaging reference to the person/story also appeared on the first results page.
So what exactly is the alleged effect/benefit of the delisting?
yes to this and...
Had the plaintiff been (instead) a "person of interest" to the FBI and they'd requested his entire history, I somehow doubt that FB would have dared give them the same response...
One of the hardest things to explain to the "If you've nothing to hide" fools is that if anyone can discover where you are, they also know where you aren't. Which, along with remotely "casing the joint" (google street view ferinstance), gives them all they need to know about when to break in.
The level of detail they can get from this extra level of surveillance is the icing on the cake. Now they can figure out what time you go to bed, get up, leave the house etc etc. Even more detailed than the "Smart Meters" they're trying to impose.
Welcome to Panopticon World...
The mere fact that they're prepared to do this has rendered their entire enterprise fundamentally untrustworthy. (or "even more so" for those who had already lost faith)
From this point on, until and unless they give us access to their code, we will never know whether, where, when and why similar algorithmic controls are being targeted against us in the "free" West. There are certainly many western governments, and many authoritarian advocates on both the right and left, who would welcome "search censorship" with open arms
Governments only really have one tool in their box (that they can understand well enough to deploy) and that is the hammer of Coercion. Naturally every problem they are confronted with becomes a nail.
And, in the context of Fake News and Social Media, that approach clearly and verifiably works. The Chinese don't have much of a problem with Fake News on Weibo. Better still they don't even have the problem of Real News that might be embarrassing to the government.
That's the kind of WinWin scenario that's bound to appeal to authoritarians around the planet, including, obviously, our own authoritarians in Parliament (not just Government).
The effective strategy for dealing with the problem is a minor variant on the strategy for dealing with Accountability Theatre. It wouldn't "prevent" Fake News (which is the authoritarian solution) so much as expose it and leave it dangling in the wind when set alongside Real News stories with verifiable sources and audit trails. And, of course, it would make much more difficult the suppression of Real News.
So don't expect to see Governments embracing this approach any time soon. But there's nothing to stop the "honest" media treading that path. It's in their interests more than most
"The use of this technology should be transparent because if used in certain ways, it can distort democracy.
Every electronic device on our streets "monitoring/collecting data" e.g. electronic road signs, should have a marker where you can look up online exactly what resolution of video/image/audio is being collected. What processing of the image is taking place - facial/ANPR, what cross-referencing is taking place against say, Government databases.
Where this data is being stored, what is the purpose/justification, who has access to it and what criteria are being used to access such data/images. How many times this data has been accessed.
You get the point."
Well said, Sir (or Madam). We certainly do!
I made the same point about the Tesco's hold on your personal data in my "Datastophe" blog back in 2007. But I also made the point that it is nowhere near as sensitive (or valuable) as the Data (then, recently) "mislaid" by the HMRC (see same blog)
I didn't make the point then which I do nowadays. Governments are - universally - the biggest bullies in the playground. The only reason we need to tolerate them at all is that, when they work remotely like they're supposed to, they help protect us from the other, lesser, bullies.
But increasingly, they are a) hoovering up increasing volumes of our personal data, either illegally, or only legally after a hasty adjustment to their laws and b) increasingly abusing that data against the citizenry either to suppress dissidence or to exert social control.
The excess hoovering now routinely includes their self appointed "right" to demand our private data from the likes of Tescos (or ISPs, or Banks etc etc) and THAT is the principal reason we should now object to "corporate surveillance"; the mandatory right of the State to add it to their own ballooning collection.
Ultimately, of course the only credible protection against State abuse is going to be solving the problem of Accountability Theatre. I'm hopeful that may be closer than you might think...
To begin with, not relevant to your post but to the article and Levy's stereotypical response; I make my obligatory reference to Accountability Theater, which covers the issue of Surveillance amongst others.
In response to your rhetorical question: "Do you really want public employees making decisions about what is "moral or right" rather than "legal"?"
I draw your attention to the Nuremberg trials where it was made explicit, in international law, that no citizen can use, in their defence, the argument that they committed the obviously immoral act only because they were "following orders" (legal or otherwise). This imposes a direct obligation on each citizen explicitly to consider the wider moral implications of their actions, over and above the Law of their land.
Clearly, for example, if the Law mandates the persecution of a class, race, religion or gender on no other basis than those attributes, it follows, from the Nuremberg judgements, that is the duty of the citizen to challenge and disobey such laws.
So yes, we do want public employees, when making decisions on how to implement public policy, first to understand the law and what it mandates but second to consider whether in the circumstances of a given case, implementing the law as mandated would itself breach the implied higher laws of International ethics.
An obvious example of where precisely such employee overrides should have taken place (in the UK) has been aired in considerable detail recently in the context of the Windrush scandal, where civil servants have (for the most part) enthusiastically implemented the "hostile regime" designed primarily by the current Prime Minister during her role as Home Secretary.
In my view both the politicians who mandated that regime, and the civil servants who implemented it have all committed serious criminal offences worthy of incarceration (though it would have been more fitting, had the option still existed, to have deported them to a prison colony)
you're missing a major point. Which is not unreasonable, given that Mutant 59 didn't make the point in the first place, or perhaps I should say "didn't make the point strongly enough".
These micro-payments alone would net the likes of google and facebook billions per year. That kind of money will attract AND FUND genuinely honest alternatives who regard their obligation to their users (who will probably also own the service) as fiduciary rather than predatory.
Frankly I strongly approve both strategies: Chaff to reduce the value of data to the parasites, and micro-payments to encourage the development of honest services.
Of course, nobody will read this as I'm posting a day too late and the tide's gone out but I want to put it on record anyway.
This is a relatively simple example of another problem which can be fixed by the solution to Accountability Theatre.
Had the solution been in place for this instance, every data item or collection they'd ever received, together with all correspondence and recorded conversations about the project (including, for example, the internal emails from their Academic Colleagues at Cambridge, protesting at the "get rich quick" scheme) would have been hashed on receipt or creation and those hashes committed to an immutable audit trail. Mandatory access controls would have ensured that no data could be processed (or, in appropriate cases even accessed) without confirmation that its hash was duly recorded, along with identity and proof of access.
This process would render doubts and discussion about the length of time it takes to get warrants utterly irrelevant as the audit trail would either confirm the completeness of material - or reveal which items were missing or tampered with. As I say, (Relatively) Simples.
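A minimal sketch of that hash-on-receipt-plus-immutable-trail idea (my own toy construction in Python, not any particular product's implementation):

```python
import hashlib
import json

class AuditTrail:
    """Append-only hash chain: each entry's link commits to every
    entry before it, so tampering with any recorded item breaks
    all subsequent links."""

    def __init__(self):
        self.entries = []
        self._tip = b"\x00" * 32   # genesis link

    def record(self, data: bytes, meta: dict) -> str:
        """Hash an item on receipt and chain it into the trail."""
        item_hash = hashlib.sha256(data).hexdigest()
        payload = json.dumps({"item": item_hash, "meta": meta}, sort_keys=True)
        link = hashlib.sha256(self._tip + payload.encode()).hexdigest()
        self.entries.append({"item": item_hash, "meta": meta, "link": link})
        self._tip = bytes.fromhex(link)
        return item_hash

    def verify(self) -> bool:
        """Recompute every link; any edit or deletion shows up here."""
        tip = b"\x00" * 32
        for e in self.entries:
            payload = json.dumps({"item": e["item"], "meta": e["meta"]},
                                 sort_keys=True)
            if hashlib.sha256(tip + payload.encode()).hexdigest() != e["link"]:
                return False
            tip = bytes.fromhex(e["link"])
        return True
```

Note that only hashes and metadata live in the trail; the sensitive material itself never needs to be published for the completeness check to work.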
Solving the larger problem of Facebook (et al) leeching private data from their victims is not quite so simple, by virtue of scale. But the ability to prove, indisputably, who has agreed to, or authorised or implemented or paid for (whatever) would go a long way to forcing transparency into their murky world.
The key phrase in your contribution is:
"It's like weapon, it can be a gun in the hands of a police officer saving you, or an AR-15 in the hands of a murderer shooting at you, if there is no sensible regulations and controls."
What you seem to be unaware of is that there ARE no SENSIBLE regulations and controls on the police (or any other agents of the state who might use technology like this on your phone/laptop/desktop etc)
We'd all be a lot more comfortable with State Surveillance if we knew (and could prove) that those doing the surveillance were themselves under the strictest form of surveillance. That's why I keep rabbiting on about Accountability Theatre.
We've been using dropbox for several years as the collection and distribution mechanism for our clients (our software creates encrypted customer backups and dumps them in the dropbox where we collect them and store them in 3 offline silos; we also use dropbox to distribute updates to our software)
Began to get nervous following the Snowden revelations and started looking around for alternatives using owner controlled encryption. Eventually found Sync (sync.com). We're now using paid 1Tb accounts on both though we're gradually migrating it all across to Sync. So far very impressed with them. Did a reasonable amount of due diligence and the security seems to stack up, though I've not seen them peer reviewed by the crypto community.
Much better level of control over who gets to see what and one feature I particularly like is that while we pay for the Tb account, we can share ALL of that with users who only sign up for the free 5Gb account. And I mean share as in full read write access, not just links to files.
But what we're increasingly using it for is secure communications. Create the document somewhere in an unshared area of your Sync box and you can send "privacy enhanced links" to your contacts, specifying passwords, expiry dates and download limits - with (anon) notification on download. I've actually nagged sync into going one step further and offering the option of email verified one time passwords, with notification, which would then make it a very easy way to deal securely with confidential and private material, complete with proof of delivery. They've put it on their "to think about" list.
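As an aside, the expiry-date part of such links is straightforward to do server-side. Here's a minimal Python sketch (the names, token format and signing key are hypothetical illustrations, not Sync's actual scheme):

```python
import hashlib
import hmac
import time

SECRET = b"server-side signing key"   # hypothetical; held only by the service

def make_link(file_id: str, lifetime_s: int, now=None) -> str:
    """Return a self-expiring token 'file_id:expiry:signature'.
    Assumes file_id contains no ':' characters."""
    now = time.time() if now is None else now
    expires_at = int(now + lifetime_s)
    msg = f"{file_id}:{expires_at}"
    sig = hmac.new(SECRET, msg.encode(), hashlib.sha256).hexdigest()
    return f"{msg}:{sig}"

def check_link(token: str, now=None) -> bool:
    """Valid only if the signature matches AND the expiry hasn't passed,
    so neither the file id nor the expiry date can be tampered with."""
    now = time.time() if now is None else now
    try:
        file_id, expires_at, sig = token.rsplit(":", 2)
    except ValueError:
        return False
    msg = f"{file_id}:{expires_at}"
    good = hmac.new(SECRET, msg.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, good) and now < int(expires_at)
```

Because the expiry is inside the signed message, a recipient can't extend the deadline by editing the link.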
In part my motive for this spiel is to raise awareness among fellow readers that there are alternatives to Dropbox we can trust but also to nudge more people into using their communication features and adding their nags to mine!
what HAS prompted this response?
In contrast to Dan55's assertion that they must be haemorrhaging users, I see no evidence of that. Indeed, I'm in a running battle with colleagues, family and friends to get them to desert Skype BECAUSE it doesn't include E2EE and because I object even to the possibility that the NSA can eavesdrop on our calls at will. Most people don't give a damn.
So - tinfoil hats on please - the only obvious reason I can think of for Microsoft's sudden apparent support for conversational privacy is much the same as the reason we thought Microsoft had bought Skype in the first place - i.e. to provide access on demand to the TLAs. I suspect the intention is to make it look like E2EE and market it as such, and thus avoid a rush to true E2EE, which is the TLAs' worst nightmare.
So your point is critical. Without trusted independent verification of the source code and a means of verifying that the version we're actually using conforms to that code, their claims will be meaningless.
And I suggest that one way we can measure the authenticity of this project is to watch the reactions of the TLAs and authoritarian politicians. If they campaign against it - to the point that Microsoft are forced to defend the project in court - then it might just be real. If the response is muted, the conclusion will be obvious...
In either case, the Code verification is mandatory for the purposes of Trust.
are you saying there IS an 'EFF Panopticlick' option? (i.e. something which defeats the browser id attack) If so, I, for one would bite yer arm off for a link...
So far I've been to the 'EFF Panopticlick' page but other than the depressing evidence that I still haven't managed to defeat their identifier test, could see nothing that suggests solution or even mitigation...
Generously we should allow them a year from first recruitment.
If after that time they haven't pronounced on the major elements of the Surveillance State - such as ANPR - and ruled its implementation unethical on the basis of its obvious Accountability Theatre, then its credibility will be on a par with fig leaves...
The easiest way to control most of the features you hate (or love) in Windoze is to install Winaerotweaker possibly combined with Spybot Antibeacon to kill telemetry. The solution to Search is Everything. All these are free.
WinAT contains about 200 settings organised by functional area (eg Desktop, Context Menu, Network etc)
Here are some of the features I personally favour:
Disable nearly ALL the "Call Home" features
App lookup in Store
Auto update of Store Apps
and Block all Ads
(although if you're still paranoid, install Anti-Beacon and remember to select all the items on the 2nd tab as well)
Disable driver updates (the ones most likely to bork your system/s)
Disable Windows updates easily (easiest is to set Ethernet connection as "metered") (more detailed version below)
Verbose logon messages (so if something delays shut down or startup you can usually identify the culprit)
Show seconds on your taskbar clock (didn't even know that was possible till I spotted it in WinAT)
Add various items to the Context menus, eg
File Hashing menu (brilliant if you're a regular hash checker, which I am)
"Kill Not Responding Tasks"
Shutdown menu (and change default behaviour)
Remove the Shortcut and Shortcut arrow from your desktop icons
More detail on controlling Windows Updates:
Setting Ethernet as metered will halt the update process till you OK it but doesn't control what gets delivered.
For total blockage of Windows Updates, disable the service. If you merely wish to control when updates happen (and partially restrict what gets updated), there's a Microsoft tool which treats the update process as a troubleshooter, but don't let that deter you.
Run it when you know updates are available. Choose the "Hide Updates" option when it's finished checking for updates. Tick those you do NOT want, close the "troubleshooter", then permit the update in the normal way.
For even tighter control (pro users and up only) use gpedit
/admin templates/windows components/windows update/configure automatic updates
click enabled and choose "2 - Notify for download and auto install"
and you almost return control of the Windows update process to where it used to be pre-W10
As for Everything, I cannot figure out why Microsoft hasn't bought him out.
It's genuinely a life changer for anyone with millions of files on their system (I currently have 7.6 million). It does what you kind of expected file search programs to do before you actually had to use one. i.e. INSTANTLY find all occurrences of relevant matches anywhere on your system. I'd really love to know how he's done it because he's clearly using the technology far better than Microsoft do. Example: I'd read someone raving in similarly favourable terms about it and sceptically thought, yeah, right. I'll try it out not expecting it to deliver.
Installed in seconds. Told me it was indexing my system. I thought fair enough - expected it to take days (like microsoft's indexing) or at least hours. It took less than a minute for my (then) 6.25 million files spread across 16 drives/partitions.
I didn't believe it, so I began to test it. Found files in places I didn't even know existed.
It has vastly improved my file management by helping me to avoid unnecessary duplication and reminding me where I store files relating to arbitrary topics. Who needs the Windows Search joke?
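By way of illustration, here's a toy Python version of an in-memory filename index. The widely reported reason Everything builds its index so quickly on NTFS is that it reads filenames straight from the volume's Master File Table instead of walking directories as this sketch does; the walk is the slow part, while queries against the in-memory list are near-instant either way:

```python
import os

class FileIndex:
    """Toy in-memory filename index: build once, then every substring
    search is a scan of an in-RAM list of paths -- no disk access
    per query, which is what makes lookups feel instantaneous."""

    def __init__(self):
        self.paths = []

    def build(self, root: str):
        # The slow part: Everything avoids this walk by reading
        # the NTFS MFT directly (this sketch stays portable).
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                self.paths.append(os.path.join(dirpath, name))

    def search(self, term: str):
        term = term.lower()
        return [p for p in self.paths if term in p.lower()]
```

Even this naive version answers substring queries over millions of paths in a fraction of a second once the list is in memory; the clever part of the real product is building and maintaining that list without the walk.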
Genuine curiosity. This table of International Broadband speeds shows 19 countries with faster average download speeds than the US (and 30 faster than the UK).
Does any one of those permit the kind of throttling and content based restrictions which Pai is arguing will "improve" internet performance? I haven't studied their broadband policies but I haven't heard of anyone else having to resort to this kind of nonsense to achieve a better service.
So, on what basis, other than the favouring of selected vested interests, can the proposal be defended? More importantly, why aren't questions like that being aired in America?
didn't spot your comment till after I'd replied to Rob V
if you look at the examples I provide in that response, you'll understand that we're talking about the routine anonymous protection of digital data. Ours is a very light-weight solution where it is much easier to keep track of the hashes you've used to protect individual data items. The PK solution is too clumsy for what we anticipate will eventually be perhaps half a billion such transactions a day.
You might be interested in the comment I made a few weeks back (and the links therein)
who probably won't get to read this because the crowd has moved on, but I'll put the reply here for the record, if for no other reason than being able to refer back to it myself at some later date
Another key feature of our solution is that we never hold or publish sensitive data. All we guarantee is proof of integrity of the data protected by the system. We have no idea what those data are and we don't need or want to know.
It's broadly suitable for anyone wishing to be able to prove - if challenged at a later date - that the relevant data remains as it was when registered.
Here are some of the things I've personally considered it useful to protect, anonymously:
Ensuring I can win any "their word against mine" arguments:
eg recordings of sensitive skype conversations I've had - the most significant of which were with sundry commercial services who have failed to deliver on (whatever) or threatened me with sanctions over perceived failures on my part (eg a 3 year row I had with Npower)
or more often, even when not in dispute, just wishing to ensure I had verifiable evidence of the exchange.
dash cam footage I've captured of extremely dangerous driving by other motorists (some of which I've passed to the Police)
dash cam footage of an accident where I was at fault but which was a minor collision (I sent that to my Insurance company. I needed to ensure that the other party didn't overclaim the damage)
drafts of intellectual property concepts I'm working on at various stages, but not yet ready to publish
covert recordings of interviews conducted between a disabled relative and a DWP agent performing an assessment of her condition with the intent of reviewing her benefit entitlements
Sundry predictions I've made where I anticipated needing to be able to prove that I'd made the prediction ahead of the actual event **
and so on.
In nearly all of the cases above, there was no need or desire on my part to publish either the material or my association with it. It was merely a sensible precaution.
Other examples I haven't personally used include the protection of photographs, music, poetry and literature, and any other digitally captured creative work, particularly in draft form
Contracts where neither party seeks or needs publicity
Entire audit trails - for example the accounts for a commercial company - including all the detail they would never normally publish. (But if challenged, they can use the proof of integrity to show that an entire data set remains as it was at the relevant date)
In fact the list is endless. It is telling that in today's world even some Reg readers find it difficult to understand why Anonymity is a perfectly valid and reasonable requirement and how that doesn't conflict with people still wanting to be able to prove their claims if challenged. It's an example of what I call Anonymous Accountability.
**such as my 2015 prediction that the Republicans would nominate Trump. I didn't predict his actual election though! I was confident that the repubs were rabid enough to nominate him but I was also confident that the Americans as a whole were not stupid enough to elect him. Definitely got that one wrong!
as it happens, I'm working on something very similar, which, if I get it right, will also deal with the problem of things like anonymous proof of various attributes like Age, Nationality, gender, arbitrary memberships, etc
Of course, I can't tell you too much, or I'd have to kill you, but I'll give you one use case for free.
Our system will allow authors to register their "ownership" of a document anonymously, with a view to third parties to whom the document is distributed being able to prove its integrity. It also allows them to revoke that registration later as having been superseded by a later version of their document. Obviously, we don't want anyone but the legitimate author to be able to issue such updates/revocations. Hence the need for anonymous authentication where, in this case, all you're proving is that you are the same entity who created the original document...
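One simple way to get that "same entity" property without revealing identity is a hash commitment. The Python sketch below is a generic textbook construction, offered as an illustration rather than a description of our actual system:

```python
import hashlib
import os

def register(document: bytes) -> tuple[dict, bytes]:
    """Author generates a random secret; only its hash (the commitment)
    goes on the public record. Returns the public entry and the
    secret, which the author keeps private."""
    secret = os.urandom(32)
    entry = {
        "doc_hash": hashlib.sha256(document).hexdigest(),
        "commitment": hashlib.sha256(secret).hexdigest(),
    }
    return entry, secret

def revoke(entry: dict, secret: bytes) -> bool:
    """Anyone can check that the revealed secret matches the recorded
    commitment -- so only the original (still anonymous) registrant
    can authorise the revocation."""
    return hashlib.sha256(secret).hexdigest() == entry["commitment"]
```

One caveat: revealing the secret spends the commitment, so a scheme supporting successive versions would publish a fresh commitment with each revocation, chaining them together.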
this suggestion is predicated on the notion that a nude photo without a face (or name) is rarely a hostage to fortune.
if users could submit one or two "face only" photographs, with some sensible evidence that it was indeed their own face (eg an automated web cam session using the face recognition they're already experimenting with), then farcebook could introduce a new rule.
No photograph which includes a recognizable face (nude or otherwise) can be posted, except by the owner of the face, or with the explicit recorded permission of the face-holder. That would kill many birds with one stone...
and, in addition...
why not have a simple rule along the lines of:
if any project requires ongoing support from more than (n) personnel after year(y), then the contract should include the training of suitably vetted or recruited in-house staff with, say, a 12 month hand-over period...
I'm sure one size wouldn't fit all, but as a template, that's the kind of model that might begin to wean us off the current model.
In addition to MrBanana's comment, the 15 month delay might well be justified as the date by which the compensation will be handled AUTOMATICALLY, but there's no reason at all why we shouldn't be able to lodge MANUAL claims today...
Would be nice to add a small legal tweak to the effect that any claims not dealt with within, say, 30 days, will automatically be approved if submitted to a small claims court (with appropriate evidence of course).
That should make the buggers' eyes water...
as they seek to enshrine, in law, what can only be described as Accountability Theatre
I strongly recommend, for anyone who didn't see it when it first emerged in April 2015, the excellent John Oliver take on Surveillance, which includes his visit to Moscow to meet Ed Snowden. It is the best non-technical description of the significance of all the main issues that I've ever seen
It's an educated guess, because I don't claim inside knowledge.
But I reach that conclusion on the basis that they're not making it freely available for home use. To be fair, it doesn't look like Microsoft have anything but honourable motives on this occasion (although I would question their own security - if the FBI comes calling are they in a position NOT to release such images?) (one of the many questions Facebook will also have to answer)
They make the software available in various cloud offerings and have donated it to a Missing Child charity amongst others. So why aren't they simply allowing us all to download a copy and do our own hashing and upload the results instead of the image - as suggested in the first post on this thread (John Robson)
I can think of only two possible explanations. First is that the process is so power hungry, you'd need a Bitcoin mining rig to run it. That doesn't look feasible from what I've read about the process. Looks like it might take about as long as creating a couple of thousand hashes. Under a second on most desktops.
The second is that they don't want it in our sticky little hands because it would be relatively trivial to find ways to modify target images in such a way that they wouldn't be detected, so to preserve the value of the service, they don't want the great unwashed to access it.
In short, they're relying on "Security Through Obscurity" and, like most such attempts, that'll work for a few months, until the obscurity is cracked...
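To see why handing out the matcher helps attackers, consider a toy perceptual hash - an "average hash" over an 8x8 grayscale image. The real system is far more sophisticated, but the principle is the same: knowing the algorithm tells you exactly which borderline pixels to nudge.

```python
def average_hash(pixels):
    """Toy perceptual hash: 64 bits, one per pixel of an 8x8
    grayscale image -- bit is 1 if the pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# An 8x8 "image": left half dark (10), right half bright (200).
img = [[10] * 4 + [200] * 4 for _ in range(8)]
h1 = average_hash(img)

# An attacker who knows the algorithm nudges one borderline pixel
# just past the mean, flipping a hash bit with no visible change.
tweaked = [row[:] for row in img]
tweaked[0][0] = 120   # mean is ~105, so this pixel's bit flips 0 -> 1
h2 = average_hash(tweaked)
assert hamming(h1, h2) == 1   # hash drifts while the image looks the same
```

A matcher using a distance threshold can be walked out of range one bit at a time this way, which is presumably exactly the capability they don't want in circulation.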
Oh, and by the way, the (partial )solution to sharing intimate private images is sharing one time keys which BOTH/ALL parties have to re-combine to access the images/data (as outlined in Digital Telepathy)
is just as evil as political Accountability Theatre especially when the commercial entity is more powerful than most governments.
The solution is the same. Yes you may have legitimate secrets which need protecting but that doesn't mean that NO ONE outside your organisation should be allowed to access them. It just means we need a publicly trusted audit team to do the job on our behalf. In the case of Google and similar commercial giants, that implies a team of a few dozen, at least 2/3 of whom will need serious IT analytical skills including Security analysis. We also need representation from one or two Civil Liberty specialists (eg ACLU, LIBERTY etc) and all need to be bound by NDAs - unless they find evidence of illegality.
They need the time and budget to do the job properly and they need the right to access ANY parts of the system at ANY time, under appropriate secure monitoring of their own activities, in order to be able to confirm that everything is/was compliant with relevant regulation and remains so.
None of which would aid either competitors or gamers.
The commentards will have moved on by now but I'll still put this on record.
Yes I am deadly serious.
Bit surprised at the hostility.
Although I didn't know it when I started using this technique (2008), Google actually patented it back in 1998
So I'm not proposing an entirely novel concept. I've since seen references to it being used in many "serious" authentication or confirmation dialogues where it is vital to be sure that the user really is awake. So it is ideally suited to the Level 3 driving scenario
If such techniques are NOT used then (as implied by some of the other responses, and suggested by some of the developers) we should skip Level 3 and go straight to Level 4. (where the cars are certified to be able to take complete control of the vehicle for any preplanned route)
The problem with that is that they need the experience gained at Level 3 to get to Level 4. Skipping it would probably add up to 5 years to the Level 4 development schedule
as I've said elsewhere:
in some of the software I develop, I use deliberate random errors in certain dialogues to spot humans trying to answer questions without inspection or thought.
It occurs to me that something similar is required for the "Level 3" driverless cars (which are supposed to be able to handle almost all situations but still need close human monitoring). i.e. the software should regularly (but randomly) send false alarms to the control panel and measure the time and accuracy with which the human deals with them. If their response time exceeds a safe threshold, take the earliest opportunity to park the car and cede full control to the human (with an auto reset of, say, the next day?)
The first time I tried this, I expected user hostility. Instead they treated it as a game and told us that it made an otherwise tedious task much more interesting and entertaining. I suspect the same could happen in the Level 3 scenario...
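The monitoring loop described above might look something like this. It's a sketch under my own assumptions (the threshold, decoy rate, and function names are all illustrative, not from any real vehicle stack): occasionally inject a decoy alarm, time the driver's acknowledgement, and flag a handover if they're too slow:

```python
import random
import time

RESPONSE_LIMIT = 2.5    # seconds; assumed safe threshold, to be tuned from trials
FALSE_ALARM_RATE = 0.1  # probability of injecting a decoy on any given check

def attention_check(respond, rng=random.random, clock=time.monotonic):
    """Occasionally raise a decoy alarm and time the driver's response.

    `respond` is a callback that blocks until the driver acknowledges the
    flagged warning, returning True on acknowledgement. Returns True if
    the driver is judged alert, False if the car should park at the next
    safe opportunity and cede full control.
    """
    if rng() >= FALSE_ALARM_RATE:
        return True                # no decoy this cycle; assume alert
    start = clock()
    acknowledged = respond()       # e.g. driver taps the flagged warning
    elapsed = clock() - start
    return acknowledged and elapsed <= RESPONSE_LIMIT
```

Injecting `rng` and `clock` keeps the check deterministic under test, which matters if the threshold ever has to be defended to a safety regulator.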
Surveillance of citizens is indeed a problem, but not THE problem.
This answer applies just as much - if not more - to those carrying out the surveillance of citizens, as to the Cops the comment was triggered by...
for the connected classes
much more significant win for the Microsoft PR team.
You're most kind
One point I'd emphasise is that for private citizens (or even authorities acting outside their working "parameters") the recording would only be trusted because it will (MUST) have been demonstrated conclusively that no one - especially including the authorities - can ever access their data without their uncoerced and informed consent. That's the difficult bit (and that sentence is my contender for understatement of the year)
But I genuinely believe it is possible to get there from here...
"complaints falling by up to 90% etc "
I agree. The balance of experience already seems better than favourable. But it is being done in a half-assed way. They're capturing evidence but essentially ignoring the rules of evidence, and the opportunity to make it mathematically verifiable. That omission is likely to be by design. They are probably aware that a bulletproof audit trail would severely constrain their freedom to abuse.
1 wearing body cams is made mandatory
2 the law is changed in line with my fictional "History of Digital Telepathy"
Citizen - Innocent Until Proved Guilty
Authority - Guilty Until Proved Innocent
where the digital recording of EVERY activity by any authority in the conduct of their official duties is mandated and proven by entries on an immutable database, available for inspection by (publicly) trusted independent Auditors (to eliminate Accountability Theatre)
Where it is made illegal for an instruction to be given without that recording, and illegal to follow such instructions without confirmation that the recording exists (most of which can be automated)
So that, whenever an authority is accused of stepping over the mark, everyone will know that they must have a recording. They would not, however, be obliged to reveal it. But we the people (in our role as Jury) would be entitled to read such refusal as admission of guilt.
Of course, there is the legitimate problem of equipment failure, which suggests that no single point of failure should be permitted: i.e. two recording systems (at least) should always be available, and if either one breaks, the authority should suspend their activities at the earliest opportunity until it is repaired or replaced.
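The "immutable database" the scheme depends on can be sketched with nothing more exotic than a hash chain, where each entry commits to everything before it. This is my own illustration, not any deployed evidence system; the field names are made up:

```python
import hashlib
import json
import time

GENESIS = "0" * 64

def append_entry(log: list, record: dict) -> dict:
    """Append a record to a hash-chained log. Altering any earlier entry
    changes every subsequent hash, so auditors can detect tampering."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = {"ts": time.time(), "record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = {**body, "hash": digest}
    log.append(entry)
    return entry

def verify(log: list) -> bool:
    """Recompute the whole chain; returns False if any entry was altered."""
    prev = GENESIS
    for e in log:
        body = {k: e[k] for k in ("ts", "record", "prev")}
        if e["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

A production system would also need the chain head periodically countersigned or published somewhere outside the authority's control, otherwise the whole log can be silently rewritten from scratch.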
And that, ladies and germs, is how we might wrest control from government and start making their lives as much a misery as they've spent the last few thousand years doing to us....
at 68%, it's only interesting enough to justify further experiments, not to rewrite the textbooks.
They don't generally regard these things as settled until significance reaches at least 95%, and for a discovery claim the bar is five sigma - roughly 99.99997% (that, for example, was the trigger for the official announcement of the Higgs boson)
he was driving in an open top car (or with the windows wide open) through a built up area.
Otherwise, the authorities should have been told as (legally) forcefully as possible, to fuck right off...
ah, that's interesting.
Is it not possible (I naively assumed this was routine) to have a "provisional" authorisation code which would deal with that situation? (Ideally confirmed by a "signature" from the customer, but let's not run before we can walk...)
Someone help me understand...
I presume they don't store payment card details. (if that assumption is wrong, then all bets are off and I withdraw my question)
So, assuming they don't: yes, they need to process the data, but presumably that's done in a couple of secure sessions (one with the customer, one with the card issuer), and once they've received a payment authorisation they have no further legit use for the data. So how has an attacker breached their defences? Are the secure communication protocols broken, or what?
Completely agree, but don't have the space or time to answer the questions in your final para.
The short version is:
1 Incentivise the use of private, notarised personal-data "wallets", securely stored in various devices and capable of answering some questions without revealing the actual data (eg whether someone is above or below an age constraint can be confirmed without revealing their date of birth). Also capable - with the co-operation of couriers who buy into the idea in order to feed off the "privacy preferred" market - of supplying one-time "address keys" which even the courier can expose only in sufficient detail for their current sorting requirements (and which the merchant or supplier never gets to see or store)
2 in the few instances where data really does need to be warehoused, compartmentalise it so that one warehouse may hold, for example, address data but not names or other private data; while another might hold dates of birth etc. (Only linkable with more one time keys etc)
3 impose strict video-logged access controls on such data warehouses so that if any human accesses the protected data, (publicly) trusted auditors will always be a) notified and b) able to discover exactly who accessed the data, when, why and where (and, of course, have full legal rights to blow the whistle if they spot anything underhand).
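The age-constraint example in point 1 can be sketched as a predicate the wallet answers and a notary countersigns, so the merchant gets a verifiable yes/no but never the date of birth. This is my own toy illustration (the key handling and names are assumptions, and a real scheme would use proper zero-knowledge or attribute-based credentials):

```python
import hashlib
import hmac
from datetime import date

# Hypothetical: this key lives with the trusted notary, never the merchant.
NOTARY_KEY = b"demo-notary-key"

def attest_over_age(dob: date, years: int, today: date) -> tuple[bool, bytes]:
    """Answer 'is the holder at least `years` old?' and sign the answer.

    The signed message contains only the predicate, the answer and the
    date of the check - the date of birth itself is never disclosed.
    """
    cutoff = date(today.year - years, today.month, today.day)
    answer = dob <= cutoff
    msg = f"over_{years}:{answer}:{today.isoformat()}".encode()
    tag = hmac.new(NOTARY_KEY, msg, hashlib.sha256).digest()
    return answer, tag  # merchant verifies the tag against the notary's key
```

The merchant stores the boolean and the tag as its audit evidence; there is simply no date of birth on its systems to leak in a breach, which is the whole point of point 2's compartmentalisation as well.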