Data anonymisation does not have to provide a 100 per cent guarantee to individuals' privacy in order for it to be lawful for organisations to disclose the information, the UK's data protection watchdog has said. The view of the Information Commissioner's Office (ICO), detailed in a new code of practice (108-page/2.15MB PDF) on …
If you make sure you "pre-anonymise" your data so that even you have no clue what you're uploading, then there will be no way these agencies can find out!
Of course, a downside could be the value of said random data, but hey; at least it's anonymous...
OK, a little more seriously: the real solution should be obvious enough: if you require privacy, why share your information in the first place?
Both Google (YouTube) and Microsoft (Microsoft ID) are currently doing their best to persuade me to use my real name and information. Google is a bit more intrusive than MS on this, but even so, they're both persistent.
And I keep telling them "forget it". Right up to a point where I might go "Screw it" (YouTube), but we're not there yet. How many people allow themselves to be suckered in, only to wonder a few months later how it's possible that their name seems to end up all over the place?
Privacy starts with yourself.
ICO sells out
if there is a "remote" chance
The problem is that the "remote chance" can be quite close.
Consider anonymised data that could be provided by HMRC, NHS, DWP, etc., plus that from data-mining companies such as credit reference agencies, Google, major retailers, etc. Add statistical analysis with the kind of computing power behind Google Translate, and de-anonymisation could be almost trivial. Once hosted in a friendly jurisdiction, aggregated personal data could be sold to anyone willing to pay. It would not even have to be very accurate; that has never daunted the credit reference agencies.
The question is *how* remote?
I think in some cases it's as long as it takes for someone to run a *very* simple query against two publicly available datasets.
There's a *lot* of wiggle room in this statement.
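That "very simple query" is essentially a linkage attack: an equi-join between an "anonymised" dataset and a public one on shared quasi-identifiers. A minimal sketch, using entirely hypothetical data (the datasets, columns, and records below are made up for illustration):

```python
import csv
import io

# Hypothetical "anonymised" health records: names removed, but
# quasi-identifiers (postcode, birth date, sex) left in.
anon_csv = """postcode,dob,sex,diagnosis
SW1A 1AA,1970-01-01,F,asthma
EC1A 1BB,1985-06-15,M,diabetes
"""

# Hypothetical public dataset (think electoral-roll extract)
# containing the same quasi-identifiers alongside names.
public_csv = """name,postcode,dob,sex
Alice Smith,SW1A 1AA,1970-01-01,F
Bob Jones,EC1A 1BB,1985-06-15,M
"""

anon = list(csv.DictReader(io.StringIO(anon_csv)))
public = list(csv.DictReader(io.StringIO(public_csv)))

# The "very simple query": join the two datasets on the shared columns.
def key(row):
    return (row["postcode"], row["dob"], row["sex"])

names = {key(row): row["name"] for row in public}

for row in anon:
    match = names.get(key(row))
    if match:
        print(f"{match}: {row['diagnosis']}")
```

With unique combinations of postcode, birth date, and sex, every record re-identifies: the loop prints each name next to the supposedly anonymous diagnosis.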
If an organisation releases anonymised data on an individual, and that individual suffers a measurable loss (reputational damage, financial loss or whatever), is there any liability back on the organisation that released the anonymised data?
If there isn't, is there any reason for a data controller to take any care whatsoever over the quality of the anonymisation that they perform? For example, suppose they choose to anonymise a data set by replacing an ID with its MD5 hash. Arguably this requires "disproportionate effort" to re-identify the individuals, but it's not actually that hard. I'm worried that we'll see datasets that have weak anonymity being traded specifically because the anonymity can be broken.
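To see why hashing an ID is weak anonymisation, note that when the ID space is small and enumerable, re-identification is just a dictionary attack: hash every candidate and look up the published value. A sketch, using a made-up 4-digit ID for brevity (real ID spaces are larger but often still enumerable in seconds):

```python
import hashlib

# Suppose a data controller "anonymises" records by publishing
# the MD5 hash of each ID instead of the ID itself.
published = hashlib.md5(b"1234").hexdigest()

# Dictionary attack: precompute the hash of every possible ID.
# For a 4-digit ID that is only 10,000 hashes.
rainbow = {
    hashlib.md5(str(i).zfill(4).encode()).hexdigest(): str(i).zfill(4)
    for i in range(10000)
}

# A single lookup reverses the "anonymisation".
print(rainbow[published])  # prints "1234"
```

An unkeyed hash of a low-entropy identifier is pseudonymisation at best; without a secret key or salt, the mapping is trivially invertible by brute force.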
Given the generally toothless nature of ICO, is there any point in organisations doing more than paying lip service to ICO rules? Are they better off putting the money they would spend on data security into a fund to pay ICO fees if they get caught?
missing the point
The point is that it is trivial to de-anonymise data in most cases, so it shouldn't be made available in the first place; it's not that companies need to spend more effort trying to anonymise it.
Re: missing the point
Congratulations! You win the prize; please let me know who/where to send it.
As useful as a Chocolate Fireguard.