Re: Nobody has yet asked the obvious:
From information I have seen elsewhere, Gatwick provides an app for mobile phones but this was affected too.
You must be in Manchester then.
London Stansted is not in London. It's all a marketing strategy to convince foreigners they are landing in London.
Caching is not the answer to a fundamental failure of resilient network design.
because the law in effect at the time they committed the breach/abuse limits the fine to £500,000. They cannot be fined more than the law permits. That is why.
Yes, it is possible the ICO may continue to be toothless and fine lightly.
But consider this.
Any data subject in the EU that wishes to make a complaint about a data abuse or breach, has the power to report the breach to ANY GDPR supervisory authority in the EU, not just the ICO.
The GDPR regulation requires that supervisory authorities across member states share information and work together.
If the ICO develops a reputation for being weak on issuing penalties, UK data subjects can take their complaints to other supervisory authorities outside of the UK.
That is probably true for the Data Protection Act, which is now defunct. But GDPR was specifically developed with social media companies in mind, given the way the data was being shared. This was recognised by the EU. Under GDPR, there is no single fixed maximum fine which applies to everybody.
The maximum fine payable by any company is dependent upon their company turnover.
The fine payable is determined by the ICO, taking many factors into consideration, including how cooperative the company has been with the ICO, and lies between zero and the upper limit calculated from the company's global turnover.
There was a maximum fine under the now-defunct Data Protection Act; there is no fixed maximum fine under GDPR. Instead there is an upper limit determined by a percentage of the company's global turnover, and the fine, in pounds sterling, can be anywhere from zero to that upper limit. The higher the company's turnover, the higher the upper limit; there is no cap on the upper limit itself.
In Facebook's case the fine they would pay under GDPR would be anywhere from zero to $1.6 billion.
For a company with a higher turnover, the upper limit on the fine would be correspondingly higher.
That is true, but under the Data Protection Act £500,000 is the most they can fine.
Under GDPR, fines can be much larger, and in Facebook's case, because their turnover is so high, the maximum fine would be $1.6 billion.
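To make the arithmetic above concrete, here's a quick sketch of the GDPR upper-limit rule (Article 83(5): the greater of EUR 20 million or 4% of annual global turnover). The turnover figures below are illustrative round numbers, not anyone's actual accounts.

```python
# Upper limit under GDPR Article 83(5): the greater of 20 million
# or 4% of annual global turnover (currency units assumed consistent).

def gdpr_upper_limit(annual_global_turnover: float) -> float:
    """Return the maximum possible fine for a given global turnover."""
    return max(20_000_000.0, 0.04 * annual_global_turnover)

# A company with $40bn turnover: 4% of turnover dominates -> $1.6bn.
big_company_limit = gdpr_upper_limit(40_000_000_000)

# A small company: the 20 million floor applies instead.
small_company_limit = gdpr_upper_limit(100_000_000)
```

The actual fine is then anywhere between zero and that limit, at the supervisory authority's discretion.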
It might be a free service but that does not give the company providing that service the right to break the law.
The law sets out everybody's expectations, it's a standard from which everybody works and complies. The public knows what their rights are and the suppliers of services know what they have to provide.
It's completely inappropriate then to say "There is a legal standard which you must follow, but if you're providing a free service, you can totally ignore it". How do customers know what their rights are if the providers of free services are given complete carte blanche to ignore the standard and do whatever they want?
What is particularly worrying about the shadow accounts is that, firstly, people did not consent to Facebook collecting data on them, and secondly, data subjects have no way to request that Facebook cease processing and storing that data.
These are both in themselves breaches of the GDPR regulation.
If the data was unencrypted then they HAVE done a bad job.
The lawyer is right about law not being applied retrospectively, but there is an interesting legal issue here. That of when they reported the breach. They could have reported the breach under DPA but they left it and reported it under GDPR. So which is relevant, when the breach occurred, or when they detected it, or when they reported it?
They have known about a possible data breach since last year. The company's data protection team must be staffed by morons. They could have reported the breach under the Data Protection Act and received a maximum £500,000 fine; now that they have chosen to report the breach under GDPR, the fine could theoretically run into hundreds of millions of pounds. Why? Because their turnover is £10 billion.
It seems to me FB have two issues to contend with: Firstly this court case and secondly compliance with GDPR which they must have in place by 25th May. They can't use this court case as a delaying tactic to comply with GDPR. They have had two years in which to prepare for GDPR.
But nobody can bring a case under GDPR right now, so this case can only be prosecuted under whatever existing data protection legislation Ireland has in place.
Yes, it's kind of moot, or will be in a few weeks, but the claimant is acting under existing legislation and FB need to fight their case based on that legislation. It will be up to the claimant whether he wants to discontinue proceedings feeling that matters have been overtaken by GDPR on 25th May.
I doubt the claimant will be able to change the case and say he wants to progress the case under GDPR law. Need to start new case.
You're combining two acronyms: SNAFU, FUBAR.
Easy to detect fraud. They record details of all transactions and the time and date of when it was performed.
All they have to do is sit back and wait for people to complain money has been stolen from their account. Then when the complaints roll in, investigate those complaints relating to transactions which occurred during the time window of the change.
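The triage described above is straightforward to sketch: given the complaints and the transaction log, pull out only the transactions for complaining accounts that fall inside the window of the change. The record fields (`account`, `timestamp`) are hypothetical assumptions for illustration.

```python
from datetime import datetime

# Hypothetical sketch: narrow an investigation to transactions that
# (a) belong to an account someone complained about, and
# (b) occurred during the time window of the system change.

def transactions_in_window(transactions, start, end):
    """Keep only transactions whose timestamp falls in [start, end]."""
    return [t for t in transactions if start <= t["timestamp"] <= end]

def suspect_transactions(complaints, transactions, start, end):
    """Cross-reference complaining accounts with in-window transactions."""
    complained_accounts = {c["account"] for c in complaints}
    return [t for t in transactions_in_window(transactions, start, end)
            if t["account"] in complained_accounts]
```

This is only a filter, of course; each surviving transaction would still need a human investigator.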
The problem is always going to be: when you construct test data, how realistic is it? I did some work on a system a few weeks ago where I could not obtain a data model of my source database, and only over time did I discover problems in the live dataset which I needed to cater for. Had I constructed test data, I would have built it to match what I expected the data model to be, and my software would have failed.
An employee working with sensitive live data simply has to sign an NDA. That doesn't guarantee they won't steal the information, so you also have to consider who the people working on that data are, and which country they are in. And in the worst case, you can pseudonymise it by tokenisation.
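A minimal sketch of pseudonymisation by tokenisation, as mentioned above: sensitive values are replaced with deterministic tokens, so test data keeps its shape and referential integrity while the real values stay out of the test environment. The key name and record fields are illustrative assumptions, and this is a sketch rather than a vetted scheme.

```python
import hmac
import hashlib

# Assumption: a secret key managed on the production side only.
# Anyone without the key cannot map tokens back to real values.
SECRET_KEY = b"keep-this-in-production-only"

def tokenise(value: str) -> str:
    """Replace a PII value with a stable, keyed, non-reversible token."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# Example record with hypothetical field names.
record = {"name": "Jane Smith", "account": "12345678"}
pseudonymised = {k: tokenise(v) for k, v in record.items()}
```

Because the token is deterministic, the same customer appears as the same token across tables, which is what keeps joins in the test system working.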
What is the problem with using live/real data in a test system? As long as it is protected in all the usual ways. And as long as you ensure the test system is kept separate and isolated from the production system so you don't inadvertently update the production system with test transactions from the testing?
Are you really advocating bringing down the bank? Shame on you....
Top brass hasn't a clue what is going on. They are rarely IT professionals.
When my employer offshored system support to the Asian subcontinent to save money, we saw one key effect - the duration of operational outages increased dramatically.
One incident took several months to investigate; then I got involved. I gave them a few pointers, made them think, and within 24 hours they had found the fault.
There are huge cultural problems with using low cost workers from the Asian subcontinent, and you shouldn't use them for any kind of support or development. I s'pose I shouldn't complain too much: their incompetence kept me in work.
I am not sure whether that would be the case here, but if it was, they would never admit it.
Hold on. There may be irony involved because of the government's appalling record of IT failure, but that does not preclude the government from criticising companies. These companies need telling off and holding to account, and the government is the right party to do that.
It is just a shame that the people they employ on House of Commons Select Committee hearings know nothing about IT.
Agreed. Politicians don't specify or build the systems.
Blame the civil service for constantly changing their minds about what they want.
Blame the companies building the systems for poor project management, and poor technical staff.
Hold different accounts in different banks, and make sure those banks really are using separate IT systems.
The government can demand what it likes, companies like this hide behind obfuscation. They rarely disclose what actually happened.
Look at the big BA scandal recently: they bluffed their way through with an incomplete explanation, claiming they had a power surge when too many of their systems were turned back on at the same time, but they never disclosed what caused the original power failure or why their battery and generator systems did not kick in.
That's a novel definition of BBC I hadn't come across before.
And that folks is why you do pilot projects.
GDPR does not restrict what you hold in a log file, and it does not restrict how long you keep the log files for. The IETF guidance is a complete load of nonsense, written by people who have not studied GDPR.
I wouldn't claim to be an expert in international law, but the mechanisms are in place to fine companies outside the EU which process data on EU citizens and do not comply with GDPR.
Probably down to treaty agreements between countries.
There are 6 lawful reasons for non-sensitive PII data, and 10 lawful reasons for sensitive (or special category) data :)
You don't always need to have consent from the data subjects.
The 3 days is utter crap. GDPR does not mandate any retention time for anything.
You can store PII data in log files for as long as you want so long as you can justify it.
> The retention period itself isn't the main factor. It's what you're doing with the logs in that time that really matters, and how you enforce that pattern of use.
No that is not right.
You can do whatever you want with the logs so long as the data subjects whose data is in those logs have given you consent. One of the lawful reasons you can give for processing data is consent. The other main lawful reason is performance of a contract; in other words, you are collecting and processing (which includes storing) the personal data in order to deliver the service to them.
What you can't do is collect PII data from a user, tell them you are collecting it to be used for a particular purpose, and then later do something different with the data which the user doesn't know about. If you want to do something new with the data, use it in a new process or for some other purpose, you need to go back to the user (data subject) and ask for their permission.
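That purpose-limitation rule can be illustrated with a toy check: processing is only permitted for purposes the data subject has already agreed to, and anything new means going back for fresh consent. All names and purposes here are hypothetical.

```python
# Toy illustration of purpose limitation: a registry of the purposes
# each data subject has consented to, and a check before processing.

consents = {
    "jane@example.com": {"service_delivery", "billing"},
}

def may_process(subject: str, purpose: str) -> bool:
    """True only if this subject consented to this exact purpose."""
    return purpose in consents.get(subject, set())

may_process("jane@example.com", "billing")    # permitted
may_process("jane@example.com", "marketing")  # new purpose: go back and ask
```

The point is that the check is per purpose, not per subject: having consent for one use does not grant consent for all uses.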
The retention time issue comes under a different principle of GDPR. And it is a fundamentally important principle of GDPR. You should only keep PII data for as long as is necessary, and you need to be able to justify why you are keeping it for that length of time.
You are pretty much correct.
But I think summarising the GDPR in a couple of paragraphs like that is too simplistic. There is a set of principles and data subject rights that need to be adhered to, and a full description of those cannot be given in a couple of paragraphs. Your text covers only one right and one principle.
Personally identifiable information is any information from which a living individual can be identified.
-> GDPR doesn't cover any data on dead people.
Things like IP addresses, names, addresses, email addresses which contain a person's name from which they can be identified (even if it is a business-related email address), postcodes, medical information, political affiliations.
You raise an interesting point in relation to aggregation. The key question is this: can a person be identified from the data (whatever that data is, aggregated or not)? The answer might be no, but then the next question is: if this dataset is combined with another at some point in the future, can the person then be identified? If the answer is yes, and that aggregation of datasets occurs and you haven't taken adequate steps to protect the data from a breach, then you are at risk of being fined under GDPR.
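The aggregation risk is easy to demonstrate with a toy example: neither dataset below names anyone on its own, but joining them on shared quasi-identifiers (postcode plus birth year, both invented for illustration) links a named person to a medical condition.

```python
# Toy linkage attack: two datasets, each harmless alone, combined on
# quasi-identifiers. All records here are fabricated for illustration.

medical = [  # "anonymised" health data: no names
    {"postcode": "AB1 2CD", "birth_year": 1970, "condition": "diabetes"},
    {"postcode": "XY9 8ZW", "birth_year": 1985, "condition": "asthma"},
]

electoral = [  # separately published register: names, no health data
    {"name": "J. Smith", "postcode": "AB1 2CD", "birth_year": 1970},
]

reidentified = [
    (e["name"], m["condition"])
    for m in medical
    for e in electoral
    if (m["postcode"], m["birth_year"]) == (e["postcode"], e["birth_year"])
]
# reidentified now ties a named individual to their medical condition
```

Which is exactly why "we removed the names" is not, by itself, adequate protection.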
This is incorrect.
If the data is to be used for any of marketing purposes, transfer to a third party, or the processing of sensitive data, EXPLICIT consent IS required, not "may be required".
ISPs store certain data for 12 months because they are obliged to by law.
GDPR does not conflict with this. GDPR says you can store the data for as long as you need, so long as you can justify it, and one permitted justification is having to comply with other law.
It is actually a good law; it places control back into the hands of the public and away from megalomaniacs like Zuckerberg.
You are missing the point. You don't need to do anything to log files. That is just the IETF talking complete bollox because they don't understand GDPR.
The only reason for doing something to a log file is to take you out of the scope of GDPR.
If you have no personally identifiable information in a log file then you fall outside of the scope of GDPR, period.
If you have PII in a log file then you have to comply with the data subject rights, the principles of GDPR and the entire law.
You can legitimately hold PII in a log file for as long as you need it. There is in fact no time restriction specified in law.
Port numbers are not PII, so you can ignore the IETF on that point.
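One way to pursue the "no PII in the log file" strategy described above is to mask the one field that arguably is PII, the full IP address, while leaving ports and everything else intact for debugging. The log format and the zero-the-last-octet approach are illustrative assumptions, not requirements from GDPR itself.

```python
import re

# Sketch: zero the final octet of any IPv4 address in a log line,
# so the line no longer identifies a specific host, while keeping
# the port number, which is not PII, untouched.

IPV4 = re.compile(r"\b(\d{1,3}\.\d{1,3}\.\d{1,3})\.\d{1,3}\b")

def mask_ip(line: str) -> str:
    """Replace the last octet of each IPv4 address with 0."""
    return IPV4.sub(r"\1.0", line)

mask_ip("conn from 203.0.113.42:51234 refused")
# the address becomes 203.0.113.0 while the port survives
```

Whether a truncated address is sufficiently de-identified for your situation is a judgement call you would need to be able to justify.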
EU data subjects can challenge you on your retention policy, and if they are not happy with it, report it to a supervisory authority in an EU country, which will then investigate. If you cannot adequately justify it, you could end up being fined. I would suggest a more humble approach.
Go read my post on this issue.
You can store logs for as long as you want, so long as you can justify it and storing them for a certain amount of time to comply with other laws is perfectly acceptable. Your justification is that you have a legal reason for doing so.
Doctor Syntax is entirely correct.
You have to first identify who is the data controller and data processor in a data relationship.
Then as data controller you have to write into your contracts with your data processors GDPR terms. So you have to tell them what you expect of them.
And additionally, it is not enough to take their word that they are complying with GDPR, you as DC have to check they are. Audits might be necessary.
The 3 days figure (72 hours) is the length of time you have to report a breach to the ICO having DETECTED a breach.
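Since the 72-hour clock runs from detection, not from the breach itself, the deadline calculation is trivial; a one-liner, with an invented detection time for illustration:

```python
from datetime import datetime, timedelta

# The 72-hour reporting window starts when the breach is DETECTED.

def report_deadline(detected_at: datetime) -> datetime:
    """Latest time by which the breach must be reported to the ICO."""
    return detected_at + timedelta(hours=72)

# Detected 09:00 on 25 May 2018 -> must report by 09:00 on 28 May 2018.
deadline = report_deadline(datetime(2018, 5, 25, 9, 0))
```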
You can store logs for as long as you want. If those logs contain PII then you have to store them for only as long as necessary and be able to justify the retention time.
In relation to GDPR and retention, you don't do what ISPs tell you because they tell you. They do not have the authority to overrule the law.
You can store data for as long as you want, so long as you can justify it, but the principle is you should delete it when you no longer need it to do the job you are doing with it.
If you have a legal obligation to store data for, say, tax purposes for 6 years, then even though your relationship with the data subject to whom that data belongs comes to an end, you can continue to store it for the full 6 years, on the grounds that it is being retained for legal reasons. If the data subject contacts you and requests the data be deleted, you can refuse, citing your legal reason to retain it.
But an ISP cannot order you to keep the data for a certain amount of time. An ISP ordering you to keep data does not absolve you from the effect of the law if a data subject makes a complaint to the ICO.
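The retention reasoning above reduces to a simple check: once the relationship ends, data may only be kept while a documented justification (such as a six-year tax obligation) still applies. The retention periods and field names below are illustrative assumptions, not figures from GDPR itself.

```python
from datetime import date

# Assumed statutory retention periods per record type (illustrative).
LEGAL_RETENTION_YEARS = {"tax_record": 6}

def may_delete(record_type: str, relationship_ended: date, today: date) -> bool:
    """True once any documented legal-retention period has expired."""
    years = LEGAL_RETENTION_YEARS.get(record_type, 0)
    expiry = relationship_ended.replace(year=relationship_ended.year + years)
    return today >= expiry

# Relationship ended Jan 2018: a deletion request in 2020 can be
# refused; by 2024 the justification has expired and deletion is due.
may_delete("tax_record", date(2018, 1, 1), date(2020, 1, 1))
may_delete("tax_record", date(2018, 1, 1), date(2024, 1, 1))
```

Note the principle cuts both ways: once the legal justification expires, "we might need it someday" is no longer a defensible reason to keep the data.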
You can store stuff for as long as you like as long as you can justify it. GDPR does not specify any time restrictions on retention.
GDPR does not restrict the type of data written into log files.
The question is around the type of information you are writing into a log file and whether that is considered to be personally identifiable information.
You might adopt a strategy of not storing any PII, and if you can achieve that, then you don't need to comply with GDPR.
Once you store a single item of PII then you have to comply.
A full IP address of a piece of equipment belonging to a natural living person, which enables that person to be identified is considered to be PII.
You should continue to store as much as you need in a log file to enable that log file to do its job of providing you with sufficient information for you to debug a problem.
The statement about not storing port numbers is utter nonsense: port numbers cannot be used to identify a living person.
Yes, I did stump up money! I was also there in the House of Commons that day to talk to my MP.
I am not saying it should not be challenged, I believe it should be. If you don't challenge it, you don't win.
What I'm saying, and illustrating with a real past example, is how easy it is for the government to defeat the challenge.
You obviously did not read my post properly. I WAS involved in the judicial review of IR35 legislation many years ago. I simply stated what actually happened in that review and what the lead counsel for the government said. One of their arguments was quite simply that primary legislation is created by parliament, and that parliament (in theory) represents the people. Courts and judges are unelected.
They have no mandate and no authority to strike down legislation created by parliament, unless said legislation is in contravention of other law and is considered unconstitutional.
It was an argument which was used successfully years ago; it was effective then, and I'm sure it would be effective again today if it were used.
Don't be fooled by the illusion of democracy we live in. If Tony Blair can start a war on a false illegal pretext, and a large number of people die, and then hold multiple inquiries which clear him, then I am pretty sure strings can be pulled, particularly in the area of the secret intelligence services, that will ensure this challenge is not successful.
Biting the hand that feeds IT © 1998–2018