Following the enactment of the EU General Data Protection Regulation (GDPR) on 25 May 2018, there has been a marked shift in the way organisations treat the risk to personal data.
In the days before GDPR, boards were beginning to view cyber security as a business risk, but the impact of data loss often could not be adequately quantified, and the cost of strengthening cyber defences was not grounded in a legal framework and so was harder to justify.
The advent of GDPR, with the promise of large fines based on annual turnover on one hand and a clear regulatory framework allowing compliance to be demonstrated on the other, made the business risk case much easier. Even if organisations didn’t care about the protection of the personal data they processed before the introduction of GDPR, they do now.
But simple compliance with the GDPR does not necessarily guarantee the personal privacy of employees and customers. The UK Information Commissioner’s Office (ICO) now uses the term “data protection by design” rather than “privacy by design”. This is at least partly because “privacy” is to some extent subjective, which data protection rules cannot be. Also, some who claim compliance with the data protection rules may be following the letter of the regulations without necessarily following their spirit.
For example, in the case of consent, the requirement states: “In general, it should be as easy for them [the data subject] to withdraw consent as it was for you to obtain consent.” This is not something I have experienced so far, possibly because it is quite hard to achieve. Most people encounter this when browsing a new website and being asked to “accept all cookies”. Accepting is very easy: you simply click a button. If you change your mind, going back and revoking permission can, in many cases, be a challenge.
But consent is only one of the legal bases for processing personal data. Others include a contractual or legal obligation that can only be met by processing the data, or “legitimate interest”, which the need to request consent has also brought to the forefront.
As part of the cookie consent process, there is often a “legitimate interest” button tucked away at the bottom of the screen. Legitimate interest is the processing of personal data as part of the legitimate interest of an individual, third party or company in delivering a service, or because the processing has wider social benefits. In other words, it covers something a consumer would expect to be necessary in order to consume the service, or the commercial interests of the provider.
Paddy Francis, Airbus CyberSecurity
The organisation processing the data must, however, be able to show that there is no harm to those whose data is processed and that there is no less intrusive way to achieve the objective of the service. Legitimate interest can be a useful legal basis for processing without explicit consent where there is no contractual relationship or legal requirement. However, it still needs to be declared and the reasons justified. This may, in part, be the reason that some websites include legitimate interest in their consent process. Nonetheless, while cookies requiring consent are “off” by default, those relating to legitimate interest are generally “on”.
A real-world example where legitimate interest might be used, but could be difficult to justify, is the virus scanning of emails leaving an organisation. It could be argued that this is in the interests of the reputation of the company and of any third parties receiving infected emails, but at the same time an identifiable individual might fear that unwittingly sending such an email could be held against them.
Another example is where an employee has a folder on their desktop named “personal” that they use to support their own business activities. Would the employer have a legitimate interest in accessing a user’s “personal” folders? And if they did, would the evidence gained by doing so be admissible in an employment tribunal?
The first scenario might better be covered by consent and the second by a contract of employment giving the employer that right.
Applying GDPR principles
“Lawfulness, fairness and transparency” form the first of the seven principles of GDPR, which clearly need to be considered upfront in any new project and maintained through the life of a system as it evolves. The other six principles are: purpose limitation; data minimisation; accuracy; storage limitations; integrity and confidentiality; and accountability.
While the whole of the GDPR needs to be considered from day one of a new project, “purpose limitation” and “data minimisation” are probably the most important to consider first.
Understanding the purpose of any venture is critical. With GDPR, this refers to the purpose for which you are collecting and processing personal data. Without understanding this, you cannot know what data you need to collect and will not be able to establish the legal basis for collecting and processing it.
The purpose must be documented and set out in a privacy policy, or equivalent, available to all users. Identifying at the outset all the purposes for which data will need to be processed is important, because processing the data for other purposes later on will mean obtaining further consent and updating your documentation.
Data minimisation, on the other hand, means collecting only the data necessary for the purpose. The word “necessary” matters here because it applies to the choice of solution as well as to the data itself: if solution A requires the collection of more personal data than solution B, the fact that the extra data is necessary for solution A does not satisfy data minimisation, because it is not necessary for solution B. This is important not only for compliance with the GDPR, but also because it minimises the amount of personal data that needs protection and, ideally, minimises or eliminates the need to collect or hold sensitive personal data.
Pseudonymising user data
Another approach to protecting users’ data is the concept of pseudonymisation. For example, a set of medical data could have the user’s identity replaced with a unique random pseudo-identity and the relationship between the user’s identity and pseudo-identity stored separately. The data would then be regarded as pseudonymised. It would not be fully anonymised, because the user would still be indirectly identifiable using the mapping data. However, the pseudonymised data could be processed safely providing the mapping to their real identity were kept safe. If it was never necessary to identify the specific user, then the mapping to the real identity need not be kept and the data might be considered fully anonymised.
However, some care is needed here, because if the data processor does not hold the mapping but it still exists elsewhere, the data would not be considered fully anonymous. Also, if the data contained other information that could be used to identify an individual by correlation with other data – for example, date of birth and postcode correlated with the electoral register – then the data would still only be pseudonymised, so it would need to be protected as personal data.
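The separation described above can be illustrated with a minimal sketch. This is not from any specific product or standard: the field names, the record structure and the use of random tokens as pseudo-identities are all illustrative assumptions. The key point it demonstrates is that the mapping from real identity to pseudo-identity is produced as a separate artefact, to be stored and protected independently of the working data set (or discarded, subject to the caveats above).

```python
import secrets


def pseudonymise(records, id_field="patient_id"):
    """Replace a direct identifier with a random pseudo-identity.

    Returns the pseudonymised records plus a separate mapping table
    (real identity -> pseudo-identity). The mapping must be stored
    and protected separately from the pseudonymised data; if it is
    securely destroyed everywhere, the data moves closer to being
    anonymised, subject to correlation risks from other fields.
    """
    mapping = {}
    pseudonymised = []
    for record in records:
        real_id = record[id_field]
        if real_id not in mapping:
            # One random token per individual, so records belonging
            # to the same person can still be linked for processing.
            mapping[real_id] = secrets.token_hex(16)
        cleaned = dict(record)  # copy; leave the original untouched
        cleaned[id_field] = mapping[real_id]
        pseudonymised.append(cleaned)
    return pseudonymised, mapping


# Illustrative usage with hypothetical medical records.
records = [
    {"patient_id": "alice@example.com", "condition": "asthma"},
    {"patient_id": "alice@example.com", "condition": "hay fever"},
]
safe, mapping = pseudonymise(records)
```

Note that both of Alice's records receive the same pseudo-identity, so analysis that needs to link records per individual still works, while her real identity appears only in the separately held mapping.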
Ever shifting privacy approaches
GDPR has certainly had an impact in the three years since its introduction. Organisations have adapted their approaches to personal data to conform to the regulation, and the authorities have increasingly held them to account for breaches of those regulations.
There has also been increased public interest in privacy, driven by information and reporting around the various data breaches and by Apple’s announcement earlier this year that it is removing advertisers’ ability to track user activity across apps and devices.
While I don’t expect the regulation to change significantly in the future, its continued enforcement and the growing public understanding of privacy in the digital world are likely to keep changing our approach to privacy for some time to come.