Algorithms and the GDPR: what are the pitfalls?

Published on 2 April 2020

This blog was written by Lukas Pannekoek

The use of algorithms by the business community has increased significantly in recent years. More and more companies are seeing how algorithms make their business processes considerably more efficient and cheaper. When algorithms process personal data, companies should be aware that the GDPR applies. Practice has shown that determining whether these algorithms comply with the privacy rules is often a difficult task. This blog describes the risks of using algorithms and the rules that apply.

An example of algorithmic decision-making

The easiest way to illustrate the subject is to start with an example. Take banks and lenders. They increasingly use algorithms to check the creditworthiness of their customers when they apply for a loan. A big advantage is that the decision-making around the loan is much quicker. Customers know in no time whether they are eligible for a loan, and the bank can, of course, handle applications with far fewer staff. After all, a smart algorithm makes human intervention largely unnecessary.
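
By way of illustration, below is a minimal, hypothetical sketch in Python of what such a fully automated credit decision could look like. The field names and thresholds are invented for this example; a real credit model would be far more elaborate.

```python
# Minimal, hypothetical sketch of a fully automated credit decision.
# All field names and thresholds are invented for illustration;
# a real credit model would be far more elaborate.

def assess_loan_application(income: float, existing_debt: float,
                            requested_amount: float) -> bool:
    """Return True if the loan is granted, without human intervention."""
    debt_ratio = (existing_debt + requested_amount) / max(income, 1)
    return income >= 20_000 and debt_ratio <= 4.0

print(assess_loan_application(income=35_000, existing_debt=5_000,
                              requested_amount=10_000))   # True: granted
print(assess_loan_application(income=18_000, existing_debt=2_000,
                              requested_amount=10_000))   # False: rejected
```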

In such a case, does an algorithm only offer advantages? If the loan is granted, the customer will not complain. But what if the application is rejected? Then, of course, the customer would like to know how that decision came about. In that case, it is up to the bank to justify the decision transparently and comprehensibly. The question is whether this is possible for all algorithms used in practice. After all, scientific research has shown that algorithmic systems can sometimes generate opaque, but also unfair, biased and even discriminatory outcomes. How is that possible?

Risks when using algorithms

First, the dataset with which the algorithm works may contain the biases of its programmers. The data underlying an algorithm often reflect the personal choices of the programmers, so personal norms and values can unintentionally influence the algorithm. An algorithm can thus take on a subjective character, while a neutral algorithm should be the starting point.
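
The sketch below, built on an invented dataset, illustrates how a skew in historical decisions carries over. Suppose that postcode correlates with a protected characteristic in the real population; an algorithm trained on these records learns and reproduces the skew, even though postcode looks like a "neutral" feature.

```python
# Hypothetical historical lending decisions; the postcodes and outcomes
# are invented. Suppose postcode correlates with a protected
# characteristic in the real population.

historical_decisions = [
    {"postcode": "1011", "approved": True},
    {"postcode": "1011", "approved": True},
    {"postcode": "1011", "approved": False},
    {"postcode": "9901", "approved": False},
    {"postcode": "9901", "approved": False},
    {"postcode": "9901", "approved": True},
]

def approval_rate(records, postcode):
    group = [r for r in records if r["postcode"] == postcode]
    return sum(r["approved"] for r in group) / len(group)

# A model trained on this data inherits the historical skew, even though
# postcode looks like a "neutral" feature.
print(approval_rate(historical_decisions, "1011"))  # ~0.67
print(approval_rate(historical_decisions, "9901"))  # ~0.33
```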

Moreover, some algorithms are so complex that even the people who work with them cannot understand why the algorithm produces a certain result. This is referred to as a black box. When such a complex and opaque algorithm also processes personal data, a data subject no longer has any insight into what happens to his or her personal data. This is contrary to the GDPR's principle of transparency. After all, it must be clear to a data subject how an organisation processes his or her personal data.
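
One possible mitigation, sketched below under the assumption of a simple linear scoring model (the weights and features are hypothetical), is to use a model whose individual contributions can be shown to the data subject, rather than a black box:

```python
# Sketch of a transparent, explainable score instead of a black box.
# The weights and features are hypothetical.

WEIGHTS = {"income": 0.5, "existing_debt": -0.8, "years_employed": 0.3}

def explain_decision(applicant: dict, threshold: float = 10.0) -> None:
    """Print the score and each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    verdict = "granted" if score >= threshold else "rejected"
    print(f"score = {score:.1f} (threshold {threshold}): {verdict}")
    for feature, value in contributions.items():
        print(f"  {feature}: {value:+.1f}")

explain_decision({"income": 30, "existing_debt": 10, "years_employed": 4})
```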

As an organisation, you naturally want to prevent algorithms from unintentionally discriminating or becoming opaque. The Dutch Data Protection Authority (‘AP’) recently emphasised once again that the GDPR also applies to algorithms that process personal data. Since the AP can impose high fines for violations of the GDPR, it is important to comply. What should you pay particular attention to?

Rules set by the GDPR for algorithms

Algorithms regularly assess personal aspects of data subjects systematically and comprehensively, and evaluate them on the basis of automated processing. For this reason, it is first of all necessary to determine whether a decision is automated within the meaning of the GDPR and whether the algorithm takes decisions that significantly affect the data subject or produce legal effects. Taking such fully automated decisions (without human intervention) is in principle prohibited, unless one of the exceptions can be invoked. For more information, see the blogs of Lora Mourcous and Micha Schimmel.

Data protection impact assessment mandatory

When using algorithmic systems, a Data Protection Impact Assessment (‘DPIA’) is mandatory in many cases, see, among others, this list. In the cases mentioned on the list, the GDPR always requires a DPIA. A DPIA is an instrument to identify the privacy risks of a data processing operation in advance. It is a comprehensive study that consists of various components. For example, a systematic description of the intended processing and processing purposes must be provided. This means substantiating why certain personal data are used for an algorithm, what the purpose of the algorithm is and why it is necessary for an organisation to use the algorithm.

In addition, a DPIA should identify the risks that may arise for the rights and freedoms of individuals. When designing the algorithm, it is therefore important to prevent and mitigate these risks as much as possible by taking appropriate safeguards and measures. You must therefore be able to substantiate how the algorithm continues to function fairly and what measures you take if it does inadvertently produce unfair results. It may be useful, for example, to have an algorithm checked regularly by an expert third party and to record the results: a kind of periodic audit for algorithms. When the processing of personal data poses a high risk, it is important to always take sufficient measures. If this does not succeed, then as an organisation you are obliged to consult the AP first before the algorithm can become operational.
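
As a sketch only, under assumed group labels and an assumed internal alarm threshold (the 80% figure is a common rule of thumb, not a standard taken from the GDPR), such a recorded periodic check could look like this:

```python
# Sketch of a recorded periodic fairness check. The group labels and
# the 80% alarm threshold are assumptions for illustration, not a
# standard taken from the GDPR.
import datetime
import json

def audit_outcomes(decisions: list[dict], group_key: str) -> dict:
    counts: dict[str, list[int]] = {}
    for d in decisions:
        granted, total = counts.setdefault(d[group_key], [0, 0])
        counts[d[group_key]] = [granted + int(d["approved"]), total + 1]
    rates = {g: granted / total for g, (granted, total) in counts.items()}
    ratio = min(rates.values()) / max(rates.values())
    return {
        "timestamp": datetime.datetime.now().isoformat(),
        "approval_rates": rates,
        "ratio": ratio,
        "flagged": ratio < 0.8,  # assumed internal alarm threshold
    }

# Record the result so the periodic checks are documented.
report = audit_outcomes(
    [{"group": "A", "approved": True}, {"group": "A", "approved": True},
     {"group": "B", "approved": True}, {"group": "B", "approved": False}],
    group_key="group",
)
print(json.dumps(report, indent=2))
```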

Rights of data subject

If it is established that the algorithm may be used, the rights of data subjects must be safeguarded. Indeed, under the GDPR data subjects have the right to control the processing of their personal data, for example by means of the right of access, the right to be forgotten, and the right to object to the data processing. It is therefore advisable to set up your organisation and business processes around the algorithm in a smart way, so that you can deal with such requests from data subjects quickly. Moreover, as an organisation you are also obliged to do so.
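
A minimal sketch of how such requests could be routed to the right business process follows below; the handler functions are hypothetical placeholders, not a prescribed implementation.

```python
# Sketch of routing data subject requests to the right business process.
# The handler functions are hypothetical placeholders.

def handle_access(subject_id: str) -> str:
    return f"compiled a copy of all personal data for {subject_id}"

def handle_erasure(subject_id: str) -> str:
    return f"deleted the personal data of {subject_id} from all systems"

def handle_objection(subject_id: str) -> str:
    return f"halted processing for {subject_id} pending review"

HANDLERS = {
    "access": handle_access,        # right of access
    "erasure": handle_erasure,      # right to be forgotten
    "objection": handle_objection,  # right to object
}

def process_request(request_type: str, subject_id: str) -> str:
    handler = HANDLERS.get(request_type)
    if handler is None:
        raise ValueError(f"unknown request type: {request_type}")
    return handler(subject_id)

print(process_request("access", "subject-42"))
```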

Furthermore, the GDPR stipulates that the processing of personal data must take place in a “fair” manner. In the case of algorithms, this means that a data controller must actively substantiate and justify why the algorithm is fair. Its use should also not lead to inappropriate outcomes. Whether this is the case depends on many different circumstances. It is therefore often a complex assessment that requires a great deal of specialist knowledge. In the case of self-learning algorithms, this assessment is even more difficult, because the outcomes keep changing with the algorithm’s new insights. So ensure ongoing, active accountability that also tests the changing algorithm for fairness, and document the process.
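
One way to document that process, sketched here with assumed version labels, an assumed ratio metric and an assumed threshold, is to test every retrained model version and log the result:

```python
# Sketch of documenting fairness checks for a self-learning model:
# every retrained version is tested and the result is logged.
# Version labels, the ratio metric and the threshold are assumptions.

from dataclasses import dataclass, asdict

@dataclass
class FairnessLogEntry:
    model_version: str
    approval_rate_ratio: float  # e.g. from a periodic audit as above
    passed: bool

audit_log: list[FairnessLogEntry] = []

def log_model_check(version: str, ratio: float,
                    threshold: float = 0.8) -> FairnessLogEntry:
    entry = FairnessLogEntry(version, ratio, passed=ratio >= threshold)
    audit_log.append(entry)
    if not entry.passed:
        print(f"model {version}: unfair outcomes detected, take measures")
    return entry

log_model_check("2020-04-v1", ratio=0.91)  # passes
log_model_check("2020-05-v2", ratio=0.74)  # flagged after retraining
print([asdict(e) for e in audit_log])
```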

In conclusion

Algorithms that process personal data are subject to the necessary requirements. Organisations are therefore well advised to (continue to) test these algorithms against the GDPR requirements. This blog has mentioned some of the requirements, but depending on the situation additional rules may be relevant. Do you have questions about algorithms and the rules on the processing of personal data? Then please contact us.
