Under the General Data Protection Regulation (GDPR), specific attention is paid to profiling and automated decision-making. Both are used in a growing number of sectors: think of banks, healthcare, insurance companies, marketing and advertising. Technological developments and the possibilities of big data analysis and artificial intelligence have made it even easier to create profiles and to take automated decisions.
Profiling and automated decision-making can be useful both for data subjects (those to whom personal data relate) and for organisations. Organisations can better segment their market and target groups and can thus better tailor their products and services to individual needs, which ultimately benefits individuals.
However, profiling and automated decision-making can also put data subjects at a disadvantage. Many people do not know that they are being profiled and do not understand how it works. Profiling can also pigeonhole a person and, if the underlying data are incorrect or incomplete, lead to wrong predictions, the (unjustified) refusal of certain products or services, and even discrimination.
What are profiling and automated decision-making?
The GDPR does not only cover the decisions made as a result of automated processing or profiling; it also applies to the collection of data for profiling, as well as the application of those profiles to individuals.
The GDPR defines profiling as “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.”
With regard to automated decision-making, the GDPR provides, in a specific provision (Article 22), that data subjects have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. Examples are the automatic refusal of a credit application submitted online or the online processing of job applications without human intervention.
When it comes to profiling and automated decision-making, therefore, three forms are conceivable:
- Creating profiles;
- Decision-making on the basis of profiles (for example: a person decides, on the basis of a profile, whether a loan can be granted);
- A decision based solely on automated processing, whether or not based on a profile (for example: an algorithm determines whether the loan can be granted and the decision is automatically communicated to the data subject, without human intervention).
Solely automated decision-making
The prohibition on solely automated decision-making applies only if there is no meaningful human intervention at all. If a fully automated process makes a recommendation about a data subject, but an employee weighs other factors before taking the final decision, the decision is not based solely on automated processing.
Please note that as an organisation (data controller) you cannot circumvent the prohibition by “simulating” human intervention. If, for example, an employee routinely applies automatically generated profiles to data subjects without any actual influence on the outcome, this is still a decision based solely on automated processing.
To qualify as human intervention, the controller must ensure that any oversight of the decision is meaningful, rather than a mere token gesture. It should be carried out by someone who has the authority and competence to change the decision and who, as part of the analysis, considers all relevant input and output data.
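The distinction can be made concrete with a small sketch. The code below is purely illustrative (the scoring rule, function names and `LoanApplication` fields are invented for this example, not taken from any real system or from the GDPR itself): in the first function the algorithm’s output is the decision, which is the situation the prohibition targets; in the second, a reviewer with the authority to depart from the recommendation takes the final decision.

```python
from dataclasses import dataclass

@dataclass
class LoanApplication:
    applicant: str
    income: int   # monthly income in euros (hypothetical field)
    amount: int   # requested loan amount in euros (hypothetical field)

def automated_recommendation(app: LoanApplication) -> bool:
    """A simple, made-up scoring rule standing in for a real credit model."""
    return app.amount <= app.income * 10

def solely_automated_decision(app: LoanApplication) -> bool:
    # No human involvement: the algorithm's output *is* the decision.
    # If such a decision has legal or similarly significant effects,
    # it falls under the prohibition described above.
    return automated_recommendation(app)

def decision_with_human_review(app: LoanApplication,
                               reviewer_decision: bool) -> bool:
    # The reviewer sees the recommendation but is authorised to depart
    # from it; their judgement is the final decision, so the decision
    # is no longer based solely on automated processing.
    recommendation = automated_recommendation(app)  # shown to the reviewer
    return reviewer_decision
```

Note that merely passing the algorithm’s output through a human who always confirms it (rubber-stamping) would look like `decision_with_human_review` but behave like `solely_automated_decision`; that is the “simulated” intervention the text warns against.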
Legal effect or similarly significant effect
The prohibition on solely automated decision-making applies only if the decision has a legal effect, such as the refusal or granting of rent benefit or child benefit or the automatic disconnection of your mobile phone because the bill has not been paid, or similarly significantly affects the person concerned in some other way.
As to the latter, the GDPR’s recitals mention only two examples: the automatic refusal of a credit application submitted online and the online processing of job applications without human intervention. On that basis, determining whether a decision ‘similarly significantly affects’ a data subject is not easy. For credit applications, it would mean that not only an automatic credit check when applying for a mortgage is covered, but also one carried out when renting a bicycle abroad or buying a television in instalments.
Automated decision-making is particularly relevant in online advertising. According to the European regulators, targeted advertising in principle has no similarly significant effect. Think, for example, of an advertisement for an online fashion shop based on a simple demographic profile: ‘women in Amsterdam’. However, situations are conceivable in which a targeted advertisement does significantly affect a data subject. This depends on the intrusiveness of the profiling, the expectations and wishes of the persons concerned, the way the advertisement is delivered, or the specific vulnerabilities of the persons targeted.
Processing that generally has little effect on individuals can in practice have a significant impact on certain groups in society, such as minority groups or vulnerable adults. For example, someone in financial difficulties who regularly sees advertisements for online gambling may sign up for these offers and incur further debts.
However, automated decision-making, whether or not based on profiling, remains permitted where it is expressly authorised by law applicable to the controller (for instance for the purposes of monitoring and preventing tax fraud and evasion, or to ensure the security and reliability of a service provided by the controller), where it is necessary for entering into or performing a contract between the data subject and a controller, or where the data subject has given his or her explicit consent.
It is therefore not easy to assess when an automated decision falls under the GDPR prohibition. The European regulators (the Article 29 Working Party) attempted to clarify this in guidelines issued in October 2017, but it is expected that this will have to be clarified further, whether or not by the national supervisory authorities.