Advertising on online platforms matters greatly to platforms and advertisers alike. Advertising revenue is the primary source of income for social media platforms, while advertisers are drawn to the platforms' large user bases and targeting capabilities.
This appeal, however, extends beyond legitimate advertisers to fraudsters.
Since 2018, Facebook users in the Netherlands have been confronted with fake advertisements. Using supposed endorsements by well-known Dutch people, the advertisements tempt users to invest in a product or service. The endorsement, however, is fabricated and used without the celebrity's consent. The consumer is left empty-handed: the payment is lost and the advertiser has usually vanished without a trace.
But what about the platform that facilitates the advertisements? After all, the platform earns money from the fake advertisements too. Is it obliged to take measures? And can it be held liable for the damages? Two recent cases against Facebook, from 2019 and 2020, provide more insight.
Three million euros. That is the total damage suffered by 300 people who thought they had invested in bitcoins. The advertisement promised that the investment would make them wealthy "just like these Dutch celebrities", referring to a picture portraying television producer John de Mol. In reality, De Mol had nothing to do with it, and the investors never saw their money or the bitcoins again.
A recently published article shows how difficult it is to act against these fraudsters. First, their identity is hard to trace. Second, they are often foreign parties, mostly based in Cyprus, and the Dutch Authority for the Financial Markets (AFM), for example, does not supervise companies located in another Member State.
At the end of May 2019, De Mol therefore brought a case against Facebook, arguing that the platform was obliged to stop allowing the misleading bitcoin advertisements bearing his name and image. Facebook, for its part, maintained that it was already doing enough against the fake advertisements.
Does not have to, may not and cannot
Facebook's defence was, in essence, threefold: the platform does not have to take preventive measures; the prohibition of general filtering obligations does not allow it to take such measures; and, given the technical tools at its disposal, it cannot take them.
The court in preliminary relief proceedings held otherwise. Facebook does not act as a neutral conduit: it has an advertising policy that it actively enforces. In addition, the claim to take measures was sufficiently specific, so it did not amount to a general filtering obligation. Finally, from the fact that hardly any similar advertisements had appeared since the announcement of the preliminary relief proceedings, the court concluded that it is indeed possible for Facebook to take additional measures.
For an extensive analysis of this judgment, read the (Dutch) legal note by SOLV partner Douwe Linders here.
In 2020, the Amsterdam court in preliminary relief proceedings was again asked whether Facebook (now) takes sufficient measures against fake advertisements.
This case also concerned bitcoin advertisements, this time using the image and name of a presenter of Eenvandaag. The presenter and his employer, broadcaster AVROTROS, asserted that Facebook was acting unlawfully because the platform (still) took insufficient measures to keep the fake advertisements in question off the platform.
Facebook had immediately removed the four fake advertisements in question from its platform. The dispute therefore centred on whether more can be required of Facebook: can it be expected to prevent these advertisements from appearing on the platform at all?
One reason this is difficult is that the fraudsters are becoming increasingly sophisticated. Cloaking is particularly popular: the misleading landing page is hidden ("cloaked") behind an innocent-looking advertisement, so that the fraudulent content remains invisible to Facebook's review systems.
To counter this phenomenon, Facebook has set up an anti-cloaking team and integrated a function that allows users to easily report unlawful advertisements. These are then immediately removed. Facebook has also instituted proceedings in the United States against the providers of cloaking services.
Despite these measures, four fraudulent bitcoin advertisements appeared bearing the presenter's name. In all four cases, it is certain that cloaking was used.
The incidental nature of the advertisements' appearance, and the fact that they were promptly removed, led the court to decide that Facebook had done enough within its power to keep the advertisements out. Moreover, the presenter and AVROTROS did not specify what additional measures should have been taken.
Insufficient measures: is Facebook liable?
In this last case, Facebook cannot be held liable for the damages: it had taken sufficient measures to prevent the fake advertisements on its platform.
This may be different in the case of John de Mol, however. There, the court ruled that Facebook had not taken sufficient measures at the time. This opens the door for the victims who suffered damage in that particular case to (jointly) bring a claim for damages against Facebook.
Facebook has, however, lodged an appeal against the judgment in John de Mol's case. To be continued.