Sabrina Kutscher

Deepfakes as Weapons Against Crime?

The Dutch police made headlines a few weeks ago when they employed AI and deepfake technologies to help solve a cold case murder. More specifically, police officials created a deepfake of 13-year-old Sedar Soares, who was killed in Rotterdam in 2003. The police suspect that the teenager fell victim to gang violence in the area; however, they are still missing important evidence that would help them identify the murderer. AI technology was therefore used to appeal to potential witnesses: an emotional video shows Soares, surrounded by his friends and family, directly addressing the viewer and asking for information. From a legal perspective, however, the question arises as to how the creation of a deepfake of a deceased person relates to privacy.


What Does “Deepfake” Mean?

The word “deepfake” is a combination of two terms: “deep learning” and “fake”. Deep learning is a machine learning technique that employs artificial neural networks, which are loosely modelled on the neural activity of the human brain. Deepfakes can be images, videos, or audio recordings in which a person’s likeness or voice is convincingly manipulated. Often, such videos depict public figures, such as Barack Obama, and make them say or do whatever the author wants. In other words, images of people’s faces can be inserted into photos or videos without most viewers realizing that the result is fake. Deepfakes have mostly been used in the context of pornography, for example to swap the faces of porn actors with those of celebrities. Increasingly, however, the technology has also been used in election campaigns and to spread fake news. The harm caused by deepfakes is thus no longer limited to individuals but extends to society as a whole, as large audiences online can be made to believe almost anything if the deepfake is good enough.
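
To make the “deep learning” behind this concrete, the following is a minimal sketch, in PyTorch, of the shared-encoder, two-decoder autoencoder design on which classic face-swap deepfakes are built. All layer sizes, variable names, and the toy data are illustrative assumptions rather than any particular tool’s implementation.

# Minimal sketch of the shared-encoder, two-decoder autoencoder
# behind classic face-swap deepfakes (illustrative assumptions only).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 face image into a small feature map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific person's face from the feature map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

# One shared encoder learns identity-neutral structure (pose, expression,
# lighting); each decoder learns to render one specific person's face.
encoder = Encoder()
decoder_a = Decoder()  # would be trained only on faces of person A
decoder_b = Decoder()  # would be trained only on faces of person B

# Training objective: reconstruct each person's faces via their own decoder.
faces_a = torch.rand(8, 3, 64, 64)  # placeholder batch standing in for real frames
loss_a = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The actual "swap": encode person A's face, then decode it with person B's
# decoder, yielding B's likeness with A's pose and expression.
fake_b = decoder_b(encoder(faces_a))

The design choice that makes the swap work is the single shared encoder: because it must serve both decoders, it learns pose, expression, and lighting rather than identity, so routing its output through the “wrong” decoder transplants one person’s face onto another’s movements. Real deepfake tools add face detection, alignment, adversarial losses, and blending on top of this core idea.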


The Issue of Privacy

One of the biggest dangers of deepfakes is the threat they pose to privacy. With most people registered on social media platforms and uploading high-quality profile pictures, it is fairly easy to create deepfakes from publicly available photos. This is highly invasive and harmful, as deepfakes can be made of almost anyone, doing almost anything. While some of these uses clearly constitute libel, it is often difficult to initiate legal proceedings, because the vastness and anonymity of the internet and social media make it challenging to identify those responsible for a deepfake.


Fortunately, the EU has recognized the risks associated with these technologies and has updated its Code of Practice on Disinformation to respond more appropriately to, inter alia, deepfakes. Although the Code of Practice started out as a soft-law instrument based on voluntary membership, the EU has signalled its intention to make it more rigid with the introduction of the Digital Services Act (DSA) last April. Once the DSA enters into force, the EU will be able to monitor compliance with the Code of Practice and will hold the power to impose hefty fines on its signatories. More specifically, signatories to the updated framework are obliged to tackle disinformation in the form of deepfakes with effective tools, or otherwise risk a fine of up to 6% of their annual turnover. The hope is that such fines will be a significant incentive for social media platforms to fight the spread of deepfakes.


Deepfakes Put to Good Use

Notwithstanding the privacy risks, the Dutch police demonstrate how deepfakes can also be put to good use. Especially in cold cases, which are often difficult to solve, media tools like these not only trigger a stronger emotional response in the public, but may also jog people’s memories in the hope of receiving new leads. Yet one ethical concern that should be addressed here is the lack of consent from the people in question. In cases concerning missing or deceased persons, it is impossible to ask them for consent to use their data for deepfake technology. Although Soares’ family supported such use in this case and even collaborated with the police on the video, this will not always be a given. Perhaps this points to a legal gap that needs to be addressed in the future to ensure that victims agree to the police creating deepfakes of them. One solution could be to make declaration forms publicly available in which people can express their preference as to whether their images may be used for deepfake technology.

