Artificial Intelligence (AI) has become embedded in everyday human life, and this symbiotic relationship between technology and humanity is here to stay. One application of AI, the deepfake, is arguably among the most controversial because it raises serious ethical issues. Deepfakes are images or recordings that have been convincingly manipulated to misrepresent someone as doing or saying something they did not actually do or say. These manipulations thrive in the political arena and, more recently, in the pornography industry, where women’s faces are superimposed onto other bodies to create video illusions that inflict non-consensual sexual-image abuse and other harms. It is no surprise that the malicious use of deepfake technology has prompted regulatory legislation such as the United States National Defense Authorization Act (NDAA) and the recent ratification of amendments to the Digital Services Act (DSA) criminalizing malicious deepfakes. Scholars, advocates, and victims continue to call for more specific and stricter laws to regulate deepfakes and to assign penalties for non-adherence. This paper presents a timely analysis of deepfake pornography as a form of image-based sexual abuse and of the law’s position on the malicious use of deepfake technology. It also addresses data protection concerns under the General Data Protection Regulation (GDPR), along with policy recommendations and measures for redress, control, and eradication.
"Artificial Intelligence-Altered Videos (Deepfakes), Image-Based Sexual Abuse, and Data Privacy Concerns," Journal of International Women's Studies, Vol. 25, Iss. 2, Article 11.
Available at: https://vc.bridgew.edu/jiws/vol25/iss2/11