Denmark is proposing an amendment to its digital copyright law to combat the rising threat of deepfakes, aiming to protect individuals’ rights over their digital identities from the harms of AI-generated impersonation.
Deepfakes, realistic fake images, videos, and audio recordings created with artificial intelligence (AI), are increasingly used to spread disinformation, enable financial fraud, and facilitate cybercrime, causing substantial financial losses. The Danish government intends to submit the amendment in the autumn and anticipates cross-party support, underscoring the urgency of addressing the threat.
The term “deepfake” originates from the combination of “deep learning” and “fake,” referring to both the AI technology employed and the resulting false content. Deepfakes can either alter existing content or generate entirely new content, potentially causing significant harm. For instance, superimposing faces in a film scene can infringe upon an individual’s right to their image.
Deepfakes pose a significant threat due to their potential for spreading fake news. Examples include deepfakes of former US President Joe Biden and Ukrainian President Volodymyr Zelenskyy, which lend credibility to false information by appearing to originate from trustworthy sources. Resemble.ai’s research indicates that financial fraud and cybercrime are also significant growth areas for deepfake attacks, with 41% of those targeted being public figures, 34% private individuals, predominantly women and children, and 18% organizations.
Notable examples of deepfake attacks include a case where a UK engineering firm, Arup, lost $25 million in a deepfake scam. Criminals used an AI-generated clone of a senior manager to convince a finance employee to transfer funds. Additionally, a fraud attempt on Ferrari, using the AI-generated voice of CEO Benedetto Vigna, was narrowly averted when an employee asked a question that only the real CEO could answer.
Resemble.ai’s deepfake security report for Q2 2025 recorded 487 publicly disclosed deepfake attacks in the quarter, a 41% increase over the previous quarter and more than 300% year-on-year. Direct financial losses from deepfake scams have reached nearly $350 million, with attacks doubling roughly every six months.
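The three growth figures above are mutually consistent under a simple assumption of smooth exponential growth: doubling every six months implies a quarterly growth factor of the square root of 2 (about 41%) and a fourfold, i.e. +300%, increase year-on-year. A minimal sketch of that arithmetic:

```python
# Illustrative arithmetic only, assuming smooth exponential growth.
# "Doubling every six months" means the count is multiplied by 2 every two quarters.
quarterly_factor = 2 ** 0.5            # one quarter's growth: ~1.414, i.e. ~41%
yearly_factor = quarterly_factor ** 4  # four quarters: (sqrt(2))**4 = 4.0, i.e. +300%

print(f"Quarterly growth: {quarterly_factor - 1:.0%}")    # ~41%
print(f"Year-on-year growth: {yearly_factor - 1:.0%}")    # ~300%
```

This is why a 41% quarter-on-quarter rise and a 300% year-on-year rise both describe the same underlying doubling rate.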
In response to the increasing threat of deepfakes, policymakers are implementing various measures. In the United States, the Take It Down Act requires the removal of harmful deepfakes within 48 hours and imposes federal criminal penalties for their distribution. The Danish amendment under consideration would allow individuals affected by deepfake content to request its removal, and artists could demand compensation for the unauthorized use of their image.
The Danish amendment is expected to send strong political signals to both Brussels and the wider EU, given Denmark’s current Presidency of the Council of the European Union. The World Economic Forum’s Global Coalition for Digital Safety aims to accelerate public-private collaboration to address harmful content, including deepfakes, and to promote the exchange of best practices in online safety regulation.
The World Economic Forum’s Centre for the Fourth Industrial Revolution (C4IR) has launched the AI Governance Alliance to address the uncertainties surrounding generative AI and the need for robust AI governance frameworks. The alliance unites industry leaders, governments, academic institutions, and civil society organizations to champion the responsible global design and release of transparent and inclusive AI systems.