Microsoft has called on Congress to pass new legislation to combat the growing threat of AI-generated deepfakes.
Points
- Microsoft advocates for a federal deepfake fraud statute.
- Proposes mandatory synthetic content identification tools.
- Suggests amending existing laws to cover AI-generated explicit content.
- FCC bans AI voice robocalls.
- Deepfake issues highlighted by a video of US Vice President Kamala Harris.
Brad Smith, Microsoft’s Vice Chair and President, emphasized the urgency of addressing the growing threat posed by deepfake technology.
Microsoft’s report outlines several legal measures to curb deepfake misuse, including a federal deepfake fraud statute that would address both the civil and criminal dimensions of synthetic-content fraud, with remedies ranging from criminal charges to civil seizure and injunctive relief.
The report also calls for the mandatory use of advanced provenance tools to identify synthetic content, helping the public discern the origin of the information they encounter online. Such labeling is crucial for maintaining the credibility of digital information and curbing the spread of disinformation.
Brad Smith stated, “Congress should require AI system providers to use state-of-the-art provenance tooling to label synthetic content. This is essential to build trust in the information ecosystem and will help the public better understand whether content is AI-generated or manipulated.”
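Provenance tooling of the kind Smith describes generally works by attaching verifiable metadata to content at creation time; the C2PA Content Credentials standard is the best-known real-world example. As a rough, illustrative sketch only (the manifest fields, the shared-secret signing, and the `ExampleImageModel` name are simplifications invented here, not any vendor’s actual API), the following Python snippet shows the core idea: bind a signed “AI-generated” claim to a hash of the content, so that both tampering with the label and swapping the content become detectable.

```python
import hashlib
import hmac
import json

# Hypothetical signing key. Real provenance systems (e.g. C2PA) use
# X.509 certificate chains, not a shared secret.
SIGNING_KEY = b"demo-key"

def make_manifest(content: bytes, generator: str) -> dict:
    """Build a simplified provenance manifest for a piece of content."""
    claim = {
        "generator": generator,          # tool that produced the content
        "ai_generated": True,            # the disclosure Smith calls for
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the manifest is intact and matches the content."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the label itself was tampered with
    return hashlib.sha256(content).hexdigest() == manifest["claim"]["content_sha256"]

image_bytes = b"...synthetic image bytes..."
manifest = make_manifest(image_bytes, generator="ExampleImageModel")
print(verify_manifest(image_bytes, manifest))      # True: label matches content
print(verify_manifest(b"edited bytes", manifest))  # False: content was changed
```

A production system would use certificate-based signatures and embed the manifest in the file format itself, but the verification logic, checking the signature and then the content hash, is the same in spirit.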
Additionally, Microsoft proposes updating existing laws on child exploitation and non-consensual explicit images so that they cover AI-generated content, ensuring legal frameworks keep pace with the technology and continue to protect vulnerable groups.
Regulators are already acting: the FCC recently banned the use of AI-generated voices in robocalls. Separately, a deepfake video of US Vice President Kamala Harris was widely circulated, exemplifying the dangers of deepfake technology.
Nonprofit organizations like the Center for Democracy and Technology (CDT) are also fighting deepfake abuse. Tim Harper, CDT’s senior policy analyst, noted that 2024 marks a critical turning point for AI in elections, urging preparations to combat technological manipulation.
Analysis
- Regulatory Necessity: The call for a federal deepfake fraud statute highlights the need for updated legal frameworks to address emerging technological threats.
- Public Trust: Mandatory identification of synthetic content is vital for maintaining the integrity of digital information and public trust.
- Legal Modernization: The proposal to amend existing laws to cover AI-generated content reflects the need for comprehensive legal protections that keep pace with the technology.