Seeing is no longer believing. AI architects warn that hyper-realistic digital forgeries, known as deepfakes, will be indistinguishable from reality next year.
The proliferation of artificial intelligence tools that make fake videos effortless to create and spread across social media platforms is raising alarms among experts.
According to Axios, “One leading AI architect told us that in private tests, they no longer can distinguish fake from real — which they never thought would be possible so soon.”
Estimates suggest that by 2025, more than 90 percent of online content could be generated by AI. That flood raises the prospect of what experts term "model collapse": because AI systems struggle to differentiate data produced by other AI systems from human-generated content, new models increasingly train on machine-made output, and their quality degrades with each generation.
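The dynamic can be demonstrated in miniature. The Python sketch below is a toy illustration of the general idea, not a simulation of any real model: it repeatedly fits a simple statistical model to samples drawn from its own previous output, and the estimated spread of the data tends to shrink across generations, a small-scale analogue of the diversity loss behind model collapse.

```python
import numpy as np

# Toy illustration of "model collapse" (illustrative only; real collapse
# concerns large generative models, not a Gaussian fit). Each "generation"
# trains on the previous generation's synthetic output instead of the
# original human data.

rng = np.random.default_rng(0)

# Generation 0: "human" data drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 31):
    mu, sigma = data.mean(), data.std()    # "train" a model on current data
    data = rng.normal(mu, sigma, size=50)  # next generation sees only synthetic data
    if generation % 5 == 0:
        print(f"generation {generation:2d}: estimated std = {sigma:.3f}")

# The estimated standard deviation tends to drift toward zero: each refit
# slightly underestimates the tails of the distribution, so diversity
# erodes over successive generations trained on machine-made data.
```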
Alongside AI's many promised benefits to humanity come major threats.
Consider that AI-powered techniques ranked among the top five tools used to commit fraud online in 2023, and the technology has opened the floodgates of online sexual exploitation and abuse, with girls especially targeted.
Another risk pertains to the legal system, where evidence could be fabricated so convincingly that it passes for authentic.
Disturbing deepfake images have reportedly been used in the ongoing war in Gaza. The technology surfaced earlier in the war in Ukraine, when an AI-generated video appeared to show President Volodymyr Zelensky calling on his troops to surrender.
The government seems to acknowledge the risks posed by this technology.
In October, President Biden signed an executive order on AI, the federal government's first comprehensive attempt to regulate generative AI systems. The order mandates testing of advanced AI models to prevent their use in weapons development, recommends watermarking AI-generated media, and includes provisions addressing privacy and job displacement. The administration also launched a new government website, AI.gov, building on its earlier Blueprint for an AI Bill of Rights.
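Watermarking is the most concrete of those recommendations. As a rough illustration of the underlying idea (production provenance schemes such as C2PA metadata or Google's SynthID are far more robust and tamper-resistant than this), the Python sketch below hides a short provenance tag in the least significant bits of an image's red channel and reads it back:

```python
import numpy as np

# Naive invisible watermark: store tag bits in the least significant bit
# of each red-channel byte of an RGB image. Illustrative sketch only.

TAG = "AI-GENERATED"

def embed(pixels: np.ndarray, tag: str = TAG) -> np.ndarray:
    """Hide tag bits in the red-channel LSBs of a uint8 RGB image."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    out = pixels.copy()                     # C-contiguous copy
    flat = out.reshape(-1)                  # flat view over the copy
    red = slice(0, bits.size * 3, 3)        # every 3rd byte = red channel
    flat[red] = (flat[red] & 0xFE) | bits   # overwrite least significant bit
    return out

def extract(pixels: np.ndarray, length: int = len(TAG)) -> str:
    """Read `length` bytes back out of the red-channel LSBs."""
    flat = pixels.reshape(-1)
    bits = flat[0 : length * 8 * 3 : 3] & 1
    return np.packbits(bits).tobytes().decode()

image = np.random.default_rng(1).integers(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed(image)
print(extract(marked))  # -> "AI-GENERATED"
```

A watermark this naive is trivially destroyed by re-encoding, resizing, or cropping, which is why production schemes spread the signal across the whole image and pair it with cryptographically signed metadata.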
Microsoft, OpenAI, Google, and Meta are among the companies that have voluntarily committed to AI safety and security measures. Yet given how widely accessible deepfake-generation tools are, these steps alone are unlikely to address the looming threat.