The nonprofit TrueMedia.org unveiled an AI-driven media authentication tool on Tuesday, aimed at helping journalists and fact-checkers identify deepfakes and combat misinformation ahead of upcoming elections in the U.S. and around the world.
Launched in January, the nonpartisan group is led by Oren Etzioni, a University of Washington professor and longtime AI researcher, and funded by Uber co-founder Garrett Camp through his nonprofit foundation, Camp.org.
Although the tool isn’t perfect, its ability to identify deepfakes is “extremely high,” with roughly 90% accuracy across images, video, and audio, Etzioni said. TrueMedia.org combines internally developed technology with AI detection tools from its partners to analyze media and produce a probability that a piece of content is fake.
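TrueMedia.org hasn’t published how it weighs its own technology against its partners’ detectors, but the general idea of pooling several detectors’ scores into one probability and a plain-language label can be sketched roughly as follows. Everything here is hypothetical: the vendor names, the simple averaging, and the thresholds are assumptions for illustration, not the organization’s actual method.

```python
# Illustrative sketch only: TrueMedia.org has not disclosed how it combines
# partner detector outputs, so the scoring and thresholds below are assumptions.
from statistics import mean


def aggregate_verdict(detector_scores: dict[str, float]) -> tuple[float, str]:
    """Combine per-detector fake probabilities (0.0-1.0) into one overall label."""
    probability = mean(detector_scores.values())  # simple average; real weighting unknown
    if probability >= 0.9:
        label = "highly suspicious"
    elif probability >= 0.5:
        label = "suspicious"
    else:
        label = "little evidence of manipulation"
    return probability, label


# Example: three hypothetical detector outputs for one uploaded video
scores = {"vendor_a_video": 0.97, "vendor_b_faces": 0.92, "vendor_c_audio": 0.88}
prob, label = aggregate_verdict(scores)
print(f"{prob:.0%} likely fake -> {label}")
```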
For example, the tool automatically labeled as “highly suspicious” a known fake video that purported to show Ukraine’s top security official claiming responsibility for the March 22 terrorist attack at a Russian concert hall. The tool stated with 100% confidence that the video contained AI-generated imagery.
“If it’s a deepfake, we’re very likely to catch it,” Etzioni said.
In addition to launching the new tool Tuesday morning, TrueMedia.org signed a memorandum of understanding with Microsoft to share data and resources and to collaborate on different AI models and approaches.
Other partners of TrueMedia.org include Hive, Clarity, Reality Defender, OctoAI, AIorNot.com, and Sensity.
The New York Times covered the launch of the tool Tuesday, citing examples including a fake image of Etzioni in the hospital that he generated using an AI tool. It’s “the kind of image he thinks could swing an election if it is applied to Mr. Biden or former President Donald J. Trump just before the election,” the newspaper reported.
The goal is to avoid that kind of outcome by putting TrueMedia.org’s verification tool in the hands of journalists and fact-checkers, helping them quickly debunk fake content, Etzioni said in an interview with GeekWire. This is important not just for the upcoming U.S. presidential election, he said, but also for elections in Europe, India and elsewhere.
TrueMedia.org is controlling access to the tool, in part to keep adversaries from learning too much about its approach and figuring out how to keep their deepfakes from being detected. However, those who are able to use the tool can share links to the assessments of different pieces of content on the TrueMedia.org website.
“We’ve got to get the tool into people’s hands,” Etzioni said, explaining that TrueMedia.org is being as “permissive as possible to rapidly increase the usage of the tool,” while protecting its methods.
In the future, Etzioni said, TrueMedia.org is looking to integrate the tool directly into web browsers through a browser extension, and into social media platforms like Twitter and Reddit to make it even more accessible.
Etzioni, the former CEO of the Allen Institute for AI in Seattle, has worked in artificial intelligence for much of his career, since long before generative AI became part of the popular zeitgeist. He said he’s never before seen a field move as rapidly as AI has recently, with new tools and algorithms emerging all the time.
“It’s very much an arms race, and it’s dynamic on both sides,” Etzioni said. However, he added, “thus far, at least, detection has been able to keep up.”