Insight
Reality Defender
Team
All three of these images were posted to Twitter this week. Each photo was suspected of being a deepfake by users. One was posted by Amnesty International, one by a historical photo account, and one is a video still from a random user.
Scroll down if you want the answer, but look at each photo and see if you can spot the deepfake without state-of-the-art deepfake detection technology.
They're all deepfakes. Pardon the trick question.
The first image was lifted from a now-deleted tweet by Amnesty International, which came under fire for using a deepfake to mark the second anniversary of protests in Colombia.
The second image, of rude boys in Brixton circa 1969, was posted to a historical image account whose owner later claimed he had no idea such technology existed.
The third image is a still from a deepfake video purporting to show, in the present day, a girl who went missing in Poland over a decade ago. The claim has since proven tragically false.
Twitter, like the vast majority of social media platforms, puts the onus on users to detect deepfakes. These platforms ask users to flag, report, and/or note the potential use of generative AI in content that can do great harm and spread misinformation. Users' only tools are their own eyes, and these platforms have invested nothing in deepfake detection that would proactively remove such content before it is seen and trusted by millions. By the time these photos and this video were questioned, millions had already accepted them as genuine and moved on. The damage was done.
TikTok is the latest platform to add user-generated flagging for deepfakes. Reality Defender Co-Founder and CEO Ben Colman recently wrote about this misguided approach, and why a user-guided solution is not much of a solution at all.
You can read Ben's post about user-detected deepfakes on TikTok here.
The Biden Administration took steps this week to begin to address the multitude of issues stemming from generative AI, starting with the announcement of a $140 million investment in addressing AI-related risks. Executives from OpenAI, Microsoft, and Alphabet also came to Washington to discuss developments in AI with President Biden. Finally, the White House announced their cooperation with a team of AI experts on "the largest red teaming exercise ever for any group of AI models" at this year's Def Con.
Tomorrow is Google's I/O conference, where the company will reportedly announce its new LLM, as well as the rollout of already-announced AI tech across all Google products, per CNBC. We'll do a deep dive on Google's announcements and what they mean for generative content detection on Reality Defender in next week's edition.
In an attempt to root out fake and "bot" users, a tier-one social media platform partnered with Reality Defender to analyze profile images and see how far bad actors and scammers would go in their attempts to defraud and deceive millions of users.
Note: If you are a WGA member or know a member of the WGA, please get in touch with us here. We would like to talk to you about something that may be of assistance during the ongoing strike (and afterwards).
Thank you for reading the Reality Defender Newsletter. If you have any questions about Reality Defender, or if you would like to see anything in future issues, please reach out to us here.