Jul 18, 2023

Back From the Dead

This post was featured in the Reality Defender Newsletter. To receive news, updates, and more on deepfakes and Generative AI in your inbox, subscribe to the Reality Defender Newsletter today.

Brazilian singer Elis Regina recently appeared in a Volkswagen ad, driving side-by-side with her daughter, Maria Rita, to celebrate VW's 70 years in Brazil. The famed artist popularized the post-bossa nova genre of Música popular brasileira and remains one of the best-selling Brazilian artists of all time.

Regina also died suddenly and tragically in 1982. The ad, which took thousands of hours to make, features a singing deepfake of Regina.

What Does Reality Defender’s Platform Say?

[Scan of the VW ad in Reality Defender's platform]

What Does Reality Defender’s Team Say?

Not all deepfakes are bad. There are some benign and entertaining uses for this technology.

Though this ad is certainly not the first to feature a deepfake, nor anywhere near the first prominent deepfake of a deceased person, it clearly shows just how established deepfakes have become in popular culture and media. As deepfakes become easier and cheaper to create, and as mass audiences grow familiar with them, expect to see more deepfakes in ads, entertainment, and elsewhere from studios and creative teams looking to mine the past.

Content creators also have a responsibility to properly label their use of deepfake technology — in credits or elsewhere — so as not to confuse the audience. While it's obvious to fans that the real Elis Regina is not featured in this ad, making consumers and audiences aware of this technology, its existence, and its use across all recorded media — beyond a benign ad — allows people to be more alert to instances where deepfakes are weaponized or used to spread disinformation.

Nonetheless, CONAR, the Brazilian advertising watchdog, has already opened an inquiry into whether the use of Elis Regina's likeness in this instance constitutes a breach of ethics.

OpenAI to Use AP News for Training

OpenAI will train its models on the vast archives of the Associated Press, while the AP will gain access to OpenAI's technology. It's worth noting that the AP already writes some articles using artificial intelligence — articles which could, in turn, be used to train future OpenAI models.

Will All Background Actors Be Deepfakes?

Like the Writers Guild of America before it, the Screen Actors Guild (SAG-AFTRA) is now also on strike. During negotiations and the subsequent strike actions, SAG-AFTRA alleged that studios wanted to scan background actors' likenesses, pay them a single day's wage, and use those likenesses in perpetuity. Time will tell what SAG-AFTRA and the AMPTP agree on in terms of AI use.

OpenAI v. FTC?

After last week's lawsuit against OpenAI by artists claiming their work was illegally obtained and used for training, the company now finds itself under scrutiny by the FTC. The agency is looking into how the company trains its models, what data it uses, and how it prevents (or fails to prevent) false information from reaching consumers, among other concerns.

More News

  • Director Christopher Nolan, whose film on J. Robert Oppenheimer arrives this week, spoke of similarities between the focus of his film and our current AI boom. (Variety)
  • Elon Musk launched X.AI, a venture to create AI that “understands the true nature of the universe” (which is 42). (The Verge)
  • Democratic lawmakers are calling on the FEC to put an end to deepfakes in campaign ads. (CNN)
  • Google’s Bard (and other chatbots) are currently under EU scrutiny. (TechCrunch)
  • China has eased up a tad on its rules regarding AI creation and use. (Quartz)
  • Common Sense Media, which reviews various aspects of media for parents of young children, will do the same for AI products. (Common Sense Media)
  • The UK’s National Crime Agency says AI could make issues of CSAM worse. (The Guardian)
  • Speaking of, Discord has banned AI-generated CSAM. (Engadget)
  • Universal Music Group has weighed in on AI-related copyright measures. (The Fader)
  • Here’s what happens when AI models are trained several times on the output of AI models. (Tom’s Hardware)
  • James Gang/Eagles guitarist Joe Walsh isn’t worried about AI until it starts trashing hotel rooms. (Vulture)
  • The EU is looking for U.S. legislators to get on board and regulate AI. (Wired)

Thank you for reading the Reality Defender Newsletter. If you have any questions about Reality Defender, or if you would like to see anything in future issues, please reach out to us here.
