
Trevor Noah says AI-powered video generators like OpenAI’s Sora could be ‘disastrous’

Summary:

Comedian Trevor Noah warned of the legal and ethical risks surrounding AI video-generation tools like OpenAI’s Sora 2 during the launch of Microsoft’s Elevate Washington initiative. The updated app’s “Cameo” feature enables unauthorized use of human likenesses, sparking Hollywood lawsuits and outrage from families over deepfakes of deceased celebrities. Denmark’s digital likeness legislation and fast-growing startups like Loti underscore mounting privacy concerns as confidence in content authenticity erodes. Legal experts predict drawn-out, nuanced copyright battles as generative AI reshapes how media is verified.

What This Means for You:

  • Immediate Privacy Action: Audit your digital footprint and explore opt-out mechanisms in generative AI platforms
  • Media Literacy Upgrade: Apply reverse image search and metadata verification to suspect videos using tools like InVID
  • Legal Safeguards: Consult intellectual property attorneys to draft likeness-rights clauses for contracts and wills
  • Future Outlook: Anticipate state-level digital persona laws by 2026 mirroring Washington’s AI education reforms
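The metadata-verification step above can be sketched in code. The minimal Python example below parses the top-level box headers of an MP4 file, the first thing to inspect when checking whether a video's container metadata looks plausible; dedicated tools like InVID or ffprobe go much deeper. The `list_boxes` helper is illustrative only and is not part of any tool named in this article.

```python
import struct

def list_boxes(data: bytes):
    """Return (type, size) pairs for top-level MP4/ISO-BMFF boxes.

    Each box starts with a 4-byte big-endian size followed by a
    4-byte ASCII type code (e.g. b'ftyp', b'moov', b'mdat').
    A file missing the usual boxes, or whose 'moov' metadata
    carries an implausible creation time, merits a closer look.
    """
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, = struct.unpack_from(">I", data, offset)
        box_type = data[offset + 4:offset + 8].decode("ascii", "replace")
        if size < 8:  # malformed or 64-bit extended size; stop the simple scan
            break
        boxes.append((box_type, size))
        offset += size
    return boxes

# Example: a synthetic 16-byte 'ftyp' box followed by an 8-byte 'free' box.
sample = struct.pack(">I4s8x", 16, b"ftyp") + struct.pack(">I4s", 8, b"free")
print(list_boxes(sample))  # [('ftyp', 16), ('free', 8)]
```

In practice you would read real bytes with `open(path, "rb").read()` and compare the box layout and timestamps against what the claimed source device or platform normally produces.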

Original Post:

Comedian Trevor Noah interviews Code.org CEO Hadi Partovi at Microsoft’s Redmond campus launch of Elevate WA initiative (GeekWire Photo / Taylor Soper)

Trevor Noah expressed concern about AI video generators like OpenAI’s Sora 2 during the launch of Microsoft’s Washington education initiative, calling unauthorized use of people’s likenesses potentially “disastrous.” Sora’s “Cameo” update has triggered Hollywood lawsuits and public backlash over deepfakes of deceased celebrities. Legal expert Kraig Baker warns that casual misuse could overwhelm existing publicity-rights laws, especially for estates that are not actively managed. The New York Times’ Brian Chen argues the technology threatens to end the era in which video could be trusted as visual evidence, making widespread media skepticism a necessity. OpenAI has responded with opt-in consent features and creator revenue models, while startups like Loti report 30x growth in demand for digital likeness protection services.

Extra Information:

People Also Ask About:

  • How does OpenAI currently prevent Sora misuse? Mandatory watermarking, biometric opt-ins, and blocklists for public figures, with exceptions for Cameo-approved profiles.
  • Can individuals prevent AI likeness replication? Emerging tools like Loti allow digital fingerprint registration with takedown automation.
  • Are deceased celebrities legally protected? Currently varies by state; new Uniform Post-Mortem Rights Act proposals address this gap.
  • Will deepfakes impact court evidence standards? Federal Rules of Evidence amendments under review require AI content certification chains.
  • Which industries face greatest Sora disruption? Marketing agencies, talent representation, and documentary filmmaking require workflow overhauls.

Expert Opinion:

“The Sora controversy reveals a regulatory time bomb,” notes Stanford Computational Law Lab director Dr. Annemarie Bridy. “Current publicity rights frameworks, built for static media, can’t scale to AI’s recombinant content capabilities. We’ll need dynamic consent ledgers and real-time biometric revocation systems by 2027 to prevent systemic trust collapse.”

Key Terms:

  • Generative AI video authenticity verification
  • Post-mortem digital likeness rights management
  • Opt-in biometric consent frameworks
  • AI-assisted copyright infringement detection
  • Deepfake media literacy curriculum
  • Synthetic media provenance standards
  • Celebrity persona blockchain registries



ORIGINAL SOURCE:

Source link
