OpenAI unveils new tool to detect images created by DALL-E 3

OpenAI has unveiled a new tool designed to identify images generated by its text-to-image model, DALL·E 3. Alongside this, it has introduced enhanced watermarking techniques to make content produced by its systems easier to detect.

Joining the C2PA Steering Committee

OpenAI announced that it has joined the Coalition for Content Provenance and Authenticity (C2PA), the body behind a widely adopted standard for certifying the origins of digital content.

By joining the Steering Committee, OpenAI aims to contribute to the development of this standard, enhancing transparency in digital content creation and editing processes.

  • OpenAI has begun incorporating C2PA metadata into images generated by DALL·E 3 and plans to extend this to their video generation model, Sora.
  • While acknowledging that deceptive content creation remains possible, OpenAI emphasizes the importance of metadata in building trust and authenticity in digital content.

To promote understanding and adoption of provenance standards like C2PA, OpenAI is partnering with Microsoft to establish a $2 million societal resilience fund.

This fund will support AI education initiatives by organizations like Older Adults Technology Services from AARP and International IDEA.

Introducing New Provenance Tools

In tandem with C2PA integration, OpenAI is developing additional methods to bolster content integrity:

  • Tamper-Resistant Watermarking: embedding invisible signals in digital content that are designed to be difficult to remove.
  • Detection Classifiers: AI-powered tools that assess whether content was generated by OpenAI's models, built to remain effective even after the content is modified.

Researcher Access Program

OpenAI invites research labs and journalism nonprofits to test its image detection classifier, designed to identify images generated by DALL·E 3. The goal is to evaluate the tool's effectiveness, real-world applications, and considerations for use.

Classifier Performance Insights

  • In initial testing, OpenAI's classifier demonstrated high accuracy, correctly identifying 98% of DALL·E 3 images.
  • It maintains precision under common image modifications but may lose accuracy with certain alterations.
  • OpenAI acknowledges weaker performance in distinguishing images from other AI models, incorrectly flagging 5-10% of them as DALL·E 3 outputs.
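Taken together, the figures above invite a back-of-envelope calculation: even with a 98% true-positive rate, the share of flagged images that are genuinely DALL·E 3 outputs depends on how many images from other generators are in the mix. A minimal sketch (the sample sizes and the 7.5% midpoint false-positive rate are illustrative assumptions, not OpenAI's numbers):

```python
def flagged_counts(n_dalle, n_other, tpr=0.98, fpr=0.075):
    """Expected flagging outcomes from the reported rates.

    tpr: share of DALL-E 3 images correctly flagged (98%, per OpenAI).
    fpr: share of other-AI images incorrectly flagged (5-10% reported;
         7.5% is used here as an assumed midpoint).
    """
    true_flags = n_dalle * tpr        # DALL-E 3 images correctly flagged
    false_flags = n_other * fpr       # other-AI images wrongly flagged
    precision = true_flags / (true_flags + false_flags)
    return true_flags, false_flags, precision

# With an even mix of 1,000 DALL-E 3 and 1,000 other-AI images,
# roughly 7% of all flags would be false alarms.
tp, fp, prec = flagged_counts(1000, 1000)
```

The takeaway is that the classifier's usefulness in the wild depends on the population it screens, not just its headline accuracy.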

Audio Watermarking

OpenAI has also integrated audio watermarking into Voice Engine, its custom voice model, to support transparency and security in audio technologies.

Looking Ahead

OpenAI emphasizes the need for collective action to ensure content authenticity. Platforms and creators must retain metadata to provide consumers with transparent information about content sources.

Announcing the updates, OpenAI posted:

Our efforts in provenance represent a facet of a larger industry-wide initiative. We commend the efforts of our fellow research labs and generative AI companies in advancing research in this area. Collaboration and knowledge sharing are crucial for enhancing our understanding and promoting transparency online.
