OpenAI, the creator of the popular artificial intelligence chatbot ChatGPT, is currently testing a tool to detect AI-generated content.
The firm disclosed yesterday that it is accepting applications from users to test its new image detection classifier. Specifically, the tool predicts the likelihood that an image was generated by OpenAI’s DALL·E 3.
According to the ChatGPT creator, the test will provide insight into the tool’s effectiveness and its real-world application.
“Our goal is to enable independent research that assesses the classifier’s effectiveness, analyzes its real-world application, surfaces relevant considerations for such use, and explores the characteristics of AI-generated content.”
Meanwhile, the tool’s development comes as part of the AI firm’s effort to promote content provenance and authenticity.
OpenAI noted that it has recently joined the Steering Committee of the Coalition for Content Provenance and Authenticity (C2PA). The C2PA maintains a widely recognized standard for certifying digital content, backed by software companies, camera manufacturers and online platforms.
While it looks forward to helping improve the standard, OpenAI already embeds C2PA metadata in some of its generated content.
“Earlier this year we began adding C2PA metadata to all images created and edited by DALL·E 3, our latest image model, in ChatGPT and the OpenAI API. We will be integrating C2PA metadata for Sora, our video generation model, when the model is launched broadly as well.”
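Because C2PA provenance data is embedded directly in the image file (in JUMBF boxes labeled "c2pa"), its presence can be detected without any external service. The following is a minimal Python sketch that merely scans a file's raw bytes for that label; it is an illustrative assumption, not OpenAI's method, and it does not cryptographically verify the manifest the way a full C2PA validator would.

```python
# Naive sketch: detect whether a file appears to carry an embedded C2PA
# manifest by scanning for the "c2pa" JUMBF label in its raw bytes.
# This only signals presence; real verification of the signed manifest
# requires a proper C2PA validator.

def has_c2pa_signature(path: str) -> bool:
    """Return True if the raw file bytes contain the b'c2pa' label."""
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data
```

Note that such a check can be fooled in both directions: stripping metadata (e.g. re-encoding or screenshotting an image) removes the label, and any file can trivially embed the bytes without a valid signed manifest, which is why proper validation matters.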
Similarly, the firm is incorporating audio watermarking into its AI audio generator, Voice Engine.
Notably, the move follows growing concern about the authenticity of online content. Over the years, people have expressed worries about AI’s ability to manipulate existing media or to create new media outright.