OpenAI says it can detect images generated by its own AI

Mon May 06 2024
Mark Cooper

OpenAI claims it can identify images generated by its own system.

Artificial intelligence systems are getting better at recognizing their own creations.

On Tuesday, OpenAI introduced a new tool that can identify whether an image was generated by its text-to-image model, DALL-E 3. Company officials say the tool detects DALL-E 3 images with a high degree of accuracy, but even small alterations to a picture can confuse it, underscoring the challenge AI companies face in keeping pace with their own technology.

The proliferation of fake images and other AI-generated media has sown widespread confusion about what is authentic, fueling debate over how such content could sway the 2024 election campaigns.

Policymakers have grown concerned about the spread of AI-generated images online, which tools like DALL-E 3 make faster than ever to produce. Several AI startups and tech companies are building detection tools of their own to help.

“The driving force behind much of this work is the concern surrounding the election,” said David Robinson, head of policy planning at OpenAI. “It’s the primary issue of concern that policymakers frequently hear about.”

OpenAI also announced on Tuesday that it is joining an industry group, co-founded by Microsoft and Adobe, that aims to establish content credentials for online images. Separately, OpenAI and Microsoft have set up a $2 million fund to promote AI education and societal resilience.

According to OpenAI, the new tool identifies content produced by DALL-E 3 with roughly 98% accuracy, provided the image has not been altered. When images are captured or edited, the classifier’s success rate drops, though it can still often make a correct identification.

In an interview, Sandhini Agarwal, an OpenAI researcher who focuses on policy, said the tool’s performance degrades in certain circumstances, for example when an image’s colors are altered. OpenAI hopes to address those weaknesses by letting external researchers test the tool and contribute improvements.

OpenAI has been testing the classification tool internally for several months. It does not rely on watermarks, the signatures many companies deliberately embed in AI images, which can often be stripped out.
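To see why watermark-based detection is fragile, consider a toy sketch in Python. This is not OpenAI’s method and is deliberately simplified: a watermark hidden in the least-significant bits of pixel values survives an exact copy but is destroyed by even a crude stand-in for lossy re-encoding.

```python
# Toy illustration (not OpenAI's approach): hide one watermark bit in
# each pixel's least-significant bit, then simulate lossy re-encoding
# by quantizing pixel values. The hidden pattern does not survive.

def embed_lsb(pixels, bits):
    """Overwrite each pixel's least-significant bit with a watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    """Read the least-significant bit back out of each pixel."""
    return [p & 1 for p in pixels]

def lossy_reencode(pixels, step=8):
    """Crude stand-in for JPEG-style quantization: snap values to a grid."""
    return [round(p / step) * step for p in pixels]

watermark = [1, 0, 1, 1, 0, 0, 1, 0]
image = [52, 199, 73, 128, 240, 17, 66, 90]  # a fake 8-pixel "image"

marked = embed_lsb(image, watermark)
print(extract_lsb(marked) == watermark)                  # True: intact
print(extract_lsb(lossy_reencode(marked)) == watermark)  # False: destroyed
```

Because ordinary operations like re-saving or resizing can erase such signatures, a classifier that analyzes the image content itself is a more robust, if imperfect, approach.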

The tool can also struggle to distinguish images created with DALL-E 3 from those generated by competing AI products.

The company also says that a small percentage of the time, the tool mistakenly flags non-AI-generated images as DALL-E 3 creations.
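Why do false positives matter so much if they occur only a small percentage of the time? A back-of-the-envelope sketch makes the base-rate problem concrete. Only the 98% detection figure below comes from OpenAI’s statement; the 0.5% false-positive rate and the image counts are assumptions chosen purely for illustration.

```python
# Base-rate sketch: illustrative numbers, not OpenAI's reported figures
# (only the 98% detection rate comes from the company's statement).
true_positive_rate = 0.98    # share of DALL-E 3 images correctly flagged
false_positive_rate = 0.005  # assumed share of ordinary photos flagged in error

ai_images = 1_000        # hypothetical sample: 1,000 DALL-E 3 images...
real_images = 1_000_000  # ...mixed into 1,000,000 ordinary photos

flagged_ai = ai_images * true_positive_rate        # 980 correct flags
flagged_real = real_images * false_positive_rate   # 5,000 mistaken flags

# Of everything flagged, what share is actually DALL-E 3?
precision = flagged_ai / (flagged_ai + flagged_real)
print(f"{precision:.1%} of flagged images are really DALL-E 3")  # ~16.4%
```

Because genuine photos vastly outnumber AI-generated ones online, even a tiny error rate can produce far more false alarms than true detections.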

According to researchers at OpenAI and other institutions, it is easier to detect whether an image was generated by AI than to do the same for text. In January 2023, OpenAI released a tool for identifying AI-generated writing; the company said it detected bot-written text close to 75% of the time, a disappointing result coming two months after the excitement of ChatGPT’s launch.

OpenAI officials say they are continuing to improve that tool.

Mark Cooper

Mark Cooper is a political and stock market correspondent. He has been covering global stock markets for more than six years.