Google Photos Plans Feature to Identify AI-Generated Images

Anna Bells

Oct-11-2024

Google Photos is reportedly developing a new feature to help users identify whether an image was created or modified with artificial intelligence. The feature would surface ID resource tags that describe an image's AI origins and the type of digital source involved. The Mountain View-based company appears to be building this capability to help combat the spread of deepfake content, though specific details about how users will view this information remain uncertain.

In recent years, deepfakes have become a prevalent form of digital manipulation. These representations, whether images, videos, audio files, or similar content, are crafted or altered using AI technologies with the intent to mislead or spread false information. In one notable case, a well-known actor sued a company for running deepfake video advertisements that showed the actor endorsing its products without consent.

A recent report highlights a potential feature in the Google Photos app that would let users check whether an image in their collection was digitally generated. Although this capability was spotted in version 7.3 of the app, it is not yet active, so users on the latest version will not see it currently.

The publication uncovered new XML code within the app's layout files that points to this feature. These code identifiers, known as ID resources, are linked to specific elements within the application. One of them includes the term "ai_info," which likely refers to data recorded in the image's metadata about AI generation. This field would presumably be populated when the image was created with a compliant AI tool.

Additionally, the tag "digital_source_type" is expected to name the AI technology or model that generated or enhanced the image, such as Gemini or Midjourney.

How Google intends to present this information remains unclear. Ideally, it would be embedded in the image's EXIF metadata, making the information harder to tamper with. However, that approach would also make it less accessible, since users would have to open the metadata section to see it. Alternatively, the app could display an on-image badge to flag AI-generated content, similar to Meta's approach on Instagram.
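If Google does surface these tags, a client could translate them into user-facing labels along these lines. This is a minimal sketch only: the "ai_info" and "digital_source_type" keys are taken from the teardown report, while the URI values and the "ai_model" key are assumptions modeled on the IPTC Digital Source Type vocabulary, which Google has not confirmed it will use.

```python
# Hypothetical mapping from IPTC digital-source-type URIs to display labels.
# The URIs follow the IPTC NewsCodes vocabulary; whether Google Photos will
# store these exact values is an assumption.
IPTC_SOURCE_LABELS = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia":
        "Created with AI",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia":
        "Edited with AI",
    "http://cv.iptc.org/newscodes/digitalsourcetype/digitalCapture":
        "Captured with a camera",
}

def describe_ai_provenance(metadata: dict) -> str:
    """Return a human-readable provenance label for an image.

    `metadata` is assumed to be a flat dict holding the tags the report
    mentions ("ai_info", "digital_source_type") plus a hypothetical
    "ai_model" key naming the generating model.
    """
    # No "ai_info" entry means the image carries no AI provenance data.
    if not metadata.get("ai_info"):
        return "No AI provenance information"
    source = metadata.get("digital_source_type", "")
    label = IPTC_SOURCE_LABELS.get(source, "AI involvement (source unknown)")
    model = metadata.get("ai_model")  # e.g. "Gemini" -- hypothetical key
    return f"{label} ({model})" if model else label
```

For example, a photo tagged as trained algorithmic media produced by Gemini would be labeled "Created with AI (Gemini)", while an untagged photo would fall through to the no-information case.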
