Through the magnifying glass: how technology could help users identify fake images online

Jigsaw
5 min read · Jul 26, 2022

Our research shows that tools like automated reverse image search could help users distinguish real from fake content

One form of misinformation that we’ve focused on at Jigsaw is the sharing of manipulated images, presented without accompanying text or context. Images may not always make explicit falsifiable claims, but can still support misguided opinions or lead to misplaced outrage, especially when shared online. That’s why we set out to understand how identifying manipulated images impacts the sharing of those images online. We tested whether people would engage with a manipulated image less when they were shown, through a reverse-image search, that the image had been changed since its first appearance online.

At least one recent study suggests that around 10% of images shared in politically active WhatsApp groups may be known misinformation. Such messaging-app-based misinformation can directly contribute to real-world harm, as was the case in a spate of misinformation-fueled vigilante killings in India in 2018. The rise of image-based misinformation is especially concerning given that many people struggle to discern whether an image is real or manipulated. Better solutions for helping internet users navigate image manipulations are needed.

These solutions can take the form of interventions, such as timely warning labels, accuracy prompts, or prebunking before viewers even encounter the misinformation. These interventions have been effective at counteracting misinformation that contains text, like misleading headlines, false claims, or quotes taken out of context. In this study, we investigated an approach that might be particularly well suited to manipulated images, and to users who want to “see” the truth for themselves: automated reverse image search capabilities that support scalable, frictionless forensic investigation.

Reverse image search tools are widely available, but require manual, slow searches. We tested a potential reverse image search tool that could automatically provide image provenance information — highlighting potentially manipulated images as soon as they appear.
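To make the idea concrete, here is a minimal sketch of the kind of automated check such a tool might perform. It is a sketch under assumptions, not a description of the tool we mocked up: it uses a perceptual-hash comparison (via the open-source imagehash library) between a shared image and the earliest indexed version of it, and the file names are hypothetical.

```python
# A sketch only: flag an image whose perceptual hash has drifted from
# the earliest indexed version of the same image. Assumes Pillow and
# the `imagehash` package are installed; file paths are hypothetical.
from PIL import Image
import imagehash

def likely_manipulated(shared_path: str, original_path: str,
                       threshold: int = 8) -> bool:
    """Return True if the shared image differs substantially from the
    original. Benign changes (re-compression, resizing) keep the
    Hamming distance between 64-bit perceptual hashes near zero;
    content edits push it up."""
    shared_hash = imagehash.phash(Image.open(shared_path))
    original_hash = imagehash.phash(Image.open(original_path))
    return (shared_hash - original_hash) > threshold

if __name__ == "__main__":
    if likely_manipulated("shared_post.jpg", "earliest_indexed.jpg"):
        print("This image appears to have changed since it first appeared online.")
```

A production system would also need an index of web images in order to find that earliest version in the first place; automating that lookup is precisely the friction such a tool would remove.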

To evaluate the effectiveness of reverse image search in addressing misinformation, we focused on two outcomes: increased discernment of misinformation and reduced engagement with it.

In an online laboratory study conducted on Amazon Mechanical Turk, we showed 152 participants an image as it might appear in a social media feed. Participants judged how likely the image was to be fake and how likely they would be to share it. They were then presented with a reverse image search mock-up, showing the image they had just seen alongside the original version as it first appeared on the internet. They then once again rated how likely the image was to be fake, and how likely they would be to share it. This within-subject design allowed us to see how the reverse image search changed minds, rather than comparing responses across groups.

Participants judged images found on the internet, representing a range of quality. One third of the images were obviously manipulated, such as an image of a cow resting on top of a car. One third were subtly manipulated, such as an image of a bear chasing a cyclist down a mountain road. The final third were unmanipulated and did not differ from their original upload; this set served as a control for the research.
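The preprint reports the study’s actual statistics; purely as an illustration of how paired, before-and-after ratings from a design like this might be analyzed, here is a sketch with fabricated numbers. The ratings, the 1–7 scale, and the choice of test are our assumptions for this example, not the study’s.

```python
# Illustrative analysis of a within-subject design: each participant
# rates the same image before and after the reverse image search
# reveal, so paired tests are appropriate. All numbers are fabricated.
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical 1-7 sharing-intent ratings for ten participants,
# before and after seeing the reverse image search results.
before = np.array([5, 6, 4, 5, 7, 3, 6, 5, 4, 6])
after = np.array([3, 5, 4, 4, 6, 2, 5, 4, 3, 5])

# Wilcoxon signed-rank test: a paired, nonparametric test suited to
# ordinal ratings; zero-change pairs are dropped by default.
stat, p_value = wilcoxon(before, after)
print(f"Median rating moved from {np.median(before):.0f} to "
      f"{np.median(after):.0f} (p = {p_value:.3f})")
```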

We found that reverse image search increased discernment of whether images had been manipulated. The intervention was especially powerful for subtly manipulated images, producing a 5% decrease in sharing intent (see the preprint for details). Open-ended survey responses suggest that the obviously manipulated images were already clearly fake to participants, and so benefited less from the tool.

Why did the tool reduce engagement most for subtly manipulated images? That may have to do with the motivations to share content in the first place. Previous research suggests that people are motivated to share content that is accurate, but also content that is surprising or funny. As one of our participants reflected: “I would share it with friends because it is funny and it is obviously photoshopped.” If accuracy is not the main driver of engagement for a particular image, this tool may have limited effects. However, for people who want to share accurate content, this tool can be very useful, especially for hard-to-spot fakes online.

Overall, these results suggest that an automated approach to image provenance search might help internet users correctly identify ambiguously manipulated images — when the users are genuinely motivated to understand whether a questionable image is real or manipulated. It might not reduce the sharing of overtly manipulated images if users are knowingly sharing fake images for amusement or to make a political point.

Much more research is needed before such a product could be launched. For example, none of our image stimuli contained prominent text, as might be found in a meme. Though many manipulated images certainly do contain text, knowing how to counter manipulated images without text remains vitally important. Finally, images with no manipulation can still be presented out of context; this form of misinformation could also be surfaced by the tool, but was not tested in the present research.

Some open questions for future research revolve around the target users and the usefulness of the reverse image search tool itself. Who would benefit the most from an image provenance tool? Reducing the friction of discovering an image’s provenance may be more impactful for casually curious users, but less so for those who are already highly motivated to investigate the sources of online content. Most importantly, how would the reverse image search tool fare “in the wild”? Would users become “banner blind” to it, engaging with it less or ignoring it altogether after repeated exposures? Or would it become a well-utilized fact-checking tool? Would images with no provenance information be assumed to be authentic? Live experiments using a prototype may be the best way to answer these questions.

Contributors: Alek Chakroff, Rocky Cole

Jigsaw is a unit within Google that explores threats to open societies, and builds technology that inspires scalable solutions.