Info interventions

A set of approaches, informed by behavioural science research and validated by digital experiments, to build resilience to online harms.

Accuracy prompts

Refocus user attention towards accuracy
Accuracy prompts ask individuals to consider the veracity of a bite-sized piece of content, priming them to remember their own commitment to sharing accurate information when it matters.

Hypothesis

Reminding individuals to think about accuracy when they might be about to engage with false information can boost users' pre-existing accuracy goals.

How it works

  1. The individual scrolls through their social feed and comes across content that may contain misinformation.

  2. An accuracy prompt is triggered and pops up over the content.

  3. A bite-sized explanation of why they are seeing the reminder is shown, and information literacy tips shift their attention towards the accuracy of the content.

  4. Now prompted to be more aware, the individual may think twice when coming across similar content in their feed.
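The exact triggering logic is product-specific, but a minimal sketch can make the flow concrete. Everything below is hypothetical: the misinformation flag, the once-per-day rate limit and the tip wording are illustrative assumptions, not Jigsaw's implementation.

```python
import time
from dataclasses import dataclass

# Hypothetical information literacy tips; not Jigsaw's actual copy.
LITERACY_TIPS = [
    "Check whether other reliable sources report the same claim.",
    "Look at the source: is the outlet known and accountable?",
    "Headlines can mislead; read beyond the headline before sharing.",
]

@dataclass
class PromptState:
    last_shown: float = 0.0          # when this user last saw a prompt
    min_interval: float = 24 * 3600  # assumed rate limit: at most once per day

def maybe_show_accuracy_prompt(flagged_as_potential_misinfo: bool,
                               state: PromptState,
                               now: float | None = None) -> str | None:
    """Return the prompt text to pop up over the content, or None."""
    now = time.time() if now is None else now
    if not flagged_as_potential_misinfo:
        return None
    if now - state.last_shown < state.min_interval:
        return None  # avoid prompt fatigue
    state.last_shown = now
    tip = LITERACY_TIPS[int(now) % len(LITERACY_TIPS)]
    return "Take a moment: how accurate do you think this post is?\n" + tip
```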

Findings

  • 50%
    Those who received accuracy tips were 50% more discerning in their sharing habits than users who did not. (Source: Jigsaw)
  • 11%
    Pre-roll videos on YouTube drove up to an 11% increase in confidence three weeks after exposure. (Source: Jigsaw)

Examples

We partnered with MIT and The University of Regina on a series of experiments to test whether accuracy prompts work cross-culturally, reduce sharing of false information, increase the sharing of true information online and boost users' confidence in their abilities to navigate information quality.

Online survey experiments were conducted across 16 countries with more than 30,000 participants, who rated their intention to share true and false news headlines.
An early prototype accuracy prompt asking users to reflect on the accuracy of a news headline before continuing to browse.
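A common way to quantify this effect is sharing discernment: the gap between willingness to share true headlines and willingness to share false ones. The sketch below uses made-up rates to show how a relative improvement like the 50% figure above could be computed; it illustrates the general approach, not the analysis code or data from the studies.

```python
def discernment(true_share_rate: float, false_share_rate: float) -> float:
    """Sharing discernment: how much more readily true headlines are shared than false ones."""
    return true_share_rate - false_share_rate

# Made-up sharing intentions (fraction of headlines participants said they would share).
control  = discernment(true_share_rate=0.50, false_share_rate=0.30)  # 0.20
prompted = discernment(true_share_rate=0.51, false_share_rate=0.21)  # 0.30

relative_improvement = (prompted - control) / control
print(f"Discernment: {control:.2f} -> {prompted:.2f} "
      f"({relative_improvement:.0%} relative improvement)")  # 50% relative improvement
```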

In partnership with

University of Regina and Hill Levene

Redirect Method

Interrupt online radicalisation
The Redirect Method is a programme aimed at reaching individuals who are vulnerable to recruitment by violent extremist groups. The pilot used ads to redirect users looking for extremist information to curated content that refutes ISIS's recruitment messaging.

Hypothesis

There is a window of opportunity during the radicalisation process where individuals who are researching extremist ideologies can be persuaded by narratives refuting them.

How it works

  1. The individual completes an online search using keywords that indicate an interest in extremist propaganda.

  2. The Redirect Method detects the keyword and triggers an intervention.

  3. An ad featuring more information on their topic of interest is presented to the individual.

  4. Upon clicking the ad, the individual is redirected to content that counters false extremist narratives.
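At its core, the targeting step is keyword matching: search queries that signal risk are mapped to curated counter-narrative content. The sketch below is purely illustrative; the keyword list, matching rule and playlist identifiers are placeholders, not the AdWords configuration used in the pilot.

```python
# Placeholder keyword-to-playlist mapping; in the pilot, keywords and
# counter-narrative playlists were curated by expert practitioners.
RISK_KEYWORDS = {
    "example recruitment phrase": "counter_narrative_playlist_a",
    "example propaganda slogan": "counter_narrative_playlist_b",
}

def match_query(query: str) -> str | None:
    """Return the ID of a curated counter-narrative playlist, if the query matches."""
    normalised = query.lower().strip()
    for keyword, playlist_id in RISK_KEYWORDS.items():
        if keyword in normalised:
            return playlist_id
    return None

playlist = match_query("Example recruitment phrase near me")
if playlist is not None:
    # Serve an ad on the user's topic of interest that links to the
    # curated playlist rather than to extremist content.
    print(f"Serve redirect ad -> {playlist}")
```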

Findings

  • 320,000
    individuals reached over an eight-week pilot. (Source: Jigsaw and Moonshot)
  • 500,000
    minutes of counter-narrative videos served. (Source: Jigsaw and Moonshot)

Examples

Jigsaw and Moonshot developed the Redirect Method's open-source methodology by interviewing ISIS defectors about the role of the Internet in their radicalisation. The insights informed the design of a pilot programme that used AdWords to reach people at risk of radicalisation and served them with relevant counter-narrative content.
The content, uploaded by users from around the world to confront online radicalisation, was selected by expert practitioners.

Focusing on the slice of ISIS's audience most susceptible to its messaging, our methodology recognises that even content not created for the purpose of counter-messaging can still undermine harmful narratives when curated, organised and targeted effectively. Since 2016, Moonshot has partnered with an array of technology companies, including Facebook, to deploy advertising to those expressing an interest in other online harms, including white supremacy, violent misogyny and conspiracy theories.
A campaign flow showing Moonshot using the Redirect Method to redirect individuals to safer content, in this case, services to exit white supremacist movements.

In partnership with

Moonshot

Authorship feedback

Promote better conversations
Authorship feedback leverages Perspective API – a tool that uses artificial intelligence to detect toxic language – to give commenters real-time feedback as they write, highlighting when their comments might be perceived as offensive.

Hypothesis

Giving users a moment to pause, reflect and consider different ways of phrasing their comments can contribute to better conversations online.

How it works

  1. The individual writes a comment that is identified as 'toxic' – a rude, disrespectful or unreasonable comment that is likely to make someone leave a discussion.

  2. Perspective API detects the 'toxic' comment using machine-learning models that identify abusive language.

  3. An Authorship feedback message is shown, alerting the individual that their comment has been identified as risky or offensive, or as misaligned with the publisher's community guidelines.

  4. The individual is encouraged to adjust the language before publishing their comment.

Findings

  • 34%
    of users who received feedback powered by Perspective API chose to edit their comment. (Source: Jigsaw)

Examples

We partnered with several websites to develop a feature that integrated directly into their comment publishing systems. As users typed comments, the text was checked through Perspective API before the comment was published.

If the comment exceeded a predetermined threshold of toxic language as measured by Perspective API, the user was offered a reminder and an opportunity to rephrase their comment and try again. Post-hoc analysis was conducted on the comments to determine edit rates and overall effect.
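The comment check itself is a call to Perspective API's comments:analyze endpoint. Below is a minimal sketch of how a publisher might wire this up; the API key, the 0.7 threshold and the feedback wording are placeholders chosen for illustration, not OpenWeb's or Coral's actual configuration.

```python
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"       # obtained from the Perspective API developer site
TOXICITY_THRESHOLD = 0.7       # assumed publisher-chosen threshold

def toxicity_score(comment_text: str) -> float:
    """Ask Perspective API for a TOXICITY summary score between 0 and 1."""
    body = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def authorship_feedback(comment_text: str) -> str | None:
    """Return a feedback message if the comment crosses the toxicity threshold."""
    if toxicity_score(comment_text) >= TOXICITY_THRESHOLD:
        return ("Your comment may be perceived as offensive. "
                "Would you like to rephrase it before posting?")
    return None
```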
Authorship feedback message shown in red below a toxic comment on one of the websites supported by OpenWeb.

In partnership with

OpenWeb
Coral

Prebunking

Increase resistance to manipulation
Prebunking is a technique to pre-empt manipulation attempts online. By forewarning individuals and equipping them to spot and refute misleading arguments, it helps them build resilience to being misled in the future.

Hypothesis

Pre-emptive messages can help individuals identify manipulative narratives and strategies (e.g. the claim that 'vaccines are unnatural' or that 'refugees steal jobs').

How it works

  1. A prebunking video is served to a group of users as an ad in their social media feed.

  2. Through short video messages, the individual is informed of possible attempts to manipulate them online.

  3. The individual is shown a relevant example of a manipulative technique or narrative, and is then given counter-arguments to refute the claim.

  4. By analysing how well video viewers recall the techniques in a short survey, relative to a control group, we can assess their likelihood of resisting manipulative content in the future.
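Step 4 is, in effect, a treatment-versus-control comparison of recognition rates. The sketch below shows one standard way to make that comparison (a two-proportion z-test) using made-up counts; it illustrates the general approach, not the analysis from the published studies.

```python
from math import sqrt

def two_proportion_z(successes_a: int, n_a: int,
                     successes_b: int, n_b: int) -> tuple[float, float]:
    """Difference in proportions and z statistic (pooled standard error)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_a - p_b, (p_a - p_b) / se

# Made-up survey counts: how many respondents correctly identified the
# manipulation technique in a test item, with and without seeing the ad.
lift, z = two_proportion_z(successes_a=620, n_a=1000,   # watched prebunking ad
                           successes_b=570, n_b=1000)   # control group
print(f"Recognition lift: {lift:.1%} (z = {z:.2f})")
```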

Findings

  • 73%
    of individuals who watched a prebunking video were more likely to consistently spot misinformation online. (Source: Science Advances)
  • 5%
    Prebunking videos as YouTube Ads boosted recognition of manipulation techniques by 5%. (Source: Science Advances)

Approach and examples

Through lab testing and live experiments alongside academic partners at The University of Cambridge, Harvard University and American University, we tested prebunking via short video messages to promote resistance to common rhetorical strategies and narratives that are used to perpetuate misinformation.
BBC Media Action, the University of Cambridge and Jigsaw developed a 'how-to-prebunk' guide, giving practitioners guidelines and basic requirements for producing their own prebunking messages. Download the PDF

In partnership with

University of Cambridge
BBC Media Action
Demagog, NASK and Jigsaw created a series of videos to counter anti-refugee narratives about Ukrainians living in Central and Eastern Europe. Watch all videos on YouTube.

In partnership with

Demagog
NASK
In collaboration with The University of Cambridge and The University of Bristol, we created five videos to prebunk particular manipulation techniques commonly encountered online. Watch all videos on YouTube.

In partnership with

University of Cambridge
University of Bristol
In partnership with scholars at Harvard T.H. Chan School of Public Health and American University, along with trained medical professionals, we pre-emptively corrected common misleading narratives about vaccine safety. Watch all videos on YouTube.

In partnership with

Polarization and Extremism Research and Innovation Lab
YouTube's global media literacy programme, Hit Pause, teaches viewers in markets around the world the skills to detect misinformation. The initial set of creatives builds on Jigsaw's work to prebunk common manipulation techniques, such as emotional language. Watch all videos on YouTube.

Created by

YouTube

In partnership with

NAMLE

Jigsaw is a unit within Google that explores threats to open societies and builds technology that inspires scalable solutions.