Main

The prevalence of online misinformation can have important social consequences, such as contributing to greater fatalities during the COVID-19 pandemic10, exacerbating the climate crisis11, and sowing political discord12. Yet the supply of misinformation is often financially motivated. The economic incentive to produce misinformation has been widely conjectured by academics and practitioners to be one of the main reasons websites that publish misinformation (hereafter referred to as ‘misinformation websites’ or ‘misinformation outlets’), masquerading as legitimate news outlets, continue to be prevalent online1,2,3,4. During the 2016 US Presidential election, one operator of a misinformation outlet openly stated “For me, this is all about income”13.

Media reports have anecdotally observed that companies and digital platforms contribute towards financially sustaining misinformation outlets via advertising14,15. Advertising companies can either place their advertisements directly on specific websites or use digital advertising platforms to distribute their advertisements across the internet (Methods, ‘Background on digital advertising’). The vast majority of online display advertising today is done via digital advertising platforms that automatically distribute advertisements across millions of websites16, which may include misinformation outlets. According to a recent industry estimate, for every US$2.16 in digital advertising revenue sent to legitimate newspapers, US advertisers send US$1 to misinformation sites17.

Existing work to counter the proliferation of misinformation online has primarily focused on empowering news consumers3,5 in order to reduce the demand for misinformation through interventions such as fact-checking news articles6, providing crowd-sourced labels8 and nudging users to share more accurate content7. However, a vital question remains regarding how the incentive to produce or supply misinformation may be countered. Indeed, recently, academics have proposed ‘supply-side’ policies for steering platforms away from the revenue models that might contribute towards sustaining harmful content18. Digital platforms have also attempted to decrease advertising revenue going to some misinformation websites19. However, despite these attempts, advertising from well-known companies and organizations continues to appear on misinformation websites, thereby financing such outlets20,21. Moreover, the supply of misinformation is expected to increase with generative AI technologies making it easier to create large volumes of content to earn advertising revenue22,23.

In this Article, we take a first step towards understanding how to limit the financing of online misinformation via advertising, using descriptive and experimental evidence. To tackle the problem of financing online misinformation, it is important to first understand the role of different entities within this ecosystem. In particular, we need to establish whether companies directly place advertisements on misinformation outlets or do so by automating such placement through digital advertising platforms. Although several mainstream digital platforms generate the vast majority of their revenue via advertising3, little is understood about the role of advertising-driven platforms in financing misinformation. To evaluate the relative roles of advertising companies and digital advertising platforms in monetizing misinformation, we construct unique large-scale datasets by combining data on websites publishing misinformation with advertising activity per website over a period of three years.

Next, the extent to which companies can be dissuaded from advertising on misinformation websites depends on how their customers respond to information about the prevalence of companies’ advertising on such websites. As people find out about companies advertising on misinformation websites through news and social media reports20,24, they may reduce their demand for such companies or voice concerns against such practices online25,26. Therefore, it is important to measure the preferences of the people who consume a company’s products or services regardless of whether these consumers visit misinformation websites themselves. To measure these effects, we conducted a survey experiment with a sample of the US population by randomly varying the pieces of factual information we provided to participants. By simultaneously measuring how people shift their consumption and the types of actors (that is, advertisers or digital advertising platforms) that they voice concerns about, we capture how people’s reactions change as the degree to which advertisers and advertising platforms are held responsible varies. We also study how consumer responses may vary depending on the intensity of a company’s advertising on misinformation websites by providing company rankings on this dimension.

Finally, whether decision-makers within companies are aware of their company’s advertisements appearing on misinformation outlets and prefer to avoid doing so can have an important role in curbing the financing of misinformation. In recent years, advertisers have often participated in boycotts of advertising-driven platforms such as YouTube, Facebook and Twitter for placing their advertisements next to problematic content27,28. However, there is little systematic measurement of the knowledge and preferences of key decision-makers within companies in this context. To address this gap, we surveyed executives and managers by contacting the alumni of executive education programmes. Moreover, we conducted an information-provision experiment to examine whether decision-makers would increase their demand for a platform-based solution to avoid advertising on misinformation outlets when informed about the role of digital advertising platforms in monetizing misinformation.

We report three sets of findings from our descriptive and experimental analyses. First, our descriptive analysis suggests that misinformation websites are primarily monetized via advertising revenue, with a substantial proportion of companies across several industries appearing on such websites. We further show that the use of digital advertising platforms amplifies the financing of misinformation. Second, we find that people switch consumption away from companies whose advertising appears on misinformation outlets, reducing the demand for such companies. This switching effect persists even when consumers are informed about the role of digital advertising platforms in placing companies’ advertisements on misinformation websites and the role of other advertising companies in financing misinformation. Third, our survey of decision-makers suggests that most of them are ill-informed about the roles of their own company and the digital advertising platforms that they use in financing misinformation outlets. However, decision-makers report a high demand for information on whether their advertisements appeared on misinformation outlets and solutions to avoid doing so. Those who were uncertain and unaware about where their advertising appeared also increased their demand for a platform-based solution to reduce advertising on misinformation websites upon learning how platforms amplify advertising on such websites.

In sum, our results indicate that there is room to decrease the financing of misinformation using two low-cost, scalable interventions. First, improving transparency for advertisers about where their advertisements appear could by itself reduce advertising on misinformation websites, especially among companies who were previously unaware of their advertisements appearing on such outlets and were thus inadvertently financing misinformation. Second, although it is currently possible for consumers to find out about advertising companies financing misinformation through news and social media, platforms could make it easier for consumers to continuously trace advertising on misinformation outlets to the advertising companies involved. Our results suggest that both simple information disclosures and comparative company rankings can shift consumer demand away from companies advertising on misinformation websites.

We build on prior work analysing the ecosystem supporting misinformation websites29,30,31,32,33 and programmatic advertising34 by matching millions of instances of advertising companies appearing across thousands of news outlets with data on misinformation websites, thereby providing large-scale evidence of the ecosystem that sustains online misinformation over a consistent period of three years. Additionally, we present descriptive evidence about the relative roles of advertising companies and digital advertising platforms in financing misinformation. Next, our information-provision experiments examine the effects of advertising on misinformation websites for companies and platforms. Previous work has examined the conditions under which people react against companies for failing to operate up to their expectations—for example, due to service quality deterioration26, not fulfilling social responsibilities35, advertising next to violent content36, or taking a political stance37,38. Our research design contributes to this literature in two key ways by: (1) measuring both types of potential consumer responses—that is, ‘exit’ and ‘voice’—that are theorized in the literature25; and (2) doing so using incentive-compatible behavioural outcomes at the individual level, which enables us to capture costly decisions people make and move beyond stated preferences recorded in related experimental research36,39. More broadly, our research points to an alternative approach to countering misinformation online, suggesting how the monetization of misinformation could be curbed using information interventions. Our study complements and extends prior work on using disclosures40,41 and interventions to counter misinformation5,7 by showing that disclosures about companies advertising on misinformation outlets can shift consumption away from such companies, ultimately incentivizing companies to reduce the financing of misinformation via advertising.

Collection of website and advertising data

To categorize whether a website contains misinformation, we compiled a list of misinformation domains using three different sources: NewsGuard, the Global Disinformation Index (GDI) and websites used in prior work (see Methods, ‘Collecting website data’). NewsGuard and the GDI use automated and manual methods to source and evaluate websites, but each website is rated manually by expert professionals who apply journalistic standards to evaluate online news outlets in a non-partisan and transparent manner.

We collected data on advertiser behaviour from 2019 to 2021 via Oracle’s Moat Pro platform, which includes data collected by ‘crawling’ approximately 10,000 websites daily to create a snapshot of the advertising landscape. Moat’s web crawlers mirror a normal user experience and attempt to visit a representative sample of pages for each website at least once a day. To the best of our knowledge, these data are the gold standard used by many industry stakeholders for competitive analysis. For all the websites in our sample that received non-zero traffic throughout this period and have advertising data available on the Moat Pro platform, we collected monthly data on the advertising companies appearing on each website and the digital advertising platforms used by each website.

Our final dataset, which contains data on advertising and misinformation, consists of 5,485 websites (including 1,276 misinformation websites and 4,209 non-misinformation websites) and 42,595 unique advertisers with 9,539,847 instances of advertising companies appearing on news websites between 2019 and 2021. Additionally, for the most active 100 advertisers each year, as identified by Moat Pro, we collected weekly data on the websites that they appeared on and the digital advertising platforms that they used.

Descriptive analysis

Of the websites in our sample, 89.3% were supported by advertising revenue between 2019 and 2021, and the majority of misinformation websites (74.5%) were monetized by advertising during this period. Moreover, among websites rated by NewsGuard, a much smaller percentage of misinformation websites had a paywall (2.7% in the USA and 3.2% globally) relative to non-misinformation websites (25.0% in the USA and 24.0% globally), which indicates that misinformation websites rely more heavily on advertising than on subscription-based business models for financing. Although different entities may have specific ideological or financial motivations for propagating online misinformation, data from NewsGuard-rated websites (see Supplementary Table 3) show that relative to non-misinformation websites, misinformation websites were also more likely to be operated by individuals as opposed to corporate, non-profit or government entities. Given that advertising appears to be the dominant business model that sustains misinformation outlets, it merits a closer look. We find that companies that advertise on misinformation websites span a wide range of industries (Supplementary Table 4) and account for 46% to 82% of all companies in each industry (Fig. 1a). These include several well-known brands among commonly used household products, technology products and business services, as well as finance, health, government and educational institutions among other industries. Further, the intensity of advertising on misinformation sites is similar (mean = 1.01, 95% confidence interval [0.945, 1.074], t(22) = 0.311, P = 0.759 from one-sample t-test, n = 23) to that on non-misinformation sites for companies across several industries (Fig. 1b).

Fig. 1: Advertising companies appearing on misinformation websites by industry.

From 2019 to 2021, we recorded the number of times companies in a given industry appeared on the 5,485 websites in our sample per month. Our final sample of advertisers consists of 42,595 companies and 9,539,847 instances of companies advertising on the websites in our sample. We removed industries where the number of advertising appearances by all companies combined was below the 5th percentile of the total number of advertising appearances, resulting in a total of 23 industries. a, The proportion of companies in each industry that appear on misinformation websites at least once in our sample. b, The advertising intensity on misinformation sites relative to non-misinformation websites for each industry. This is calculated by dividing the proportion of advertisements from companies of that industry among all advertising appearances on misinformation websites by the same proportion computed for non-misinformation websites. Therefore, values lower than 1 indicate lower, values close to 1 indicate similar and values higher than 1 indicate greater advertising intensity on misinformation sites relative to non-misinformation websites.

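For concreteness, the relative intensity in Fig. 1b can be computed from a long-format table of advertising appearances. The sketch below is a minimal illustration of the caption’s formula, not our exact pipeline; it assumes a pandas DataFrame with hypothetical columns 'industry' and 'is_misinformation'.

```python
import pandas as pd

def relative_intensity(appearances: pd.DataFrame) -> pd.Series:
    """Relative advertising intensity on misinformation vs non-misinformation sites.

    `appearances` is assumed to have one row per advertising appearance with
    columns 'industry' (str) and 'is_misinformation' (bool, for the website).
    """
    # Share of each industry among all appearances on misinformation websites
    misinfo = appearances[appearances["is_misinformation"]]
    misinfo_share = misinfo.groupby("industry").size() / len(misinfo)

    # The same share computed over non-misinformation websites
    other = appearances[~appearances["is_misinformation"]]
    other_share = other.groupby("industry").size() / len(other)

    # Values near 1 indicate similar intensity; values above 1 indicate heavier
    # advertising on misinformation websites relative to non-misinformation websites.
    return (misinfo_share / other_share).rename("relative_intensity")
```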

Next, we examined the role of digital advertising platforms in financing misinformation. For the 100 most active advertisers in each year, we collected weekly data on the websites their advertisements appeared on and their use of digital advertising platforms. On average, about 79.8% of advertisers that used digital advertising platforms in a given week appeared on misinformation websites that week. In contrast, among companies that did not use digital advertising platforms in a given week, only 7.74% appeared on misinformation websites in an average week (two-sided t-test t(192.12) = 93.903, P < 0.001, n = 144). In other words, companies that used digital advertising platforms were approximately ten times more likely to appear on misinformation websites than companies that did not use digital advertising platforms. Moreover, after accounting for industry and time trends, we find that the use of digital advertising platforms substantially amplifies the likelihood of a company’s advertising appearing on misinformation websites (see Extended Data Table 1).
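
A minimal sketch of the weekly comparison described above is shown below. It assumes one row per advertiser-week with hypothetical columns 'week', 'used_platform' and 'on_misinfo_site', and only illustrates the unadjusted comparison; the specification with industry and time controls is reported in Extended Data Table 1.

```python
import pandas as pd
from scipy import stats

def platform_misinfo_gap(weekly: pd.DataFrame):
    """Compare the share of advertisers appearing on misinformation websites
    between advertisers that did and did not use digital advertising platforms.

    `weekly` is assumed to have one row per advertiser-week with columns
    'week', 'used_platform' (bool) and 'on_misinfo_site' (bool).
    """
    # Weekly share of advertisers on misinformation websites, split by platform use
    shares = (weekly.groupby(["week", "used_platform"])["on_misinfo_site"]
                    .mean()
                    .unstack("used_platform"))

    users, non_users = shares[True].dropna(), shares[False].dropna()

    # Welch's two-sided t-test (unequal variances) over the weekly shares
    t_stat, p_value = stats.ttest_ind(users, non_users, equal_var=False)
    return users.mean(), non_users.mean(), t_stat, p_value
```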

Effects of advertising on misinformation

Next, our survey experiment aimed to determine potential changes in consumer behaviour based on experimentally varied information about the roles of companies and platforms in financing misinformation via advertising. Using the framework of Hirschman25, we measured, in an incentive-compatible manner, how people (1) exit (that is, decrease their consumption) and (2) voice concerns about company or platform practices via online petitions in response to the information provided.

Average treatment effects

As detailed in Methods, ‘Consumer experiment design’, participants in our experiment were offered a gift card from a company of their choice. Our primary pre-registered outcome is whether respondents exit by switching their top gift card choice after receiving an information treatment, which takes the value one for people who switch and the value zero for all other participants (n = 4,039). To observe exit outcomes, we focus on company-related information treatments (T1, T3 and T4), where respondents are informed that advertisements from their top choice of gift card company recently appeared on misinformation websites. Table 1, column 1 shows that respondents increasingly exit from (that is, switch away from or decrease their demand for) their first choice company relative to the control group (b = 0.13, 95% confidence interval [0.10, 0.16], P < 0.001) in response to learning about their top choice gift card company’s advertisements appearing on misinformation websites (T1). This effect persists (b = 0.13, 95% confidence interval [0.10, 0.16], P < 0.001; Table 1, column 2) when we control for participants’ demographic and behavioural characteristics in our preferred specification, which enables more precise estimates (see Supplementary Information, ‘Analysis: consumer study outcomes’). We also use text analysis of the responses to a free-form question, which helps to identify the effect of the information intervention more directly. Respondents’ text responses explaining their choice of the gift card reveal that misinformation concerns drive this switching behaviour (Extended Data Fig. 1a).
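
To illustrate the kind of specification behind Table 1, a minimal linear probability model sketch is shown below. The DataFrame, column names, the specific controls and the robust-variance choice are illustrative assumptions rather than our exact pre-registered specification (see Supplementary Information, ‘Analysis: consumer study outcomes’).

```python
import statsmodels.formula.api as smf

def estimate_exit_effects(df):
    """Linear probability model of exit on treatment indicators with controls.

    `df` is assumed to have one row per participant with hypothetical columns:
    'switched'  : 1 if the participant changed their top gift card choice, else 0
    'treatment' : categorical with levels 'control', 'T1', 'T2', 'T3', 'T4'
    'age', 'female', 'college', 'trump_voter' : example demographic controls
    """
    model = smf.ols(
        "switched ~ C(treatment, Treatment(reference='control'))"
        " + age + female + college + trump_voter",
        data=df,
    )
    # Heteroskedasticity-robust standard errors, as is standard for linear probability models
    return model.fit(cov_type="HC1")
```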

Table 1 Average treatment effects on exit

Switching behaviour also increases relative to the control group (b = 0.10, 95% confidence interval [0.07, 0.13], P < 0.001) when respondents are told about the substantial role of digital advertising platforms in placing companies’ advertisements on misinformation websites (T3). This switching behaviour persists even though respondents are more likely to state that digital advertising platforms are responsible for placing companies’ advertisements on misinformation websites by four percentage points relative to the control group (b = 0.04, 95% confidence interval [0.02, 0.06], P < 0.001, Extended Data Fig. 1b). This suggests that advertising companies can continue to experience a decline in demand for their products or services despite consumers knowing that digital advertising platforms have a substantial role in placing companies’ advertisements on misinformation websites.

When provided with a ranking of companies in order of their intensity of appearance on misinformation websites (T4), respondents switch away from opting for their top choice gift card company (b = 0.08, 95% confidence interval [0.05, 0.11], P < 0.001). This result shows that advertising companies can expect to face a decrease in consumption for financing misinformation despite other companies also advertising on misinformation outlets. Respondents are also less likely to mention product features that are relevant to the companies they are interested in—for example, healthy food, good prices and availability in the local area, among others (b = −0.07, 95% confidence interval [−0.09, −0.05], P < 0.001, Extended Data Fig. 1a). Examining the direction of consumer switching shows that among those who switch their gift card preference (n = 430), those provided with company-ranking information in T4 made the most switches towards companies that less frequently advertised on misinformation websites (b = 0.95, 95% confidence interval [0.19, 1.71], P = 0.015). This result suggests that transparently providing a ranking of advertising companies could steer consumer demand towards companies that advertise less frequently on misinformation websites.

Our results are robust to alternative exit outcomes that include whether participants switch to a product they prefer less than their first choice (Table 1, columns 3 and 4) and whether they switch their choice across product categories (Table 1, columns 5 and 6), further indicating that participants incur a real cost by switching to a company that is not equivalent to their top-ranked one. Although our platform-related information treatment (T2) does not explicitly mention the respondents’ first choice gift card company (as in T1, T3 and T4) or its specific use of digital advertising platforms (as in T3), we observe a small amount of switching in T2 relative to the control group (b = 0.03, 95% confidence interval [0.01, 0.05], P = 0.012). This could be because respondents might partially blame their first choice gift card company as it could be top of mind for them42 or assume that the information provided in T2 alluded to the company they had just chosen43. It is important to note that the other outcomes reported in Table 1 (that is, switching to lower-preference gift cards and switching across categories) are not statistically significant for T2, which suggests that T2 does not result in treatment effects similar to our other treatments. Overall, we find that companies whose advertisements appear on misinformation websites can face substantial consumer backlash in terms of both exit and voice. Consumers who switched their gift card choice as a result of our information treatments lost, on average, about 39.4% of the mean and 42.9% of the median gift card value. Given that the value of the gift card is US$25, a 39.4% decline in the mean value translates to treated consumers losing an equivalent of US$9.85. The distribution of weights assigned to the initial top gift card choice and the final selection is shown in Extended Data Fig. 2, which illustrates a substantial leftward shift in the weight distribution when individuals switch away from their top choice. We also find suggestive evidence for vast differences between consumers’ stated and revealed preferences, as shown in Supplementary Fig. 3. When compared to prior research, our 13 percentage point decline in demand is similar in magnitude to the demand reduction observed from receiving negative product feedback44 and exceeds the magnitude of previously measured changes in demand associated with companies taking a social or political stance37,38.

Next, we examine the effects of the information interventions on our pre-registered voice outcomes captured by individuals signing an online petition to voice concerns about advertising on misinformation websites. Participants were given the option to sign one of four different petitions on Change.org (https://www.change.org/): two company-level petitions advocating that companies in general should block or should allow their advertisements to appear on misinformation outlets, and two similar platform-level petitions. Although we observe petition signatures at the group level, we use clicks on petition links as our primary voice outcome since this information is available at the individual level and most closely matches the proportions of actual signatures (Extended Data Fig. 3). Our results are robust to using alternative petition outcomes, such as intention to sign a petition, self-reported petition signatures and actual signatures (Extended Data Table 2). Of note, we do not analyse actual signatures for the T4 group since Change.org accidentally deleted these petitions after they were recorded.

Relative to the control group, participants in the platform (T2) treatment group were significantly more likely (by 5 percentage points, or 36%) to click on the platform petition link when given information about the role of digital advertising platforms in automatically placing advertisements on misinformation websites (Table 2, columns 3 and 4). Text analysis of respondents’ explanations of their petition choice confirms that respondents hold digital advertising platforms more responsible for financing misinformation in T2 relative to the control group (b = 0.02, 95% confidence interval [0.01, 0.04], P = 0.012, Extended Data Fig. 1b). For example, one respondent who opted for the platform-blocking petition explained their choice by stating that the platform option “involves more than one company.” Another stated that their chosen gift card company is “not the only ad being put on misinformation sites. It is a larger issue that has to do with the platforms used to place ads.” Indeed, signing these petitions is the only way that participants can take any action to hold advertising platforms responsible in response to T2, which explicitly highlights the role of platforms.

Table 2 Average treatment effects on voice

Upon receiving information about all six gift card companies’ advertisements appearing on misinformation websites (T4), participants were significantly more likely to click on petition links suggesting that advertising companies need to block their advertisements from appearing on misinformation websites (Table 2, columns 3 and 4). Based on their open-ended text responses (Extended Data Fig. 1a), respondents increasingly highlighted misinformation-related concerns (b = 0.09, 95% confidence interval [0.07, 0.11], P < 0.001) and placed less emphasis on product usage (b = −0.05, 95% confidence interval [−0.07, −0.03], P < 0.001) and product features (b = −0.07, 95% confidence interval [−0.09, −0.05], P < 0.001). In T4, the treatment intensity for companies in general is significantly stronger relative to T1 and T3 since we highlighted that all six gift card companies advertise on misinformation websites (at varying levels). This increase in treatment intensity could explain a higher treatment effect for T4 relative to the null effects for company petitions in the other treatment arms, which mentioned only the respondents’ top choice gift card company.

Heterogeneous treatment effects

Next, we explore heterogeneity in treatment effects along four pre-registered dimensions (gender, political orientation, frequency of use of the company’s products or services, and consumption of misinformation) based on our hypotheses (see Methods, ‘Consumer experiment design’). Focusing on exit (Extended Data Table 3, columns 1–4), we observe positive treatment effects for all groups—that is, male and female, Biden voters and Trump voters, frequent and infrequent users of a company’s products or services, and those who report consuming news from misinformation outlets in our survey and those who do not. As reported in Extended Data Table 3, in line with our predictions, we find stronger treatment effects for exit among women (b = 0.05, P = 0.011) and Biden voters (b = 0.03, P = 0.058) and less strong treatment effects for frequent users (b = 0.05, P = 0.007) and those who consume news from select popular misinformation outlets (b = 0.04, P = 0.097). Respondents who voted for President Biden in the 2020 US Presidential election were also 5 percentage points more likely to voice concerns against company practices (P = 0.04; Extended Data Table 3, column 6). Overall, we believe these heterogeneity results bolster the external validity of our experimental estimates. In particular, we highlight that product-specific factors such as frequency of use can have an important role in the decision to switch or not separately from ideological reasons such as political leaning.

Measuring decision-maker preferences

Given that advertising on misinformation websites is pervasive and could provoke consumer backlash, we next examine what explains the prevalence of this phenomenon among companies. To shed light on this question, we surveyed key strategic decision-makers such as executives and managers at companies by partnering with the executive education programmes at two universities to survey their alumni. In collaboration with our partner organizations, we also verified the job titles of the majority (71%) of our respondents using external sources, which are shown in Extended Data Fig. 4. About 94% of the participants whose job titles we were able to verify served in a top executive role or managerial role at the time of our survey (for example, chief executive, general or operations manager of multiple departments or locations, advertising or sales manager or operations manager) and the remainder were individuals who could influence decision-making within their companies, especially given their interest in learning leadership and managerial skills via executive education programmes.

Baseline beliefs and preferences

We found a wide dispersion in decision-makers’ pre-registered beliefs about the role of companies and platforms in financing misinformation as shown in Supplementary Figs. 6 and 7, which complements prior work showing wide dispersion in decision-makers’ beliefs in other settings45,46. Decision-makers largely overestimate the overall proportion of companies that advertise on misinformation websites and underestimate the role of digital advertising platforms in placing companies’ advertisements on misinformation websites. In particular, respondents estimated that about 64% of companies’ advertisements appeared on misinformation websites on average (Supplementary Table 12). However, our data show that 55% of the 100 most active advertisers appeared on misinformation websites. Regarding the role of digital advertising platforms, respondents estimated that around 44.5% of companies using digital advertising platforms appear on misinformation websites (Supplementary Table 12), whereas 79.8% of companies among the 100 most active advertisers in fact do so. Moreover, only 41% of decision-makers believed that consumers react against companies whose advertisements appear on misinformation websites. These results suggest that decision-makers believe that advertising on misinformation websites is probably commonplace but has little to do with using digital advertising platforms and has limited consequences for the companies involved.

However, in contrast to the average belief that most companies advertised on misinformation websites, respondents substantially underestimated their own company’s likelihood of appearing on misinformation websites. Only 20% of respondents believed that their own company’s advertisements recently appeared on misinformation websites, which indicates the presence of a false uniqueness effect among decision-makers47. We further segmented our results by type of role within the company (Extended Data Table 4). Although our sub-samples were small, these baseline beliefs and characteristics were largely similar across various roles. Among participants who expressed an interest in learning about whether their company’s advertisements appeared on misinformation websites (that is, requested an advertisement check by providing their company name and contact details) and whose companies appeared in our advertising data, approximately 81% of companies appeared on misinformation websites. Moreover, most respondents who were given follow-up information that their companies’ advertisements appeared on misinformation websites reported being surprised by this information (62%), whereas none of those who learned that their companies’ advertisements did not appear on misinformation websites reported being surprised. These figures illustrate that decision-makers are largely uninformed about the high likelihood of their company’s advertisements appearing on misinformation websites. Given these findings about the beliefs of decision-makers, our results suggest that companies may be financing misinformation inadvertently.

Most participants requested an advertisement check by providing their company name and email address (74%). The demand for an advertisement check was high regardless of respondents’ initial beliefs, suggesting a substantial interest in learning about whether their company’s advertisements appeared on misinformation websites. Despite only 41% of respondents agreeing that consumers react against companies whose advertisements appear on misinformation websites, most participants (73%) opted to receive information on how consumers respond to such companies, with 58% enquiring about exit and 15% enquiring about voice. This suggests that although decision-makers may be unaware of how advertising on misinformation websites can provoke consumer backlash, most of them are interested in learning about the degree of potential backlash. Finally, for our most costly revealed-preference measure—that is, signing up to attend a 15-minute expert-led information session on how companies can avoid advertising on misinformation websites—18% of decision-makers opted to sign up, an arguably high rate given the value of decision-makers’ time and the opportunity cost of attending the session.

Information intervention results

We report the results of our information treatment on our pre-registered outcomes. For the full sample of participants, we estimate positive and statistically significant effects on participants’ posterior beliefs about the role of advertising platforms in placing advertisements on misinformation websites (Table 3, column 1), driven mainly by respondents who believe that their company’s advertisements had not appeared on misinformation websites in the recent past (Table 3, column 3).

Table 3 Average treatment effects of information intervention

We find an overall null effect of our information treatment on participants’ demand for a platform-based solution, as measured by their demand for information on which platforms least frequently place companies’ advertisements on misinformation websites (Table 3, columns 4–6). However, this result masks substantial heterogeneity based on participants’ prior beliefs. Since our information treatment changes beliefs for the subset of participants who believe that their company’s advertisements had not recently appeared on misinformation websites (Table 3, column 3), we further investigate and report results based on participants’ prior beliefs for this sub-sample in Table 4. Only participants who were uncertain and unaware about their own company’s advertisements appearing on misinformation websites responded positively and significantly to our information treatment by increasing their demand for a platform-based solution by 36 percentage points (b = 0.36, 95% confidence interval [0.11, 0.61], P = 0.008, n = 68), as shown in Table 4, column 4. Our results imply that the way in which participants respond to information about the role of digital advertising platforms in financing misinformation is highly dependent on their prior beliefs about their own company. Such information could make companies switch advertising platforms or pressure the platforms they currently use to enable them to easily steer their advertising away from misinformation outlets. This finding is in line with the lack of attention that characterizes decision-makers’ behaviour across various settings48,49,50. However, these results should be viewed as suggestive and exploratory since the sub-sample sizes in these regressions are small and these sample splits were not pre-registered.

Table 4 Treatment effects based on prior beliefs

We did not find meaningful treatment effects for our donation preference outcome, which measures the proportion of respondents who prefer that we donate to the GDI instead of DataKind (Supplementary Table 13). Since both GDI and DataKind have similar goals of advancing technology’s ethical and responsible use, respondents may have considered their missions interchangeable. Moreover, unlike our first behavioural outcome, respondents could have considered donating to the GDI less relevant to their own organizations’ needs and more a matter of personal preference.

Discussion

Together, our descriptive and experimental findings offer clear, practical implications. Given the potential for a substantial decline in demand, as demonstrated by our consumer study, advertising companies may wish to account for consumer preferences in placing their advertising across various online outlets and exercise caution while incorporating automation in their business processes via digital advertising platforms. For instance, given that consumers switched to other products upon learning about a company’s advertisements appearing on misinformation websites, companies could use lists of misinformation outlets provided by independent third-party organizations such as NewsGuard and the GDI to limit advertising budgets being spent on misinformation outlets through digital platforms. Moreover, since consumer backlash was particularly strong for women and politically left-leaning consumers, companies targeting such audiences may need to exercise greater caution.

On the basis of our results, we identify two interventions that could reduce the financing of online misinformation. First, digital advertising platforms that run automated auctions could enable advertisers to more easily access data on whether their advertisements appear on misinformation outlets. This would enable advertisers to make advertising placement decisions consistent with their preferences rather than inadvertently financing misinformation51. Second, while it is currently possible for consumers to find out about companies financing misinformation through media reports, digital platforms could improve transparency for consumers about which companies advertise on misinformation outlets. Platforms could provide such information to consumers when they are viewing an advertisement using simple information labels (as in our ‘company only’ information treatment) similar to the ‘sponsored by’ and ‘paid for by’ labels that are presently common on various digital media platforms. Similarly, the rank-based information in our company-ranking treatment (T4) could be displayed as a ranking of companies by the intensity of their appearance on misinformation websites where customers are selecting products from a menu of choices while shopping. Platforms have provided similar contextual information about companies in other settings—for example, Google Flights displays carbon emissions data alongside flight prices when people select a flight to purchase among several options52. Enabling consumers to view such information at the point of purchase could provide a stronger incentive for companies to steer their advertisements away from such outlets, especially since the effect of negative information can persist for several months53. Overall, these interventions could decrease the inadvertent advertising revenue going towards misinformation outlets, which could eventually lead to such sites ceasing to operate, as observed anecdotally in prior work29.

These interventions could ensure that both consumers and advertisers are provided with information about the consequences of their respective purchasing and advertising placement decisions so that they can account for their preferences. Having access to such information is necessary for an efficiently functioning economic system in accordance with the first fundamental theorem of welfare economics. However, although digital platforms are uniquely well positioned in the ecosystem of consumers, advertisers and publishers to implement information interventions in the form of disclosures and rankings54,55, they may not have incentives to implement such interventions. Against the backdrop of mounting pressure from advertisers27,28 and calls for transparency in the programmatic advertising business56, information-based interventions could be incorporated into existing legislation to improve transparency. These include efforts such as the EU Digital Services Act, which includes a Code of Practice on Disinformation with enforceable provisions for different stakeholders in the advertising ecosystem to collectively fight misinformation, and US bills such as the Honest Ads Act and the Competition and Transparency in Digital Advertising (CTDA) Act, which include provisions to improve transparency in political advertising and the digital advertising ecosystem in general. Notably, in recent years, policy proposals that aim to reduce the prevalence of misinformation, such as the Combating Misinformation and Disinformation bill in Australia and the bill against fake news in Germany, have faced backlash over posing risks to free speech57,58. Although such proposals face the challenge of striking the right balance between combating misinformation and protecting freedom of expression, the information interventions that we identify could help counter the financial incentive to produce misinformation in the first place by reducing the unintended advertising revenue going towards misinformation outlets. There are many parallels for regulation by information provision to address externalities in other industries, including chemicals (toxic release inventory reporting requirements), automobiles (fuel consumption information), food (nutrition and content labels) and airlines (greenhouse gas emissions), of which several have been demonstrated to be effective in prior work41,59,60.

Previous work on ‘demand-side’ interventions to counter online misinformation has focused on reducing the consumption and spread of misinformation among news consumers on online platforms. Although interventions such as accuracy prompts and digital literacy tips can increase the quality of news that people share5, this line of work has found limited support for news credibility signals in increasing the demand for credible news61 or in reducing misperceptions among users6. Such constraints in changing user behaviour may also apply to credibility signals such as watermarks for detecting AI-generated misinformation. Moreover, whereas such interventions are only effective for the small subset of users who are exposed to misinformation62, our complementary ‘supply-side’ approach targets entities and individuals who might not necessarily consume or spread misinformation themselves.

Relative to existing proposals for supply-side interventions to curb the production of misinformation, which involve social media platforms banning the advertising of false news63 or changing their advertising-driven business model altogether18, we outline a middle path to suggest that accounting for the preferences of advertisers and consumers could help counter the financing of online misinformation. Although platforms could coordinate to identify and deplatform misinformation websites64, prior work suggests that misinformation websites nearly always resurface through alternative providers unless the incentive to produce misinformation is addressed29. Moreover, the information interventions that we identify are also an improvement on the status quo, whereby advertisers and consumers can only implement their preferences by participating in boycotts of digital platforms over their inability to contain misinformation. Allowing advertisers to more easily observe and control whether their advertisements appear on misinformation websites could also limit backlash by enabling advertisers to better implement their preferences rather than participating in one-off short-term advertising boycotts27,28. Additionally, since consistently providing negative information can create lasting associations for consumers65, providing information disclosures on every advertisement about whether the advertising company involved appears on misinformation websites could have a substantial effect on consumer demand over time, providing incentives for advertising companies to reduce advertising on misinformation websites.

Given our findings, we suggest three promising avenues for future research. First, future work could evaluate the effectiveness of our information interventions in the field over a longer time period to quantify the decline in revenue generated by misinformation outlets resulting from increasing transparency for consumers or advertisers. Related to this, future work could also target a wider set of advertisers to validate the robustness of our interventions, which would allow for broader generalizability. Second, our results on whether companies are willing to adopt solutions to avoid monetizing misinformation are based on their existing (often incorrect) beliefs about the prevalence of advertising on misinformation websites in general and for their own company. More research is needed to understand how advertising companies would respond if they held correct beliefs. Third, although our research identifies potential interventions that digital platforms can adopt to curb the monetization of online misinformation, it is unclear whether it is in the interest of digital advertising platforms to do so. Moreover, whether the potential monetary and societal benefits of the information interventions we identify outweigh the revenue platforms generate by serving advertisements on misinformation websites remains to be studied. Overall, the effectiveness of platforms in mitigating misinformation will depend on a multi-pronged approach. Given that misinformation is largely financially motivated and that financially sustaining online misinformation can be substantially harmful for the advertising companies involved, simple low-cost informational interventions such as the ones we identify could go a long way in curbing the supply of online misinformation.

Methods

Background on digital advertising

The predominant business model of several mainstream digital media platforms relies on monetizing attention via advertising3. While these platforms typically offer free content and services to individual consumers, they generate revenue by serving as an intermediary or advertising exchange connecting advertisers with independent websites that want to host advertisements. To do so, platforms run online auctions to algorithmically distribute advertising across websites, known as ‘programmatic advertising’. For example, Google distributes advertising in this manner to more than two million non-Google sites that are part of the Google Display Network. This allows websites to generate revenue for hosting advertising, and they share a percentage of this payment with the platform. In the USA, more than 80% of digital display advertisements are placed programmatically16. We refer to these advertising exchanges as digital advertising platforms and use the term digital platforms to collectively refer to all the services offered by such media platforms.

We examine the role of advertising companies and digital advertising platforms in monetizing online misinformation. While in other forms of (offline) media, advertisers typically have substantial control over where their advertisements appear, advertising placement through digital advertising platforms is mainly automated. Since most companies do not have the capacity to participate in high-frequency advertising auctions that require them to place individual bids for each advertising slot they are interested in, they typically outsource the bidding process to an advertising platform. Such programmatic advertising gives companies relatively less control over where their advertisements end up online. However, companies can take steps to reduce advertising on misinformation websites, such as by only being part of advertising auctions for a select list of credible websites or blocking advertisements from appearing on specific misinformation outlets.
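
To make the mechanism concrete, the sketch below shows a heavily simplified second-price auction for a single ad slot with an advertiser-side blocklist. Real programmatic exchanges involve many more participants, signals and pricing rules; the class and function names here are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Advertiser:
    name: str
    bid: float                                          # bid per impression, in US$
    blocked_domains: set = field(default_factory=set)   # e.g. a list of rated misinformation domains

def run_auction(domain: str, advertisers: list) -> tuple:
    """Simplified second-price auction for one ad slot on `domain`.

    Advertisers that block the domain do not participate, which is how an
    advertiser-side blocklist keeps their advertisements (and money) off those websites.
    """
    eligible = [a for a in advertisers if domain not in a.blocked_domains]
    if not eligible:
        return None, 0.0
    ranked = sorted(eligible, key=lambda a: a.bid, reverse=True)
    winner = ranked[0]
    # The winner pays the second-highest eligible bid (or their own bid if alone)
    price = ranked[1].bid if len(ranked) > 1 else winner.bid
    return winner.name, price
```

In this stylized setting, an advertiser whose blocklist contains a flagged domain simply never bids on that domain’s inventory, so none of its advertising spend flows to the website.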

Collecting website data

We collect data on misinformation websites in three steps. First, we use a dataset maintained by NewsGuard. This company rates all the news and information websites that account for 95% of online engagement in each of the five countries where it operates. Journalists and experienced editors manually generate these ratings by reviewing news and information websites according to nine apolitical journalistic criteria. Recent research has used this dataset to identify misinformation websites6,66,67. In this paper, we consider each website that NewsGuard rates as repeatedly publishing false content between 2019 and 2021 to be a misinformation website and all others to be non-misinformation websites, leading to a set of 1,546 misinformation websites and 6,499 non-misinformation websites. To obtain coverage throughout our study period, we sample websites provided by NewsGuard from the start, middle and end of each year from 2019 to 2021. We also sample websites from January 2022 and June 2022 to account for websites that may have existed during our study period but were discovered later. Supplementary Table 3 summarizes the characteristics of this dataset. Our NewsGuard dataset contains websites across the political spectrum, including left-leaning websites (for example, https://www.palmerreport.com/ and https://occupydemocrats.com/), politically neutral websites (for example, https://rt.com/ and https://www.nationalenquirer.com), and right-leaning websites (for example, https://www.thegatewaypundit.com/ and http://theconservativetreehouse.com/).

Note that prior research that has used the NewsGuard dataset has often used the term ‘untrustworthy’ to describe websites6,67. Such research has used NewsGuard’s aggregate classification whereby a site that scores below a certain threshold (60 points) on NewsGuard’s weighted score system is labelled as untrustworthy. Instead of using NewsGuard’s overall score for a website, we use the first criterion that NewsGuard assesses for each website (that is, whether a website repeatedly publishes false news) to identify a set of 1,546 misinformation websites. While 94% of the NewsGuard misinformation websites we identify in this manner are also untrustworthy based on NewsGuard’s classification, only about 52% of the untrustworthy websites are misinformation websites or websites that repeatedly publish false news. Our measure of misinformation is, therefore, more conservative than prior work using NewsGuard’s ‘untrustworthy’ label.

In addition to the NewsGuard dataset, we use a list of websites provided by the GDI. This non-profit organization identifies disinformation by analysing both the content and context of a message, and how it is spread through networks and across platforms68. In this way, the GDI maintains a monthly-updated list of websites, which it also shares with interested advertising tech platforms to help reduce advertising on misinformation websites. The GDI list allows us to identify 1,869 additional misinformation websites. Finally, we augment our list of misinformation websites with 396 additional ones used in prior work69,70. Among the websites that NewsGuard rated as non-misinformation (at any point in our sample), 310 websites were considered to be misinformation websites by our other sources or by NewsGuard itself (during a different period in our sample). We categorize these websites as misinformation websites given their risk of producing misinformation.

Altogether, our website dataset consists of 10,310 websites, including 3,811 misinformation and 6,499 non-misinformation websites. Similar to prior work6,67, our final measure of misinformation is at the level of the website or online news outlet. Aggregating article-level information and using website-level metadata is meaningful since it reduces noise when arriving at a website-level measure. Finally, we use data from SEMRush, a leading online analytics platform, to determine the level of monthly traffic received by each website from 2019 to 2021.
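
The combination rule across the three sources can be summarized in a short sketch, assuming each source is available as a plain set of domains (the variable names are hypothetical): a domain is labelled a misinformation website if any source flags it at any point during the sample.

```python
def label_misinformation(newsguard_false_content: set,
                         gdi_flagged: set,
                         prior_work_flagged: set,
                         newsguard_rated: set) -> dict:
    """Label each domain following the combination rule described above.

    A domain counts as a misinformation website if NewsGuard flags it as
    repeatedly publishing false content, or if the GDI or prior work flag it,
    even when NewsGuard rates it as non-misinformation at some point.
    """
    flagged = newsguard_false_content | gdi_flagged | prior_work_flagged
    all_domains = newsguard_rated | flagged
    return {domain: (domain in flagged) for domain in all_domains}
```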

Consumer experiment design

This study was reviewed by the Stanford University Institutional Review Board (protocol no. IRB-63897) and the Carnegie Mellon University Institutional Review Board (protocol no. IRB00000603). Our study was pre-registered at the American Economic Association’s Registry under AEARCTR-0009973. Informed consent was obtained from all participants at the beginning of the survey.

Setting and sample recruitment

We recruited a sample of US internet users via CloudResearch. CloudResearch screened respondents for our study so that they are representative of the US population in terms of age, gender and race based on the US Census (2020). It is important to note that while we recruited our sample to be representative on these dimensions to improve the generalizability and external validity of our results, our sample is a diverse sample of US internet users, which is not necessarily representative of the US population on other dimensions71. To ensure data quality, we include a screener in our survey to check whether participants pay attention to the information provided. Only participants who pass this screener can proceed with the survey. Our total sample includes 4,039 participants, who are randomized into five groups approximately evenly.

The flow of the survey study is shown in Supplementary Fig. 1. We begin by asking participants to report demographics such as age, gender and residence. We then ask participants about their behaviours in terms of the news outlets they have used in the past 12 months (from a list of trustworthy and misinformation outlets), their trust in the media (on a 5-point scale), the online services or platforms they have used and the number of petitions they have signed in the past 12 months.

Initial gift card preferences

We then inform participants that one in five (that is, 20% of all respondents) who complete the survey will be offered a US$25 gift card from a company of their choice out of six company options. Respondents are asked to rank the six gift card companies on a scale from their first choice (most preferred) to their sixth choice (least preferred). These six companies belong to one of three categories: fast food, food delivery and ride-sharing. All six companies appeared on the misinformation websites in our sample during the past three years (2019–2021), offer items below US$25, and are commonly used throughout the USA. The order in which the six companies are presented is randomized at the respondent level. As a robustness check, we also ask respondents to assign weights to each of the six gift card options. This question gives respondents greater flexibility by allowing them to indicate the possibility of indifference (that is, equal weights) between any set of options. We then ask participants to confirm which gift card they would like to receive if they are selected, to ensure they have consistent preferences regardless of how the question is asked. At this initial elicitation stage, the respondents did not know that they would get another chance to revise their choice. Hence, these choices can be thought of as capturing their revealed preference.

Information treatments

All participants in the experiment are given baseline information on misinformation and advertising. This is meant to ensure that all participants in our experiment are made aware of how we define misinformation along with examples of a few misinformation websites (including right-wing, neutral and left-wing misinformation websites), how misinformation websites are identified, and how companies advertise on misinformation websites (via an illustrative example) and use digital platforms to automate placing advertisements.

Participants are then randomized into one control and four treatment groups, in which the information treatments are all based on factual information from our data and prior research. We use an active control design to isolate the effect of providing information relevant to the practice of specific companies on people’s behaviour9. Participants in the control group are given generic information based on prior research that is unrelated to advertising companies or platforms but relevant to the topic of news and misinformation.

In our first ‘company only’ treatment group (T1), participants are given factual information stating that advertisements from their top choice gift card company appeared on misinformation websites in the recent past. Based on their preferences, people may change their final gift card preference away from their initial top-ranked company after receiving this information. It is unclear, however, whether advertising on misinformation websites would cause a sufficient change in consumption patterns and which sets of participants may be more affected.

Our second ‘platform only’ treatment group (T2) informs participants that companies using digital advertising platforms were about 10 times more likely to appear on misinformation websites than companies that did not use such platforms in the recent past. This treatment measures the effect of information about the role of digital advertising platforms in financing misinformation outlets. Because it contains no information about specific advertising companies, it effectively serves as a second control group for our company-level outcome while measuring how people respond on our platform-related outcome.

Because our descriptive data suggest that the use of digital advertising platforms amplifies advertising revenue for misinformation outlets, we are interested in measuring how consumers respond to a specific advertising company appearing on misinformation websites when they are also informed of the potential role of digital advertising platforms in placing companies’ advertising on such websites. It is unclear whether consumers will attribute more blame to companies or to advertising platforms for financing misinformation websites when informed about the roles of the different stakeholders in this ecosystem. For this reason, our third ‘company and platform’ treatment (T3) combines information from our first two treatments (T1 and T2). As in T1, participants are given factual information that advertisements from their top choice gift card company appeared on misinformation websites in the recent past. Additionally, we inform participants that their top choice company used digital advertising platforms and that companies using such platforms were about ten times more likely to appear on misinformation websites than companies that did not, as in T2.

Finally, since several advertising companies appear on misinformation websites, we would like to determine whether informing consumers that other advertising companies also appear on misinformation websites changes their response towards their top choice company. In our fourth company-ranking treatment (T4), participants are given factual information stating that “In the recent past, ads from all six companies below repeatedly appeared on misinformation websites in the following order of intensity”, together with a ranking from one of the three years in our study period (2019, 2020 or 2021). We personalize these rankings using truthful data from different years such that the respondent’s top choice company does not appear last in the ranking (that is, is not the company that advertises least on misinformation websites) and, in most cases, advertises more intensely on misinformation websites than its potential substitute in the same category (for example, fast food, food delivery or ride-sharing). This treatment allows us to measure potential differences in the direction of switching, such as whether consumers switch towards companies that advertise more or less intensely on misinformation websites. It could also give consumers plausible deniability (‘everyone advertises on misinformation websites’), leading to ambiguous predictions about the direction and magnitude of the treatment effect.
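The personalization rule for T4 can be sketched as follows. This is a minimal, illustrative sketch: the function name, the structure of the ranking data and the company identifiers are assumptions made for exposition, not materials from the study.

```python
# Illustrative sketch of the T4 personalization rule: pick a year (2019-2021)
# whose advertising-intensity ranking (a) does not place the respondent's top
# choice company last and (b) where possible places it above its within-category
# substitute. Data structures and names are hypothetical.
def pick_ranking_year(rankings_by_year, top_choice, substitute):
    """rankings_by_year maps a year to a list of the six companies, ordered
    from most to least intense advertiser on misinformation websites."""
    candidates = []
    for year, ranking in rankings_by_year.items():
        if ranking[-1] == top_choice:
            continue  # rule (a): top choice must not be the least intense advertiser
        beats_substitute = ranking.index(top_choice) < ranking.index(substitute)
        candidates.append((beats_substitute, year))
    candidates.sort(reverse=True)  # prefer years that also satisfy rule (b)
    return candidates[0][1] if candidates else None

# Example with hypothetical companies A-F:
# pick_ranking_year({2019: ["A", "B", "C", "D", "E", "F"],
#                    2020: ["B", "A", "C", "D", "E", "F"]},
#                   top_choice="A", substitute="B")  # -> 2019
```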

Outcomes

We measure two pre-registered behavioural outcomes that collectively capture how people respond to our information treatments in terms of both voice and exit25. After the information treatment, all participants are asked to make their final gift card choice from the same six options they were shown earlier. Our main outcome of interest is whether participants ‘exit’, or switch, their gift card preference—that is, whether they select a different gift card after the information treatment than the top choice they indicated before it. To ensure incentive compatibility, participants are (truthfully) told that those randomly selected to receive a gift card will be offered the gift card of their choice at the end of our study. As mentioned above, the probability of being randomly chosen to receive a gift card is 20%. We chose a high probability of receiving a gift card relative to other online experiments because prior work has shown that consumers process choice-relevant information more carefully as the realization probability increases72. To make the gift card outcome as realistic as possible, we also used a large gift card value (US$25). The focus of our experiments is on single-shot outcomes. While it would have been interesting to capture longer-term effects, the cost of implementing our gift card outcome for a large sample, together with expenditure on the other studies, made a follow-up study cost-prohibitive.

Second, participants are given the option to sign one of several real online petitions that we created and hosted on Change.org. Participants can opt to sign a petition that advocates for either blocking or allowing advertising on misinformation websites, or choose not to sign any petition. Further, participants could choose between two petitions for blocking advertisements on misinformation websites, suggesting that either (1) advertising companies or (2) digital advertising platforms need to block advertisements from appearing on misinformation websites. Overall, participants selected among the following five choices: (1) “Companies like X need to block their ads from appearing on misinformation websites.”, where X is their top choice gift card company; (2) “Companies like X need to allow their ads to appear on misinformation websites.”, where X is their top choice gift card company; (3) “Digital ad platforms used by companies need to block ads from appearing on misinformation websites.”; (4) “Digital ad platforms used by companies need to allow ads to appear on misinformation websites.”; and (5) “I do not want to sign any petition.” To track the number of petition signatures for each of the four petition options across our randomized groups, we provide separate petition links to participants in each randomized group. We record several petition-related outcomes. First, we measure participants’ intention to sign a petition based on the option they select in this question. Participants who pass our attention check and opt to sign a petition are later provided with a link to their petition of choice. This allows us to track whether participants click on the petition link provided. Participants can also self-report whether they signed the petition. Finally, for each randomized group, we can track the total number of actual petition signatures.

Our petition outcome serves two purposes. First, while our gift card outcome measures how people change their consumption behaviour in response to the information provided, people may also respond to our information treatments in alternative ways—for example, by voicing their concerns or supplying information to the parties involved25,26. Given that the process of signing a petition is costly, participants’ responses to this outcome constitute a meaningful measure similar to petition measures used in prior experimental work73,74. Second, since participants must choose between signing either company or platform petitions, this outcome allows us to measure whether, across our treatments, people hold advertising companies more responsible for financing misinformation than the digital advertising platforms that automatically place advertisements for companies.

In addition to our behavioural outcomes, we also record participants’ stated preferences. To do so, we ask participants about their degree of agreement with several statements about misinformation on a seven-point scale ranging from ‘strongly agree’ to ‘strongly disagree’. These include (1) whether companies have an important role in reducing the spread of misinformation through their advertising practices; and (2) whether digital platforms should give companies the option to avoid advertising on misinformation websites.

Heterogeneous treatment effects

We explore heterogeneity in consumer responses along four pre-registered dimensions. First, prior research recognizes differences in the salience of prosocial motivations across gender75, with women being more affected by social-impact messages than men76 and more critical consumers of new media content77. Given these findings, we could expect female participants to be more strongly affected by our information treatments.

Responses to our information treatments may also differ by respondents’ political orientation. According to prior research, conservatives are especially likely to associate the mainstream media with the term ‘fake news’. These perceptions are generally linked to lower trust in media, voting for Trump, and higher belief in conspiracy theories78. Moreover, conservatives are more likely to consume misinformation2 and the supply of misinformation has been found to be higher on the ideological right than on the left79. Consequently, we might expect stronger treatment effects for left-wing respondents.

Consumers who more frequently use a company’s products or services could be presumed to be more loyal towards the company or derive greater utility from its use, which could limit changes in their behaviour37. Alternatively, more frequent consumers may be more strongly affected by our information treatments as they may perceive their usage as supporting such company practices to a greater extent than less frequent consumers.

Finally, we measure whether people’s responses differ by their own consumption of misinformation, defined by whether they reported using any misinformation outlets in the initial question asking them to select the news outlets they used in the past 12 months.

Tackling experimental validity concerns

In our incentivized, online setting, where we measure behavioural outcomes, we expect experimenter demand effects to be minimal, as has been evidenced in the experimental literature80,81. We nevertheless take several steps to mitigate potential experimenter demand effects, including implementing best practices recommended in prior work9. First, our experiment has a neutral framing throughout, starting from the recruitment of participants. While recruiting participants, we invite them to “take a survey about the news, technology and businesses” without making any specific references to misinformation or its effects. While introducing misinformation websites and how they are identified by independent non-partisan organizations, we include examples of misinformation websites across the political spectrum (including both right-wing and left-wing sites) and provide an illustrative example of misinformation by foreign actors. In drafting the survey instruments, we kept the phrasing of the questions and the available choices as neutral as possible. For example, while introducing our online petitions, we presented participants with the option to sign real petitions that advocate both blocking and allowing advertising on misinformation sites. Indeed, we find that the vast majority of participants believe that the information provided in the survey was unbiased, as shown in Supplementary Fig. 4. Only about 10% of participants chose one of the ‘biased’ or ‘very biased’ options when asked to rate the political bias of the survey information on a seven-point scale ranging from ‘very right-wing biased’ to ‘very left-wing biased’.

In our active control design, participants in all randomized groups are presented with the same baseline information about misinformation, receive misinformation-related information in the information intervention and are asked the same questions after the intervention, so that all groups are exposed to the same topics and potential differences in the understanding of the study across treatment groups are minimized. Moreover, to maximize privacy and increase truthful reporting82, respondents complete the surveys on their own devices without the physical presence of a researcher. We also do not collect respondents’ names or contact details (with the exception of eliciting emails to provide gift cards to participants at the end of the study).

In presenting our information interventions and measuring our behavioural outcomes, we take special care not to highlight the names of the specific entities being randomized across groups, to avoid emphasizing what is being measured. We do, however, highlight our gift card incentives by putting the gift card information in bold text to ensure incentive compatibility, since prior work has found that failing to make incentives conspicuous can vastly undermine their ability to shift behaviour83.

Apart from making the above design choices to minimize experimenter demand effects, we measure their relevance using a survey question. Since demand effects are less of a concern if participants cannot identify the intent of the study9, we ask participants an open-ended question: “What do you think is the purpose of our study?”. Following prior work84,85, we then analyse the responses to this question to examine whether they differ across treatment groups. To measure potential differences in respondents’ perceptions of the study, we examine their open-ended text responses about the purpose of the study using a support vector machine (SVM) classifier, which incorporates several text-analysis features, including word, character and sentence counts, sentiments, topics (using Gensim) and word embeddings. We predict treatment status using the classifier, keeping 75% of the sample for the training set and the remaining 25% as the test set. The classifier predicts treatment status at a rate similar to chance for our main treatment groups relative to the control group, as shown in Supplementary Table 11. These results, which are similar in magnitude to those found in previous research84,85, suggest that our treatments do not substantially affect participants’ perceptions about the purpose of the study. Overall, this analysis gives us confidence that our main experimental findings are unlikely to be driven by experimenter demand effects.
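As a rough illustration of this demand-effect check, the sketch below trains a linear SVM on text features of the open-ended responses and reports held-out accuracy. The exact feature set used in the study (sentiments, Gensim topics, embeddings) and its hyperparameters are not reproduced here, so the specification should be read as an assumption.

```python
# Minimal sketch of the demand-effect check: predict treatment vs. control
# status from open-ended responses about the study's purpose. Accuracy near
# chance suggests the treatment did not shift perceptions of the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def purpose_classifier_accuracy(responses, labels, seed=0):
    """Train on 75% of responses, return accuracy on the held-out 25%."""
    X_train, X_test, y_train, y_test = train_test_split(
        responses, labels, test_size=0.25, random_state=seed)
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # word-level text features
        LinearSVC())                                     # linear support vector machine
    model.fit(X_train, y_train)
    return accuracy_score(y_test, model.predict(X_test))
```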

To address external validity concerns, we incorporate additional exit outcomes in the paper, showing that after our information interventions treated individuals were 8 percentage points more likely to switch to lower-preference products (Table 1, columns 3 and 4) and 5 percentage points more likely to switch to products in a different category (Table 1, columns 5 and 6). We also show in Supplementary Table 8 that as the difference between participants’ highest weighted and second highest weighted gift card choices increases, their switching behaviour decreases. This shows that the weights assigned by participants to their gift card options capture meaningful and costly differences in value, highlighting the external validity of our findings. More generally, our pre-registered heterogeneity analysis lends credence to the study’s external validity. In line with expectations, we find that less frequent users and more politically liberal individuals are likelier to switch (see Extended Data Table 3 for the full set of pre-registered heterogeneity results). Moreover, we find that the cost of switching gift cards varies with participants’ observable characteristics. For example, treated participants who reported not using any of the misinformation news outlets in our survey lost 50% of the median value (US$12.50) of their initial top choice gift card, whereas treated participants who reported reading such outlets lost 33.3% of the median value (US$8.33). Participants’ text responses also indicate that they believed their choices to be consequential (see Supplementary Tables 1 and 2). For example, while explaining their choice of gift card, one participant stated, “Because I would most likely use this gift card on my next visit to… and it is less likely that i would use the others.” Regarding the petition outcome, one participant stated, “The source of this problem seems to be from the digital advertising platforms, so I’d rather sign the petition that stops them from putting ads on misinformation websites.”
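One way to translate the elicited gift card weights into an implied dollar cost of switching is sketched below. This is an assumed, simplified valuation (weights treated as proportional to value), not necessarily the exact computation behind the figures above.

```python
# Assumed valuation: treat a participant's elicited weights as proportional to
# the value they place on each US$25 gift card, so switching from the initial
# top choice to a lower-weighted card forfeits part of the card's face value.
GIFT_CARD_VALUE = 25.0  # US$

def implied_switching_cost(weights, initial_top, final_choice):
    """Dollar value forgone by switching from initial_top to final_choice."""
    top_w, final_w = weights[initial_top], weights[final_choice]
    if top_w <= 0:
        return 0.0
    return GIFT_CARD_VALUE * (top_w - final_w) / top_w

# Example with hypothetical weights: switching from a card weighted 0.30 to one
# weighted 0.20 implies a loss of 25 * (0.30 - 0.20) / 0.30 ≈ US$8.33.
```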

Decision-maker experiment design

We followed the same IRB review, pre-registration and consent procedures as those used for our consumer study. This study addresses two research questions. First, we aim to measure decision-makers’ existing beliefs and preferences about advertising on misinformation websites, which helps establish whether companies may be sustaining online misinformation inadvertently or willingly. Second, we ask how decision-makers update their beliefs and their demand for a platform-based solution to avoid advertising on misinformation websites in response to information about the role of platforms in amplifying the financing of misinformation. This indicates whether companies may be more interested in adopting advertising platforms that reduce the financing of misinformation. To this end, we conduct an information-provision experiment9. While past work has examined how firm behaviour regarding market decisions changes in response to new information48,49, it is unclear how information on the role of digital advertising platforms in amplifying advertising on misinformation would affect decision-makers’ non-market strategies.

Setting and sample recruitment

To recruit participants, we partnered with the executive education programmes at the Stanford Graduate School of Business and Heinz College at Carnegie Mellon University. We did so in order to survey senior managers and leaders who could influence strategic decision-making within their firms, in contrast to studies relying heavily on MBA students for understanding decision-making in various contexts such as competition, pricing, strategic alliances and marketing86,87,88,89. Additionally, partnering with two university programmes instead of a specific firm allowed us to access a more diverse sample of companies than prior work that sampled specific types of firms—for example, innovative firms, startups or small businesses90,91,92. Throughout this study, we use the preferences of decision-makers (for example, chief executive officers) as a proxy for company-level preferences since people in such roles shape the outcomes of their companies through their strategic decisions93,94.

Our partner organizations sent emails to their alumni on our behalf. We used neutral language in our recruitment emails to attract a broad audience of participants regardless of their initial beliefs and concerns about misinformation, stating our goal as “conducting vital research on the role of digital technologies in impacting your organization” without mentioning misinformation. We received 567 complete responses, of which we keep the 90% that come from currently employed respondents. To ensure data quality, we dropped an additional 13% of responses in which participants were inattentive, resulting in a final sample of 442 responses. These participants were deemed inattentive because they provided an answer greater than 100 when asked to estimate a number out of 100 in the two questions eliciting their prior beliefs about companies and platforms before the information treatment. Our final sample of 442 respondents comes from companies that span all 23 industries in our descriptive analysis. Moreover, as shown in Supplementary Fig. 5, our sample of participants represents a broad array of company sizes and experience levels at their current roles. Additionally, about 22% of the executives in our sample (and 25% of all our participants) are women, which is in line with the 21% to 26% industry estimates of women in senior roles globally95,96.
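A minimal sketch of the sample-construction filters described above follows; the column names are hypothetical placeholders for the survey export rather than the study’s actual variable names.

```python
# Keep currently employed respondents whose answers to the two 'out of 100'
# prior-belief questions are valid (at most 100); others are treated as
# inattentive and dropped. Column names are illustrative assumptions.
import pandas as pd

def build_final_sample(responses: pd.DataFrame) -> pd.DataFrame:
    employed = responses["currently_employed"]  # boolean employment indicator
    attentive = (
        responses["prior_companies_per_100"].between(0, 100)
        & responses["prior_platforms_per_100"].between(0, 100)
    )
    return responses[employed & attentive]
```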

Supplementary Fig. 2 shows the design of the survey study. We first elicit participants’ current employment status. All those working in some capacity are allowed to continue the survey, whereas the rest of the participants are screened out. After asking for their main occupation, all participants in the experiment are provided with baseline information on misinformation and advertising similar to that provided in the consumer experiment.

Baseline beliefs and preferences

In our pre-registration, we highlighted that we would measure the baseline beliefs and preferences of decision-makers. We measure participants’ baseline beliefs about the roles that companies in general, their own company and platforms in general play in financing misinformation. Specifically, participants are asked to estimate the number of companies among the 100 most active advertisers whose advertisements appeared on misinformation websites during the past three years (2019–2021). Additionally, we ask participants to report whether they think their company or organization had its advertisements appear on misinformation websites in the past three years. Finally, we measure participants’ beliefs about the role of digital advertising platforms in placing advertisements on misinformation websites. To do so, we first inform participants that during the past three years (2019–2021), out of every 100 companies that did not use digital advertising platforms, eight companies appeared on misinformation websites on average. We then ask participants to provide their best estimate of the number of companies whose advertisements appeared on misinformation websites out of every 100 companies that did use digital advertising platforms.
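For reference, combining the stated base rate with the roughly tenfold amplification reported elsewhere in this study yields a simple benchmark for this elicitation; the assumption here is that the two figures combine multiplicatively, which is an expository simplification rather than a claim about participants’ answers.

```latex
\[
  \underbrace{8}_{\substack{\text{companies per 100}\\ \text{not using platforms}}}
  \;\times\;
  \underbrace{10}_{\text{amplification factor}}
  \;\approx\;
  \underbrace{80}_{\substack{\text{companies per 100}\\ \text{using platforms}}}
\]
```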

In addition to recording participants’ stated preferences using self-reported survey measures, we measure their revealed preferences. To do so in an incentive-compatible way, participants are asked three questions in randomized order: (1) information demand about consumer responses—that is, whether they would like to learn how consumers respond to companies whose advertisements appear on misinformation websites (based on our consumer survey experiment); (2) advertisement check—that is, whether they would like to know about their own company’s advertisements appearing on misinformation websites in the recent past; and (3) demand for a solution—that is, whether they would like to sign up for a 15-minute information session on how companies can manage where their advertisements appear online. Participants are told they can receive the information about consumer responses at the end of the study if they opt to receive it, whereas the advertisement check and the solution information session are provided as a follow-up after the survey. Participants are required to provide their emails and company name for the advertisement check. To sign up for an information session from our industry partner on a potential solution to avoid advertising on misinformation websites, participants register on a separate form by providing their emails. Since all three types of information offered are novel and otherwise costly to obtain, we expect respondents’ demand for such information to capture their revealed preferences.

Information intervention

Participants are then randomized into a treatment group, which receives information about the role of digital advertising platforms in placing advertising on misinformation websites, and a control group, which does not receive this information. Based on the dataset we assembled, participants are given factual information that companies that used digital advertising platforms were about ten times more likely to appear on misinformation websites than companies that did not use such platforms in the recent past. This information is identical to the information provided to participants in the T2 (that is, platform only) group in the consumer experiment.

Outcomes

After the information intervention, we first measure participants’ posterior beliefs about the role of digital advertising platforms in placing advertisements on misinformation websites following our pre-registration. Participants are told about the average number of companies whose advertisements appear per month on misinformation websites that are not monetized by digital advertising platforms. They are then asked to estimate the average number of companies whose advertisements appear monthly on misinformation websites that use digital advertising platforms. This question measures whether participants believe that the use of digital advertising platforms amplifies advertising on misinformation websites.

After the information intervention, we record two behavioural outcomes, which were pre-registered as our primary outcomes of interest. Our main outcome of interest is respondents’ demand for a platform-based solution to avoid advertising on misinformation websites. Participants can opt to learn more about one of two types of information—that is: (1) which platforms least frequently place companies’ advertising on misinformation websites; or (2) which types of analytics technologies are used to improve advertising performance—or opt not to receive any information. Since participants can opt to receive only one of the two types of information, this question is meant to capture the trade-off between respondents’ concern for avoiding misinformation outlets and their desire to improve advertising performance. Participants are told that they will be provided with the information they choose at the end of this study. Following the literature on measuring information acquisition97, we measure respondents’ demand for solution information, which serves as a revealed-preference proxy for their interest in implementing a solution for their organization.

Additionally, to measure whether the information treatment increases concern for the financing of misinformation in general, we record a second behavioural measure. Participants are told that the research team will donate US$100 on behalf of one randomly selected respondent among the first hundred responses, with the respondent choosing between two organizations: (1) the GDI; and (2) DataKind, which helps mission-driven organizations increase their impact by unlocking their data science potential ethically and responsibly.

Tackling experimental validity concerns

As in our consumer experiment, this survey was carried out in an online setting, where experimenter demand effects are limited80,81. We followed best practices9 by keeping the treatment language neutral and ensuring the anonymity of the participants wherever possible. We find that most participants believe that the information provided in the survey was unbiased. Only about 7% of participants chose one of the ‘biased’ or ‘very biased’ options when asked to rate the political bias of the survey information on a seven-point scale ranging from ‘very right-wing biased’ to ‘very left-wing biased’.

Importantly, to ensure truthful reporting, our main experimental outcomes were incentive-compatible. In particular, respondents who chose our platform-solution outcome—learning which platforms contribute least to placing companies’ advertisements on misinformation websites—faced a trade-off between receiving this information and receiving information on improving advertising performance. Additionally, our baseline information demand outcomes, elicited before the information intervention, were also incentive-compatible in that participants who opted for additional information would be asked to follow up on their decisions via email or via an online information session.

These design choices are made to minimize demand effects on our main outcomes of interest. However, it is possible that such effects remain relevant, partly because participants may have an interest in ‘doing the right thing’ on a survey administered by an institution they have a connection with. We therefore measure the relevance of potential demand effects using a survey question, mirroring the approach used in our consumer experiment. To measure potential differences in respondents’ perceptions of the study across our treatment and control groups, we predict treatment status from respondents’ open-ended text responses about the purpose of the study via an SVM classifier, keeping 75% of the sample for the training set and the remaining 25% as the test set. We find that the classifier performs slightly worse than random chance in predicting treatment status (Supplementary Table 16), similar in magnitude to the results from the consumer experiment. Therefore, although experimenter demand effects may still be present, these results suggest that they do not drive our findings.

We address the external validity of our findings by verifying the decision-making capacity of our respondents within their organizations and by examining the generalizability of our sample. We find that the vast majority of those whose job titles we verify (94%) serve in executive or managerial roles within their organizations. The regression estimates in Supplementary Tables 18 and 19 show that our results remain qualitatively and quantitatively similar after excluding the small sample of individuals in non-executive and non-managerial roles. Moreover, the verified and self-reported decision-makers are similar across observable characteristics, as reported in Supplementary Table 17, suggesting limited selection in our verification process. To examine the generalizability of our sample, we investigate participants’ observable characteristics. As noted above and shown in Supplementary Fig. 5, our sample represents a broad array of company sizes and experience levels at their current roles, and the share of women (about 22% of executives and 25% of all participants) is in line with the 21% to 26% industry estimates of women in senior roles globally95,96.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.