By Vojtěch Bartoš et al.
The reluctance of people to get vaccinated represents a fundamental challenge to containing the spread of deadly infectious diseases1,2, including COVID-19. Identifying misperceptions that can fuel vaccine hesitancy and creating effective communication strategies to overcome them are a global public health priority3,4,5. Medical doctors are a trusted source of advice about vaccinations6, but media reports may create an inaccurate impression that vaccine controversy is prevalent among doctors, even when a broad consensus exists7,8. Here we show that public misperceptions about the views of doctors on the COVID-19 vaccines are widespread, and correcting them increases vaccine uptake. We implement a survey among 9,650 doctors in the Czech Republic and find that 90% of doctors trust the vaccines. Next, we show that 90% of respondents in a nationally representative sample (n = 2,101) underestimate doctors’ trust; the most common belief is that only 50% of doctors trust the vaccines. Finally, we integrate randomized provision of information about the true views held by doctors into a longitudinal data collection that regularly monitors vaccination status over 9 months. The treatment recalibrates beliefs and leads to a persistent increase in vaccine uptake. The approach demonstrated in this paper shows how the engagement of professional medical associations, with their unparalleled capacity to elicit individual views of doctors on a large scale, can help to create a cheap, scalable intervention that has lasting positive impacts on health behaviour.
COVID-19 is a salient example of a disease with profound economic, social and health impacts, which can be controlled by large-scale vaccination if enough people choose to be vaccinated. Nevertheless, a large percentage of people are hesitant to get a vaccine, preventing many countries from reaching the threshold necessary to achieve herd immunity9,10. Consequently, rigorous evidence on scalable approaches that can help to overcome people’s hesitancy to take a COVID-19 vaccine is a global policy priority3,4,5. Existing research has made important progress in documenting the effects of financial incentives11,12, reminders4,5, information about the efficacy of the vaccines13,14 and misinformation15 on the intentions of the public to get vaccinated and, more recently, also on their actual decisions to get a vaccine5 shortly after an intervention. However, little is known about whether cheap, scalable strategies with the potential to cause lasting increases in people’s vaccination demand and uptake exist. A focus on the persistence of the impacts of interventions is especially important for vaccines such as those against COVID-19, which are often distributed in phases to different demographic groups due to capacity constraints, and for which multiple doses spaced over time are required to avoid declines in protection.
In many surveys across the globe, people report that they strongly trust the views of doctors6. This makes it crucial to understand how people perceive doctors’ views about the COVID-19 vaccine. In this paper, we pursue the hypothesis that reluctance to adopt the vaccine originates, in part, in misperceptions about the distribution of aggregate views of the medical community: many people may fail to recognize that there is a broad consensus in favour of the vaccine among doctors. Furthermore, we argue and show that professional associations can serve as aggregators of individual views in a medical community, by helping to implement surveys eliciting the views of doctors on a large scale. Disseminating information of a broad consensus, when one exists, can lead to people updating their perceptions of doctors’ views and, in turn, may induce lasting changes in vaccination demand and uptake.
Our focus on public misperceptions of the views of doctors is motivated by a widespread concern that media coverage can create uncertainty and polarization in how people perceive expert views, even when a broad consensus actually exists. In terms of traditional media, a desire to appear neutral often motivates journalists to provide a ‘balanced’ view by giving roughly equal time to both sides of an argument7,16, creating an impression of controversy and uncertainty8. Such ‘falsely balanced’ reporting has been shown to be a characteristic element of policy debates ranging from climate change7,16 to health issues, including links between tobacco and cancer, and potential side effects of vaccines8,17. In the context of the COVID-19 vaccines, casual observation suggests that media outlets often feature expert opinions that highlight the efficacy of approved COVID-19 vaccines together with skeptical experts who voice concerns about rapid vaccine development and untested side effects. The media usually do not specify which claims are supported by the wider medical community, leading the World Health Organization to warn media outlets against engaging in false-balance reporting18. Furthermore, polarization of beliefs can arise due to echo chambers—people choosing to be exposed to expert opinions or opinion programmes that fuel their fears of the vaccine or, alternatively, to those who strongly approve of it19,20,21.
We study these issues in the Czech Republic, which is a suitable setting, given the observed level of vaccine hesitancy among a large share of its population, similar to the situation in many other countries. At the time of data collection, the acceptance rate of the vaccine in the Czech Republic was around 65%, compared to 55–90% in other countries globally. At the same time, the Czech Republic ranks close to the median level of trust and satisfaction with medical doctors, based on a comparison of 29 countries6. We provide more background in Section 3.1 of the Supplementary Information.
We start by documenting and quantifying public misperceptions about the views of doctors on the COVID-19 vaccines. Shortly before the COVID-19 vaccine rollout began, we implemented a short online survey among 9,650 doctors. We found strong evidence of consensus: 90% of doctors intend to get vaccinated themselves and 89% trust the approved vaccines. At the same time, we found evidence of systematic and widespread misperceptions of the views held by the medical community among a nationally representative sample of the adult population (n = 2,101): more than 90% of people underestimate doctors’ trust in the vaccines and their vaccination intentions, with most people believing that only 50% of doctors trust the vaccines and intend to be vaccinated.
These findings set the stage for our main experiment, in which we tested whether randomized provision of information about the actual views of doctors can recalibrate public beliefs and, more importantly, cause a lasting increase in vaccination uptake. The experimental design aimed to make progress on two important empirical challenges that are common in experiments on the determinants of demand for COVID-19 vaccines. First, as an intention–behaviour gap has been documented in the context of flu vaccines and other health behaviours22, measuring both vaccination intentions and actual vaccination uptake allows us to test whether treatment effects on vaccination intentions translate into behavioural changes of a similar magnitude. The initial set of studies on COVID-19 vaccination, typically implemented before the vaccines became available, only tested impacts on intentions11,14,15, although recent exceptions exist5,23.
Second, most experiments designed to correct misperceptions about the views of others, and other information provision experiments in various domains, including migration, health and political behaviour, document treatment effects to be substantially smaller when measured with a delay24,25. In theory, the worry is that individual perceptions about the views of doctors might shift between the time when the treatment takes place and when people decide whether to actually get vaccinated, for reasons including regression of perceptions to the mean, biased recall or motivated memory26. Conversely, researchers have suggested that providing facts about a widely shared consensus of trustworthy experts might be resilient to these forces17, as the treatment may reduce incentives to seek new information, and condenses complex information into a simple fact (‘90% of doctors trust the approved vaccines’), which is easy to remember. Understanding whether providing information about medical consensus has temporary or lasting effects on vaccination demand is informative for policy, in terms of whether a one-off information campaign is sufficient, or whether the timing of messages needs to be tailored for different groups of people who become eligible for a vaccine at different points in time, and also whether such an information campaign needs to be repeated in cases of multiple-dose vaccines.
To address these issues, our experiment is integrated into longitudinal data collection with low attrition rates. The treatment was implemented in March 2021. We used data from 12 consecutive survey waves collected from March to November 2021, covering the early period when the vaccine was scarce, later when it gradually became available to more demographic groups, and finally for several months when it was easily available to all adults. This is reflected in the vaccination rates, which increased in our sample from 9% in March to 20% in May and to nearly 70% in July. Then, it grew slowly to 77% at the end of November. This longitudinal, data-collection-intensive approach allows us to estimate: (1) whether disseminating information on the consensus view of the medical community has immediate effects on people’s beliefs and their intentions to get the vaccination shortly after the intervention; (2) whether the effects translate into actually getting vaccinated, even though most of the participants became eligible for the vaccine only many weeks after the intervention; and (3) whether the effects on vaccine uptake are persistent or whether the vaccination rate of untreated individuals eventually catches up, perhaps due to ongoing governmental campaigns, stricter restrictions for individuals who are not vaccinated, or greater potential life disruptions during severe epidemiological periods.
Consensus of the medical community
We conducted a supplementary survey to gather the views of doctors on COVID-19 vaccines in February 2021. The survey was implemented in partnership with the Czech Medical Chamber (CMC), whose contact list includes the whole population of doctors in the country, because membership is compulsory. All doctors who communicate with the CMC electronically (70%) were asked to participate and 9,650 (24% of those contacted) answered the survey. Supplementary Table 1 provides summary statistics and documents that the sample is quite similar, in terms of age, gender, seniority and location, to the overall population of medical doctors in the Czech Republic.
Figure 1 shows the distribution of doctors’ responses. A clear picture arises, suggesting that a broad consensus on COVID-19 vaccines exists in the medical community: 89% trust the vaccine (9% do not know and 2% do not trust it), 90% intend to get vaccinated (6% do not know and 4% do not plan to get vaccinated) and 95% plan to recommend that their patients take a vaccine (5% do not). These responses are broadly similar across gender, age, years of medical practice and size of the locality in which the doctors live: for all sub-groups, we found the share of positive answers to all questions ranges between 85% and 100% (Supplementary Table 2). Using probability weights based on observable characteristics of the entire population of doctors in the country makes very little difference in the estimated distribution of opinions in our survey. Reassuringly, the opinions in our survey are in line with high actual vaccination rates (88%) observed among Czech doctors when vaccines became available27, despite vaccination not being compulsory for any profession, including for doctors.
Our main sample consists of participants in the longitudinal online data collection ‘Life during the pandemic’, organized by the authors in cooperation with PAQ Research; the data were collected by the NMS survey agency (Methods and Supplementary Methods). The information intervention was implemented on 15 March 2021 (wave 0). We used data from 12 consecutive waves of data collection regularly conducted from March to November 2021. This time span covers the period when the vaccination was gradually rolled out and eligibility rules changed regularly, making the vaccine available for more demographic groups (until June 2021), and a period when vaccination was freely available for the entire adult population (from July 2021).
The sample from wave 0 is our ‘base sample’ (n = 2,101). By design, the sample is broadly representative of the adult Czech population in terms of a host of observable characteristics (for summary statistics, see Extended Data Table 1). In addition, the vaccination rate reported in our sample closely mimics the levels and dynamics of the overall adult vaccination rate in the country (Extended Data Fig. 1). This comparison suggests that attitudes to vaccination in our sample are likely to be representative of the larger population, in contrast to surveys based on convenience samples28. Although this pattern is reassuring, we cannot test and fully rule out a possibility that our sample might not be representative in terms of unobservable characteristics affecting receptivity to the information treatment studied. Furthermore, the response rate in the follow-up waves is high, ranging between 76% and 92%. A large portion of participants (n = 1,212; the ‘fixed sample’) took part in all 12 waves of data collection.
The participants were randomly allocated to either the Consensus condition (n = 1,050) or Control condition (n = 1,051) in wave 0. In the Consensus condition, they were provided with a summary of the survey among medical doctors, including three charts that displayed the distribution of doctors’ responses regarding their trust in the vaccines, willingness to get vaccinated themselves and intentions to recommend the vaccine to patients. In the Control condition, the participants did not receive any information about the survey of medical doctors and only completed the regular part of the longitudinal survey.
In all 12 waves, we asked whether respondents got vaccinated against COVID-19. The main outcome variable ‘vaccinated’ is equal to one if the respondent reported having obtained at least one dose of a vaccine against COVID-19. We also elicited prior beliefs about the views of doctors on the vaccines in wave 0, shortly before the information intervention, and posterior beliefs in wave 1, two weeks afterwards.
Extended Data Table 1 and Supplementary Table 3 show no systematic differences in the set of baseline characteristics pre-registered as control variables. Nevertheless, because the randomization was not stratified on baseline covariates, there are random imbalances in some covariates, as expected. Some of the larger differences are for variables not included in the set of pre-registered control variables. Specifically, before the intervention, compared to participants in the Control condition, the individuals in the Consensus condition were slightly less likely to be vaccinated themselves (standardized mean difference (SMD) = 0.069), and expected a smaller percentage of doctors to trust the vaccine (SMD = 0.072) or to intend to get vaccinated (SMD = 0.090). As these three variables are highly predictive of vaccination uptake, we report two main regression specifications: (1) with the pre-registered set of control variables, and (2) with control variables selected by the LASSO procedure29. To document robustness, we also report estimates with no control variables and with alternative sets of control variables.
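The LASSO-based selection of control variables mentioned above can be sketched as follows. This is a minimal illustration with simulated data, not the authors' code: only the first two of twenty candidate covariates truly predict the outcome, mimicking how the procedure picks out strong baseline predictors (such as prior beliefs and baseline vaccination status) from a larger candidate set.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)

# Hypothetical baseline covariates: 20 candidates, of which only the
# first two (think: prior beliefs, baseline vaccination status)
# actually predict the outcome.
n, k = 2000, 20
X = rng.standard_normal((n, k))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.standard_normal(n)

# Cross-validated LASSO; covariates with nonzero coefficients would
# then be kept as controls in the treatment-effect regression.
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print(selected)
```

The cross-validated penalty shrinks coefficients on uninformative covariates toward zero, so the surviving set is data-driven rather than chosen by the analyst.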
Misperceptions about doctors’ views
To quantify misperceptions about the views of doctors on COVID-19 vaccines, we compared the prior beliefs of participants about doctors’ views, measured before the intervention, with the actual views of the doctors from the CMC survey. We found strong evidence of misperceptions. The average, median and modal guesses are that 57%, 60% and 50% of doctors, respectively, want to be vaccinated (Fig. 2a), whereas in reality 90% of doctors do. The average, median and modal guesses about the percentage of doctors who trust the vaccines are 61%, 62% and 50%, respectively (Fig. 2b), whereas in practice 89% of doctors report trusting the vaccines. A vast majority of participants underestimate the percentage of doctors who want to be vaccinated (90%) and those who trust the vaccines (88%).
The distribution of beliefs reveals that the large underestimation does not originate in two distinct groups of participants holding opposite views of the medical consensus—one group thinking that most doctors have positive views about the vaccines and the other group thinking that most doctors are skeptical about them. Instead, most people expect a wide diversity of attitudes across individual doctors. Of participants, 81% believe that the percentage of doctors who want to be vaccinated is between 20% and 80%. For beliefs about doctors’ trust in the vaccines, this number is 76%. Furthermore, these misperceptions are widespread across all demographic groups based on age, gender, education, income and geographical regions (Supplementary Table 4).
We found several intuitive descriptive patterns that increase confidence in our measures of beliefs. First, beliefs about the vaccination intentions of doctors and their trust in the vaccines are strongly positively correlated (r(2,099) = 0.60, P < 0.001). Second, beliefs about doctors’ trust and vaccination intentions are highly predictive of respondents’ own intentions and uptake (Supplementary Table 4). In the next sub-section, we explore whether this relationship is causal. Third, in Supplementary Fig. 1, we show that misperceptions about doctors’ views are unlikely to arise due to the inattention of participants to the questions. The results are very similar when we excluded the 4% of participants who did not pass all of the attention checks embedded in the survey, and when we excluded the 10% of participants with the shortest response times.
Intervention impacts on vaccination
We first established the effects of the intervention on posterior beliefs about the views and vaccination intentions of doctors shortly after the intervention. We found that the information provided shifts expectations about the views of doctors (Fig. 3a and Supplementary Table 5). Two weeks after the intervention (in wave 1), the Consensus condition increased beliefs about the share of doctors who trust the vaccines by 5 percentage points (p.p.) (P < 0.001) and beliefs about the share of doctors who want to get vaccinated by 6 p.p. (P < 0.001). Next, the Consensus condition increased the prevalence of people intending to get vaccinated by around 3 p.p. (P = 0.039; Fig. 3b and Supplementary Table 6). When we restricted the sample to those who participated in all waves, we found the point estimate to be slightly larger (5 p.p., P = 0.001).
Next, we found a systematic, robust and lasting treatment effect on vaccine uptake. Four months after the intervention, when vaccines became available to all adults, we found that participants in the Consensus condition were around 4 p.p. more likely to be vaccinated than those in the Control condition (Figs. 4 and 5). As expected, owing to the gradual rollout of the vaccine during the March to June period, the effect emerged gradually (Extended Data Table 2 provides more information about changes in vaccine eligibility rules). The difference in the uptake rates between the Consensus and Control conditions steadily increased to 4–5 p.p. in July and remained relatively stable thereafter (Fig. 4 and Extended Data Table 3).
In Fig. 5 and Extended Data Table 4, we report results from pooled regressions to utilize data from all six waves implemented in July to November, include wave fixed effects and cluster standard errors at the individual level. The estimated treatment effect is significant for both main specifications—when we control for a set of variables selected by the LASSO procedure (P = 0.005) and when we control for the pre-registered set of variables (P = 0.026). The effect is similar when estimated in each of these waves separately (Fig. 4).
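A pooled specification of this kind, with wave fixed effects and standard errors clustered at the individual level, can be sketched with statsmodels. The data below are simulated with an assumed ~4 p.p. treatment effect; all variable names are illustrative, not the study's actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated panel: 1,200 individuals observed in six waves, with a
# hypothetical 4 p.p. treatment effect on vaccination uptake.
n, waves = 1200, 6
ids = np.repeat(np.arange(n), waves)
wave = np.tile(np.arange(waves), n)
treated = np.repeat(rng.integers(0, 2, n), waves)
base = np.repeat(rng.uniform(0.65, 0.75, n), waves)
vaccinated = (rng.random(n * waves) < base + 0.04 * treated).astype(int)
df = pd.DataFrame({"id": ids, "wave": wave,
                   "treated": treated, "vaccinated": vaccinated})

# Linear probability model with wave fixed effects; standard errors
# clustered at the individual level to account for repeated observations.
model = smf.ols("vaccinated ~ treated + C(wave)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["id"]})
print(model.params["treated"], model.bse["treated"])
```

Clustering matters here because the same individual appears in every wave, so the six outcome observations per person are not independent.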
The estimated effect size is slightly larger (4.4 p.p.) when we used the specification with LASSO-selected control variables than when we used the specification with pre-registered control variables (3.5 p.p.). Figure 5 shows that this is because the LASSO procedure selects baseline beliefs and vaccination status as relevant control variables, whereas these variables are not included in the pre-registered set. Consequently, both approaches document a robust positive treatment effect between 3.5 and 4.4 p.p. Readers who believe that researchers should control for random imbalances in important baseline variables may favour the upper bound, whereas readers concerned about departures from pre-registered analyses may favour the lower bound.
Our finding of a positive treatment effect does not rely on a specific choice of control variables or estimation strategy. First, the effect is very similar when we controlled for various sets of baseline variables other than the pre-registered and LASSO-selected sets, as well as when we controlled for none (Fig. 5 and Extended Data Table 4). Second, the effect is significant at conventional levels when we calculated P values using the randomization inference method (Extended Data Tables 3 and 5). Third, the estimated treatment effect is 5.4 p.p. (P = 0.008) when we used baseline data about vaccination rates, and used a difference-in-difference estimation (Supplementary Table 7). Furthermore, the results are robust to excluding participants who arguably paid less attention (Extended Data Table 5). As in the analysis of vaccination intentions, the estimated effects on uptake are slightly larger when we restricted the analysis to those who participated in all 12 waves.
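Randomization inference of the kind referenced above compares the observed treatment–control difference to the distribution obtained by re-randomizing treatment labels. A minimal sketch on simulated data (the sample size and rates are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cross-section: treatment indicator and end-of-period
# vaccination status, with an assumed 4 p.p. treatment effect.
n = 2000
treated = rng.integers(0, 2, n)
vaccinated = (rng.random(n) < 0.71 + 0.04 * treated).astype(int)

def mean_diff(t, y):
    """Difference in vaccination rates between treated and control."""
    return y[t == 1].mean() - y[t == 0].mean()

observed = mean_diff(treated, vaccinated)

# Permute the treatment assignment many times; the P value is the share
# of permutations with a difference at least as extreme as observed.
perm = np.array([mean_diff(rng.permutation(treated), vaccinated)
                 for _ in range(2000)])
p_value = (np.abs(perm) >= abs(observed)).mean()
print(observed, p_value)
```

Because the reference distribution is built from the actual randomization mechanism, this P value does not rely on asymptotic approximations.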
Differential attrition cannot explain our findings. First, we found that the participation rate is relatively high and does not differ across the Consensus and Control conditions on average. There is also no evidence of differential attrition by baseline covariates, suggesting that different types of individuals were not participating in the Consensus and Control conditions (Supplementary Table 8). We found this pattern for participation in each of the 11 follow-up waves separately as well as when we focused on participation in all waves (being in the fixed sample). As a sensitivity test, we imputed missing vaccination status for those who did not participate in some of the waves and assumed either that (1) their vaccination status has not changed since the last wave for which the data are available, or that (2) their status is the same as the one reported in the earliest next wave for which the data are available. The first approach allowed us to impute all the missing information because we know the vaccination status of each participant in the initial wave. The second approach allowed us to impute the missing information, except in cases when a respondent did not participate in the last wave. The effects are robust (Extended Data Table 5).
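The two imputation rules described above correspond to forward-filling and backward-filling vaccination status within each respondent's panel. A sketch with a toy panel and hypothetical column names:

```python
import pandas as pd

# Toy panel with missing waves: NaN = respondent did not participate.
# Wave-0 status is always observed, as in the study.
df = pd.DataFrame({
    "id":         [1, 1, 1, 2, 2, 2],
    "wave":       [0, 1, 2, 0, 1, 2],
    "vaccinated": [0, None, 1, 0, None, None],
}).sort_values(["id", "wave"])

# Rule 1: status unchanged since the last observed wave (forward fill);
# imputes every gap because wave 0 is always available.
df["vacc_ffill"] = df.groupby("id")["vaccinated"].ffill()

# Rule 2: status equals the earliest subsequent observed wave (backward
# fill); gaps after a respondent's last observed wave remain missing.
df["vacc_bfill"] = df.groupby("id")["vaccinated"].bfill()
print(df)
```

Respondent 1's missing wave 1 is imputed as 0 under rule 1 and 1 under rule 2, while respondent 2's trailing gaps are filled only by rule 1, matching the asymmetry described in the text.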
The effect of the Consensus condition on uptake is lasting. First, although in the main estimates we focused on the likelihood of respondents getting at least one vaccine dose, a qualitatively similar and significant effect emerges when we focused on the likelihood of participants getting two doses (Extended Data Fig. 2). Second, the treatment effect emerges during a 3-month period, due to availability restrictions, and then is stable across all six follow-up waves covering the July to November period (Fig. 4). Thus, the main effect is not driven by differences in the timing of getting vaccinated. Last, in the September and November waves, we asked about the intentions of participants to get a booster dose. The estimated effect is very similar in magnitude to the effect on uptake of the first dose (around 4 p.p.), suggesting that the information intervention elevates vaccination demand even 9 months after it was implemented (Extended Data Fig. 2).
Documenting such persistence has interesting implications. As the demand for vaccination in the Control condition does not catch up with the Consensus condition over such a long period, the results suggest that the type of vaccine hesitancy reduced by the Consensus condition is resilient to policies, campaigns or any life disruptions that participants were exposed to during the period studied. This includes a severe COVID-19 wave that took place in November 2021 in the Czech Republic, which resulted in one of the highest national mortality rates in global comparisons (see Section 3.1 of the Supplementary Information and Extended Data Fig. 3).
The point estimates of around 4 p.p. imply a relatively large effect size, especially in light of the low costs of the intervention. As the vaccination rate in the Control condition was 70–75% during the July to November period, the Consensus condition reduces the number of those who are not vaccinated by 13–16%. To compare, providing truthful information about the vaccination intentions of other people was shown to increase intentions to get vaccinated by 1.9 p.p.30. Nudging health workers to get vaccinated by referring to vaccinated colleagues has been shown to increase the likelihood of their registering for vaccination by around 3 p.p.31. More generally, the most successful, low-cost behavioural nudges with documented effect on uptake have estimated effect sizes up to 5 p.p.4,5, which is quite similar to the effect of providing information about consensus in doctors’ opinions studied here. In addition, a noteworthy aspect of our study is the documented persistence of the effects, which is another crucial margin for assessing the intervention effectiveness.
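The 13–16% figure follows directly from the numbers reported in the text: a fixed 4 p.p. effect removes a larger fraction of a smaller unvaccinated pool. As a quick arithmetic check:

```python
# Control-condition vaccination rates of 70-75% leave 25-30% unvaccinated;
# a 4 p.p. treatment effect removes 4/30 to 4/25 of that group.
for control_rate in (0.70, 0.75):
    unvaccinated = 1 - control_rate
    reduction = 0.04 / unvaccinated
    print(f"{control_rate:.0%} vaccinated -> "
          f"{reduction:.0%} fewer unvaccinated")
```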
The Supplementary Information describes exploratory analyses of how the treatment effect differs across different sub-samples of respondents (Supplementary Table 5 and Extended Data Table 5). Reassuringly, we found that the positive effect on vaccine uptake is concentrated among those who underestimated doctors’ trust and vaccination intentions, whereas no systematic effect was observed among overestimators. In addition, the effect is driven by those who initially did not intend to get vaccinated, in line with the interpretation that the intervention changed the views of individuals who were initially skeptical about the vaccine. Nevertheless, the analysis of heterogeneous effects should be treated as tentative because the differences in coefficients are not always significant and we did not adjust for testing of multiple hypotheses.
Given that vaccination status is self-reported, we provide several tests documenting that the observed effect does not arise due to priming or experimenter demand effects motivating some people in the Consensus condition to report being vaccinated even when they were not. We begin by noting that the observed treatment effect is lasting and emerged only gradually over several months, as more people became eligible to get vaccinated. By contrast, priming and experimenter demand effects are typically thought to be relevant mainly for responses shortly after a treatment25,32.
To probe more directly, we used two distinct approaches to verify the reported vaccination status in the main dataset. First, inspired by existing work25,33, we used additional data about vaccination status collected for us by a third, independent party among the same sample. As the survey agency, graphical interface and topic of the survey were different from our main data collection, any experimenter demand effect associated with the treatment in our main survey is unlikely to affect responses in the third-party verification survey. Only two respondents (one in the Consensus condition and one in the Control condition) reported being vaccinated in the main survey, but reported the opposite in the verification survey (Extended Data Table 6), so mismatch in reporting of being vaccinated is very rare in general and not related to treatment. We arrive at a similar conclusion using the second verification approach, which links reported vaccination status with an official proof of vaccination: an EU Digital COVID certificate issued by the Czech Ministry of Health. We showed that respondents in the Consensus condition, compared to the Control condition, are not less willing or able to provide verifiable information from the certificate (Extended Data Table 6). Finally, we showed that the effect of the Consensus condition on the lower prevalence of those reporting not being vaccinated in the main survey is almost fully explained by the greater prevalence of those reporting being vaccinated and having their vaccination status verified (Supplementary Table 9). More details about the methods and results of both verifications appear in the Methods section and in Section 3.4 of the Supplementary Information.