Vaccine hesitancy part 2: Effective strategies for a human-centered health campaign
In my previous article, we examined how quantitative analyses of online networks can both reveal and help implement effective human-centered strategies for health interventions. But once we’ve identified a strategy to test, refine, and deploy, how do we sustain an intervention’s effectiveness?
Trust is health’s most valuable player
COVID-19 has made clear that the politicization of health is incredibly dangerous; the health of billions of individuals is at stake, but so too is trust in pharmaceutical companies, in the scientific process, and in government and our regulatory agencies. For example, the White House’s effort to block updated FDA requirements, which pushed regulatory approval beyond November’s election, and its subsequent reversal of that position shook the public’s trust in the safety of any potential COVID-19 vaccine. That trust was already teetering. Notably, the Reagan-Udall Foundation for the FDA recently reported major public concerns about potential vaccines expressed by members of historically underrepresented groups as well as frontline healthcare workers.
The overall goal of a human-centered public health campaign is to produce effective messaging and counter-messaging to support specific behaviors. Influencing (even manipulating) emotions to promote behaviors must be done with care because it has been prone to abuse, as seen in neuromarketing and political interference. On the other hand, by refusing to use the tools and strategies that successfully amplify disinformation, we may deny ourselves the opportunity to craft and deliver messages that save lives, while leaving space for mis- and disinformation to flourish. Notably, the quantitative analysis of vaccination views on Facebook by Johnson et al., explored in my previous article, failed to uncover evidence of a dominant, top-down, deliberate disinformation campaign around vaccination. Nonetheless, decentralized anti-vaccination efforts continue to threaten campaigns against COVID-19 and other diseases.
The good news is that we can exploit lessons learned from disinformation while keeping people at the center of our ethical efforts. For example, to allay public fears about vaccination, we can use human stories, shared language, and narrative diversity to convey reliable health information, in concert with meta-messaging about how data transparency makes it harder for mistakes to persist and for lies to spread. Both the scientific literature and the lay press could amplify such messaging from public health agencies. Similarly, meta-messaging around fact-checking is becoming prominent in many contexts; such analyses have the added benefit of unearthing insights about where misinformation arises and how it spreads (here is a notable Canadian effort focused on COVID-19).
Overall, evidence-based thinking isn’t just about what we know: it’s crucial to think about how we know what we (think we) know. Resource-strapped public-health messaging may overlook that aspect of digital health and science literacy. But by revealing the very human processes that drive science and medicine, we create ongoing opportunities to nurture public trust.
As Michael Caulfield wrote in his extensive guide for student fact checkers, “The truth is in the network.” Digital literacy is therefore paramount, for both individuals and society. Whether the landscape of reliable information will improve or worsen over time is uncertain, but individuals, parents, educators, and officials should all contribute to making digital literacy a cornerstone concept of “citizenship.” Here is one example of a free online curriculum in digital literacy being developed by the Stanford History Education Group. Digital health literacy and digital science literacy are useful additions to that toolbox. While some unreliable information is deliberately formulated and spread by disinformers, much amplification seems to come from well-intentioned sharing with members of one’s own tribe, particularly during times of crisis. Society-level efforts to increase digital literacy should reinforce the oft-repeated recommendations that individuals slow down, check their emotions, scrutinize sources, and make an informed judgment before sharing.
Social media constitutes an avenue to build trust based on values and behaviors that can be—but are not necessarily—decoupled from geography or historical descent. In my Athens vignette, because my colleague and I were ostensibly members of the same ideological (pro-technology, pro-data, future forward) and physically proximate tribe, it simply had never occurred to me that we could hold radically different positions on vaccination. How many other opportunities are missed to earn and maintain trust?
Fortunately, several social media giants are responding to the COVID-19 infodemic through initiatives that support reputable global health agencies, including prominent placement of reliable information, free advertising, and financial support (e.g. Facebook, Twitter, and Google), as well as algorithmic updates and labeling of misleading information from other sources. Many of these initiatives dovetail with similar initiatives around political elections (e.g. Twitter and Google). Independent oversight of these efforts, as well as transparent reporting of datasets about interventions and their outcomes, could be crucial to identifying effective interventions and building the public’s trust in the messaging that seeks the common goal of safeguarding health.
Overall, we must emphasize human connection, trust, empathy, and ongoing evaluation to avoid complacency as the information terrain shifts over time. No matter how heavily we rely on data and technology, we must keep people in the center of our efforts. Data is the means to effective health interventions, not the goal. Our common goal is healthy citizens supported by trusted health systems.
Effective online health messaging must be actively maintained
Today we have a crucial opportunity to craft effective interventions that support public health, acknowledge individuals and their dignity, and nurture trust in government, experts, and each other. Such strategies and tactics will not only aid in the fight against COVID-19 but also position all of us for success in future pandemics and health crises.
Perhaps most importantly, successful health-focused interventions will help establish pipelines for delivering trustworthy and actionable information for other crises, such as climate change, where science, politics, identity, and trust intersect and often clash. At the highest level, there are no “sides” in these “battles.” Humanity needs a home. We all need to be healthy.
For vaccination, the sharing of large anonymized datasets from social networks would empower researchers and organizations to investigate network ecology and online sentiment. For example, based on insights from the study by Johnson et al. explored in the previous post, we could design automated early-warning systems that watch metadata for changes suggestive of concerted, top-down disinformation campaigns, such as a sudden and/or statistically significant transition from inactivity to sharing of anti-vaccination messaging across previously unaffected network domains (particularly as COVID-19 vaccines begin to come on the market). Dynamic data would also enable hypothesis-driven testing of the effectiveness of pro-health interventions while assessing (and potentially controlling for) parameters such as geographic location, age, and educational status.
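To make this concrete, here is a minimal sketch, in Python, of what one such early-warning check could look like, assuming the shared dataset has already been reduced to daily counts of anti-vaccination posts per network “domain” (for example, a cluster of related pages or groups). The function name, thresholds, and toy data below are illustrative assumptions, not part of the Johnson et al. analysis or of any existing monitoring system.

```python
from statistics import mean, stdev

def flag_sudden_activity(daily_counts, baseline_days=28, z_threshold=3.0, quiet_mean=5.0):
    """Flag domains whose latest daily count jumps far above their own quiet baseline.

    daily_counts maps a domain name to a list of daily anti-vaccination post counts,
    oldest first. Returns (domain, latest_count, baseline_mean) tuples worth reviewing.
    """
    alerts = []
    for domain, counts in daily_counts.items():
        if len(counts) < baseline_days + 1:
            continue  # not enough history to establish a baseline
        baseline = counts[-(baseline_days + 1):-1]   # the window before the latest day
        latest = counts[-1]
        mu = mean(baseline)
        sigma = stdev(baseline) or 1.0               # guard against a perfectly flat baseline
        # The signature we care about: a domain that was essentially inactive
        # (low baseline mean) suddenly sharing anti-vaccination content heavily.
        if mu < quiet_mean and (latest - mu) / sigma > z_threshold:
            alerts.append((domain, latest, mu))
    return alerts

# Toy example: a previously quiet cluster of pages spikes while an active one stays steady.
history = {
    "regional_parenting_pages": [0, 1, 0, 0, 2, 1, 0] * 4 + [45],
    "wellness_groups": [3, 4, 2, 5, 3, 4, 3] * 4 + [4],
}
print(flag_sudden_activity(history))  # flags only "regional_parenting_pages"
```

In practice, detecting a statistically significant transition would call for more rigorous change-point methods and careful control of false positives, but even a simple baseline comparison like this illustrates the kind of metadata signal such a system would watch.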
Health agencies, universities, NGOs, and other organizations should provide explicit training, support, and reward systems for narrative diversity and resource sharing, with the goal of fact-based yet human-centered communication. For example, just as graduate students currently supported by an NIH training grant must take a dedicated course on scientific ethics, so too should effective communication be integrated into curricula. Private foundations could consider providing such support as an element of fellowships for graduate students, postdoctoral researchers, and faculty. Notably, Johnson et al.’s mathematical analysis suggested that including pro-vaccine messaging in online conversations that are tilting or are heavily anti-vaccination (increasing the “heterogeneity” of these conversations) can slow the rate of linkage of anti-vaccination clusters and therefore the spread of unreliable information.
Similarly, all sectors should make explicit commitments to public-facing digital communication in their review and promotion processes, establishing reward systems for activities that forge human-centered, evidence-based links between online communities (such as between the pro-vaccination and the undecided communities). These commitments should include basic training in digital literacy and empathetic communication to empower all generations to contribute.
The time to empower health interventions is now
This article seeks to distill many years of interdisciplinary research into blog-sized bites. It is an opening conversation and an invitation to dive deeper into these crucial areas of investigation rather than the final word on a complex and urgent problem that requires complex and validated solutions.
Admittedly, the recommendations described here, which explore only a small corner of the space of possible solutions, impose financial and time burdens at every level, from individuals to organizations to societies. They require fundamental shifts in our views of trust, responsibility, identity, and the future. They raise important questions about the ethics of intervening in speech and influencing emotions, particularly in liberal democracies.
Nonetheless, there is real urgency around the anti-vaccination crisis. As Tom Nichols has observed, “The Internet creates a false sense that the opinions of many people are tantamount to a ‘fact’.” The most alarming insight from Johnson et al. is their prediction that, on the current network trajectory, anti-vaccination views will outnumber pro-vaccination views on Facebook in 2030, only ten years from now. We will need to grapple with the effects of anti-vaccine sentiment far sooner as the world rapidly approaches broadly available COVID-19 vaccines.
As seen here and in my previous post, evidence is critical for scientifically grounded health interventions as well as for effective messaging strategies. Further, crafting health messaging that is based on evidence is not a “versus” campaign. It is a campaign to safeguard the health of all people while acknowledging and fulfilling the emotional needs that drive them toward unreliable information, whether that information is well-intentioned or ill-intentioned.
The scientific method still sorts valid from invalid conclusions over time. However, the time horizon over which this process can play out, particularly under intense public scrutiny, has notably shortened (for better and for worse). The stakes have gotten higher in today’s densely connected world, where information has been weaponized and tribalism increasingly threatens to undermine civilization’s foundational institutions.
While data and digital platforms underlie both problems and solutions, we must never lose sight of the humans at the center of these issues. That night in Athens, I realized that my colleague was seeking the very thing that I want for my family: good health, supported by trusted partners. As the first COVID-19 vaccines move toward distribution even as infections and deaths continue to rise, we face an important choice: to talk with each other instead of about each other. Fortunately, the choices we make now have the potential to serve as invaluable templates for our responses to crises yet to come.
Read part I