
What The Science Tells Us About "Trust In Science"


This post continues our series focused on science communication research. Instead of reporting on or recapping a single paper, we’re asking what the literature has to say about urgent or recurring questions in our field. This is inspired, in part, by John Timmer’s call for an applied science of science communication, as well as the upcoming special issue of PNAS with papers from the 2012 Sackler Colloquium on the Science of Science Communication.


When climate scientist Tamsin Edwards published her editorial “Climate scientists must not advocate for particular policies” in The Guardian, it triggered a cascade of responses on engagement and advocacy. This is something COMPASS spends quite a lot of time thinking about and discussing in our trainings and writings, but the line that particularly caught my eye was: “I believe advocacy by climate scientists has damaged trust in the science. We risk our credibility, our reputation for objectivity, if we are not absolutely neutral.” I admire the conviction in that statement and it’s nothing if not clear. But is it true? Is the behavior of individual scientists a primary driver of public opinion?

It reminds me of a conversation regarding our assumptions about audiences, in which my friend Ben Lillie quipped: “Communicating science to the public? Neither noun exists and I’m not sure about the verb.” Given the current conversations, I am not so sure of our use of the phrase ‘trust in (the) science’ either, so I decided to do a little digging. In this post, I’m contrasting existing data with what we often hear in arguments about credibility and trust. I’m barely scratching the surface, but really looking forward to discussing this further during the plenary on ‘Credibility, Trust, Goodwill, and Persuasion’ I’m moderating at next week’s ScienceOnline Climate conference.*


Background

In the mid-1980s, a pivotal UK Royal Society report, “The Public Understanding of Science,” attributed public controversy and low levels of support for science policies to a ‘knowledge problem.’ People just didn’t know enough about science to properly appreciate it. Such deficit model thinking assumes that improving science literacy is the key to improving support. It has since largely been discredited in favor of the view that improving public support for science hinges on understanding the processes by which science and scientific expertise come to be viewed as legitimate and trustworthy.


Today, in casual conversation, ‘to trust’ is often used synonymously with ‘to believe.’ It expresses faith in something without the need for further investigation or guarantee. Given what a person must achieve to earn professional scientific credentials, it is reasonable to expect a certain amount of deference in their area of specialization. Repeated challenges to that authority feel hostile, perhaps deliberately antagonistic. And when political actors escalate the matter by positioning scientific knowledge as just another opinion and publicly devaluing hard-won expertise, it can be infuriating. Yes, infuriating. This is emotional territory, and it’s important to acknowledge that. What we’re actually talking about is persuasion and power dynamics, not accuracy and precision. Trust is a social mechanism for coping with uncertainty and risk. Fundamentally, it is as much about fear, identity, and conflict as it is about being correct.


The Science

A comprehensive review by Chryssochoidis et al. (2009) makes the case that trust can be “determined or influenced by: (1) the (perception of) the characteristics of the information received; (2) the (perception of) the risk managed or communicated; (3) the (perception of) institutional characteristics; (4) the individual and socio-cultural characteristics of those who exhibit trust” and by interactions of these four elements. Drawing from that review and other papers in the social science literature, here’s what we know about “public trust in science”:


Messengers


Trust in groups (“scientists”) or institutions (“the National Academy of Sciences” or “the IPCC”) is called social trust in the literature. Although it is different from interpersonal trust, both vary greatly depending on who is asking to be trusted.

• Contrary to fears, public esteem for scientists overall is not plummeting. In 2009 and 2013, the Pew Research Center conducted opinion polling asking, “How much do you think scientists contribute to the well-being of our society?” During that span, the proportions held relatively steady, with more than 65% of respondents answering “A lot” and another 20% answering “Some.” Note: this result does not hold across different political ideologies.


• Specific groups of scientists have suffered reputational damage recently. Climate researchers, for example, saw a significant drop in perceived trustworthiness from 2008 to 2010, but it’s important to note that as a category they remained far more trusted than competing sources like weather reporters, political and religious leaders, or the mainstream media.


• The strength of trust varies greatly among institutions. For example, in response to a study question asking, “How much do you trust the scientific research of [a specific government agency]?”, NASA and NOAA scored highest in “strongly trust” responses, while DOE and USDA drew the most “strongly distrust” answers.


Messages

When it comes to a message, volume (repetition) and content matter. Bad news or information that damages trust is given disproportionate weight (the ‘trust asymmetry principle’). Familiarity with a topic combines with pre-existing views to influence the perceived trustworthiness of new pieces of information. And attitudes toward specific technologies, like nuclear power or genetically modified crops, do not predict overall trust in science.

Audiences

Personality and demographics of ‘the public’ matter. Gender, race, age, and socioeconomic status can influence the propensity to trust science. We also know that education and political affiliation interact in surprising ways, as when we find that distrust in science significantly increases with greater educational attainment among politically conservative people.


The Takeaway


Based on my readings, ‘trust in science’ is shorthand for something complicated that I can best approximate as ‘your willingness to embrace the advice of a group of strangers because you believe they: (a) know the truth; (b) will tell you the truth as they know it; and (c) have your best interest at heart’… oh, and all of that depends on (d) who you are, (e) who they are, and (f) what you’re all talking about. It feels like we’ve fixated on (a). Yet for as strong a case as we can build about the systems by which we know what we know, ‘knowing the truth’ is just one element in that long list. Considering (b) through (f)—and all the other letters I’m sure we should append—it suddenly feels much less surprising (and frustrating) that other people don’t feel the same level of trust in science and scientists that I do.


When it comes to making statements about trust in science, let’s ensure our assertions are grounded in the relevant science. We’ve got a long list of unanswered questions about how to build trust beyond getting the facts right. This approach isn’t going to solve it all, but it’s a good place to start.

Liz Neeley worked at COMPASS from 2008-2015. This post was transferred from its original location at www.compassonline.org to www.COMPASSscicomm.org in 2017.

#LizNeeley #advocacy #credibility #moreresources #ScioSciComm #trust

