Weekend update: TEDx restored to YouTube
Apparently the original posting of this (staggeringly brief) talk suffered from imperfect audio (I never listened to it, so I can’t say first-hand whether it was bad). So here is the new & improved version (which unfortunately does not share the URL of the old, widely circulated & downloaded one).
More *doing* science communication on science of science communication
It’s that time again: Science of Science Communication course in Spring Term
This is what’s on deck for spring semester:
PSYC 601b, The Science of Science Communication, Dan Kahan
The simple dissemination of valid scientific knowledge does not guarantee it will be recognized by non-experts to whom it is of consequence. The science of science communication is an emerging, multidisciplinary field that investigates the processes that enable ordinary citizens to form beliefs consistent with the best available scientific evidence, the conditions that impede the formation of such beliefs, and the strategies that can be employed to avoid or ameliorate such conditions. This seminar surveys, and makes a modest attempt to systematize, the growing body of work in this area. Special attention is paid to identifying the distinctive communication dynamics of the diverse contexts in which non-experts engage scientific information, including electoral politics, governmental policy making, and personal health decision making.
This is from the more in-depth description of the course that accompanies the course materials:
The most effective way to communicate the nature of this course is to identify its motivation. We live in a place and at a time in which we have ready access to information—scientific information—of unprecedented value to our individual and collective welfare. But the proportion of this information that is effectively used—by individuals and by society—is shockingly small. The evidence for this conclusion is reflected in the manifestly awful decisions people make, and outcomes they suffer as a result, in their personal health and financial planning. It is reflected too not only in the failure of governmental institutions to utilize the best available scientific evidence that bears on the safety, security, and prosperity of their members, but in the inability of citizens and their representatives even to agree on what that evidence is or what it signifies for the policy tradeoffs that acting on it necessarily entails.
This course is about remedying this state of affairs. Its premise is that the effective transmission of consequential scientific knowledge to deliberating individuals and groups is itself a matter that admits of, and indeed demands, scientific study. The use of empirical methods is necessary to generate an understanding of the social and psychological dynamics that govern how people (members of the public, but experts too) come to know what is known to science. Such methods are also necessary to comprehend the social and political dynamics that determine whether the best evidence we have on how to communicate science becomes integrated into how we do science and how we make decisions, individual and collective, that are or should be informed by science.
Likely you get this already: but this course is not simply about how scientists can avoid speaking in jargony language when addressing the public or how journalists can communicate technical matters in comprehensible ways without mangling the facts. Those are only two of many “science communication” problems, and as important as they are, they are likely not the ones in most urgent need of study (I myself think science journalists have their craft well in hand, but we’ll get to this in time). Indeed, in addition to dispelling (assaulting) the fallacy that science communication is not a matter that requires its own science, this course will self-consciously attack the notion that the sort of scientific insight necessary to guide science communication is unitary, or uniform across contexts—as if the same techniques that might help a modestly numerate individual understand the probabilistic elements of a decision to undergo a risky medical procedure were exactly the same ones needed to dispel polarization over climate science! We will try to individuate the separate domains in which a science of science communication is needed, and take stock of what is known, and what isn’t but needs to be, in each.
The primary aim of the course comprises these matters; a secondary aim is to acquire a facility with the empirical methods on which the science of science communication depends. You will not have to do empirical analyses of any particular sort in this class. But you will have to make sense of many kinds. No matter what your primary area of study is—even if it is one that doesn’t involve empirical methods—you can do this. If you don’t yet understand that, then perhaps that is the most important thing you will learn in the course. Accordingly, while we will not approach the study of empirical methods in a methodical way, we will always engage critically the sorts of methods that are being used in the studies we examine, and I will from time to time supplement readings with more general ones relating to methods. Mainly, though, I will try to enable you to see (by seeing yourself and others doing it) that apprehending the significance of empirical work depends on recognizing when and how inferences can be drawn from observation: if you know that, you can learn whatever more is necessary to appreciate how particular empirical methods contribute to insight; if you don’t know that, nothing you understand about methods will furnish you with reliable guidance (just watch how much foolishness empirical methods separated from reflective, grounded inference can involve).
If so moved, you can find materials from previous years’ versions of this seminar here.
An adventure in science communication: frequentist vs. Bayes hypothesis testing
A smart person asked me to explain to her the basic difference between frequentist and Bayesian statistical methods for hypothesis testing. Grabbing the nearest envelope, I jotted these two diagrams on the back of it:
On the left, a frequentist analysis assesses the probability of observing an effect as big as or bigger than the experimental one relative to a hypothesized “null effect.” The “null hypothesis” is represented by a simple point estimate of 0, and the observed effect by the mean of a normal (or other appropriate) distribution.
In contrast, a Bayesian analysis (on the right) tests the relative consistency of the observed effect with two or more hypotheses. Those hypotheses, not the observed effect, are conceptualized as ranges of values arrayed in relation to their probability in distributions that account for measurement error and any other sort of uncertainty a researcher might have. The relative probability of the observed effect under each hypothesis can then be determined by examining where that outcome would fall on the hypotheses’ respective probability distributions.
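If a picture on the back of an envelope doesn’t do it for you, the same contrast can be sketched in a few lines of code. The numbers below (an observed effect of 0.30 with a standard error of 0.10, and two stylized hypotheses) are placeholders I’ve made up purely to illustrate the logic, not values from any actual study:

```python
from scipy import stats

obs_mean, obs_se = 0.30, 0.10  # hypothetical observed effect and its standard error

# Frequentist: probability of seeing an effect this big or bigger if the true effect were 0
z = (obs_mean - 0.0) / obs_se
p_value = 1 - stats.norm.cdf(z)  # one-tailed p-value against the point null of 0
print(f"p-value relative to the null of 0: {p_value:.4f}")

# Bayesian: how consistent the observed effect is with each of two rival hypotheses,
# each represented as a distribution of plausible effect sizes rather than a single point
h1 = stats.norm(loc=0.00, scale=0.10)  # "negligible effect" hypothesis
h2 = stats.norm(loc=0.40, scale=0.10)  # "sizable effect" hypothesis
likelihood_ratio = h2.pdf(obs_mean) / h1.pdf(obs_mean)
print(f"Observed effect is {likelihood_ratio:.0f}x more probable under h2 than under h1")
```

The first calculation asks only how surprising the observation would be under a point null; the second asks where the observation falls on each hypothesis’s distribution and compares the two.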
I left out why I like the latter better. I was after as neutral & accessible an explanation as possible.
Did I succeed? Can you do better?
Science curiosity research program
The Science Curiosity Research Program
We propose a program for the study of science curiosity as a civic virtue in a polarized society.
1. It has been assumed (very reasonably) for many years that enlightened self-government demands a science-literate citizenry. Perversely, however, recent research has shown that all manner of reasoning proficiency—from cognitive reflection to numeracy, from actively open-minded thinking to science literacy—magnifies political polarization on policy-relevant science.
2. The one science-comprehension-related disposition that defies this pattern is science curiosity. In our research, we define science curiosity as the motivation to seek out and consume scientific information for personal pleasure. The Cultural Cognition Project Science Curiosity Scale (“SCS”) enables the precise measurement of this disposition in members of the general public.
Developed originally to promote the study of public engagement with science documentaries, SCS has also been shown to mitigate politically motivated reasoning. Politically motivated reasoning consists in the disposition to credit or dismiss scientific evidence in patterns that reflect and reinforce individuals’ membership in identity-defining groups. It is the psychological mechanism that underwrites persistent political controversy over climate change, handgun ownership, the HPV vaccine, nuclear waste disposal, and a host of other controversial issues.
Individuals who score high on SCS, however, display a remarkable degree of resistance to this dynamic. Not only are they less polarized than other citizens with comparable political predispositions. They also are demonstrably more willing to search out and consume scientific evidence that runs contrary to their political predispositions.
The reason why is relatively straightforward. Politically motivated reasoning generates a dismissive, identity-protective state of mind when individuals are confronted with scientific evidence that appears to undermine beliefs associated with their group identities. In contrast, when one is curious, one has an appetite to learn something surprising and unanticipated—a state of mind diametrically opposed to the identity-protective impulses that make up politically motivated reasoning.
These features make science curiosity a primary virtue of democratic citizenship. To the extent that it can be cultivated and deployed for science communication, science curiosity has the power to quiet the impulses that deform human reason and that divert dispositions of scientific reasoning generally from their normal function of helping democratic citizens to recognize the valid policy-relevant science.
3. Perfecting the techniques for cultivating and deploying science curiosity is the central aim of our proposed research program. Certain of the projects we envision aim to instill greater science curiosity in primary and secondary school students as well as adults. But still others seek to harness and leverage the science curiosity that already exists in democratic citizens. Specifically, we propose to use SCS to identify the sorts of communications that arouse curiosity not only in the individuals who already display this important disposition in the greatest measure but also in those who don’t—so that when they are furnished evidence that challenges their existing beliefs, they will react not with defensive resistance but with the open-minded desire to know what science knows.
Guest post: Some weird things in measuring belief in human-caused climate change
From an honest-to-god real expert: a guest post by Matt Motta, a postdoctoral fellow associated with the Cultural Cognition Project and the Annenberg Public Policy Center. Matt discusses his recent paper, An Experimental Examination of Measurement Disparities in Public Climate Change Beliefs.
Do Americans Really Believe in Human-Caused Climate Change?
Do most Americans believe that climate change is caused by human activities? And what should we make of recent reports (e.g., Van Boven & Sherman 2018) suggesting that self-identified Republicans largely believe in climate change?
Surprisingly, given the impressive amount of public opinion research focused on assessing public attitudes about climate change (see Capstick et al., 2015 for an excellent review), the number of Americans (and especially Republicans) who believe that climate change is human caused is actually a source of popular and academic disagreement.
For example, scholars at the Pew Research Center have found that less than half of all Americans, and less than a quarter of Republicans, believe that climate change is caused by human activity (Funk & Kennedy 2016). In contrast, a team of academic researchers recently penned an op-ed in the New York Times (Van Boven & Sherman 2018; based on Van Boven, Ehret, & Sherman 2018) suggesting that most Americans, and even most Republicans, believe in climate change – including the possibility that it is human caused.
In a working paper, my coauthors (Daniel Chapman, Dominik Stecula, Kathryn Haglin and Dan Kahan) and I offer a novel framework for making sense of why researchers disagree about the number of Americans (and especially Republicans) who believe in human caused climate change. We argue that commonplace and seemingly minor decisions scholars make when asking the public questions about anthropogenic climate change can have a major impact on the proportion of the public who appears to believe in it.
Specifically, we focus on three common methodological choices researchers must make when asking these questions. First, scholars must decide whether they want to offer “discrete choice” or Likert style response options. Discrete choice responses force respondents to choose between alternative stances; e.g., whether climate change is human caused, or caused by natural factors. Likert-style response formats instead ask respondents to assess their levels of agreement or disagreement with a particular argument; e.g., whether one agrees or disagrees that climate change is human caused.
Likert-style response formats can be subject to “acquiescence bias,” which occurs when respondents simply agree with statements, potentially to avoid thinking carefully about the question being asked. Discrete choice response formats can reduce acquiescence bias, but allow for less granularity in expressing opinions about an issue. Whereas the Pew study mentioned earlier made use of discrete choice response options, the aforementioned op-ed made use of Likert-style responses (and found comparatively higher levels of belief in anthropogenic climate change).
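To make the acquiescence-bias point concrete, here is a toy simulation. The shares it assumes (50% of respondents who genuinely accept human causation, 15% who tend to agree with whatever statement they are shown) are invented for illustration and are not estimates from our study:

```python
import random

random.seed(1)
true_believers = 0.50  # hypothetical share who genuinely accept human causation
acquiescers = 0.15     # hypothetical share inclined to agree with any statement

n = 10_000
sample = [(random.random() < true_believers, random.random() < acquiescers)
          for _ in range(n)]

# Discrete choice: forced to pick "human caused" vs. "natural factors", so only
# genuine believers end up in the human-caused column
discrete_estimate = sum(genuine for genuine, _ in sample) / n

# Likert: acquiescers "agree" that climate change is human caused even though they
# would not have chosen that answer in a forced-choice format
likert_estimate = sum(genuine or agrees for genuine, agrees in sample) / n

print(f"Discrete choice: {discrete_estimate:.0%} appear to believe")
print(f"Likert:          {likert_estimate:.0%} appear to believe")
```

Even with nothing else changing, the agreement-prone respondents inflate the apparent level of belief under the Likert format.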
Second, researchers must choose whether to offer a hard or a soft “don’t know” (DK) response option. Hard DK options expressly give respondents the opportunity to report that they do not know how they feel about a certain question. Soft DK responses, on the other hand, allow respondents to skip a question, but do not expressly advertise their ability to not answer it.
Hard DKs have the benefit of giving those who truly have no opinion about a particular prompt a chance to say so, rather than either guessing randomly or, especially with Likert-style questions, simply agreeing with the prompt. However, expressly offering a DK option risks that respondents will simply indicate that they “don’t know” rather than engage more effortfully with the survey. Again drawing on the two examples described earlier, the comparatively pessimistic Pew study offered respondents a hard DK, whereas the work summarized in the New York Times op-ed did not.
Third, researchers have the ability to offer text that provides basic background information about complex concepts, including (potentially) anthropogenic climate change. This approach has the benefit of making sure that respondents have a common level of understanding about an issue before answering questions about it. However, scholars must choose the words provided in these short “explainers” very carefully, as information presented there may influence how respondents interpret the question.
For example, the research summarized in the New York Times op-ed described climate change as being caused by “increasing concentrations of greenhouse gasses.” Although this text does not attribute greenhouse gas emissions to any particular human source, it is important to keep in mind that skeptics may see climate change as the result of factors having nothing to do with gas emissions (e.g., that the sun itself is responsible for increased temperatures). Consequently, this text could lead respondents toward providing an answer that better matches scientific consensus on anthropogenic climate change.
We test the impact of these three decisions on the measurement of anthropogenic climate change attitudes in a large, demographically diverse online survey of American adults (N = 7,019). Respondents were randomly assigned to answer one of eight questions about their belief in anthropogenic climate change, each varying one of the methodological decisions described above and holding all other factors constant.
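For readers who like to see a design laid out explicitly, the eight conditions can be thought of as the crossing of the three binary choices described above. The enumeration below is a reconstruction of that layout from the description in this post, not the actual survey code, and the ordering of the middle conditions may differ from the paper:

```python
from itertools import product

response_formats = ["discrete choice", "Likert"]
dk_options = ["hard DK offered", "no explicit DK"]
explainer_text = ["no explainer text", "explainer text"]

# Conditions 1-4 use discrete-choice items and 5-8 use Likert items, consistent with
# the description of the results; within each block the DK and explainer choices vary.
for i, combo in enumerate(product(response_formats, dk_options, explainer_text), start=1):
    print(f"Condition {i}: " + ", ".join(combo))
```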
The results are summarized in the figure below. Hollow circles show the proportion of respondents in each condition who purport to believe in human-caused climate change, with 95% confidence intervals extending outward from each one. The left-hand pane plots these quantities for the full sample, and the right-hand pane does the same for just self-identified Republicans. The elements varied in each experimental condition are listed in the text just below the figure.
Generally, the results suggest that minor differences in how we ask questions about anthropogenic climate change can increase the number of Americans (especially Republicans) who appear to believe in it. For example, Likert-style response options (conditions 5–8) always produce higher estimates of the number of Americans and Republicans who appear to believe in human-caused climate change than discrete-choice questions (conditions 1–4).
At times, these differences are quite dramatic. For example, Condition 1 mimics the way Pew (i.e., Funk & Kennedy 2016) asks questions about anthropogenic climate change, using discrete-choice questions that offer a hard DK option with no “explainer text.” This method suggests that 50% of Americans, and just 29% of Republicans, believe that climate change is caused by human activities.
Condition 8, on the other hand, mimics the method used in the research reported in the aforementioned op-ed: Likert-style response options, text explaining that climate change is caused by the greenhouse effect, and no explicit DK option. In sharp contrast, this method finds that 71% of Americans and 61% of Republicans believe that climate change is human caused. This means that the methods used in Condition 8 more than double the number of Republicans who appear to believe in human-caused climate change.
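As a rough sanity check that a gap of this size is not just sampling noise, one can attach approximate 95% confidence intervals to the two Republican estimates. The per-condition sample size assumed below (n = 400) is a hypothetical round number, not the actual count in either condition:

```python
import math

def wald_ci(p, n, z=1.96):
    """Approximate (Wald) 95% confidence interval for a sample proportion."""
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

n = 400  # hypothetical number of Republican respondents per condition (illustrative only)
for label, p in [("Condition 1 (Pew-style)", 0.29), ("Condition 8 (op-ed-style)", 0.61)]:
    lo, hi = wald_ci(p, n)
    print(f"{label}: {p:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")
```

Even with that modest assumed sample size, the two intervals sit far apart, which is consistent with the difference being driven by the question format rather than chance.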
We think that these results offer readers a useful framework for making sense of public opinion about anthropogenic climate change. Our research urges readers to pay careful attention to the way in which public opinion researchers ask questions about anthropogenic climate change, and to consider how those decisions might increase (or decrease) the number of Americans who appear to believe in it. Of course, we do not propose a single measurement strategy as a “gold standard” for assessing opinion about anthropogenic climate change. Instead, we hope that these results can help readers be better consumers of public opinion research about climate change.
References
Capstick, S., Whitmarsh, L., Poortinga, W., Pidgeon, N., & Upham, P. (2015). International trends in public perceptions of climate change over the past quarter century. Wiley Interdisciplinary Reviews: Climate Change, 6(1), 35-61.
Ehret, P. J., Van Boven, L., & Sherman, D. K. (2018). Partisan barriers to bipartisanship: Understanding climate policy polarization. Social Psychological and Personality Science, 1948550618758709.
Funk, C., & Kennedy, B. (2016, October 4). The politics of climate. Pew Research Center. Retrieved from http://www.pewinternet.org/2016/10/04/the-politics-of-climate/
Van Boven, L., & Sherman, D. (2018, July 28). Actually, Republicans do believe in climate change. New York Times.
Van Boven, L., Ehret, P. J., & Sherman, D. K. (2018). Psychological barriers to bipartisan public support for climate policy. Perspectives on Psychological Science, 13(4), 492-507.