Researching at the Top of a Slippery Slope?

Thursday, Jul. 25, 2019

By Daniel Callies (University of California, San Diego)

Does researching geoengineering—specifically, stratospheric aerosol injection (SAI)—lead us onto a slippery slope, at the bottom of which awaits the deployment of a morally objectionable technology? Should we perhaps abandon research now on account of the possibility that we'll slide down such a slope? In a recent publication, I argued that we should be cognizant of the possibility that research into SAI could lead to objectionable deployment scenarios. But, for two reasons, I concluded that the slippery slope worry shouldn't lead us to abandon research into SAI: first, research need not lead inexorably to deployment, and, second, deployment is not necessarily objectionable. In this post, I want to provide a quick synopsis of that publication, which you can find here.

First, let’s make a distinction between two slippery slope arguments related to SAI research. What we can call the Modest Slippery Slope Argument (MSSA) runs as follows:

  1. If we research SAI, it will lead to deployment
  2. We have serious moral reasons not to deploy SAI
  3. Therefore, we ought not to research SAI

I'll have more to say about premise 1 below, so for now let's look at the second premise. Premise 2 strikes me as true. I think most people would agree that we have serious moral reasons to avoid deploying SAI. The various reasons have been explored in the philosophical literature. They may point to the hubris that deployment of SAI would exemplify, or they may be grounded in the idea that we ought to have greater respect for nature, or they might stem from concerns about the detrimental side-effects that deployment could engender. But even granting that we have serious moral reasons not to deploy SAI, we can't infer the conclusion that we should abandon research. In the non-ideal world in which we find ourselves, we often face difficult decisions – decisions in which there are serious moral reasons speaking both against and in favor of a particular course of action. While it certainly seems true that we have serious moral reasons not to deploy SAI, what we need to know is whether we have decisive moral reasons not to do so.

Consider next what can be called the Decisive Slippery Slope Argument (DSSA):

  1. If we research SAI, it will lead to deployment
  2. We have decisive moral reasons not to deploy SAI
  3. Therefore, we ought not to research SAI

The Decisive Slippery Slope Argument, were its premises true, would commit us to accepting the conclusion that we ought not to research SAI. But I doubt that we have sufficient evidence to label either premise true. That is, I'm not sure that premise 1 (the empirical premise) is true – I have my doubts as to whether the antecedent leads to the consequent – and I'm not sure that premise 2 (the normative premise) is true – I have my doubts as to whether we have decisive moral reasons not to deploy SAI. Let's take a deeper look at each premise.

Setting aside the difficulty of deciding what counts as research and what counts as deployment, I think there are two reasons to question whether research into SAI will lead unavoidably to deployment. The first is that research into novel technologies often does not lead to the development of the technology. For example, data from the pharmaceutical industry shows that a mere one in nine compounds makes it from research to development. Sometimes these new entities fail to reach the developmental stage because they are deemed unsafe by regulatory agencies. This points to the importance of regulatory agencies that can halt the development of suspect technologies (a topic I'll touch on below). But the same data cuts against the idea that research into new technologies tends to lead to their development.

The second reason to question whether research will lead to deployment rests on a distinction between development and deployment. That is, even if research into SAI were to lead to development, it may not lead to deployment. Consider the example of nuclear weapons in South Africa. After a couple of decades of research, South Africa had developed a fully functioning nuclear bomb. To put it differently, state-sponsored research led to development. But shortly after successfully developing the bomb, the country abandoned its nuclear program and dismantled the fruits of its research. The South Africa example shows that even after costs have been sunk and research has led to development, deployment is not a foregone conclusion. Rather, through political action and demonstration, deployment can be avoided.

Thus, given that research doesn’t always lead to development, and given that new technologies are not always deployed even once they are developed, we have reason to question the empirical premise that research into SAI will lead to deployment. Of course, there are significant differences between pharmaceuticals, nuclear weapons, and climate engineering. But what these examples are meant to show is that it is less than clear that research into SAI will lead to deployment.

But what if we grant that research will lead to deployment? Let's look at the second premise. Do we have decisive moral reasons not to deploy SAI? In one sense, yes. Given how little we know about the actual effects of SAI deployment, and given that the harmful effects of climate change have only just begun to materialize, I would argue we have decisive moral reasons not to deploy SAI now. But I don't know of any calls for immediate deployment. Rather, what is proposed is that we continue researching SAI now, in case it makes sense to deploy it at some point in the future. So, can we say that we have decisive moral reasons never to deploy SAI?

To say that we have decisive moral reasons not to deploy SAI is to say that, having weighed the reasons that speak in favor of deployment against the reasons that speak against it, the latter come out weightier, all things considered. That all-things-considered judgment is a difficult one to make, especially given that the future circumstances in which the technology might be used are not entirely clear. Other than reasons that relate to intrinsic aspects of the technology – like the claim that SAI is inherently incompatible with democracy (a claim that my co-authors and I doubt) – the reasons that speak against deployment will be grounded in the potential effects of deployment – like the potential for SAI to deleteriously disrupt precipitation patterns. But whether the effects of SAI would be beneficial, harmful, or something in between is not definitively known. Ironically, perhaps the only way to determine whether the potential effects of SAI speak against deployment is to do more research.

Ultimately, neither premise of the argument strikes me as obviously true. And with each premise questionable, the argument provides only weak support for its conclusion. Now, I don't think the weakness of the Slippery Slope Argument shows that we have nothing to worry about with respect to research into SAI. Rather, I think it shows that research can be done without objectionable deployment scenarios necessarily arising. But it's up to us and the institutions we develop to make sure that research proceeds only if it really ought to. To that end, I want to briefly highlight three approaches that could be used to minimize the risk of research leading unjustifiably to deployment.

First, we could use a stage-gate system, as advocated by the World Health Organization in its report on testing genetically modified mosquitoes. The idea is to have a regulatory body that can grant or withhold approval for projects to move from, say, lab testing to field trials, or from field trials to low-scale deployment. How approval is granted can also be fine-tuned: for instance, we could require a majority, a supermajority, or the unanimous agreement of the regulatory panel for the move from one stage to the next to be approved.
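To make that fine-tuning concrete, here is a minimal sketch of how a gate's voting rule might be modeled. It is purely illustrative: the stage names, panel size, and thresholds are my own assumptions, not anything drawn from the WHO framework or an actual regulatory body.

```python
from fractions import Fraction

# Illustrative stage-gate voting rule. The stages and thresholds here are
# hypothetical assumptions for the sake of the example -- they are not taken
# from the WHO report or any existing regulator.

# Vote share required to pass each gate: a simple majority to move from lab
# testing to field trials, unanimity to move on to low-scale deployment.
GATE_THRESHOLDS = {
    ("lab testing", "field trials"): Fraction(1, 2),
    ("field trials", "low-scale deployment"): Fraction(1, 1),
}


def gate_approves(votes_for: int, panel_size: int, threshold: Fraction) -> bool:
    """Return True if the share of approving votes clears the gate's threshold."""
    if threshold == 1:
        return votes_for == panel_size  # unanimity: every panel member must approve
    return Fraction(votes_for, panel_size) > threshold  # strictly more than the threshold


# A 9-member panel votes 6-3 to move from lab testing to field trials: passes.
print(gate_approves(6, 9, GATE_THRESHOLDS[("lab testing", "field trials")]))
# The same 6-3 split fails the unanimity gate to low-scale deployment.
print(gate_approves(6, 9, GATE_THRESHOLDS[("field trials", "low-scale deployment")]))
```

The point of the sketch is simply that each gate's stringency can be set independently, so a later, riskier transition can be made to demand a stricter decision rule than an earlier one.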

Another approach has been floated by David Keith, Edward Parson, and M. Granger Morgan. The idea is to assemble two research teams: a blue team charged with researching the best possible way to develop and deploy the technology, and a red team charged with finding holes and identifying problems with the blue team's research.

Third, and most importantly, we could incorporate public engagement into the regulatory process. Public engagement could take many different forms, from something like the EPA's public comment periods to embedding public bodies directly in the decision-making process. Giving the public an official voice in geoengineering decision-making could serve as an important check on institutional insiders who might fail to halt research when they should.

Unfortunately, none of these approaches is currently in use. In fact, at the moment, we have no regulatory body overseeing geoengineering research at all. This undoubtedly needs to change. Whether regulation happens through an international body like the UNFCCC or in a more piecemeal fashion with national governments taking the lead, significant regulation of some kind is a must.

To summarize, we should be aware of the potential for research into SAI to gain momentum and lead to unjustifiable deployment. But (a) it isn’t the case that research will necessarily lead to deployment, and (b) it isn’t clear that we wouldn’t want to deploy SAI at some point in the future. I genuinely hope that our investigation into SAI remains at the research stage and that we never end up in a situation in which it makes sense to lean on the technology. I hope that the global community takes its intergenerational obligations seriously and limits the negative effects of climate change through mitigation and adaptation. With each year that passes in which we fail to take appropriate action, such a hope feels more and more unrealistic.

About the Author

Daniel Callies

Postdoctoral Scholar, Institute for Practical Ethics, University of California, San Diego
Contact: dcallies@ucsd.edu

Daniel Edward Callies is a postdoctoral scholar at the Institute for Practical Ethics at UC San Diego, and the author of the recently released book Climate Engineering: A Normative Perspective (Lexington Books, 2019). He graduated cum laude with a BA in philosophy from San Diego State University (2008), and later returned to complete an MA (2012), writing on "Equality, Responsibility, and Climate Policy." In 2013, he began a PhD at Goethe University Frankfurt under the supervision of Prof. Darrel Moellendorf. In the summer of 2016, he moved to Harvard's Kennedy School of Government as a Predoctoral Research Fellow in the Belfer Center's Science, Technology, and Public Policy Program. After submitting his dissertation ("On the Ethics and Politics of Climate Engineering"), he joined the Université catholique de Louvain (Belgium) as the Bernheim Postdoctoral Fellow in Social Responsibility. Daniel defended his dissertation (2018) and then worked as a postdoctoral fellow at Goethe University until joining the Institute for Practical Ethics in the fall of 2018.