After Geoengineering


by Holly Jean Buck


  Most people in the room are confronting the idea of geoengineering for the very first time, and their initial thoughts are diverse. A policymaker explains that he’s read an article that suggested climate warming could create a whole new economic zone of mining and exploration—which would suit Northern countries, while those in the tropics would lose opportunities. Another person says that he’s just installed solar panels to take his house off the grid: Will solar radiation management impact those? One speaker recalls a James Bond movie he saw, in which someone controls an orbital laser beam. Someone else notes that manipulating the climate is like manipulating genes; it can be done in the wrong direction. An ethicist says that there’s always some price to pay for engaging with technology, and we are in this problem because of technology. There’s promise and peril; we are flying in the face of God.

  But the conversation keeps returning to two themes. One is equity: it’s the norm for Jamaica to be on the receiving end of things. Someone asks: “How can that be changed?” Jamaicans have regional alliances, they network, they negotiate as a bloc of Caribbean or small island developing states. But when you’re a small country, there’s a disparity in terms of population. Discussion turns to the parallel disparity in historical emissions, and inequality also comes up in carbon trading, which can let polluters off the hook. “In an ideal world, we have iron-clad politics before” confronting something like this, one speaker asserts. Another asks: Can we afford not to look at an issue like this?

  A second theme is capacity. A policymaker explains that the research system here was originally set up to train agricultural researchers for plantations, not to teach classical subjects. “Those of us in countries like Jamaica need to develop basic research,” he says, because the problems people are trying to solve here may be different from those in other places. But here, they don’t have the computing resources to run many computationally intensive climate models.

  When it comes to designing a solar geoengineering program, both metaphorically (“program” as in a course of actions) and literally (“program” as in coding on a computer), a small developing country like Jamaica has a limited capacity to write it. The organization that pulled together the meeting, the Solar Radiation Management Governance Initiative, now coordinates a fund for researchers in developing countries, which provided an initial round of $430,000 toward eight projects that look at how solar geoengineering could impact things like droughts in Southern Africa or the spread of cholera in South Asia. This is an important step for the philanthropic sector and its NGO and academic partners. Yet it’s still a drop in the bucket compared to what would be needed for genuine inclusion in research design.

  Algorithmic governance

  Who does get to write the program for geoengineering? The verb “program” is rooted in the Greek -graph, connoting a written plan. Geoengineering would be a program to be authored, to be written, with real choices about what goes into the plan. In the end, though, program might not be the best metaphor, because it still has resonances of something “fixed” that you’d receive on a disk (even though disks are now obsolete, as our software auto-updates). Solar geoengineering requires a more dynamic practice. “Responsive” or “adaptive” governance tries to connote this; but still, “responsiveness” seems like a tacked-on quality, something that modifies geoengineering after the fact, rather than being written into the fabric of what it is.

  Let’s follow these overlapping meanings of “programming,” and also draw in another fuzzy term: “algorithm.” “Algorithm,” on a basic level, signifies a set of instructions; a recipe of sorts. Today, though, algorithms have taken on a broader meaning. They have become agents that determine aspects of our social reality: helping a billion-plus people get where they’re going, assisting us in finding information, driving cars, manufacturing goods, assigning credit, shaping financial markets, and more, as science historian Massimo Mazzotti describes.5 Wouldn’t solar geoengineering inevitably be yet another one of these domains governed by algorithms, aided by an invisible computational hand?

  We can be certain that aerosol geoengineering would be implemented in some kind of institutionalized program: with research milestones, perhaps “stage gates,” flight operations, monitoring operations, and so on. Geoengineering with aerosols would also involve the use of computer tools to design a literal program—a recipe or operation—that would figure out the optimal way to put the particles in the stratosphere in order to achieve a combination of climate goals while minimizing negative impacts. In short, someone, somewhere, would write code for a system that would block a certain amount of sunlight, monitor the effects, and then adjust the system again. Researchers call this a “feedback control algorithm” because it would guide geoengineering using feedback from climate observations.
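
  To make the idea concrete, here is a minimal sketch, in Python, of what such a feedback control loop might look like. It is not any researcher’s actual controller: the “climate” is a toy one-box model, and every constant, function name, and sensitivity value is invented for illustration. The structure is the point: observe, compare against a human-chosen goal, adjust, repeat.

```python
# A minimal, hypothetical sketch of a feedback control algorithm for
# stratospheric aerosol injection. The "climate" is a toy one-box model;
# every constant, name, and sensitivity here is invented for illustration.

TARGET_TEMP_ANOMALY = 0.0  # human-chosen goal: hold warming at baseline (deg C)
KP, KI = 0.5, 0.1          # proportional and integral controller gains (tuning)

def toy_climate_step(temp, forcing, injection_tg):
    """Advance the toy climate by one year.

    Greenhouse forcing pushes temperature up; aerosol injection
    (in teragrams per year) offsets part of that forcing.
    """
    COOLING_PER_TG = 0.1  # assumed cooling per Tg injected (invented)
    net_forcing = forcing - COOLING_PER_TG * injection_tg
    return temp + 0.08 * (net_forcing - temp)  # slow relaxation toward net forcing

def run_controller(years=50):
    temp, integral, injection = 1.2, 0.0, 0.0  # start 1.2 deg C above baseline
    for year in range(years):
        # 1. Observe: compare the measured state against the human-set goal.
        error = temp - TARGET_TEMP_ANOMALY
        integral += error
        # 2. Adjust: a proportional-integral rule sets next year's injection.
        injection = max(0.0, KP * error + KI * integral)
        # 3. The climate responds, and the loop repeats on new observations.
        temp = toy_climate_step(temp, forcing=1.2, injection_tg=injection)
        print(f"year {year:2d}: temp {temp:+.2f} C, injection {injection:.2f} Tg/yr")

run_controller()
```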

  This is complex enough: now add the complexity of possibly using multiple climate engineering techniques. For example, a recent paper from a collaboration between scientists in China, India, and the United States simulated “cocktail geoengineering,” which involved using two different geoengineering strategies—stratospheric aerosols and cirrus cloud thinning—to best restore preindustrial temperatures and precipitation.6
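
  The intuition behind mixing techniques can be shown with a deliberately simple calculation: if two interventions leave different fingerprints on temperature and precipitation, one can solve for the blend that addresses both targets at once. The sensitivity numbers below are invented; the cited study used full climate-model simulations, not a two-by-two linear system.

```python
import numpy as np

# A simple sketch of the "cocktail" intuition: two techniques with
# different fingerprints on temperature (T) and precipitation (P).
# All sensitivity values are invented for illustration only.

# Effect of one unit of each technique on [T change (deg C), P change (%)]:
#                       aerosols  cirrus thinning
sensitivity = np.array([[-0.8,    -0.5],    # temperature response
                        [-2.0,    +0.5]])   # precipitation response

# Goal: undo +1.0 deg C of warming and a +1.5% precipitation shift.
target = np.array([-1.0, -1.5])

# Solve for the blend of the two techniques that meets both targets at once.
mix = np.linalg.solve(sensitivity, target)
print(f"aerosols: {mix[0]:.2f} units, cirrus thinning: {mix[1]:.2f} units")
```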

  Then, add a temporal dimension: stratospheric aerosol geoengineering would likely take place over a time span of 150 years or more (at which point, if enough of our descendants make it through the twenty-first century, they will hopefully have not only found better ways of removing carbon, but also improved upon our rusty old technology for deploying and monitoring the particles).

  You can see how heavy computing would be crucial to a problem this complex. The resulting climate could be seen as some kind of human-machine-nature collaboration or dialogue, with constant back-and-forth retuning. In the parlance of the scientists working on it, it’s about feedback and adjustment. Humans would input the goals. These could involve changing global temperatures, reducing sea level rise, stopping Arctic sea ice loss, or some other combination of ends that would likely be subject to long negotiations. It’s quite possible that a set of decision rules for solar geoengineering could be created in a quasi-democratic manner—likely by the United Nations, where technical-expert delegations from various countries would hammer out the goals, the scheme for monitoring results, and so forth. But you can also very readily see the possibility that it will not be done that way—just consider how countries like Jamaica experience international decision-making processes.

  Looking at solar geoengineering as an algorithm allows us to draw from the emerging literature on “algorithmic governance,” which questions how algorithms are used to make decisions that pattern an increasing number of aspects of our lives. One key issue is about transparency and the black-boxing of algorithms: How can a geoengineering system be designed for openness and “algorithmic accountability”—that is, explainability in real time?

  There is also the danger that bias could be coded into the program. This could happen because the underlying climate data is uneven; for example, war-torn countries are going to have gaps in their data. Or bias could be introduced due to variations in how problems are defined. Droughts, for example, can be hydrological, agricultural, or meteorological, each of which would be defined using different thresholds. Theoretically, the system could disadvantage vulnerable peoples without the explicit intention to do so, just because of poor data or poor problem definitions. (Of course, this is on top of the basic bias that determines who gets the education and power to even be in the position of writing computer code and making decisions.)
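
  A small, hypothetical example shows how much the problem definition matters: the same rainfall record can count as drought under one definition and not another. The thresholds below are made up; real indices (SPI, PDSI, soil moisture deficits) are far more elaborate.

```python
# A hypothetical illustration of how the problem definition shapes what an
# algorithm "sees": one rainfall record, three invented drought definitions.

monthly_rain_mm = [80, 28, 15, 10, 40, 90]  # made-up station record
LONG_TERM_NORM_MM = 60                      # assumed long-term monthly average

definitions = {
    # meteorological: rainfall well below the long-term norm
    "meteorological": lambda rain: rain < 0.5 * LONG_TERM_NORM_MM,
    # agricultural: too little rain for crops in a given month
    "agricultural": lambda rain: rain < 25,
    # hydrological: a deficit severe enough to draw down reservoirs
    "hydrological": lambda rain: rain < 15,
}

# The same data yields three different lists of "drought months."
for name, is_drought in definitions.items():
    months = [i for i, rain in enumerate(monthly_rain_mm) if is_drought(rain)]
    print(f"{name:>14}: drought in months {months}")
```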

  Given all the advances in computing, we might take for granted the ability to carry out the programming part. However, researchers point to varying constraints on this—both in terms of computing resources and qualified personnel. This isn’t just the case for small island states or developing economies. Even the US public sector is stunningly constrained when it comes to running climate models; the computing infrastructure of the commercial cloud is soaring by comparison. Scientists I spoke to in several countries pointed to high-level expertise as another limitation. The labor time of qualified humans analyzing outputs is a constraint on geoengineering research (and on how advanced these algorithms can become), probably more so than computing resources. This is, of course, a matter of both training personnel and funding their research. Labor time of qualified scientists seems like a worldwide constraint—though in theory, it should be solvable, given that politicians everywhere pay lip service to STEM training.

  One challenge is to expand that training globally. In 2017, the geoengineering research program in China held the first geoengineering research course in Beijing for scientists from the developing world. They provided model results for the entire earth system for students to analyze, because they believed students would best understand how to select the model parameters that were most important in their country. “It should not be some sort of teacher and student thing,” clarifies project leader John Moore at Beijing Normal University, speaking about international collaboration more broadly. “It should be an equal relationship.” He explains, “The international collaboration, and everybody being a sort of fair and equal partner, is a big priority from the Chinese viewpoint.” China has no wish to do geoengineering unilaterally, and wants to avoid being seen as eager to geoengineer, he says. I ask him why he thinks that the Chinese are so interested in international cooperation. “They care about how the country’s image is … I guess it’s sort of patriotism in a way, that people have a love of their country, and they don’t want to be the international bad guy. They want to be the good guy, the nice guys. Given a choice, that’s the natural choice.”

  International cooperation and collaboration, then, are part of a best-case scenario—one that many people are actively working toward. These cooperative-minded researchers and funders can help with capacity building. Still, they can’t fully address the underlying structural inequalities between disparate working contexts. All this is to say that when thinking about the algorithm, we can’t forget the material resources needed to make it work—both workers and infrastructure. Beginning an international, interdisciplinary research initiative now would create and hold the space for researchers to think more deeply about what transparent, democratic algorithm design might look like.

  Keeping humans in the loop

  Over the course of the next few decades, as we weigh climate intervention, machine learning and artificial intelligence will no doubt continue to make advances. What does it mean for these two capacities to evolve together? To be clear, scientists thinking about feedback control algorithms for solar geoengineering are not interested in mixing in artificial intelligence. Rather, they view these as systems where humans would be very much in the loop.

  Ben Kravitz, the atmospheric science professor, points out two important problems with employing machine intelligence. “Number one, you have to actually believe what you’re designing is correct.” Understanding the underlying physical system, he tells me, is crucial. If you don’t, and “you just say, ‘Well, I don’t really care, let’s just wrap a controller around it and be done with it,’ then everything could be fine until you get some weird—‘weird’ is I guess a technical term in this case—some weird behavior that you can’t explain but that really messes things up. And Rumsfeld’s ‘unknown unknowns’. That’s always a concern.”

  “Just because you can automate something doesn’t mean it’s a good idea to do so,” Kravitz cautions. A second problem is that a machine intelligence might see an optimal outcome differently. “That is subjective—and not. [‘Optimal’] is a really important word. If you’re an economist and you can reduce everything down to dollar amounts, what you call ‘optimal’ might be different from what, say, a politician calls ‘optimal,’ because there are various additional concerns.” Kravitz recalls science fiction writer Isaac Asimov’s Three Laws of Robotics: First, a robot may not injure a human; second, a robot must obey the orders given by humans except where such orders would conflict with the First Law; third, a robot must protect its own existence, as long as that does not conflict with the first two laws. “That’s sort of why they were invented, because what a machine calls ‘optimal’ is not necessarily what a human would call ‘optimal.’” Kravitz points to the analogy of the Federal Reserve, a controller for a very complex system: “Depending on whether you call the Federal Reserve ‘optimal’ … it’s basically a bunch of experts substituting for that machine, doing control theory on a poorly understood system.” On the other hand, he notes, there can be problems with human decision processes, too. “Do you want the computer to do everything for you, or do you want people to be involved, even if that means reduced performance?”

  To gain a better handle on this, I dropped in on control systems engineer Doug MacMartin at Cornell, who’s authored numerous papers in top journals on potential designs of geoengineering systems (he has also collaborated with me on a project about how to incorporate community ideas into geoengineering research). MacMartin offers me some bread that he made, and begins to humor my questions about what it means when geoengineering and artificial intelligence grow up in the same time frame.

  MacMartin, like Kravitz, emphasizes that deciding the goal of a climate intervention system is clearly a human activity: “That’s a value, those are value judgments. Once you then say ‘these are all the things that I care about,’ you could essentially imagine that there’s an algorithm that determines, given all the information that it has available to it and given the goals—here is the best thing to go do.” Conceivably, some complicated deep-learning algorithm could help in this: one that has a much more advanced model of the climate system, and projects the future based on its knowledge of past climates and goals imposed by humans that tell it the performance metric. “‘Here’s what we care about. This matters this much, we don’t want the rainfall here to deviate by more than this. We don’t want this to change by more than that amount. And go find the solution in that space that is robust in all of your uncertainties about the current state of the system, and uncertainties about how the things evolve.’ It sort of does the best job of balancing in this multidimensional goal space.”
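
  Stripped of the deep learning, the kind of goal-balancing MacMartin describes can be sketched as a weighted, constrained optimization. The toy response model and every number below are invented; they stand in for the climate-model emulators that actual research would use.

```python
import numpy as np
from scipy.optimize import minimize

# A toy version of the multidimensional goal space described above: balance
# weighted climate goals under constraints. The "response model" and every
# number here are invented stand-ins for a real climate model.

def climate_response(x):
    """Map injection amounts at two latitudes to (temperature, rainfall) changes."""
    north, south = x
    d_temp = -0.6 * north - 0.6 * south   # both choices cool the planet
    d_rain = -1.0 * north + 0.8 * south   # but they shift rainfall oppositely
    return d_temp, d_rain

def cost(x, w_temp=1.0, w_rain=2.0):
    """Weighted distance from the goals: offset 1 deg C, leave rainfall alone."""
    d_temp, d_rain = climate_response(x)
    return w_temp * (d_temp + 1.0) ** 2 + w_rain * d_rain ** 2

# "We don't want the rainfall to deviate by more than this": |d_rain| <= 0.2.
constraints = [
    {"type": "ineq", "fun": lambda x: 0.2 - climate_response(x)[1]},
    {"type": "ineq", "fun": lambda x: climate_response(x)[1] + 0.2},
]

result = minimize(cost, x0=[0.5, 0.5], bounds=[(0, 5), (0, 5)],
                  constraints=constraints)
print("injection (north, south):", np.round(result.x, 3))
print("achieved (dT, dRain):    ", np.round(climate_response(result.x), 3))
```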

  This is all hypothetical, of course, and I tell MacMartin that if this program existed, the outcome seems like it would be a human collaboration or negotiation with the program. I begin speculating: “It’s so complex that you’d have to say, I care about Arctic ice, and I care about precipitation in this vulnerable region, and I care about XYZ—and then it would come up with something—but then that thing it comes up with would have this other thing that might cause a problem, and then you have to go—”

  MacMartin asks: “You’ve done optimizations before, right?”

  “Not really.”

  “This is the way they always work. Every optimization you ever do is like, ‘Here’s what I care about.’ The computer then comes out and says, ‘That’s the optimum.’ You look at it and go, ‘That’s not what I wanted.’ You realize it’s like, ‘I only specified these variables, and I didn’t specify this one over here, and it found a solution that never even occurred to me where it improves these, but destroys this thing over here.’”

  I’m thinking that it sounds like a big mess—but this is what engineers deal with all the time, and a lot of our technological systems do actually work much of the time. “What you really want, in some sense,” MacMartin explains, “is an iterative process … And you presumably want a human in collaboration with that process who can then basically say, ‘Wait a minute, that might have been what I asked for, but it’s not what I wanted.’” MacMartin makes an important point—in the feedback class he teaches, they don’t do anything that’s optimal, because “optimal” is so tough to pin down. “You tend to do just as well by not optimizing it quite so much, if you know what I mean.”
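
  That iterative loop can itself be sketched, assuming the same sort of invented toy model as above: an optimizer satisfies exactly the one goal it was given and quietly sacrifices what was left unspecified, and the human then amends the objective and reruns.

```python
from scipy.optimize import minimize

# A sketch of the failure mode described above, using an invented toy
# response: the optimizer satisfies exactly what was asked for and
# quietly sacrifices what was left unspecified.

def response(x):
    north, south = x
    d_temp = -0.6 * north - 0.6 * south
    d_rain = -1.5 * north + 0.2 * south   # rainfall reacts strongly to "north"
    return d_temp, d_rain

# Round 1: the human specifies only temperature ("here's what I care about").
r1 = minimize(lambda x: (response(x)[0] + 1.0) ** 2,
              x0=[1.0, 0.0], bounds=[(0, 5), (0, 5)])
print("round 1 (dT, dRain):", response(r1.x))  # goal met, rainfall sacrificed

# Round 2: "that's not what I wanted." The human adds rainfall to the
# objective and reruns: the iterative, human-in-the-loop process.
r2 = minimize(lambda x: (response(x)[0] + 1.0) ** 2 + response(x)[1] ** 2,
              x0=list(r1.x), bounds=[(0, 5), (0, 5)])
print("round 2 (dT, dRain):", response(r2.x))
```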

  Do we get anywhere by looking at geoengineering as a program, or as software? “I think if we think about it as software, the first thing that comes to my mind is to think back to Star Wars [the Reagan-era missile defense program]. Which, as long as you thought about missile defense as a physics problem, seems solvable. The instant that you think about the missile defense system as a giant piece of software that happens to interface with physics, then you just laugh at it and say there’s no way we could ever make this work.”

  MacMartin, like Kravitz, thinks that letting humans stray too far out of the loop would be risky: “I would say the biggest risk would be engineers being overconfident in the ability of computer algorithms and allowing the computer too much leeway to make decisions. I don’t personally buy into the idea that the computer eventually becomes sentient and prevents you from turning it off. I think you can always turn it off.” So, there will be no malicious artificial general intelligence, in MacMartin’s view. But he identifies two other risks. The first is that something unexpected happens, outside the data used to train a machine learning algorithm. The second is that people become overreliant on the infrastructure and fail to understand the interconnected parts of the system.

  In terms of social effects, like unemployment, the risks of these technologies developing together are probably indirect, MacMartin judges: “I think the bigger issue with them maturing at the same time is probably far more, on some level, related to the Trump factor—on steroids … Just as we look now and we think, ‘Wow, George W. Bush. I wish we still had George W. Bush.’ It wouldn’t surprise me if in thirty years we say, ‘I wish we had Trump.’ Because if half the country is unemployed and unemployable forever, and there is no foreseeable pathway to get somebody who’s forty years old to be employed in any meaningful way, that could have some pretty serious social repercussions—and combining that with something as powerfully upsetting for the human relationship with the universe as taking responsibility for the entire climate, and as inherently globalizing …” When people are struggling to find employment, he suggests, one reaction is to elect someone like Trump. “That nationalistic tendency is kind of at odds with the global implications of doing geoengineering.” He pauses. “I suspect the issues have far less to do with, narrowly, how AI is being used in conjunction with geoengineering, and more with how, broadly, both AI and geoengineering affect the human relationship with the rest of the world in antagonistic directions. That could lead to really serious problems.”

 
