Films from the Future

by Andrew Maynard


  Because of this, I have a bit of a soft spot for Sidney Stratton. This is someone who’s in love with his science. He’s captivated by the thrill of the scientific chase, as he uses his knowledge to solve the puzzle of a stronger, more durable textile. And while he justifies his work in terms of how it will improve people’s lives, I suspect that it’s really the science that’s driving him.

  Stratton is, in some ways, the epitome of the obsessed scientist. He captures the single-mindedness and benevolent myopia I see in many of my peers, and even myself at times. He has a single driving purpose: synthesizing a new polymer that he is convinced can be produced. He has a vague idea that this will be a good thing for society, and this is a large part of the narrative he uses to justify his work. But his concept of social good is indistinct, and rather naïve. We see no indication, for instance, that he’s ever considered learning about the people he’s trying to help, or even asking them what they want. Instead, he is ignorant of the people he claims his work is for. Rather than genuinely working with them, he ends up appropriating them as a convenient justification for doing what he wants.

  Not that Stratton wants to cause any harm—far from it. His intentions are quite well-meaning. And I suspect that if he were interviewed about his work, he’d spin a tale about the need for science to make the world a better place. Yet he suffers from social myopia, in that he is seemingly incapable of recognizing the broader implications of his work. As a result, he is blindsided when the industrialists he thought would lap up his invention want to suppress it.

  Real-life scientists are, not surprisingly, far more complex. Yet elements of this type of behavior are not that uncommon. And they’re not just limited to researchers.

  Some years back, I taught a graduate course in Entrepreneurial Ethics. The class was designed for engineers with aspirations to launch their own startup. Each year, we’d start the course talking about values and aspirations, and with very few exceptions, my students would say that they wanted to make the world a better place. Yes, they were committed to the technologies they were developing, and to their commercial success, but they ultimately wanted to use these to help other people.

  I then had them take part in an exercise where their task was to make as much profit from their classmates as possible, by creating and selling a piece of art. Each student started with a somewhat random set of raw materials to make their art from, together with a wad of fake money to purchase art they liked from others in the class. There were basically no rules to the exercise beyond doing whatever it took to end up with the most money. As an incentive, the winner got a $25 Starbucks voucher.

  Every year I ran this, some students found ethically “inventive” ways to get that Starbucks card—and this is, remember, after expressing their commitment to improving other people’s lives. Even though this was a game, it didn’t take much for participants’ values to fly out of the window in the pursuit of personal gain. One year, an enterprising student formed a consortium intended to prevent anyone outside it from winning the exercise, regardless of the creation of any art (they claimed the consortium agreement was their “art”). Another year, a student realized they could become an instant millionaire by photocopying the fake money, then using it to purchase their own art, thus winning the prize.

  In both of these examples, students who were either too unimaginative or too ethical to indulge in such behavior were morally outraged: How could their peers devolve so rapidly into ethically questionable behavior? Yet the exercise was set up to bring out exactly this type of behavior, and to illustrate how hard it is to translate good intentions into good actions. Each year, the exercise demonstrated just how rapidly a general commitment to the good of society (or the group) disintegrated into self-interest when participants weren’t self-aware enough, or socially aware enough, to understand the consequences of their actions.152

  A similar tendency toward general benevolence and specific self-interest is often seen in science, and is reflected in Stratton’s behavior. Most scientists (including engineers and technologists) I’ve met and worked with want to improve and enrich people’s lives. They have, in most cases, what I believe is a genuine commitment to serving the public good, and they freely and openly use this to justify their work. Yet surprisingly few of them stop to think about what the “public good” means, or to ask others for their opinions and ideas. As a result, too many well-meaning scientists presume to know what society needs, whether or not they’re right, without thinking to ask first.

  This is precisely what we see playing out with Stratton in The Man in the White Suit. He firmly believes that his new polymer will make the world a better place. Who wouldn’t want clothes that never get dirty, that never need washing, that never need replacing? Yet at no point does Stratton show the self-reflection, the social awareness, the humility, or even the social curiosity, to ask people what they think, and what they want. If he had, he might have realized that his invention could spell economic ruin and lost jobs for a lot of people, together with social benefits that were transitory at best. It might not have curbed his enthusiasm for his research, but it might have helped him see how to work with others to make it better.

  Of course, modern scientists and technologists are more sophisticated than Stratton. Yet, time after time, I run into scientists who claim, almost in the same breath, that they are committed to improving the lives of others, but that they have no interest in listening to the very people they are supposedly committing themselves to. This was brought home to me some years ago, when I was advising the US President’s Council of Advisors on Science and Technology (PCAST) on the safe and beneficial development of nanotechnology. In one meeting, I pushed the point that scientists need to be engaging with members of the public if they want to ensure that their work leads to products that are trusted and useful. In response, a very prominent scientist in the field replied rather tritely, “That sounds like a very bad idea.”

  I suspect that this particular scientist was thinking about the horrors of a presumed scientifically illiterate public telling him how to do his research. Of course, he would be right to be horrified if he were expected to take scientific direction from people who aren’t experts in his particular field. But most people have a pretty high level of expertise in what’s important to them and their communities, and rather than expecting members of the public to direct complex research, it’s this expertise that needs to guide research and development if naïve mistakes are to be avoided.

  The reality here is that scientists and technologists don’t have a monopoly on expertise and insights. For new technologies to have a positive impact in a messy world of people, politics, beliefs, values, economics, and a plethora of other interests, scientists and others need to be a part of larger conversations around how to draw on expertise that spans all of these areas and more. Not being a part of such conversations leads to scientific elitism, and ignorance that’s shrouded in arrogance. Of course, there is nothing wrong with scientists doing their science for science’s sake. But willful ignorance of the broader context that research is conducted within leads to myopia that can ultimately be harmful, despite the best of intentions.

  Never Underestimate the Status Quo

  Some time ago, I was at a meeting where an irate scientist turned to a room of policy experts and exclaimed, “I’m a scientist—just stop telling me how to do my job and let me get on with it. I know what I’m doing!”153

  The setting was a National Academy of Sciences workshop on planetary protection, and we were grappling with the challenges of exploring other worlds without contaminating them or, worse, bringing virulent alien bugs back to Earth. As it turns out, this is a surprisingly tough issue. Fail to remove all Earth-based biological contamination from a spacecraft and the instruments it carries, and you risk permanently contaminating the planet or moon you’re exploring, making it impossible to distinguish what’s truly alien from what is not. But make the anti-contamination requirements too stringent, and you make it next to impossible to search for extraterrestrial life in the first place.

  There are similar problems with returned samples. Play fast and loose with safety precautions, and we could end up unleashing a deadly alien epidemic on Earth (although, to be honest, this is more science fiction than scientific likelihood). On the other hand, place a million and one barriers in the way of bringing samples back, and we kill off any chance of studying the biological origins of extraterrestrial life.

  To help tread this fine line, international regulations on “planetary protection” (which, despite the name, is not about protecting the Earth from asteroid hits, or space debris, or even us trashing other planets, but is instead geared toward managing biological contamination in space exploration) were established in 1967 to ensure we don’t make a mess of things.154 These regulations mean that, when an agency like NASA funds a mission, the scientists and engineers developing vehicles and equipment have to go through what, to them, is a bureaucratic nightmare just to do the smallest thing.

  To space exploration scientists, this can feel a little like an imposed form of bureaucratic obsessive-compulsive disorder, designed to send even the mildest-mannered person into a fit of pique. What makes it worse is that scientists and engineers working on years-long missions designed to detect signs of life elsewhere in the universe are deeply aware of what’s at stake. If they get things wrong, decades of work and hundreds of millions of dollars—not to mention their scientific reputations—are put at risk. So they’re pretty obsessive about getting things right, even before the bureaucrats get involved. And what really winds them up (or some of them, at least) is being told that they need to fill out yet more paperwork, or redesign their equipment yet again, because some bureaucrat decided to flex their planetary protection muscles.

  This frustration reached a venting point in the National Academy meeting I was at. Speaking to a room of planetary protection experts—some of whom were directly involved in establishing and implementing current policies—the scientist couldn’t contain his frustration. As the lead scientist on a critical mission to discover evidence of life beyond Earth, he knew what he had to do to be successful, or so he thought. And in his mind, the room of “experts” in front of him had no idea how ignorant they were of his expertise. He even started to lecture them, in quite strong terms, on policies that some of them had helped write. It probably wasn’t a particularly smart move.

  I must confess that, listening to his frustrations, I had quite a bit of sympathy for him. He was clearly good at what he did, and he just wanted to get on with it. But he made two fatal errors. He forgot that science never happens in a vacuum, and he deeply underestimated the inertia of the status quo.

  This anecdote may seem somewhat removed from nanotechnology, synthetic chemistry, and The Man in the White Suit. Yet there are a surprising number of similarities between this interplanetary scientist and Sidney Stratton. Both are brilliant scientists. Both believe they have the knowledge and ability to deliver what they promise. Both would like nothing better than to be left alone to do their stuff. And neither is aware of the broader social context within which they operate.

  The harsh reality is that discovery never happens in isolation. There are always others with a stake in the game, and there’s always someone else who is potentially impacted by what transpires. This is the lesson that John Hammond was brutally reminded of in Jurassic Park (chapter two). It underpins the technological tensions in Transcendence (chapter nine). And it’s something that Sidney wakes up to rather abruptly, as he discovers that not everyone shares his views.

  Here, The Man in the White Suit has overtones of Luddism, with workers and industry leaders striving to maintain the status quo, regardless of how good or bad it is. Yet just as the Luddite movement was more nuanced than simply being anti-technology, here we see that the resistance to Sidney’s discovery is not a resistance to technological innovation, but a fight against something that threatens what is deeply important to the people who are resisting it. The characters in the movie aren’t Luddites in the pejorative sense, and they are not scientifically illiterate. Rather, they are all too able to understand the implications of the technology that Sidney is developing. As they put the pieces together, they realize that, in order to protect the lives they have, they have to act.

  Just as in the meeting on planetary protection, what emerges in The Man in the White Suit is a situation where everyone is shrewd enough to see how change supports or threatens what they value, and everyone fights to protect it. As a result, no one really wins. Sure, the factory owners and workers win a short reprieve against the march of innovation, and they get to keep things going as they were before. But all this does is rob them of the ability to adapt to inevitable change in ways that could benefit everyone. And, of course, Sidney suffers a humiliating defeat at the hands of those he naïvely thought he was helping.

  What the movie captures so well as it ends—and one of the reasons it’s in this book—is that there is nothing inherently bad about Sidney’s technology. On the contrary, it’s a breakthrough that could lead to tremendous benefits for many people, just like the nanotechnology it foreshadows. Rather, it’s the way that it’s handled that causes problems. As with every disruptive innovation, Sidney’s new textile threatened the status quo. Naturally, there were going to be hurdles to its successful development and use, and not being aware of those hurdles created risks that could otherwise have been avoided. Self-preservation and short-sightedness ended up dashing potential social and economic benefits against the rocks of the status quo. But things could have been very different. What if the main characters had been more aware of the broader picture; what if they had bothered to talk to others and find out about their concerns and aspirations; and what if they had collectively worked toward a way forward that benefited everyone? Admittedly, it would have led to a rather boring movie. But from the perspective of beneficial and responsible innovation, the future could have looked a whole lot brighter.

  It’s Good to Talk

  Not so long ago, at a meeting about AI, I had a conversation with a senior company executive about the potential downsides of the technology. He admitted that AI has some serious risks associated with it if we get it wrong, so much so that he was worried about the impact it would have if it got out of hand. Yet, when pushed, he shied away from any suggestion of talking with people who might be impacted by the technology. Why? Because he was afraid that misunderstandings resulting from such engagement would lead to a backlash against the technology, and as a result, place roadblocks in the way of its development that he felt society could ill afford. It was a perfect example of a “let’s not talk” approach to technological innovation, and one that, as Sidney Stratton discovered to his cost, rarely works.

  The irony here is that it’s the misunderstanding and miscommunication from not talking (or to be precise, not listening and engaging) that makes The Man in the White Suit a successful comedy. As the audience, we are privy to a whole slew of comedic misunderstandings and resulting farcical situations that could have been avoided if the characters had simply taken the time to sit down with each other. From the privileged position of our armchairs, this all makes perfect sense. But things are rarely so obvious in the real-world rough-and-tumble of technology innovation.

  To many technology developers, following a “let’s not talk” strategy makes quite a bit of sense on the surface. If we’re being honest, people do sometimes get the wrong end of the stick when it comes to new technologies. And there is a very real danger of consumers, policy makers, advocacy groups, journalists, and others creating barriers to technological progress through their speculations about potential future outcomes. That said, there are serious problems with this way of thinking. For one thing, it’s incredibly hard to keep things under wraps these days. The chances are that, unless you’re involved in military research or a long way from a marketable product, people are going to hear about what you are doing. And if you’re not engaging with them, they’ll form their own opinions about what your work means to them. As a result, staying quiet is an extremely high-risk strategy, especially as, once people start to talk about your tech, they’ll rapidly fill any information vacuum that exists, and not necessarily with stuff that makes sense.

  Perhaps just as importantly, keeping quiet may seem expedient, but it’s not always ethical. If an emerging technology has the potential to cause harm, or to disrupt lives and livelihoods, it’s relevant to everyone it potentially touches. In this case, as a developer, you probably shouldn’t have complete autonomy over deciding what you do, or the freedom to ignore those whom your products potentially affect. Irrespective of the potential hurdles to development (and profit) that are caused by engaging with stakeholders (meaning anyone who potentially stands to gain or lose by what you do), there’s a moral imperative to engage broadly when a technology has the potential to impact society significantly.

  On top of this, developers of new technologies rarely have the fullest possible insight into how to develop their technology beneficially and responsibly. All of us, it has to be said, have a bit of Sidney Stratton in us, and are liable to make bad judgment calls without realizing it. Often, the only way to overcome this is to engage with others who bring a different perspective and set of values to the table.
