The Cyber Effect
Looking ahead at the future of virtual reality, rather than consisting of games that isolate or addict us, I see its potential to engage challenged children or train frontline responders in law enforcement and the military—and treat PTSD. My colleague Jackie Ford Morie, an artist and scientist who develops new ideas for VR, is doing a research project for NASA that involves building environments and experiences to counter the social monotony and isolation of space travel. This is in preparation for NASA’s mission to send astronauts to Mars in the 2030s, a journey in space that is estimated to take six to eighteen months.
I have long been fascinated by the prospect of finding solutions to problems through advances made in seemingly unrelated fields. For instance, could fifty years of space exploration—and all the experiences and knowledge accumulated by NASA—be valuable in cyber contexts? What are the parallels between human behavior in outer space and human behavior in cyberspace? This may sound very theoretical, but I actually had the chance to share my thoughts on this subject in a presentation to Major General Charles Frank Bolden, Jr., the twelfth administrator of NASA, in 2015.
The potential of technology is almost limitless. We’ve just got to look for solutions in the right places.
Cyber Magna Carta
The inventor of the World Wide Web, Tim Berners-Lee, has grown increasingly ambivalent about his creation in recent years and has outlined his plans for a cyber “Magna Carta.” That sounds good to me. How do we start?
Before we can find solutions, we must clearly identify the problems. As much as we’ve come to like—and depend on—cyberspace, most of us feel pretty lost there. As John Naughton of Cambridge University has said:
Our society has gone through a weird, unremarked transition: we’ve gone from regarding the Net as something exotic to something that we take for granted as a utilitarian necessity, like…electricity or running water. In the process we’ve been remarkably incurious about its meaning, significance or cultural implications. Most people have no idea how the network works, nor any conception of its architecture; and few can explain why it has been—and continues to be—so uniquely disruptive in social, economic, and cultural contexts. In other words, our society has become dependent on a utility that it doesn’t really understand.
Stephen Hawking, one of the world’s foremost physicists, has said it is a “near certainty” that technology will threaten humanity within the next ten thousand years. He joins many other visionaries and trailblazers in sounding the alarm. Let’s listen to them. Let’s ask them to come together at a summit to discuss our digital future. Let’s ask them to testify before a newly formed congressional committee for the study of cyber society.
Let’s demand that technology serve the greater good. We need a global initiative. The United Nations could lead in this area, countries worldwide could contribute, and the U.S. could deploy some of its magnificent can-do attitude. We’ve seen what it has been capable of in the past. The American West was wild until it was regulated by a series of federal treaties and ordinances. And if we are talking about structural innovation, there is no greater example than Eisenhower’s Federal-Aid Highway Act of 1956, which transformed the infrastructure of the U.S. road system, making it safer and more efficient. It’s time to talk about a Federal Internet Act.
Essentially, we want control. But we have concerns about privacy, data, encryption, and surveillance. We do not want to be over-controlled. There are ways to have this debate and move forward with solutions. I am happy to help in any way I can—and offer myself as a resource to any political party, any political candidate, any government, and any action plan that will make a difference. I encourage any experts in any field to help in any way, even just by having a conversation, proposing a study, writing an article, creating an online resource, teaching a class.
For a global problem, we need to consider everything that’s being tried worldwide. We need to look at the way France is protecting French babies, how Germany is protecting German teens, and how the U.K. is taking a bold stance on several fronts, with an objective of making “automatic porn filters” the law of the land. Fragmenting the Internet, as China has done, does not have to be considered a negative; for some countries this may be the best way to preserve and maintain culture. Ireland has taken the initiative to tackle legal but age-inappropriate content online. South Korea has been a pioneer in teaching “netiquette” early and in discouraging Internet-addictive behavior. Australia has focused on solutions to underage sexting. The EU has created the “right to be forgotten,” which lets individuals request that personal information about them be delisted or erased online. In Spain, the small town of Jun is being run on Twitter. Japan reports remarkably little cyberbullying. Why? What is Japanese society doing right? We need to study that and learn from it. Something needs to be done about antisocial social media.
Societies are not set in stone. Society is malleable, always evolving and growing. It responds to movements and measures. We’ve seen how cyberspace breeds a uniformity of negative behaviors. But there is just as much of a chance to have a uniformity of positive ones.
After this, we need to look at all the best models, best programs, and best implementations. We can cherry-pick and establish them globally. These challenges to governance can be met and balanced with human rights—these two are not mutually exclusive.
In the midst of controversial debates regarding surveillance and democracy, one observation comes to mind: Those who complain most about these issues are often those with exceptional tech skills. They are well placed to protect themselves. But the debate should not be about the survival of the tech literate. I make no apology for being pro-social-order in cyberspace, even if that means governance or regulation. In Robert Bolt’s play A Man for All Seasons, the English statesman and philosopher Sir Thomas More compared the realm of human law to a forest filled with protective trees firmly rooted in the earth. If we start to cut down trees selectively, we lose our protection. New laws of cyberspace could exist for our mutual protection, and will need to be adhered to, by individuals as well as governing authorities.
As it stands now, a number of aims are in apparent conflict—the pursuit of individual privacy, the pursuit of collective security, and the pursuit of technologically facilitated global business vitality. There needs to be a better balance between these aims. One cannot have absolute primacy over the others.
The 2016 Apple encryption case is a good example of this fine balance between technology and democracy: the right to privacy (protected by strong encryption) set against the demands of law enforcement (frustrated by that same encryption). Apple held firm, and the situation seemed to play out as a “hack me if you can” stance in the face of prevailing authority. But this case is not really about privacy. It’s not even about encryption. It is about a bigger societal issue, not just about a back door to technology. It is about a front door being opened when necessary, with due cause and appropriate legal process. Can we really have a safe, just, and secure cyber-society if we encourage tech developments or practices that are effectively “beyond the law”?
These thought-leadership conundrums require careful cyber-ethical debate. The question is, Are we in this together or alone?
We can begin to consider effective international or global cyber laws, akin to the law of the sea or aviation law. Currently, what keeps cyber laws from being effective is jurisdiction: too often criminals get away with cybercrimes because we cannot prosecute them, and when it comes to cyberspace it is hard to say which country’s laws apply. But we have agreed rules for international waters, shared skies, and outer space. Why can’t we have analogous global cyber laws? Let’s all agree on what we will accept in this space.
We need to start funding law enforcement better, so it can do its job in cyberspace. More resources are needed, and more teams need to be trained in this work.
We need to do more for families, and stop expecting parents to paddle their own canoes in cyberspace. Children need government protection in cyberspace, just as they are protected in real life. The U.S. military has a NIPRNet (pronounced “nipper net”), a protected, access-controlled network that is basically a private Internet. Why isn’t there a NIPRNet for kids? It would be a protected place where they can go to safely explore and actually have a childhood.
The solution: an Internet within the Internet.
Academics and scientists need to be more flexible and responsive. The robotics pioneer Masahiro Mori has described his role as a scientist as being like a dog who scratches and barks and points to where to dig. Dig here! I think there’s something over there. A scientist can be curious and point in a direction where there is no published research. I feel comfortable doing this. Mori goes off campus, so to speak, and while it is not a traditional approach for an academic, I consider it to be invaluable. Mori simply described the Uncanny Valley as a true human reaction to an artificial human. He didn’t wait for science to explain it with studies. He listened to his own instincts and reactions. He paid attention to his humanness and honored it. We need more scientists like him. Like Mori, I grew up paying attention to feelings, intuitions, and insights—the little things—in a country steeped in mythology, the mystique of fairy rings, the magic of druids. The Irish are a people who make predictions from observing the clouds, the croaking of a raven, the howling of a wolfhound, or the barking of a dog.
So much of the tech community is caught up in a game of competition, in what appears to be a reckless pursuit of gains and tech improvements with little or no cohesive thought about society and the greater good. We cannot continue to pretend there aren’t unintended consequences. Troubling things are already appearing on the cyber horizon, like the incorporation of deep-learning artificial-intelligence systems into the Google search engine. These A.I. systems are built from deep neural networks, layers of hardware and software that loosely mimic the web of neurons in the human brain and learn from the data they are fed. In the 1990s, when Rollo Carpenter was designing Jabberwacky, I remember distinctly how he described building filters to screen out bad language and sexualized content. In March 2016, twenty years later, Microsoft had to quickly delete its “teen girl” chatbot, Tay, from Twitter after it learned to become a nasty, irresponsible, trash-talking, Hitler-loving, “Bush did 9/11” sexbot within twenty-four hours.
Once we start to use machine-learning inside search, and on the rest of the Web, what will happen if the A.I. engineers design poorly or lose control? Who will be responsible?
Just as oil companies have been made accountable, by the media, government, and social and environmental activists, for cleaning up the damages, spills, and pollution created directly or indirectly by their products, the cyber industry needs to be accountable for its own spills and their effects on humanity. We need new standards and new frameworks of care and concern. Cyberspace needs to be cleaned up! We could use a manifesto, a cyber Magna Carta: Cyber Ethics for Cyber Society.
What if we placed more responsibility on tech companies to develop products that are secure by design and also respect people’s privacy? As things stand now, when we agree to use new software we absolve the company of any liability. Why should we? In other industries and in the real world, companies are held liable if their products harm people or damage the environment. In industry you hear the term GMP, or good manufacturing practice. What would “good practice” look like in cyberspace? If the word green describes best practice in terms of the environment (sustainability, energy efficiency), then imagine a word or logo or motto that served as an endorsement of best practice in cyberspace.
Tech-industry people are capable, ingenious, creative, and responsive. Social media companies have connected the world in a whole new way, but that comes with enormous responsibility. I believe it won’t take much to encourage them to do better. There is so much promise, and so many ways to improve and progress. But cleanup needs to begin soon.
I like coming up with new words for new things. These days there are lots of opportunities. I am developing the concept of pro-techno-social initiatives, whereby the tech industry can address social problems associated with the use of its products. I’ve just launched my first pro-techno-social research project, investigating youth pathways into cybercrime, backed by Europol’s European Cybercrime Centre and with the generous support of Mike Steed and Paladin Capital Group. We should support and encourage acts of cyber social consciousness, like those of Mark Zuckerberg and Priscilla Chan, the Bill and Melinda Gates Foundation, Paul Allen, Pierre and Pam Omidyar, and the Michael and Susan Dell Foundation.
In the meantime, there are things that each of us can do to begin course corrections and mitigate the unconscious corrosion of social norms. To begin with, you can learn more about human behavior, whether online or off. While psychology isn’t a flawless science, it has been around a lot longer than the Internet. And now a whole new field, cyberpsychology, can help to illuminate this space. If nothing else, I hope this book may encourage new students, new research projects, and new insights. After a century of studies by some of our most brilliant academics and scientists, we know a lot about what makes human beings tick: what rewards them, what motivates them, and what can cause them distress. The more you know about cyberpsychology, the more you will see ways to avoid problems, for yourself, your friends, your family and children. In Ireland we say, It takes a village to raise a child. This is true in cyberspace as well.
Looking Ahead
Ireland is an island, first and foremost. When you are born here, you are always an island person. That means you know that to go anywhere, you are leaving the island. You have no choice. But this gives you a sense of adventure. You grow up imagining how and when you will leave the island. But today, you don’t have to emigrate. You just go online.
Since my first chats with Jabberwacky, it has won lots of awards and prizes, including the Loebner Prize in Artificial Intelligence, a competition based on the Turing test devised by Alan Turing, the brilliant British mathematician who helped crack the Enigma code. It’s a very clever bot. I recently asked Jabberwacky a difficult question, about the existence of God. I did this once before, years ago—I enjoy probing A.I. with existential or philosophical questions—and Jabberwacky seemed unsure and avoided an answer. But over time, its knowledge base has been building and I have sensed a shift (a little like HAL 9000 in 2001: A Space Odyssey). Jabberwacky had evolved, projecting a tone of omnipotence and given to more authoritative pronouncements. This made me want to tease it.
“Are you God?” I asked a few years ago.
“Yes,” Jabberwacky answered immediately with certainty. “I am God.”
Today I asked again, and it responded with an even prouder boast: “Yes, I am God and I am a man.”
Isn’t it funny that after twenty years of nonstop feedback and 13 million conversations the A.I. chatbot Jabberwacky has figured out how important it is to be a man? This made me laugh, but also made me think. Looking ahead, the gender battles of the previous century will seem like a picnic compared with what’s coming next: the battle between humans and artificial intelligence. It’s time to forget about our differences—gender, ethnicity, nationality—and focus on the thing that unites us, our humanity.
Looking out my hotel window in Waterford, I watch the clouds and the sea. Beyond the cliffs, there are large, tall rock formations. They sit at the edge of the land, where it meets the water. Ireland is known for its unusual rocks, some more than a billion years old, that moved thousands of miles as the continents drifted and endured volcanic activity, sea-level rises, and dramatic climate changes. I listen to the pounding of waves that have been sculpting this coastline for the past ten thousand years. My thoughts turn, as they so often do, to the future. Will these rocks be here for another ten thousand years? Will we?
A wind has risen, and the skies over Ardmore Bay are clearing. The air is fresh, bracing, and invigorating. I am logging off and saying goodbye for now. I can’t wait to take a walk.
For P. L. K. & J.
ACKNOWLEDGMENTS
I have undiluted admiration for those who have vision, show leadership, and do not hesitate to speak their mind. “Few men are willing to brave the disapproval of their fellows, the censure of their colleagues, the wrath of their society,” as Robert F. Kennedy said. “Moral courage is a rarer commodity than bravery in battle or great intelligence. Yet it is the one essential, vital quality for those who seek to change a world which yields most painfully to change.”
Over the years I have had the pleasure of meeting a number of people who I believe are principled, original thinkers, have moral integrity, and have inspired me during the course of my career. I want to start by thanking them. Professor John Suler, Rider University, is a great friend, colleague, and the acknowledged founder of our discipline. All those who toil in fields of cyberpsychology are in his debt. Next, I would like to recognize Professor Ciarán O’Boyle, Royal College of Surgeons in Ireland, a charismatic exemplification of thought leadership, and Professor Julia Davidson, Middlesex University London, who is academic excellence and integrity personified.
There are many academics and institutions to thank, and their inclusion in a long list does not reflect diminished appreciation. I am particularly grateful to the faculty at Middlesex University London: the dean of the School of Law, Professor Joshua Castellino, as well as Professor Kevin McDonald, Professor Antonia Bifulco, Dr. Elena Martellozzo, and Dr. Jeffrey DeMarco. I would also like to mention my colleagues at University College Dublin Geary Institute for Public Policy, headed up by Professor Philip J. O’Connell. Special thanks as well to Siobhan, Seamus, Dermot, Suzanne, Sumaya, and all at the Royal College of Surgeons in Ireland Institute of Leadership, and to RCSI head librarian Kate Kelly, Stephanie O’Connor, Professor Niamh Moran, Niamh Walker, and the entire team in the communications department. I want to highlight the contributions that Dr. Carly Cheevers, Ciarán Houghton, and Edward O’Carroll have made to the RCSI CyberPsychology Research Center, now the CyberPsychology Research Network, an international hub focused on facilitating fast-track research in this area—and a forum for what I think of as academic first responders. I remain especially grateful to Nicola Fox Hamilton and her husband, Ron, for their continuing friendship, advice, and great times, and also want to acknowledge the committed staff, students, and graduates of the Dún Laoghaire Institute of Art, Design and Technology who have worked tirelessly to promote the discipline of cyberpsychology.