Mindfuck


by Christopher Wylie


  Taking slightly exaggerated steps to silently glide over to my dresser, I grabbed my jeans and a T-shirt, lying in a heap on the floor. The shirt was a gift from the English designer Katharine Hamnett. Soft black cotton with bold white letters, it simply read, SECOND REFERENDUM NOW! If I wear anything today, it should be this T-shirt, I thought. I reached over into my drawer to pull out my phone, and once it regained signal, it began buzzing with messages.

  Oh shit, I thought. I turned back to see I had woken him up. Groaning into a pillow, he asked why I was up so early and I simply said because I want to go vote. He sat up and smirked, rolling his eyes, asking if today was like Christmas for people like me. I told him no, that I wanted to go early, before the party poll watchers show up and start tallying who is voting. I didn’t want to get into another fight with UKIP or Brexiteers. I have been called a traitor and pushed into the streets, but I did not want to be stopped from voting.

  It did not feel like Christmas, and it wasn’t exciting at all. It was a sad day, because I knew in my heart that I wasn’t going to be taking part in a real election—it was all part of a final performance before Britain was scheduled to leave the European Union. Despite the Electoral Commission’s ruling against Vote Leave, an ongoing National Crime Agency investigation, testimonies at Parliament, and a weeks-long exposé in The Guardian about the cover-up inside Downing Street, the government was nonetheless determined to exit the European Union with a mandate won through cheating and fraud.

  My postbox was filled with leaflets and literature. I was half expecting to receive something mad from Arron Banks or Leave.EU, like a Brexit leaflet rolled into a Russian vodka bottle, as they were so fond of trolling me and Guardian journalist Carole Cadwalladr. But no, it was just regular leaflets. Greens. Lib Dems. UKIP. Nothing from the Tories or Labour, for some reason. I opened up the Lib Dem one and I thought about what data they were using now and whether they had targeted me with a message. It didn’t look like it. It was just another crap leaflet.

  I looked up at the security camera watching me in the lobby and left. I set out, walking through a couple of streets in my neighborhood. Old Georgian row houses interspersed with the occasional block of flats. It was extremely bright and sunny. The morning air was fresh and invigorating. I turned onto a high street, where the shops were not yet open, save for a local coffee shop. I walked in and ordered a coffee with a splash of soy milk. As I waited, I looked at everyone in the café, standing and looking at their phones, all scrolling, following and engaging with content. I stood beside them, but they were all off in their own digital worlds. To be honest, I used to do the same thing before my ban. But without social media, aside from a Twitter account I barely use, I have found myself scrolling less, posting less, and taking fewer photos of things. I no longer spend hours being alone together with other people through my screen. I may live outside these digital worlds, but at least I have come to be more present in this world. After grabbing my coffee, I left and walked down a tree-lined street before reaching the community center. Tied to the trees were large white placards with black letters that read POLLING STATION. I kept my distance and peered around, but no one from any of the parties was loitering outside yet. So I walked inside and followed the signs down a corridor and into a simple, unadorned room scattered with cardboard voting booths and tiny pencils without erasers.

  The polling station clerk looked at me and asked for my name. She flipped through the paper list and took a pencil to cross it out. That was it—no IDs, no electronics. She handed me what seemed like a meter-long ballot for the election of London’s delegation of members of the European Parliament. The paper was only slightly thicker than newspaper, but as I held it, I thought about how physical the act of voting seems, and yet so much sophisticated activity online leads up to this simple act of crossing an X on a thin piece of paper. I dropped the ballot into the ballot box and hoped it would not be the last time.

  EPILOGUE

  -

  ON REGULATION: A NOTE TO LEGISLATORS

  If we are to prevent another Cambridge Analytica from attacking our civil institutions, we must seek to address the flawed environment in which it incubated. For too long the congresses and parliaments of the world have fallen for a mistaken view that somehow “the law cannot keep up with technology.” The technology sector loves to parrot this idea, as it tends to make legislators feel too stupid or out of touch to challenge their power. But the law can keep up with technology, just as it has with medicines, civil engineering, food standards, energy, and countless other highly technical fields. Legislators do not need to understand the chemistry of molecular isomers inside a new cancer drug to create effective drug review processes, nor do they need to know about the conductivity of copper in high-voltage wiring to create effective insulation safety standards. We do not expect our legislators to have expert technical knowledge in any other sector because we devolve technical oversight responsibility to regulators. Regulation works because we trust people who know better than we do to investigate industries and innovations as the guardians of public safety. “Regulation” may be one of the least sexy words, evoking an image of faceless jobsworths with their treasured checklists, and we will always argue about the details of their imperfect rules, but nonetheless safety regulation generally works. When you buy food in the grocery store or visit your doctor or step onto an airplane and hurtle thousands of feet in the air, do you feel safe? Most would say yes. Do you ever feel like you need to think about the chemistry or engineering of any of it? Probably not.

  Tech companies should not be allowed to move fast and break things. Roads have speed limits for a reason: to slow things down for the safety of people. A pharmaceutical lab or an aerospace company cannot bring new innovations to market without first passing safety and efficacy standards, so why should digital systems be released without any scrutiny? Why should we allow Big Tech to conduct scaled human experiments, only to realize too late that they have become too big a problem to manage? We have seen radicalization, mass shootings, ethnic cleansing, eating disorders, changes in sleep patterns, and scaled assaults on our democracy, all directly influenced by social media. These may be intangible ecosystems, but the harms are not intangible for victims.

  Scale is the elephant in the room. When Silicon Valley executives excuse themselves and say their platform’s scale is so big that it’s really hard to prevent mass shootings from being broadcast or ethnic cleansing from being incited on their platforms, this is not an excuse—they are implicitly acknowledging that what they have created is too big for them to manage on their own. And yet, they also implicitly believe that their right to profit from these systems outweighs the social costs others bear. So when companies like Facebook say, “We have heard feedback that we must do more,” as they did when their platform was used to live-broadcast mass shootings in New Zealand, we should ask them a question: If these problems are too big for you to solve on the fly, why should you be allowed to release untested products before you understand their potential consequences for society?

  We need new rules to help create a healthy friction on the Internet, like speed bumps, to ensure safety in new technologies and ecosystems. I am not an expert on regulation, nor do I profess to know all the answers, so do not take these words as gospel. This should be a conversation that the wider community takes part in. But I would like to offer some ideas for consideration—at the very least to provoke thought. Some of these ideas may work, others may not, but we have got to start thinking about this hard problem. Technology is powerful, and it has the potential to lift up humanity in so many ways. But that power needs to be focused on constructive endeavors. With that, here are some ideas to help you consider how to move forward:

  1. A BUILDING CODE FOR THE INTERNET

  The history of building codes stretches back to the year 64 C.E., when Nero restricted housing height, street width, and public water supplies after a devastating fire ravaged Rome for nine days. Though a fire in 1631 prompted Boston to ban wooden chimneys and thatched roofs, the first modern building code emerged out of the devastating carnage of the Great Fire of London, in 1666. As in Boston, London houses had been densely constructed from timber and thatch, which allowed the fire to spread rapidly over four days. It destroyed 13,200 homes, eighty-four churches, and nearly all of the city’s government buildings. Afterward, King Charles II declared that no one shall “erect any House or Building, great or small, but of Brick, or Stone.” His declaration also widened thoroughfares to stop future fires from spreading from one side of the street to the other. After other historic fires in the nineteenth century, many cities followed suit, and eventually public surveyors were tasked with inspections and ensuring that the construction of private property was safe for the inhabitants and for the public at large. New rules emerged, and eventually the notion of public safety became an overarching principle that could override unsafe or unproven building designs, regardless of the desires of property owners or even the consent of inhabitants. A platform like Facebook has been burning for years with its own disasters—Cambridge Analytica, Russian interference, Myanmar’s ethnic cleansing, New Zealand’s mass shootings—and, as with the reforms after the Great Fire, we must begin to look beyond policy, to the underlying architectural issues that threaten our social harmony and citizens’ well-being.

  The Internet contains countless types of architectures that people interact with on a daily and sometimes hourly basis. And as we merge the digital world with the physical world, these digital architectures will impact our lives more and more. Privacy is a fundamental human right and should be valued as such. However, too often privacy is eviscerated through the bare performance of clicking “accept” to an indecipherable set of terms and conditions. This consent-washing has continually allowed large tech platforms to defend their manipulative practices through the disingenuous language of “consumer choice.” It shifts our frame of thinking away from the design—and the designers—of these flawed architectures, and toward an unhelpful focus on the activity of a user who has neither understanding of nor control over the system’s design. We do not let people “opt in” to buildings that have faulty wiring or lack fire exits. That would be unsafe—and no terms and conditions pasted on a door would let any architect get away with building dangerous spaces. Why should the engineers and architects of software and online platforms be any different?

  In this light, consent should not be the sole basis of a platform’s ability to operate a feature that engages the fundamental rights of users. Following the Canadian and European approach of treating privacy as an engineering and design issue—a framework called “privacy by design”—we should extend this principle to create an entire engineering code: a building code for the Internet. Such a code would go beyond privacy to include respect for the agency and integrity of end users. It would establish a new principle—agency by design—requiring that platforms use choice-enhancing design. This principle would also ban dark patterns: design choices that deliberately confuse, deceive, or manipulate users into agreeing to a feature or behaving in a certain way. Agency by design would further require proportionality of effects, wherein the effect of the technology on the user is proportional to its purpose and benefit to the user. In other words, there would be a prohibition on undue influence in platform design wherever the effects are enduring and disproportionate, such as addictive designs or consequent harms to mental health.
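  To make the distinction between a dark pattern and choice-enhancing design concrete, here is a minimal hypothetical sketch; the dialog structure, field names, and wording are illustrative inventions rather than anything drawn from the book or from a real platform.

```python
from dataclasses import dataclass


@dataclass
class ConsentDialog:
    """A toy model of a consent prompt shown to a user (hypothetical)."""
    prompt: str
    accept_label: str
    decline_label: str
    default_opted_in: bool  # whether the box is pre-ticked before the user acts


# Dark pattern: the box is pre-ticked and the decline option is worded to
# shame the user, so the path of least resistance is to hand over the data.
dark_pattern = ConsentDialog(
    prompt="Share your contacts to improve your experience?",
    accept_label="Yes, improve my experience",
    decline_label="No, I don't care about my friends",
    default_opted_in=True,
)

# Agency by design: neutral wording, symmetric options, and no pre-selection,
# so whatever the user picks reflects an actual choice rather than a default.
choice_enhancing = ConsentDialog(
    prompt="Do you want to share your contacts?",
    accept_label="Share contacts",
    decline_label="Don't share",
    default_opted_in=False,
)
```

  Under an agency-by-design rule of the kind described above, the first dialog’s pre-ticked default and guilt-laden decline wording are exactly the sort of design choices a regulator could prohibit, regardless of whether a user eventually clicked “accept.”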

  As with traditional building codes, the harm avoidance principle would be a central feature of such a digital building code. It would require platforms and applications to conduct abusability audits and safety testing prior to releasing or scaling a product or feature. The burden would rest with tech companies to prove that their products are safe for scaled use by the public. As such, using the public in live scaled experiments with untested new features would be prohibited, and citizens could no longer be used as guinea pigs. This would help prevent cases like Myanmar, where there was no prior consideration by Facebook of how features could ignite violence in regions of ethnic conflict.

  2. A CODE OF ETHICS FOR SOFTWARE ENGINEERS

  If your child were lost and needed help, whom would you want them to turn to? Perhaps a doctor? Or maybe a teacher? What about a cryptocurrency trader or gaming app developer? Our society esteems certain professions with a trustworthy status—doctors, lawyers, nurses, teachers, architects, and the like—in large part because their work requires them to follow codes of ethics and laws that govern safety. The special place these professions have in our society means that we demand a higher standard of professional conduct and duties of care. As a result, statutory bodies in many countries regulate and enforce the ethical conduct of these professions. For society to function, we must be able to trust that our doctors and lawyers will always act in our interests, and that the bridges and buildings we use every day have been constructed to code and with competence. In these regulated professions, unethical behavior can bring dire consequences for those who transgress boundaries set by the profession—ranging from fines and public shaming to temporary suspensions or even permanent bans for more egregious offenders.

  Software, AI, and digital ecosystems now permeate our lives, and yet those who make the devices and programs we use every single day are not obligated by any federal statute or enforceable code to give due consideration to the ethical impacts on users or on society at large. As a profession, software engineering has a serious ethics problem that needs to be addressed. Tech companies do not magically create problematic or dangerous platforms out of thin air—there are people inside these companies who build these technologies. But there is an obvious problem: Software engineers and data scientists do not have any skin in the game. If an engineer’s employer instructs her or him to create systems that are manipulative, ethically dubious, or recklessly implemented, without consideration for user safety, there is no requirement to refuse. Currently, an engineer who refuses to act unethically risks repercussions or termination. Even if the unethical design is later found to run afoul of a regulation, the company can absorb liability and pay fines, and there are no professional consequences for the engineers who built the technology, as there would be in the case of a doctor or lawyer who commits a serious breach of professional ethics. This is a perverse incentive that does not exist in other professions. If an employer asked a lawyer or nurse to act unethically, they would be obligated to refuse or face losing their professional license. In other words, they have skin in the game to challenge their employer.

  If we as software engineers and data scientists are to call ourselves professionals worthy of the esteem and high salaries we command, there must be a corresponding duty for us to act ethically. Regulations on tech companies will not be nearly as effective as they could be if we do not start by putting skin in the game for people inside these companies. We need to put the onus on engineers to start giving a damn about what they build. An afternoon employee workshop or a semester course on ethics is a wholly insufficient solution for addressing the problems we now face from emerging technologies. We cannot continue down the path we are on, in which technological paternalism and the insulated bro-topias of Silicon Valley create a breed of dangerous masters who do not consider the harm that their work has the potential to inflict.

  We need a professional code that is backed by a statutory body, as is the case with civil engineers and architects in many jurisdictions, where there are actual consequences for software engineers or data scientists who use their talents and know-how to construct dangerous, manipulative, or otherwise unethical technologies. This code should not have loose aspirational language; rather, what is acceptable and unacceptable should be articulated in a clear, specific, and definitive way. There should be a requirement to respect the autonomy of users, to identify and document risks, and to subject code to scrutiny and review. Such a code should also include a requirement to consider the impact of their work on vulnerable populations, including any disproportionate impact on users of different races, genders, abilities, sexualities, or other protected groups. And if, upon due consideration, an employer’s request to build a feature is deemed to be unethical by the engineer, there should be a duty to refuse and a duty to report, where failure to do so would result in serious professional consequences. Those who refuse and report must also be protected by law from retaliation from their employer.

  Out of all the possible types of regulation, a statutory code for software engineers is probably what would prevent the most harm, as it would force the builders themselves to consider their work before anything is released to the public and not shirk moral responsibility by simply following orders. Technology often reflects an embodiment of our values, so instilling a culture of ethics is vital if we as a society are to increasingly depend on the creations of software engineers. If held properly accountable, software engineers could become our best line of defense against the future abuses of technology. And, as software engineers, we should all aspire to earn the public’s trust in our work as we build the new architectures of our societies.

  3. INTERNET UTILITIES AND THE PUBLIC INTEREST

  Utilities are traditionally physical networks said to be “affected with the public interest.” Their existence is unique in the marketplace in that their infrastructure is so elemental to the functioning of commerce and society that we allow them to operate differently from typical companies. Utilities are often by necessity a form of natural monopoly. In a marketplace, balanced competition typically results in innovation, better quality, and reduced prices for consumers. But in certain sectors, such as energy, water, or roads, it makes no sense to build competing power lines, pipelines, or subways to the same places, as it would result in massive redundancy and increased costs borne by consumers. With the increased efficiencies of a single supplier of a service comes the risk of undue influence and power—consumers unable to switch to new power lines, pipelines, or subways could be held hostage by unscrupulous firms.

 
