Non-Obvious 2019- How To Predict Trends and Win The Future

by Rohit Bhargava


  Note: This trend was originally published in the 2017 Non-Obvious Trend Report, and is included in this new trend report with revised examples and new insights.

  07

  Artificial Influence

  What’s the Trend?

  Creators, corporations, and governments use virtual creations to shift public perception, sell products, and even make fantasy into reality.

  Sixteen-year-old pop music superstar Hatsune Miku got her first big break as the face of voice synthesizer software developed by the Japanese company Crypton Future Media. The software allows anyone to write original songs, which Miku then brings to life by singing them. As music creators started generating music and sharing it on YouTube, Miku’s performances went viral. Videos featuring her crossed the 100-million-views mark. The Japanese sensation was even invited to open for Lady Gaga, and has launched her own (mostly sold-out) series of live events known as the Hatsune Miku Expo in cities around the world.

  Yet her rapid, crowdsourced rise to fame isn’t the most interesting thing about Miku. What really sets her apart is that she isn’t real.

  Miku is an anime character designed by manga artist Kei Garo at the request of Crypton Future Media. Through a Creative Commons license, anyone can use her voice and image to create music, as long as they don’t make money on their creations. Since she was first developed as a vocaloid, over 170,000 songs have been created for Miku. She’s earned millions of fans who share posts about her on social media, follow her from concert to concert, and even start fan clubs dedicated to her.

  More importantly, she’s become a huge franchise for Crypton Future Media. Since her popularity soared, the company has brokered deals for Miku to co-brand or endorse products like the Toyota Corolla, and even to become a spokesperson for the city of Sapporo, where she was “born.”

  Over the past two years, there has been a dramatic increase in the prevalence of these sorts of “digital celebrities” and avatars like Miku.

  Another example is Shudu, a striking, dark-skinned digital supermodel created by London-based fashion photographer Cameron-James Wilson. Explaining why he chose to create her, Wilson said, “I am just a creative person, and to me, she is what the most beautiful woman in the world would look like.” After a photo of Shudu wearing Fenty Beauty’s lipstick went viral, Wilson reportedly received offers to partner with brands, which quickly made her more than a piece of digital art.

  As these virtual celebrities gain the trust and adoration of millions of followers, they are also becoming important influencers. Most promise that they will only promote brands they authentically like and believe in, though of course, that is fairly meaningless when each of their “opinions” is programmed by their creators.

  “We live in such a filtered world now, where real is becoming fake. I wanted to create something that is fantasy toward becoming more real.”[26] — Cameron-James Wilson, creator of Shudu

  Virtual celebrities are only one manifestation of how some technologies are blurring the line between what’s real and what’s fake around us. Their prominence is raising big questions about the nature of influence itself, and whom to believe. They are also at the heart of an emerging trend we call Artificial Influence.

  Holographic Celebrities

  When thousands of people are already paying to see concerts of a completely fabricated digital personality like Miku, it’s easy to imagine they might also be willing to pay to see a holographically resurrected one.

  In October of 2018, the father of the late singer Amy Winehouse announced a concert tour to be launched in 2019 featuring a hologram of his daughter onstage. A digital avatar of Winehouse, who died in 2011 at the age of 27, will “perform” some of her most popular songs, accompanied by a live band, real backup singers, and lots of special effects.

  According to her father, “fans have been clamoring for something new from Amy, but really, there isn’t anything new.”[27] With Base Entertainment, the company producing the virtual simulation, he is hoping the avatar will satisfy her fans’ desire to continue connecting with her and her music.

  Other experiments with holographic celebrities are reinforcing the power of Artificial Influence, as well. In recent years, these have included appearances by deceased rapper Tupac Shakur at Coachella, Michael Jackson at the Billboard Music Awards, and others. In another sign of the optimism around this idea, the L.A.-based firm Hologram USA is even trying to create the world’s first hologram theater right in the heart of Hollywood, where founder Alki David plans to bring back artists like Whitney Houston and Buddy Holly, and comedians like Andy Kaufman and Redd Foxx. Soon, the same technology that brings dead celebrities back to life could also be used to allow simultaneous appearances by living artists, letting them perform in two (or more!) locations at once.

  As more computers act and perform like humans thanks to artificial intelligence (the other AI!), they will increasingly blur the lines between what’s real and what isn’t, influencing us to do unexpected things—like pay for the chance to see holographic versions of our dead idols perform one more time.

  Deepfakes and a Gullible Audience

  This past year, MIT Technology Review named GANs (generative adversarial networks) to its prestigious annual list of “10 Breakthrough Technologies.” Despite the complex name, the idea behind a GAN is actually fairly simple.

  We already have computers that can look at a photo and determine if it’s fake by comparing it to a huge database of real images. We also have computers that can generate new images based upon that same dataset of real images. What would happen if the computer generating the new images (known as the generator) could continue creating and refining images based upon the fake-or-real assessment of the other computer (known as the discriminator)? Eventually, the generator would come up with an image that the discriminator would rate as real.
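The adversarial loop described above can be sketched in code. The toy example below is not from the book: it shrinks both "networks" down to single-parameter models (a generator that learns just one number, and a logistic discriminator) so that the generator-versus-discriminator tug-of-war is visible in a few dozen lines. All names and values here are illustrative assumptions.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: numbers clustered around 4.0.
REAL_MEAN, REAL_STD = 4.0, 0.5

def sample_real():
    return random.gauss(REAL_MEAN, REAL_STD)

# Generator: turns noise z into a sample, x = mu + REAL_STD * z.
# Its only learnable parameter is mu, which starts far from the real mean.
mu = 0.0

# Discriminator: D(x) = sigmoid(a*x + b), its estimate of P(x is real).
a, b = 0.1, 0.0

LR_D, LR_G = 0.05, 0.05  # learning rates for discriminator and generator

for step in range(3000):
    z = random.gauss(0, 1)
    fake = mu + REAL_STD * z
    real = sample_real()

    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    d_real = sigmoid(a * real + b)
    d_fake = sigmoid(a * fake + b)
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. a and b.
    grad_a = -(1 - d_real) * real + d_fake * fake
    grad_b = -(1 - d_real) + d_fake
    a -= LR_D * grad_a
    b -= LR_D * grad_b

    # --- Generator update: push D(fake) toward 1 (i.e., fool the judge) ---
    d_fake = sigmoid(a * fake + b)
    # Gradient of -log D(fake) w.r.t. mu, via the chain rule through x.
    grad_mu = -(1 - d_fake) * a
    mu -= LR_G * grad_mu

print(f"generator mean after training: {mu:.2f} (real mean is {REAL_MEAN})")
```

After training, the generator's output distribution should have drifted toward the real data, at which point the discriminator can do no better than guessing. Real GANs play exactly this game, but with deep neural networks over images instead of one-number models.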

  The invention of the GAN was incredibly important in the field of AI. Some even described it as the closest we’ve come to finally giving machines an imagination.[28] Unfortunately, the rise of this technology is also offering an opportunity to more evil-minded players.

  The term “deepfakes” is typically used to describe any doctored video or photo in which a person’s face is superimposed on top of someone else’s body to present a misleading scene. Perhaps fittingly, there are two industries in which you will find “deepfakes” most frequently: pornography and politics.

  Around the end of 2017, a number of sexually-explicit videos featuring Hollywood stars began appearing online. With the help of AI technology, hordes of amateur coders had managed to swap the faces of these stars with those of performers in X-rated scenes. Most video hosting platforms have already banned these clearly unethical videos. And, by and large, deepfakes are low-quality and easy to spot.

  Of course, the adult film industry is already thinking about how to leverage this technology in legal ways to add features their customers might want. Leading adult entertainment company Naughty America, for example, recently announced they would offer a service giving customers the ability to insert themselves (so to speak) into scenes with their favorite actors.

  Outside of the adult industry, the potential to deploy deepfakes to influence political outcomes is far scarier and more problematic on a global scale. In 2016, deepfakes showing Filipina presidential candidate Leila de Lima in a series of compromising sexual situations were widely blamed for her loss to then-Davao City Mayor (and current President of the Philippines) Rodrigo Duterte. Many have argued that these deepfakes influenced voters by eroding their confidence in her.[29]

  Many also believe that the ease with which the masses were manipulated by deepfakes was partly due to the relative dominance of Facebook in the country. Back in 2013, the social media platform launched Free Facebook in the Philippines, offering free access to the Internet. Almost immediately, for many Filipinos, Facebook became the Internet. The site offered a conduit to filtered news and limited access to the wider world. It also made it relatively easy for the same fake stories to spread quickly through the amplified echo chamber that had been created.

  When your entire mainstream media is squeezed through one social media platform, Artificial Influence–such as content created by deepfakes–becomes inescapable, and lies become truth.

  In addition to the digital platforms that enable Artificial Influence to spread, there’s another growing factor that has a lot to do with its rise: the real-time nature of news, and the individuals who publish stories crafted to incite a reaction. We wrote about this trend, which we called Manipulated Outrage, in depth last year. At the time, we explained that these “news” creators have ulterior motives in sharing particular stories. Many players who engage in Artificial Influence have similar self-serving intent. Unlike the trend focused upon stoking outrage, though, these new efforts expertly employ individual creations as a more personal way to wield influence and shift how we think, particularly when it comes to marketing.

  Fake Influence & Crowdcasting

  In June 2018, on the first day of Cannes Lions, a global conference for the advertising and marketing industry, Unilever CMO Keith Weed issued a bold mandate to the industry: get better at spotting and fighting influencer fraud, or lose millions in advertising revenue.

  Weed was referring to the widespread practice among influencers of buying followers to inflate their apparent fan counts, and therefore charge brands more to work with them. For his part, he promised that Unilever brands would not work with influencers who buy followers.[30] They would also prioritize partners who believe in increased transparency. These collections of fake followers are an important manifestation of Artificial Influence, because they add scale to content created by so-called influencers.

  Fake influencers, unfortunately, are emerging not just in the digital world of social media, but in real life, too. For example, Surkus is an app that helps restaurants, bars, or any real-life venue create buzz by selecting and paying people to show up or stand in line, giving the illusion that the spot or event is trendy and well-attended. The app goes as far as “casting” specific people for these events according to their age, social-media engagement, and other characteristics. A Washington Post article about the company dubbed their work “crowdcasting,” which is an apt description for what they do.[31]

  The more examples there are of people getting paid to show up and pretend to be enthusiastic (or to follow and like an influencer even if they don’t), the less people and consumers will believe what they see. The true danger of Artificial Influence as a trend is that it leads to erosion of consumer trust in what people see online or even in person, and makes them skeptical of any kind of influence.

  The Overquantification of Life

  Last year, one of the most talked-about episodes of the dystopian Netflix hit show Black Mirror depicted a futuristic world in which everyone is assigned a social rating based upon how others rate their interactions with them. Entitled “Nosedive,” it centered upon a character played by Bryce Dallas Howard who is desperate to improve her rating. Her efforts, however, backfire, and her life falls into chaos.

  This piece of science fiction was created to satirize what would happen to our relationships and to ourselves if we were judged and rated on everything we did. Next year, this dystopian reality will actually be implemented after years of anticipation.

  The idea of assigning a social rating to people was first conceived in China in June of 2014. That year, the Chinese government announced it was building a Social Credit System (SCS) program to assign Chinese citizens scores based upon various behaviors. The ambitious aim of the SCS was to develop a “citizen score” that could eventually be used to determine everything from a citizen’s eligibility for a mortgage to whom they could date. Though the system is currently only voluntary, all citizens will be required to participate in it by 2020.

  This is an important example to consider within the context of the trend of Artificial Influence, because it represents a new example of a government exerting its own criteria upon who should be given influence (via ranking) in their society. More concerningly, the system could allow for influence to be taken away from those who dissent by shutting down everything from their mobility to how freely they can express their own ideas online.

  The SCS in China may be an extreme example of social credit; however, there exist many daily examples of how social rankings are already used in our society today. You can rate an Uber driver after a shared ride, and that driver can rate you as well. After completing a peer-to-peer transaction online on a platform such as eBay, you usually have the chance to rate the other person. Every click of a like or love button is another form of social ranking.

  Could all of these ratings lead us to present inauthentic fantasy versions of ourselves to the world? Could people game the system to generate real-world consequences–like getting an Uber driver kicked off the platform based upon a bunch of fake reviews? Could citizens in China be given low ratings for small infractions, or infractions that shouldn’t really be disqualifying criteria—say, a smoker being prevented from getting a mortgage?

  As these rating systems proliferate, they will require us to ask all sorts of big questions about the real value of quantifying everything, and whether the actual way these systems are used will be fair or not.

  Why It Matters

  Finding people, institutions, and products that are (or seem) trustworthy continues to be a challenge. According to the Edelman Global Trust Barometer, in 2018, only 52 percent of the general population trusted businesses, and only 43 percent trusted the media.[32]

  The implication of this trend is that organizations must urgently find new ways to be believable and to build trust and brand affinity if they want to stand apart and be counted among the trusted. This will not come from desperately chasing innovation that merely copies what other digital influencers are doing, or from risky engagements with platforms that promise lots of social-media engagement from ethically questionable sources.

  Instead, we believe it is possible to innovate with integrity and use the principles in this non-obvious trend to encourage higher levels of creative thinking. Digital celebrities, for example, have the potential to become powerful, trusted advocates for messages of good–akin to how the animated character Smokey Bear has been used continually since 1944 by the U.S. government as a spokesperson (or, perhaps more accurately, a “spokesbear”) to educate generations of Americans about their role in preventing wildfires.

  In the coming year, it is likely that we will see both possible futures emerge from this trend. As Artificial Influence becomes more mainstream, consumers will make individual judgements about who is worth their time and attention and where their trust should be placed.

  How to Use This Trend

  Don’t Hide the Artificial Ingredients–The more successful digital celebrities who have amassed millions of fans have done so by acting in genuine ways. They don’t generally hide that they are not real, or try to manipulate their fans. Instead, they embrace their virtual nature and use it as proof of their authenticity. You can do the same, whether you are aiming to create your own digital celebrities or considering working with one. The more authentically and ethically you use these technologies, the more you can reinforce your brand and engage customers and team members alike in non-obvious ways.

  Forgive the Gullible–It can be frustrating to see how many of the problems caused by Artificial Influence come down to simple human gullibility. People are generally lazy. They often believe the first headline they see, and don’t ask deeper questions. Of course, none of us see ourselves in that light. Other people are the problem. Great innovators avoid the judgement, and instead try to rise above the artificiality and find ways to tap into the talent, creativity, and capabilities in their organization to help others do the same.

  08

  RetroTrust

  What’s the Trend?

  Often unsure of whom to trust, consumers look back to organizations and experiences with brands that have a legacy, or those with which they have a personal history.

  Thanks to the nature of what I do, a lot of curious things end up on my bookshelves or on the floor of my office. Among my growing collection of oddities—accumulated in the process of researching the latest edition of this book—is a magazine created for film and art lovers.

  A year and a half after I bought this “limited-run” inaugural issue for $20, it is still sitting on my shelf. Within its 76 pages, I found stories about the booming independent magazine culture, why artists continue to use film rather than digital to make movies, and the “power of the pencil.” The mission of the magazine? To promote what it calls “analog culture.”

  It’s an unusual magazine, but what stands out about it most, and the reason I have kept it instead of discarding it, is the fact that the publisher is a brand that many people have mistakenly assumed was dead: Kodak.

  It might be a surprise to learn that Kodak is still in business today. The company now generates more than half of its revenue from its commercial printing division. Selling film—what it was once known for—is a shrinking part of its business. By the numbers, the Kodak of today is a shadow of what it once was.

  The brand’s annual revenues have gone down nearly 90 percent since 1990. They have sold off or demolished more than half of the 200 buildings that once stood on their corporate campus in Rochester, New York, and have reduced their workforce by more than 100,000 employees over the past decade.

 
