The Digital Divide: Writings for and Against Facebook, YouTube, Texting, and the Age of Social Networking


by Mark Bauerlein


  First, because search engines use link structure to help predict useful pages, bloggers, as the most prolific and timely linkers, have a disproportionate role in shaping search engine results. Second, because the blogging community is so highly self-referential, bloggers paying attention to other bloggers magnify their visibility and power. The “echo chamber” that critics decry is also an amplifier.

  If it were merely an amplifier, blogging would be uninteresting. But like Wikipedia, blogging harnesses collective intelligence as a kind of filter. What James Surowiecki calls “the wisdom of crowds” comes into play, and much as PageRank produces better results than analysis of any individual document, the collective attention of the blogosphere selects for value.

  While mainstream media may see individual blogs as competitors, what is really unnerving is that the competition is with the blogosphere as a whole. This is not just a competition between sites, but a competition between business models. The world of Web 2.0 is also the world of what Dan Gillmor calls “we, the media,” a world in which “the former audience,” not a few people in a back room, decides what’s important.

  <Tim O’Reilly>

  web squared: web 2.0 five years on

  By Tim O’Reilly and John Battelle. Originally published in 2009 at www.oreilly.com.

  JOHN BATTELLE is founder and executive chairman of Federated Media Publishing. He has been a visiting professor of journalism at the University of California, Berkeley, and also maintains Searchblog, a weblog covering technology, culture, and media. Battelle is one of the original founders of Wired magazine, the founder of The Industry Standard magazine and website, and “band manager” of the weblog Boing Boing. In 2005, he published The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture.

  FIVE YEARS AGO, we launched a conference based on a simple idea, and that idea grew into a movement. The original Web 2.0 Conference (now the Web 2.0 Summit) was designed to restore confidence in an industry that had lost its way after the dot-com bust. The Web was far from done, we argued. In fact, it was on its way to becoming a robust platform for a culture-changing generation of computer applications and services.

  In our first program, we asked why some companies survived the dot-com bust while others had failed so miserably. We also studied a burgeoning group of start-ups and asked why they were growing so quickly. The answers helped us understand the rules of business on this new platform.

  Chief among our insights was that “the network as platform” means far more than just offering old applications via the network (“software as a service”); it means building applications that literally get better the more people use them, harnessing network effects not only to acquire users, but also to learn from them and build on their contributions.

  From Google and Amazon to Wikipedia, eBay, and craigslist, we saw that the value was facilitated by the software, but was cocreated by and for the community of connected users. Since then, powerful new platforms like YouTube, Facebook, and Twitter have demonstrated that same insight in new ways. Web 2.0 is all about harnessing collective intelligence.

  Collective intelligence applications depend on managing, understanding, and responding to massive amounts of user-generated data in real time. The “subsystems” of the emerging Internet operating system are increasingly data subsystems: location, identity (of people, products, and places), and the skeins of meaning that tie them together. This leads to new levers of competitive advantage: Data is the “Intel Inside” of the next generation of computer applications.

  Today, we realize that these insights were not only directionally right, but are being applied in areas we only imagined in 2004. The smartphone revolution has moved the Web from our desks to our pockets. Collective intelligence applications are no longer being driven solely by humans typing on keyboards but, increasingly, by sensors. Our phones and cameras are being turned into eyes and ears for applications; motion and location sensors tell where we are, what we’re looking at, and how fast we’re moving. Data is being collected, presented, and acted upon in real time. The scale of participation has increased by orders of magnitude.

  With more users and sensors feeding more applications and platforms, developers are able to tackle serious real-world problems. As a result, the Web opportunity is no longer growing arithmetically; it’s growing exponentially. Hence our theme for this year: Web Squared. 1990–2004 was the match being struck; 2005–2009 was the fuse; and 2010 will be the explosion.

  Ever since we first introduced the term “Web 2.0,” people have been asking, “What’s next?” Assuming that Web 2.0 was meant to be a kind of software version number (rather than a statement about the second coming of the Web after the dot-com bust), we’re constantly asked about “Web 3.0.” Is it the semantic web? The sentient web? Is it the social web? The mobile web? Is it some form of virtual reality?

  It is all of those, and more.

  The Web is no longer a collection of static pages of HTML that describe something in the world. Increasingly, the Web is the world—everything and everyone in the world casts an “information shadow,” an aura of data which, when captured and processed intelligently, offers extraordinary opportunity and mind-bending implications. Web Squared is our way of exploring this phenomenon and giving it a name.

  >>> redefining collective intelligence: new sensory input

  To understand where the Web is going, it helps to return to one of the fundamental ideas underlying Web 2.0, namely that successful network applications are systems for harnessing collective intelligence.

  Many people now understand this idea in the sense of “crowdsourcing”—namely, that a large group of people can create a collective work whose value far exceeds that provided by any of the individual participants. The Web as a whole is a marvel of crowdsourcing, as are marketplaces such as those on eBay and craigslist, mixed media collections such as YouTube and Flickr, and the vast personal lifestream collections on Twitter, MySpace, and Facebook.

  Many people also understand that applications can be constructed in such a way as to direct their users to perform specific tasks, like building an online encyclopedia (Wikipedia), annotating an online catalog (Amazon), adding data points onto a map (the many Web-mapping applications), or finding the most popular news stories (Digg, Twine). Amazon’s Mechanical Turk has gone so far as to provide a generalized platform for harnessing people to do tasks that are difficult for computers to perform on their own.

  But is this really what we mean by collective intelligence? Isn’t one definition of intelligence, after all, that characteristic that allows an organism to learn from and respond to its environment? (Please note that we’re leaving aside entirely the question of self-awareness. For now, anyway.)

  Imagine the Web (broadly defined as the network of all connected devices and applications, not just the PC-based application formally known as the World Wide Web) as a newborn baby. She sees, but at first she can’t focus. She can feel, but she has no idea of size till she puts something in her mouth. She hears the words of her smiling parents, but she can’t understand them. She is awash in sensations, few of which she understands. She has little or no control over her environment.

  Gradually, the world begins to make sense. The baby coordinates the input from multiple senses, filters signal from noise, learns new skills, and once-difficult tasks become automatic.

  The question before us is this: Is the Web getting smarter as it grows up?

  Consider search—currently the lingua franca of the Web. The first search engines, starting with Brian Pinkerton’s WebCrawler, put everything in their mouth, so to speak. They hungrily followed links, consuming everything they found. Ranking was by brute-force keyword matching.

  In 1998, Larry Page and Sergey Brin had a breakthrough, realizing that links were not merely a way of finding new content, but of ranking it and connecting it to a more sophisticated natural language grammar. In essence, every link became a vote, and votes from knowledgeable people (as measured by the number and quality of people who in turn vote for them) count more than others.
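
  The “link as a vote” idea can be sketched in a few lines of code. The toy power-iteration below only illustrates the principle, not Google’s production algorithm; the example graph and damping factor are invented.

```python
# Toy PageRank-style scoring: each link is a vote, and votes from
# highly-voted-for pages carry more weight. Illustrative only.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                 # dangling page: spread its vote evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share   # a link is a vote
        rank = new_rank
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(sorted(pagerank(web).items(), key=lambda kv: -kv[1]))
```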

  Modern search engines now use complex algorithms and hundreds of different ranking criteria to produce their results. Among the data sources is the feedback loop generated by the frequency of search terms, the number of user clicks on search results, and our own personal search and browsing history. For example, if a majority of users start clicking on the fifth item on a particular search results page more often than the first, Google’s algorithms take this as a signal that the fifth result may well be better than the first, and eventually adjust the results accordingly.
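
  As a rough sketch of how such a feedback loop could work, the snippet below blends a result’s original position with its observed click-through rate. The weighting and signals are invented for illustration and are not Google’s actual ranking criteria.

```python
# Illustrative click-feedback re-ranking: results that users click far more
# often than their position predicts get promoted on later queries.

def rerank(results, clicks, impressions, weight=0.5):
    """results: ordered list of URLs; clicks/impressions: dicts keyed by URL."""
    def score(url, position):
        base = 1.0 / (position + 1)                  # prior from original rank
        ctr = clicks.get(url, 0) / max(impressions.get(url, 1), 1)
        return (1 - weight) * base + weight * ctr    # blend rank prior with CTR
    return sorted(results, key=lambda u: -score(u, results.index(u)))

results = ["r1", "r2", "r3", "r4", "r5"]
clicks = {"r5": 80, "r1": 20}
impressions = {u: 100 for u in results}
print(rerank(results, clicks, impressions))   # r5 climbs from fifth to second
```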

  Now consider an even more current search application, the Google Mobile Application for the iPhone. The application detects the movement of the phone to your ear, and automatically goes into speech recognition mode. It uses its microphone to listen to your voice, and decodes what you are saying by referencing not only its speech recognition database and algorithms, but also the correlation to the most frequent search terms in its search database. The phone uses GPS or cell-tower triangulation to detect its location, and uses that information as well. A search for “pizza” returns the result you most likely want: the name, location, and contact information for the three nearest pizza restaurants.

  All of a sudden, we’re not using search via a keyboard and a stilted search grammar, we’re talking to and with the Web. It’s getting smart enough to understand some things (such as where we are) without us having to tell it explicitly. And that’s just the beginning.

  And while some of the databases referenced by the application—such as the mapping of GPS coordinates to addresses—are “taught” to the application, others, such as the recognition of speech, are “learned” by processing large, crowdsourced data sets.

  Clearly, this is a “smarter” system than what we saw even a few years ago. Coordinating speech recognition and search, search results and location, is similar to the “hand-eye” coordination the baby gradually acquires. The Web is growing up, and we are all its collective parents.

  >>> cooperating data subsystems

  In our original Web 2.0 analysis, we posited that the future “Internet operating system” would consist of a series of interoperating data subsystems. The Google Mobile Application provides one example of how such a data-driven operating system might work.

  In this case, all of the data subsystems are owned by one vendor—Google. In other cases, as with Apple’s iPhoto ’09, which integrates Flickr and Google Maps as well as Apple’s own cloud services, an application uses cloud database services from multiple vendors.

  As we first noted back in 2003, “data is the Intel Inside” of the next generation of computer applications. That is, if a company has control over a unique source of data that is required for applications to function, they will be able to extract monopoly rents from the use of that data. In particular, if a database is generated by user contribution, market leaders will see increasing returns as the size and value of their database grows more quickly than that of any new entrants.

  We see the era of Web 2.0, therefore, as a race to acquire and control data assets. Some of these assets—the critical mass of seller listings on eBay, or the critical mass of classified advertising on craigslist—are application-specific. But others have already taken on the characteristic of fundamental system services.

  Take for example the domain registries of the DNS, which are a backbone service of the Internet. Or consider CDDB, used by virtually every music application to look up the metadata for songs and albums. Mapping data from providers like Navteq and TeleAtlas is used by virtually all online mapping applications.

  There is a race on right now to own the social graph. But we must ask whether this service is so fundamental that it needs to be open to all.

  It’s easy to forget that only fifteen years ago, e-mail was as fragmented as social networking is today, with hundreds of incompatible e-mail systems joined by fragile and congested gateways. One of those systems—Internet RFC 822 e-mail—became the gold standard for interchange.

  We expect to see similar standardization in key Internet utilities and subsystems. Vendors who are competing with a winner-takes-all mind-set would be advised to join together to enable systems built from the best-of-breed data subsystems of cooperating companies.

  >>> how the web learns: explicit vs. implicit meaning

  But how does the Web learn? Some people imagine that for computer programs to understand and react to meaning, meaning needs to be encoded in some special taxonomy. What we see in practice is that meaning is learned “inferentially” from a body of data.

  Speech recognition and computer vision are both excellent examples of this kind of machine learning. But it’s important to realize that machine learning techniques apply to far more than just sensor data. For example, Google’s ad auction is a learning system, in which optimal ad placement and pricing are generated in real time by machine learning algorithms.
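
  As a much-simplified sketch of an auction that learns, the snippet below ranks ads by bid times an estimated click-through rate and updates that estimate from user feedback. It illustrates the general idea only and is not Google’s actual auction mechanism; all names and numbers are invented.

```python
# Simplified learning ad auction: expected value = bid * estimated CTR,
# and the CTR estimate is continually updated from observed clicks.

class LearningAuction:
    def __init__(self):
        self.shows = {}    # ad_id -> impressions
        self.clicks = {}   # ad_id -> clicks

    def estimated_ctr(self, ad_id):
        # Smoothed estimate so brand-new ads still get a chance to be shown.
        return (self.clicks.get(ad_id, 0) + 1) / (self.shows.get(ad_id, 0) + 20)

    def pick_winner(self, bids):
        """bids: dict of ad_id -> bid in dollars. Returns the ad to show."""
        return max(bids, key=lambda ad: bids[ad] * self.estimated_ctr(ad))

    def record(self, ad_id, clicked):
        self.shows[ad_id] = self.shows.get(ad_id, 0) + 1
        if clicked:
            self.clicks[ad_id] = self.clicks.get(ad_id, 0) + 1

auction = LearningAuction()
auction.record("cheap_ad", clicked=True)       # feedback shifts future placement
print(auction.pick_winner({"cheap_ad": 0.50, "pricey_ad": 2.00}))
```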

  In other cases, meaning is “taught” to the computer. That is, the application is given a mapping between one structured data set and another. For example, the association between street addresses and GPS coordinates is taught rather than learned. Both data sets are structured, but need a gateway to connect them.
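
  A “taught” mapping of this kind can be as simple as a lookup table acting as a gateway between the two structured data sets. The tiny geocoding table below is a hypothetical illustration, not a real data source.

```python
# A "taught" mapping: the association between street addresses and
# coordinates is supplied as structured data, not learned from examples.

GEOCODE_TABLE = {   # hypothetical sample entries
    "1600 Amphitheatre Parkway, Mountain View, CA": (37.4220, -122.0841),
    "1 Infinite Loop, Cupertino, CA": (37.3318, -122.0312),
}

def geocode(address):
    """Gateway from the address data set to the coordinate data set."""
    try:
        return GEOCODE_TABLE[address]
    except KeyError:
        raise LookupError(f"no coordinates taught for {address!r}")

print(geocode("1 Infinite Loop, Cupertino, CA"))
```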

  It’s also possible to give structure to what appears to be unstructured data by teaching an application how to recognize the connection between the two. For example, You R Here, an iPhone app, neatly combines these two approaches. You use your iPhone camera to take a photo of a map that contains details not found on generic mapping applications such as Google Maps—say, a trailhead map in a park, or another hiking map. Use the phone’s GPS to set your current location on the map. Walk a distance away, and set a second point. Now your iPhone can track your position on that custom map image as easily as it can on Google Maps.
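
  Assuming the photographed map is drawn to a uniform scale, two reference points with both GPS and pixel coordinates are enough to pin down the scale, rotation, and offset relating them. The sketch below does this with complex arithmetic; it ignores map-projection distortion, which a real application would need to handle, and is not the app’s actual code.

```python
# Two-point map calibration (illustrative): given two reference points with
# both GPS coordinates and pixel positions, compute scale/rotation/offset,
# then track any new GPS fix on the photographed map.

def make_tracker(gps1, px1, gps2, px2):
    """Each point is (x, y): GPS given as (lon, lat), pixels as (col, row)."""
    g1, g2 = complex(*gps1), complex(*gps2)
    p1, p2 = complex(*px1), complex(*px2)
    a = (p2 - p1) / (g2 - g1)        # combined scale and rotation
    b = p1 - a * g1                  # translation
    def to_pixels(gps):
        p = a * complex(*gps) + b
        return (p.real, p.imag)
    return to_pixels

# Calibrate with two waypoints, then locate a third fix on the map photo.
track = make_tracker((-122.100, 37.400), (120, 500), (-122.090, 37.410), (620, 80))
print(track((-122.095, 37.405)))     # midway between the two anchor pixels
```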

  Some of the most fundamental and useful services on the Web have been constructed in this way, by recognizing and then teaching the overlooked regularity of what at first appears to be unstructured data.

  Ti Kan, Steve Scherf, and Graham Toal, the creators of CDDB, realized that the sequence of track lengths on a CD formed a unique signature that could be correlated with artist, album, and song names. Larry Page and Sergey Brin realized that a link is a vote. Marc Hedlund at Wesabe realized that every credit card swipe is also a vote, that there is hidden meaning in repeated visits to the same merchant. Mark Zuckerberg at Facebook realized that friend relationships online actually constitute a generalized social graph. They thus turn what at first appeared to be unstructured into structured data. And all of them used both machines and humans to do it. . . .
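
  The CDDB insight can be sketched as follows: treat the ordered track lengths themselves as a lookup key. The hashing scheme and metadata store below are invented simplifications, not the actual CDDB disc-ID algorithm.

```python
# Illustrative disc signature: the ordered track lengths (in seconds) of a CD
# are distinctive enough to serve as a lookup key into a metadata database.

import hashlib

def disc_signature(track_lengths_seconds):
    key = ",".join(str(t) for t in track_lengths_seconds)
    return hashlib.sha1(key.encode()).hexdigest()[:16]

METADATA = {}   # hypothetical metadata store keyed by signature

def register(track_lengths, artist, album):
    METADATA[disc_signature(track_lengths)] = (artist, album)

def identify(track_lengths):
    return METADATA.get(disc_signature(track_lengths), ("unknown", "unknown"))

register([125, 210, 187, 242], "Example Artist", "Example Album")
print(identify([125, 210, 187, 242]))
```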

  >>> the rise of real time: a collective mind

  As it becomes more conversational, search has also gotten faster. Blogging added tens of millions of sites that needed to be crawled daily or even hourly, but microblogging requires instantaneous update—which means a significant shift in both infrastructure and approach. Anyone who searches Twitter on a trending topic has to be struck by the message: “See what’s happening right now” followed, a few moments later, by “42 more results since you started searching. Refresh to see them.”

  What’s more, users are continuing to co-evolve with our search systems. Take hashtags on Twitter: a human convention that facilitates real-time search on shared events. Once again, you see how human participation adds a layer of structure—rough and inconsistent as it is—to the raw data stream.
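
  Because the hashtag convention is just a pattern in the text, a few lines of pattern matching are enough to recover that layer of structure and aggregate it in real time. A minimal sketch, with made-up tweets:

```python
# Pulling the human-added structure (hashtags) out of a raw stream of tweets
# and counting them is enough to surface a "trending topic" in real time.

import re
from collections import Counter

HASHTAG = re.compile(r"#(\w+)")

def trending(tweets, top=3):
    counts = Counter(tag.lower() for t in tweets for tag in HASHTAG.findall(t))
    return counts.most_common(top)

stream = [
    "Power out across downtown #quake",
    "Felt that one! #quake #sf",
    "Anyone else's office swaying? #SF",
]
print(trending(stream))   # [('quake', 2), ('sf', 2)]
```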

  Real-time search encourages real-time response. Retweeted “information cascades” spread breaking news across Twitter in moments, making it the earliest source for many people to learn about what’s just happened. And again, this is just the beginning. With services like Twitter and Facebook’s status updates, a new data source has been added to the Web—real-time indications of what is on our collective mind.

  Guatemala and Iran have both recently felt the Twitter effect, as political protests have been kicked off and coordinated via Twitter.

  Which leads us to a timely debate: There are many who worry about the dehumanizing effect of technology. We share that worry, but also see the countertrend, that communication binds us together, gives us shared context, and ultimately shared identity.

  Twitter also teaches us something important about how applications adapt to devices. Tweets are limited to 140 characters; the very limits of Twitter have led to an outpouring of innovation. Twitter users developed shorthand (@username, #hashtag, $stockticker), which Twitter clients soon turned into clickable links. URL shorteners for traditional Web links became popular, and their creators soon realized that a database of clicked links enables new real-time analytics. Bit.ly, for example, shows the number of clicks your links generate in real time.
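
  Turning that shorthand into clickable links, as early Twitter clients did, is again simple pattern recognition over plain text. A rough sketch follows; the link formats are simplified and not any particular client’s implementation.

```python
# Turning user-invented shorthand (@username, #hashtag) into links, the way
# early Twitter clients did. Simplified: real clients handle many edge cases.

import re

def linkify(tweet):
    tweet = re.sub(r"@(\w+)",
                   r'<a href="https://twitter.com/\1">@\1</a>', tweet)
    tweet = re.sub(r"#(\w+)",
                   r'<a href="https://twitter.com/search?q=%23\1">#\1</a>', tweet)
    return tweet

print(linkify("Reading @timoreilly on #websquared"))
```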

  As a result, there’s a new information layer being built around Twitter that could grow up to rival the services that have become so central to the Web: search, analytics, and social networks. Twitter also provides an object lesson to mobile providers about what can happen when you provide APIs. Lessons from the Twitter application ecosystem could show opportunities for SMS and other mobile services, or it could grow up to replace them.

  Real-time is not limited to social media or mobile. Much as Google realized that a link is a vote, Walmart realized that a customer purchasing an item is a vote, and the cash register is a sensor counting that vote. Real-time feedback loops drive inventory. Walmart may not be a Web 2.0 company, but they are without doubt a Web Squared company: one whose operations are so infused with IT, so innately driven by data from their customers, that it provides them immense competitive advantage. One of the great Web Squared opportunities is providing this kind of real-time intelligence to smaller retailers without monolithic supply chains.
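
  The “cash register as a sensor” loop can be sketched as a running stock counter that triggers replenishment as soon as sales cross a threshold; the product, quantities, and reorder rule below are invented for illustration, not Walmart’s actual system.

```python
# Each sale is a "vote" recorded by the register; the feedback loop turns the
# running count directly into a restocking decision. Illustrative only.

class StoreInventory:
    def __init__(self, stock, reorder_point, reorder_qty):
        self.stock = dict(stock)
        self.reorder_point = reorder_point
        self.reorder_qty = reorder_qty

    def record_sale(self, sku, qty=1):
        """Called by the register for every scanned item."""
        self.stock[sku] -= qty
        if self.stock[sku] <= self.reorder_point:
            self.reorder(sku)

    def reorder(self, sku):
        print(f"reorder triggered: {sku} ({self.reorder_qty} units)")
        self.stock[sku] += self.reorder_qty   # stand-in for a supplier order

store = StoreInventory({"umbrella": 12}, reorder_point=10, reorder_qty=50)
for _ in range(3):                            # a rainy afternoon
    store.record_sale("umbrella")
```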

 
