The Spatial Web


by Gabriel René


  Think of a Digital Twin as a highly detailed virtual model that is the exact counterpart (or twin) of a physical thing. The “thing” could be a refrigerator, a vehicle, a human heart or even an entire system made up of a network of parts like a factory, retail store, or an entire city. Computer vision and connected sensors on the physical assets collect data that can be mapped onto the virtual model, allowing the Digital Twin to display critical information about how the physical thing is performing in the physical world, presenting current real-time state and activity as well as historical states.

  Because a Digital Twin integrates historical data from past machine usage into its 3D digital model, it can contain an asset’s entire history including origin, manufacturing, logistics, retail, home use, disposal, and repurposing. It can use sensor data to convey various aspects of its operating condition. And a Digital Twin can learn from other similar machines, from other similar fleets of machines, and from the larger systems and environments of which it may be a part. Digital Twins can integrate artificial intelligence, machine learning, and software analytics with data to create living digital simulation models that update and change as their physical counterparts change.
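As a minimal sketch of this idea, the following Python class mirrors a physical asset’s latest real-time state while retaining every prior reading as history. All of the names here (DigitalTwin, ingest, asset_id) are illustrative inventions for this example, not part of any Spatial Web specification.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DigitalTwin:
    """A virtual counterpart of one physical asset (illustrative sketch)."""
    asset_id: str
    state: dict = field(default_factory=dict)    # current real-time state
    history: list = field(default_factory=list)  # full lifecycle record

    def ingest(self, sensor_reading: dict) -> None:
        """Map a sensor reading from the physical asset onto the virtual model."""
        stamped = {"t": datetime.now(timezone.utc).isoformat(), **sensor_reading}
        self.history.append(stamped)             # keep the historical state
        self.state.update(sensor_reading)        # overwrite the current state

# A refrigerator twin receiving two readings from its connected sensors:
fridge = DigitalTwin(asset_id="fridge-001")
fridge.ingest({"temp_c": 4.1, "door_open": False})
fridge.ingest({"temp_c": 6.8, "door_open": True})
print(fridge.state["temp_c"])   # latest reading: 6.8
print(len(fridge.history))      # historical states retained: 2
```

The point of the sketch is the dual record: the twin always knows its counterpart’s current condition, yet never discards a past state, which is what makes lifecycle queries and diagnostics possible.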

  Although the Digital Twin is predominantly used as a diagnostic tool that displays a 3D view of an object’s real-time information and historical lifecycle, with AI added it can also serve as a model on which to run simulations and predictive analysis. It can likewise serve as a holographic interface to the object itself, giving a human or an AI a means to edit, update, or program its actions. You can imagine a surgeon or technician using a Digital Twin of a robot to perform remote surgery or to repair equipment.

  In the Spatial Web, the fully realized expression of a Digital Twin is an IoT device, object, environment, or avatar of a person that can be interfaced with as a hologram via AR or VR and manually and remotely controlled. If it is a physical object or machine, it can be automated via an AI whose actions are facilitated within Smart Spaces, validated by Smart Contracts, and actuated via Robotics. This can be referred to as a “Smart Twin,” as all of its historical records and data are securely stored and reliably accessed via Distributed Ledgers, permissioned for various users, and monetized across data markets. The Smart Twin is THE “killer app” of the Spatial Web because it uses the entire Web 3.0 stack of technologies.

  The implication of a Smart Twin for every person, place, and thing, with every process and state of every interaction, transaction, and movement across the planet integrated into a single interconnected network, is a 1:1-scale Smart Twin of the entire planet. A planetary-scale Smart Twin would be able to represent all of the uses of Earth’s resources, the flows of all of its energies, all of the activities of its physical, economic, and social systems, and all of the activities of its inhabitants—their hopes, dreams, attempts, failures, and successes. In the Web 3.0 era—“The World Becomes The Web.”

  But this is only one world and it’s only in the physical domain. Even if we become a multi-planet species in the next century, that would only extend our Digital Twin to the physical galaxy. But Virtual Reality will already have created and filled entire universes from our imaginations.

  World Builders and AI Generators

  Minecraft is a popular sandbox video game that allows players to build things with a variety of digital blocks in a 3D procedurally-generated world, requiring design and participation from players. It is a kind of Lego-building play space on steroids. As of 2019, the important thing to note about Minecraft is that a young generation of 100 million kids has grown up designing and building an entire virtual world that collectively is nearly eight times the size of planet Earth.

  This is not a mere children’s game; it is a world-scale, civil engineering project disguised as a game. And it has created a generation of World-Builders.

  Minecraft and other multiplayer worlds, e-sports, and games like the explosively popular Fortnite, with its dynamic avatars and fort-building community battles, have hundreds of millions of monthly users. They are generating billions in sales, and they’re inspiring an entire generation to build new worlds, objects, assets, and characters that have incredible utility and value to their communities. But these games, e-sports, virtual worlds, and MMORPGs are all siloed worlds. Users have no universal method to move between them or to transfer objects and assets between them. But as the Spatial Web evolves, it will be capable of establishing portability and rendering standards for objects, shaders, characters, and powers in such a way that the world’s 2 billion gamers will be able to find novel ways of linking, porting, mashing up, and building new worlds that work together.

  But even billions of gamers and builders all working together to build millions of virtual worlds—Smart Spaces all interconnected as web spaces across the Spatial Web—will pale in comparison to what AI will soon be able to create.

  Generative Design

  Although computer-generated procedural graphics have been around for decades, the press really began taking notice with the release of No Man’s Sky, an action-adventure survival game released worldwide for PlayStation 4 and Microsoft Windows in 2016. The game uses an algorithmic procedural generation system to give each planet in its universe an ecosystem, with its own lifeforms and alien species for players to fight or trade with. How many planets does this game have? Over 18 quintillion unique planets. According to the game’s Wikipedia page, “within a day of the game’s official launch, …more than 10 million distinct species were registered by players, exceeding the estimated 8.7 million species identified to date on Earth.”
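That scale is possible because nothing is stored per planet: a world’s traits are derived deterministically from a seed, so the same seed always regenerates the same planet on demand. The toy Python sketch below illustrates the principle only; the trait names and hashing scheme are invented for this example and bear no relation to No Man’s Sky’s actual algorithms.

```python
import hashlib

BIOMES = ["desert", "ocean", "jungle", "frozen", "volcanic", "barren"]

def planet_from_seed(seed: int) -> dict:
    """Derive a planet's traits from a 64-bit seed (hypothetical scheme)."""
    digest = hashlib.sha256(seed.to_bytes(8, "big")).digest()
    return {
        "biome": BIOMES[digest[0] % len(BIOMES)],
        "gravity_g": 0.3 + (digest[1] / 255) * 2.0,      # 0.3 to 2.3 g
        "species_count": (digest[2] * digest[3]) % 500,  # 0 to 499
        "has_rings": digest[4] % 4 == 0,
    }

# The same seed always yields the same world, so a 64-bit seed space
# "contains" 2**64 (over 18 quintillion) planets with no storage at all.
print(planet_from_seed(42) == planet_from_seed(42))  # True
```

This is the essence of procedural generation: the game ships the recipe, not the universe, and the universe materializes identically for every player who visits the same coordinates.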

  Whether No Man’s Sky or any other procedurally-generated game is able to attract and maintain players is anyone’s guess. The thing to note here is the unbelievable scale of the power of procedural and generative algorithms to create immersive and dynamic experiences—literally out of thin air. But what happens when you combine procedural algorithmic technologies with AI?

  Generative Adversarial Networks and Generative AI

  Generative adversarial networks (GANs) are a class of artificial intelligence algorithms used in unsupervised machine learning, in which two neural networks compete with each other. One network generates examples (the generator) and the other evaluates them (the discriminator). For example, a GAN can learn to generate pictures of trees. The generator attempts to create a tree. The discriminator, having been shown thousands of tree pictures, knows what a tree should look like. It “fails” the generator’s attempts until the generator learns “tree-ness” well enough to produce images of trees that fool it. Over thousands or millions of attempts, the two refine the model until the results are remarkably true to reality.
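The adversarial loop can be illustrated with a deliberately tiny toy in Python: no neural networks, just a generator (a single number) nudged toward whatever the discriminator scores as more “real.” Every detail here—the tree-height distribution, the scoring rule, the learning rate—is an invented simplification of how real GANs train, not an implementation of one.

```python
import random

random.seed(0)

# "Real" data the discriminator learns from: tree heights around 5 metres.
real_samples = [random.gauss(5.0, 0.5) for _ in range(1000)]
real_mean = sum(real_samples) / len(real_samples)

def discriminator(x: float) -> float:
    """Score how 'real' a sample looks: highest at the real data's mean."""
    return 1.0 - abs(x - real_mean) / 5.0

gen_mu = 0.0  # the generator starts with no idea what a "tree" looks like
lr, eps = 0.05, 0.01
for _ in range(2000):
    fake = random.gauss(gen_mu, 0.5)  # generator proposes a sample
    # Nudge the generator toward samples the discriminator scores higher
    # (a finite-difference stand-in for backpropagation):
    grad = (discriminator(fake + eps) - discriminator(fake - eps)) / (2 * eps)
    gen_mu += lr * grad

print(round(gen_mu, 1))  # the generator has drifted to near the real mean
```

The generator never sees the real data directly; it only learns from the discriminator’s scores, which is exactly the dynamic that lets real GANs conjure convincing trees, faces, and furniture out of noise.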

  GANs have been used to produce samples of photorealistic images for the purposes of visualizing new furniture, shoes, bags, and apparel items. And GANs used to modify video to “face-swap” one person for another in a scene are behind the Deepfakes category. This category of AI can be referred to as “generative” for its ability to reverse engineer a dataset, recognize and extract any pattern, style, form, or function that the data contains, and then generate an output.

  Generative AI, or GAI, can mix, match, modify, edit, and algorithmically generate any image, sound, object, or scene, rendering it digitally as text, audio, video, or graphics, in 2D or 3D. It can even produce the code for software applications or output designs for 3D printing. Combined with 3D printing, CRISPR gene-editing, and related technologies, generative techniques can render organic molecules, prosthetic limbs, and other items from scratch.

  In the Spatial Web, Generative AI’s ability to compose dynamic music, lighting, sound effects, conversations, and complex scenes will soon be used to auto-generate contextually meaningful narrative arcs, fully interactive and immersive AR and VR environments, architectures, products, and characters. By using many of the mood-tracking metrics and bio-markers available from our interfaces, these experiential environments can be completely personalized or made adaptive to social or environmental conditions.

  The Spatial Web will change how we create art and culture, design and create products, build environments, enhance our bodies, and experience and share the realms of the imaginary. A new generation that opts to create, play, and work in a virtual universe will challenge our perception of place, economics, community, and self-worth. And AI’s staggering ability to generate entire universes filled with unique environments, populated by intelligent characters and scenes that enable novel experiences at near-infinite scale will redefine the word “reality” for future generations.

  CONTEXTUAL IMMERSIVE ADVERTISING

  The Death Spiral of Online Advertising

  It has been suggested that advertising is the thing that “ruined” the web. Many of the founders of the largest tech companies famously abhorred the concept of monetizing their services and applications with advertising; they wanted the functionality of the original service to drive engagement. Google, Facebook, YouTube, Instagram, Twitter, Snapchat, and WhatsApp all launched and drove massive user growth based on their core service. But aside from charging for their service, which would dramatically reduce their user base, selling advertising was the only other option to succeed financially online. Over time, the user data market emerged which gave advertisers the ability to more effectively target users. Ads became more and more targeted. As mentioned previously, Google built a Search Graph in Web 1.0, Facebook a Social Graph in Web 2.0, and they were able to monetize these in unprecedented ways.

  At a Congressional hearing addressing the Cambridge Analytica scandal and the potential manipulation of Facebook’s ad platform by Russian spies, Senator Orrin Hatch asked, “If a version of Facebook will always be free, how do you sustain a business model in which users don’t pay for your service?” Mark Zuckerberg famously replied, “Senator, we run ads.” History may look back and decide that this was the mantra of the Web 2.0 era.

  The Threat of Hyper-Reality

  The scale of the hyper-targeted personal advertising market in the Spatial Web is exponentially larger than in its predecessors but carries the same double-edged sword with economic value on one side and human values on the other.

  Hyper-Reality is a concept film by Keiichi Matsuda that shows a day in the life of a woman immersed in a futuristic world where her vision is filled to overflowing with games, Internet services like Google, and various other functions, alongside an assault of advertisements that constantly pop up as she moves around the city. It is an artistic exercise in digital consumption; it is death by a thousand notifications. And it perfectly showcases everything that we do not want Web 3.0 to be.

  Similarly, there is a scene in Steven Spielberg’s film adaptation of the novel Ready Player One where the film’s corporate antagonist shares their monetization strategy for the Oasis, an interconnected VR universe. “Our studies show that we can fill up to 80 percent of someone’s visual field (with ads) before we induce a seizure,” he says. Two better examples of the insane drive to monetize our field of view could not be made. What’s of even greater concern is that breakthroughs in eye-tracking, along with biometric trackers that monitor pupil dilation, mood, and other biometrics will make immersive media the most “personalizable” medium for advertising, ever.

  Certainly, an immense market opportunity lies here, but the ethical questions loom even larger. When we were children, ice cream was a common cure for a bad day at school or a lost Little League game. So what is the harm in an advertiser using your present mood data to sell you ice cream if you seem a little down? That seems innocent enough. But what if they also know that you’re on a crash diet, or that you’re exhibiting signs of depression because you continue to fail at your weight-loss goals? Is it okay to offer you ice cream then? How about alcohol or prescription drugs? Or a gun? Certainly, individuals must have the wherewithal and responsibility to make good decisions about their health and livelihood, right? Right?!

  One can imagine similar scenarios ranging from very helpful to incredibly destructive at every conceivable scale.

  Luckily, the Spatial Web enables users to maintain a sovereign ID that can securely store and approve which advertisers have access to which information. This access can be conditionally set by location, time, mood, and/or buying cycle. This is actually good news for advertisers, who struggle to reach users with the right ad at the right time because they purchase a patchwork quilt of data from a number of different providers in an attempt to target you. In many cases, they are being sold bad or fake data. Often they are targeting users that don’t even exist (they are just bots), or they have no idea where you are in a purchasing cycle, so they throw money away chasing you around the web with a product you looked at a week ago and bought somewhere else.
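A sovereign-ID grant like this could be evaluated with a simple predicate. The Python sketch below shows the shape of the idea only: the schema (advertisers, locations, fields) is purely hypothetical, since no such Spatial Web data format exists, and a real system would also need cryptographic verification.

```python
def may_access(grant: dict, request: dict) -> bool:
    """Check an advertiser's data request against a user's standing grant.
    (Hypothetical schema; a real system would also verify signatures.)"""
    if request["advertiser"] not in grant["advertisers"]:
        return False
    if grant.get("locations") and request["location"] not in grant["locations"]:
        return False
    if grant.get("fields") and not set(request["fields"]) <= set(grant["fields"]):
        return False
    return True

# The user allows one advertiser to see mood data, but only in Times Square:
grant = {
    "advertisers": {"acme-ads"},
    "locations": {"times-square"},
    "fields": {"mood", "buying_cycle"},
}
print(may_access(grant, {"advertiser": "acme-ads",
                         "location": "times-square", "fields": ["mood"]}))  # True
print(may_access(grant, {"advertiser": "spamco",
                         "location": "times-square", "fields": ["mood"]}))  # False
```

The key design point is that the user’s grant, not the advertiser’s data broker, is the source of truth: access is evaluated against conditions the user set, and anything outside them simply returns False.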

  Privacy laws and regulations like the European Union’s General Data Protection Regulation (GDPR) standards and the California Consumer Privacy Act of 2018 are landmark regulations that are imposing serious fines on tech companies that do not properly handle personal data or do not allow users to remove themselves from their services. But advertisers would be far better served by access to accurate data provided by users directly—data that is correct, validated, up to date. And much of this user-provided data can easily be managed by a personal AI.

  How Advertising becomes Commerce

  The greatest irony about advertising in the Web 3.0 era may be its likelihood of shifting from hyper-contextual advertising to hyper-contextual commerce. This is because the Spatial Web allows any person, place, or thing to have its own digital wallet and to transact in digital currencies and even in micro-payments. How does this shift happen? The point where you might encounter an ad in the physical world or in a virtual one can just as easily be a point-of-sale for that digital good, product, or experience.

  For example, imagine that you are in the middle of future Times Square in NYC. You have the latest AR hardware on. You access the Spatial Web through a universal Spatial Browser. Based on the personal data profile that you’ve granted to any advertiser within range, you would see a personalized set of billboard and holographic ads. Advertisers and products that you are explicitly NOT interested in seeing would not even appear. Your personal AIs would block from view any ads or products that aren’t within the predefined range of your tastes or interests. This scene would look similar to scenes in movies like Blade Runner or Ghost in the Shell, although all of the content and objects would be completely personalized to your taste preferences.

  Now imagine an ad appears on a billboard in front of you offering a new set of 3D holophonic AirPods. You can view them floating above you or simply motion for them to fly into your hands and see them in actual size, rotating the virtual AirPods as if they were physically present. You could modify the color, features, and materials before selecting your choice. At that point, you could merely signal verbally, biometrically, or otherwise and your digital wallet would make a peer-to-peer payment once your AI assistant had validated the authenticity and history of the vendor. You can have the product 3D printed on-demand at home so it will be ready for you when you arrive. Or you can have the product transported directly to you anywhere along your path via car or drone within minutes.

  Now imagine instead of a pair of AirPods, the ad is for a new Flying Tesla Model Z. In this case, you could have the virtual car lowered to the ground and virtually step into it, experience a test flight through Manhattan as if it were real, and then purchase the actual vehicle with similar delivery options. The exchange becomes even easier if you are being offered a virtual car from another user, say a vintage one-of-a-kind 1981 Camaro Firebird for one of your avatars in the ‘80s virtual world that you frequent. In this case, the user profile that you allow advertisers to access isn’t limited just to your physical self but includes certain avatars that you have enabled with “cross-world” advertising. Just like the “retargeting ads” that you see today—the ones that seem to follow you across the web—these “cross-world” ads can do the same except that they can move between virtual worlds and the physical world to offer you personalized, discounted ads for virtual goods.

  In this virtual vintage car example, whether you chose to purchase while physically in Times Square or opted to do so when you saw the same ad appear later in a Tokyo 2100 virtual world as a 3D hologram on the streets, you could simply authorize payment and put the car into your Smart Account inventory along with the rest of your personal assets, or have it ported to the bitchin’ garage in your ‘80s World home, or to any other location for which you have transport rights.

  Considering this scenario, the collapse of the advertisement and the transaction seems inevitable. This could allow entirely new categories of monetization beyond online advertising and traditional e-commerce to emerge. But for this to work, the Web needs a commercialization layer upgrade.

  SPATIAL ECONOMICS

  From its earliest beginnings in the 1960s, the Internet was born from a set of fundamental principles based on openness, inclusivity, collaboration, transparency, and decentralization. These principles were then embodied in a set of open standards and protocols that we still use today. The Internet is a Decentralization Engine, not only technologically but socially, politically, and now... economically.

  While its original premise was the voluntary exchange of data across a decentralized network of networks with no central authority, its social, technological, economic, and political impact has been profound. Its impact continues to increase every year as it pushes the envelope of the potential of decentralization via technology into every area of human life.

  But to enable secure and interoperable transactions in Web 3.0, new tools must be made available.

 
