
Clarkesworld Magazine Issue 112


by Neil Clarke


  One April day, I came home to find that Mom had left the computer running. There had been an email from Ben, with an attachment, and she had left it as a screen saver. He’d gone to Arizona for spring break. “This was as close as I could get to him,” Ben had written. The scene had been shot under a bright blue sky, with red cliffs in the distance. There was nothing there, only scrub brush and a dirt road. And in the distance, a station wagon moved steadily away from us, a long plume of dust hanging in the still air behind him.

  First published in Asimov’s Science Fiction, July 2012.

  About the Author

  Megan Lindholm lives on a small farm in Roy, Washington, where she shares a word processor with Robin Hobb and raises chickens, geese, ducks and other random animals. Although she has not written a novel in a while, she continues to produce (erratically) short fiction. “Old Paint” is dedicated fondly to the memory of a blue Chevy Celebrity wagon that carried her and her kids to many an SF convention. She still carries a small piece of blue dashboard plastic in her right knee from a memorable collision it survived.

  Our Future is Artificial

  Sofia Siren

  Artificial intelligence, or AI, is a growing technological field closely tied to robotics. It is also a common plot device in many modern movies and books, but real AI, the kind currently growing into its infancy in laboratories around the world, remains a bigger mystery. To understand artificial intelligence, it’s important to understand natural intelligence. What are the basic building blocks of intelligence, and what is the threshold that moves a creature from simply existing to being intelligent? What makes us, humans, intelligent? For those who study AI, the key factors that determine intelligence are knowledge, planning, learning, natural language, perception, the ability to reason, and the ability to manipulate the physical world.

  The thing that separates humans from animals on an intellectual level is the concept of self. Understanding that one exists is a huge jump on the intelligence scale. Some animals can recognize themselves in mirrors, but the only animal to date that has asked a human an existential question is the parrot Alex, who asked its keeper, “What color am I?” This simple question is an excellent example of intelligence. In AI, the threshold that a computer program has to pass to be labeled as true artificial intelligence is the sense of self, the ability to ask existential questions that the AI has formulated without a human’s guiding hand.

  Aspects of artificial intelligence have appeared in science and literature for hundreds of years, starting with Ada Lovelace, a visionary of the modern computer in the 1830s and 1840s, and the 1872 novel Erewhon by Samuel Butler, which depicts a society that banished machines for fear they would evolve intelligence and learn to replicate themselves. Since these two pioneers, many novels and stories have depicted machines and AI that have slowly started to crawl toward reality.

  Some current research has taken real steps toward achieving true AI, including IBM Watson, Rensselaer Polytechnic Institute’s Nao bots, and Google’s Deep Dream. Of the three, IBM Watson is the oldest and is best known for winning at Jeopardy in 2011. It was a huge success in showing how a machine could beat human contestants at simple question-and-answer tasks. But surely the human contestants could also have won if they had memorized all of Wikipedia before the contest.

  What IBM Watson truly consists of is a huge amount of data, sorted and filtered in a way that allows it to return an array of candidate answers ranked by a confidence score. It is basically a powerful search engine that understands natural language. This isn’t a huge feat today, when most search engines do the same thing. Each answer is built out of thousands of queries run against databases full of information: imagine memorizing all the dictionaries and wikis in the world.
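  The filter-and-rank idea can be sketched in a few lines. This is a toy illustration, not IBM’s actual system; the knowledge base, the word-overlap scoring rule, and the example facts are all invented for demonstration.

```python
# Toy sketch (not IBM's real pipeline): rank candidate answers to a
# natural-language question by a crude word-overlap confidence score.
def rank_answers(question, knowledge_base):
    q_words = set(question.lower().split())
    scored = []
    for answer, text in knowledge_base.items():
        overlap = q_words & set(text.lower().split())
        confidence = len(overlap) / len(q_words)  # crude "accuracy index"
        scored.append((answer, round(confidence, 2)))
    # Watson-style output: an array of answers ordered by confidence
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical mini knowledge base
kb = {
    "Jupiter": "largest planet in the solar system",
    "Mercury": "smallest planet closest to the sun",
}
print(rank_answers("which planet is the largest", kb))
```

  A real system adds natural language parsing, evidence retrieval, and far subtler scoring, but the shape of the output, many answers each carrying a confidence, is the same.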

  IBM Watson is a fancy natural language search engine, but not really intelligent even though it does touch knowledge and natural language in the definition of AI. Today, IBM Watson is marketed toward health care industries as a diagnostics tool for doctors—not to replace doctors, but as a search tool to help them with continued education and in areas outside their expertise.

  The Nao bots are a much newer enterprise. Built at the Rensselaer AI and Reasoning (RAIR) Lab of the Rensselaer Polytechnic Institute in New York, they fascinated the world for a few days in July 2015. The demonstration used three Aldebaran Nao humanoid robots. The robots were given a problem in which two of them were rendered mute with a fictitious “dumbing pill.” When asked whether they could speak, the one that wasn’t affected by the pill answered, “I don’t know,” then quickly amended it to “Sorry, I know now. I was able to prove that I was not given a dumbing pill.”

  The robot was able to understand the question through natural language processing, and it was able to amend its answer after it heard itself speak. This means it showed a base of knowledge, natural language processing, and perception, with a hint of reasoning. And since they are robots, they were able to touch the physical world in a small way. They comprehended a spoken question, much as Siri or Google’s voice search does, but took it a step further by amending their own knowledge base with new information as it arrived. This technology is still at a very early stage, but it has improved tremendously in the last five years.
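  The amend-after-perception step can be caricatured in a few lines. This is a hypothetical sketch, not the RAIR Lab’s actual reasoning system, which uses formal logic: the idea is only that the robot cannot rule out the dumbing pill until it perceives its own voice, at which point it adds that fact to its knowledge base and revises its answer.

```python
# Toy model of a robot amending its own knowledge base after
# perceiving itself speak (invented for illustration).
class Robot:
    def __init__(self, name):
        self.name = name
        self.facts = set()  # the robot's knowledge base

    def answer_can_i_speak(self):
        if ("can_speak", self.name) in self.facts:
            return "I was not given a dumbing pill."
        # Producing any spoken answer is new perceptual evidence;
        # record it so the next answer can use it.
        self.facts.add(("can_speak", self.name))
        return "I don't know."

robot = Robot("nao1")
first = robot.answer_can_i_speak()   # before hearing itself
second = robot.answer_can_i_speak()  # amended with the new fact
```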

  Google’s Deep Dream was also revealed in July 2015, and it is something on a completely different level. It isn’t artificial intelligence in the strict sense, but a learning algorithm (an algorithm being, loosely, a recipe written in code) built around image recognition software. Even so, it is a huge step forward for AI.

  A deeply ingrained source of human intelligence is pattern and image recognition. Programming something that even starts to resemble plausible image recognition is something that hasn’t been properly done before, simply because it is very difficult to accomplish. Imagine you are a machine, a simple algorithm, trying to understand an image. You have to break it down into pieces you can understand, from pixels to machine language.

  Deep Dream is built on a deep neural network: a computer program that isn’t pre-programmed, but is taught to learn from examples. It is fed thousands of pictures of a subject, e.g. a dog, and told, “this is a picture of a dog.” It then starts to pick out the shapes and patterns that, deep in its layers, indicate that a picture contains a dog, and it can make a plausible guess about whether other pictures contain one. That is why so many of the images that come out of Deep Dream are filled with dogs or eyes or other common objects that were fed through its learning process.
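  Learning from labeled examples can be illustrated with the simplest possible learner, a single perceptron. Deep Dream’s network is vastly deeper, and the two “image features” here, furriness and snout length, are invented stand-ins for real pixels; the point is only the shape of the loop: guess, compare with the label, nudge the weights.

```python
# Minimal learning-from-examples sketch: a one-neuron "network"
# taught to separate dog-like from non-dog-like feature pairs.
def train(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x0, x1), label in examples:  # label: 1 = "dog", 0 = "not dog"
            guess = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            error = label - guess          # 0 when the guess was right
            w = [w[0] + lr * error * x0, w[1] + lr * error * x1]
            b += lr * error
    return w, b

def predict(model, features):
    w, b = model
    return 1 if w[0] * features[0] + w[1] * features[1] + b > 0 else 0

# Hypothetical training set: (furriness, snout length), scaled 0-1.
examples = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
model = train(examples)
print(predict(model, (0.85, 0.9)))  # a dog-like input
```

  A deep network stacks thousands of such units in layers, which is what lets it find shapes and textures rather than just a straight dividing line.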

  What is extraordinary about these algorithms is what they teach scientists about how something, or someone, learns. As humans we learn from a non-stop video feed to our brains from infancy, which takes in images and associates them with the correct terms and uses, along with the information our other senses provide. That amounts to an enormous quantity of high-definition information every day. If Deep Dream were given that quantity of information, it could learn to associate patterns and images on a completely different level.

  These three examples of the current level of AI show how far we have come and how far we still have to go in creating a strong AI, but the seed is there. IBM Watson was a start, with a huge collection of information that could be filtered and sorted to beat a simple game of Jeopardy. The Nao bots showed how far we still have to go to achieve true natural language processing and robot decision-making. And Deep Dream shows how a mechanical, programmed neural network filters and understands information.

  The next step in AI is better understanding, be it image or natural language processing. Both sources of information will give an AI new knowledge of its surroundings and a way to touch the real world. Deep Dream and the Nao bots have come a long way, but there is still much to do to reach a truly capable AI. In the future, this technology could boost machines’ ability to understand their surroundings: with smart technology we could have traffic lights where you don’t have to push a button to cross. It could serve as tools in the house (your robot vacuum can skirt a fallen object) and as companions (a lonely person can have a pet that responds to its surroundings, or a nurse that will deliver the right pills for the day).

  There have also been many concerns about AI. In 2014, Stephen Hawking claimed that “the development of full artificial intelligence could spell the end of the human race.” Quite a few notable people in the computer industry agree, among them Bill Gates (founder of Microsoft) and Elon Musk (SpaceX and Tesla Motors).

  A technological singularity is a theoretical point in time when a true “strong AI” is created that can improve upon itself. AI could then usher in a new era in which smart machines design technical improvements to themselves in a never-ending cycle. This raises major concerns in areas such as military robotics, where systems are constantly made more sophisticated.

  War is a great motivator for technological advancement. Unmanned aerial vehicles, or UAVs, have become commonplace in the last decade. They started as remotely piloted drones used to bomb war zones, but now they are sophisticated enough to be programmed with a target and sent on their way. In a hypothetical situation, a UAV is given the command “go destroy target X.” To ensure the machine isn’t hacked on the way to its target, it is kept in a closed loop: no one can modify the command after it has been given. What if the target turns out to be the wrong one? Or imagine a world where the machines themselves gather evidence and select their targets.

  In literature and movies we have already explored the possibilities of strong AI. There haven’t been many stories about AI’s gradual progression; most sit on the cusp of either AI sentience or AI dominance, because it makes for a better story. Most AI science fiction revolves around an AI questioning the very essence of humanity. There is HAL, from 2001: A Space Odyssey, driven mad by conflicting orders, but there is also Baymax the healthcare robot, from Big Hero 6, who sacrifices himself for his friend. One stands for the destruction of humanity, the other for preserving it.

  There is a clear divide in science fiction, where the AI symbolizes either the good or the bad in humanity. The novel Do Androids Dream of Electric Sheep? by Philip K. Dick, better known through its film adaptation Blade Runner, offers an interesting thought experiment about the real difference between humans and androids.

  If we manage to create this technology, are we, as humans and their creators, superior? Do we have the right to use them as property forever? At what point does an AI reach the same level as a human? If we do achieve the singularity, does it mean they are the same as us? Will there always be a fear of an AI uprising, of an AI-controlled world where the roles are flipped and we become the slaves, as in the 1999 film The Matrix or the Hyperion Cantos novels by Dan Simmons?

  In recent years there have been multiple movies, such as Chappie and Ex Machina, that explore the moral dilemma of creating a sentient being. How far can we go in creating another creature that is like us? Can we impose Asimov’s three laws of robotics on a sentient being when we cannot uphold them ourselves?

  Current AI research is done to aid humans, whether in military applications or as helpers for the elderly. These systems need to be smart enough to be useful, and as the technology evolves, debates about ethics and a possible singularity will continue. In the 1981 novel Golem XIV by Stanislaw Lem, an AI gains an intelligence beyond what a human is capable of and transcends to a level unattainable by us.

  Should we aspire to a robot maid from The Jetsons, or a self-learning robot that will be good or evil based on its upbringing? Or will we create a being that transcends beyond our reach, building a world of logic and reason that our primitive brains are incapable of understanding?

  It is impossible to say when, or if, anything like the singularity will happen, but the question raises the morality of what the future of artificial intelligence will hold and how humans will traverse this new era.

  About the Author

  Sofia Siren is a world traveler, having lived in Finland, the USA, and Japan, and currently residing in Canada. She has a master’s degree in Software Engineering. Her Clarkesworld article is her first non-academic published article. You can contact her on all social networks @bluphacelia.

  Painterly Cyborgs and Distant Horizons:

  A Conversation with Julie Dillon

  Chris Urie

  I first came across the beautiful artwork of Julie Dillon right here at Clarkesworld. Since the September 2010 issue, her illustrations have graced the cover of this magazine fifteen times. Her style is unique and instantly recognizable for its engaging use of color and naturally flowing shapes. From elemental giants to cyborgs in need of an upgrade, Dillon’s work treats us to gorgeous visions of the future and otherworldly lands.

  Julie Dillon is a freelance illustrator from Northern California who has done art for trading cards, book covers, magazine illustrations, and perfume labels. In addition to contributing covers to Clarkesworld, she has also worked with Simon & Schuster, Tor Books, Penguin Books, Oxford University Press, Wizards of the Coast, and more. She’s been nominated twice for a World Fantasy Award, won two Chesley Awards, and was recently awarded the 2015 Hugo Award for Best Professional Artist.

  When did you realize you had a knack for art?

  I always enjoyed drawing and just did it all the time for fun, but I didn’t think what I did was anything particularly special or different until high school when people seemed to start taking note of what I was doing. It wasn’t very good work, all things considered, but it was pretty good for a high school level (not because I had a natural talent, but just because I’d already been drawing for quite a while just for fun), and it was enough that my art teachers were very enthusiastic about what I did. Some people were even willing to pay me for my work, which was a very exciting development for me.

  I’ve noticed a few motifs or themes in your artwork, such as glowing orbs at the center of giant humanoid beings. Have you noticed any other themes or patterns in your work?

  I’ve noticed that I tend to use a lot of circular shapes, both as framing elements and to draw the eye to focal points. It’s an easy way to move the eye around a piece, and I just like the way it looks. I also have noticed that a lot of the figures in my work are looking upwards and/or to the right of the image; I’m not sure why, but I feel like it makes it seem more positive, like they are moving forward towards the future or distant horizon.

  Some science fiction art tends to have an artificial feel to it, but your work is more natural, flowing, and elemental. What is it about organic shapes and subjects that interests you?

  I think it’s just that it comes easier to me. Doing clean crisp sharp lines and rigid shapes has always been difficult for me, and I think what people call my more “painterly” look is a result of struggling with that. I feel like there is more freedom and leeway with more flowing organic shapes, more ways that they can be utilized in a composition. I just don’t have the patience for what it takes to do really tight clean scifi work. I admire artists who can do it, though; it’s a skill I wish I had in my toolkit.

  You have a very distinct style and work with wonderfully varied colors. How do you go about choosing a color palette for a piece?

  Often it’s intuitive. I usually have a clear idea from the beginning what I want the color palette to be, and I just go with what feels right. But if I don’t know for certain how to approach something, I will look through a folder I have of art and photos that have inspirational color schemes to help give me ideas. When in doubt, look at color combinations that other artists have utilized, and see what works and what doesn’t, and what resonates most with you.

  How is creating artwork for something like American McGee’s Alice different from creating a book or magazine cover?

  American McGee’s Alice was my first and only concept art gig, and it was pretty short lived. It was more about generating ideas quickly for the purpose of illustrating potential gameplay mechanics, and it was incredibly open-ended. It was a challenge since it was different from how I usually worked, and I ended up only doing a few pieces. With a book or magazine cover, the intent is to make a polished finished piece that best represents the story or concept—you are making the final product rather than running through a ton of illustrated ideas that might not lead anywhere.

  What does your workflow for a piece look like? Do you start with sketches on paper or do you do everything on a computer?

  I used to do sketches on paper, but over time I found that I personally worked a lot faster by doing the entire process on the computer, from the thumbnail sketches to the finals. I don’t have to take the time out to scan things, and if I need to adjust a sketch, I can do it quickly in Photoshop instead of having to redo the whole thing. Plus, it lets me get work done when I’m traveling and all I have is a laptop but no scanner.

  What do you find most challenging about creating a piece of artwork?

  I think the early decision-making is the most difficult part of it. Trying to figure out what exactly I want to create, how to best go about illustrating the concept I have in mind, and whether or not the approach and/or concept are worth developing, reworking, or scrapping. Once I know what it is I want to do, it usually is much smoother sailing. The main thing that slows me down on any given piece is not being able to decide what approach is best, and I end up wasting time repainting things over and over sometimes.

  Do you ever doodle on restaurant napkins?

  If an idea jumps into my mind for a picture, I will try to scribble a thumbnail down on whatever scrap of paper is handy, but I don’t really casually doodle very much. I probably should, though! I spend so much of my time working on my art that when I have free time I usually try to give my brain a break by doing other things.

 
