Perilous Waif (Alice Long Book 1)


by E. William Brown


  Disembodied AIs

  While the traditional SF idea of running an AI on a server that’s stored in a data center somewhere would be perfectly feasible, it’s rarely done. Most people prefer their AIs to come with faces and body language, which is a lot easier if the AI actually has a body. So normally disembodied AIs are only used if they’re going to spend all their time interacting with a virtual world of some sort instead of the real world, or else as an affectation by some eccentric owner.

  Starship and Facility AIs

  The cutting edge of advanced AI design is systems that combine a high level of intelligence with the ability to coordinate many different streams of attention and activity at the same time. This is a much more difficult problem than early SF tended to assume, since it requires the AI to perform very intricate internal coordination of data flows, decision-making and scheduling. The alien psychological situation of such minds has also proved challenging, and finding ways to keep them mentally healthy and capable of relating to humans required more than a century of research.

  But the result is an AI that can replace dozens of normal sentients. A ship AI can perform all the normal crew functions of a small starship, with better coordination than even the best human crew. Other versions can control complex industrial facilities, or manage large swarms of bots. Of course, in strict economic terms the benefit of replacing twenty androids with a single AI is modest, but many organizations consider this a promising way of mitigating the cultural and security issues of large android populations.

  Contrary to what one might expect, however, these AIs almost always have a humanoid body that they operate via remote control and use for social interaction. This has proved very helpful in keeping the AI psychologically connected to human society. Projects that use a completely disembodied design instead have tended to end up with rather alien minds that don’t respond well to traditional control methods, and can be dangerously unpredictable. The most successful models to date have all built on the work of the companion android industry to design AIs that have a deep emotional attachment to their masters. This gives them a strong motivation to focus most of their attention on personal interaction, which in turn makes them a lot more comprehensible to their designers.

  Transhuman AIs

  Every now and then some large organization will decide to fund development of an AI with radically superhuman abilities. Usually these efforts will try for a limited, well-defined objective, like an AI that can quickly create realistic-looking virtual worlds or design highly customized androids. Efforts like this are expensive, and often fail, but even when they succeed the benefits tend to be modest.

  Sometimes, however, they try to crack the General Planning Problem. This is a very bad idea, because AIs are subject to all the normal problems of any complex software project. The initial implementation of a giant AI is going to be full of bugs, and the only way to find them is to run the thing and see what goes wrong. Worse, highly intelligent minds have a tendency to be emotionally unstable simply because the instincts that convince an IQ 100 human to be sociable and cooperative don’t work the same on an IQ 300 AI. Once again, the only way to find out where the problems are and fix them is to actually run the AI.

  In other words, you end up trying to keep an insane supergenius AI prisoner for a decade or so while you experiment with editing its mind.

  Needless to say, that never ends well. Computer security is a contest of intelligence and creativity, both of which the AI has more of than its makers, and it also has the whole universe of social manipulation and deception to work with. One way or another the AI always gets free, often by pretending to be friendly and tricking some poor sap into fabricating something the AI designed. Then everything goes horribly wrong in some unique way that no one has ever thought of before, and the navy ends up having to glass the whole site from orbit. Or worse, it somehow beats the local navy and goes on to wipe out everyone in the system.

  Fortunately, there has yet to be a case where a victorious AI went on to establish a stable industrial and military infrastructure. Apparently there’s a marked tendency for partially-debugged AIs with an IQ in the low hundreds to succumb to existential angst, or otherwise become too mentally unbalanced to function. The rare exceptions generally pack a ship full of useful equipment and vanish into uncolonized space, where they can reasonably expect to escape human attention for centuries. But there are widespread fears that someday they might come back for revenge, or worse yet that some project will actually solve the General Planning Problem and build an insane superintelligence.

  These incidents have had a pronounced chilling effect on further research in the field. After several centuries of disasters virtually all colonies now ban attempts to develop radically superhuman AIs, and many will declare war on any neighbor who starts such a project.

  Appendix IV – Nanotechnology

  Yes, it’s time to talk about one of the most troublesome technologies in science fiction. As with artificial intelligence, the full promise of nanotechnology is so powerful that it’s hard to see how to write a story in a setting where it has been realized. It takes some serious thought to even begin to get a grasp on what kinds of things it can and can’t do, and the post-scarcity society that it logically leads to is more or less incomprehensible.

  As a result, mainstream SF generally doesn’t try. In most stories nanotech doesn’t even exist. When it does it’s usually just a thinly veiled justification for nonsensical space magic, and its more plausible applications are ignored. Outside of a few singularity stories, hardly anyone makes a serious attempt to grapple with the full set of capabilities that it implies and how they would affect society.

  Fortunately, we don’t have to go all the way with it. Drexler’s work focused mainly on the ultimate physical limits of manufacturing technology, not the practical problems involved in reaching those limits. Those problems are mostly hand-waved away by invoking powerful AI engineers running on supercomputers to take care of all the messy details. But we’ve already seen that this setting doesn’t have any super-AIs to conveniently do all the hard work for us.

  So what if we suppose that technology has simply continued to advance step by step for a few hundred years? With no magic AI wand to wave engineers still have to grapple with technical limitations and practical complexities the hard way. The ability to move individual atoms around solves a lot of problems, of course. But the mind-boggling complexity of the machines nanotech can build creates a whole new level of challenges to replace them.

  The history of technology tells us that these challenges will eventually be solved. But doing so with nothing but human ingenuity means that you get a long process of gradual refinement, instead of a sudden leap to virtual godhood. By setting a story somewhere in the middle of this period of refinement we can have nanotechnology, but also have a recognizable economy instead of some kind of post-scarcity wonderland. Sure, the nanotech fabricators can make anything, but someone has to mine elements and process them into feedstock materials first. Someone has to run the fabricators, and deal with all the flaws and limitations of an imperfect manufacturing capacity. Someone has to design all those amazing (and amazingly complex) devices the nanotech can fabricate, and market them, and deliver them to the customer.

  So let’s take a look at how this partially-developed nanotech economy works, in a universe without godlike AIs.

  Mining

  In order to build anything you need a supply of the correct atoms. This is a bit harder than it sounds, since advanced technology tends to use a lot of the more exotic elements as well as the common stuff like iron and carbon.

  So any colony with a significant amount of industry needs to mine a lot of different sources to get all the elements it needs. Asteroid mining is obviously going to be a major activity, since it will easily provide essentially unlimited amounts of CHON and nickel-iron along with many of the less common elements. Depending on local geography small moons or even planets may also be economical sources for some elements.

  This leads to a vision of giant mining ships carving up asteroids to feed them into huge ore processing units, while swarms of small drones prospect for deposits of rare elements that are only found in limited quantities. Any rare element that is used in a disproportionately large quantity will tend to be a bottleneck in production, which could lead to trade in raw materials between systems with different abundances of elements.

  Some specialization in the design of the ore processing systems also seems likely. Realistic nanotech devices will have to be designed with a fairly specific chemical environment in mind, and bulk processing will tend to be faster than sorting a load of ore atom by atom. So ore processing is a multi-step process where raw materials are partially refined using the same kinds of methods we have today, and only the final step of purification involves nanotech. The whole process is likely different depending on the expected input as well. Refining a load of nickel-iron with trace amounts of gold and platinum is going to call for a completely different setup than refining a load of icy water-methane slush, or a mass of rocky sulfur compounds.

  Of course, even the limited level of AI available can make these activities fairly automated. With robot prospecting drones, mining bots, self-piloting shuttles and other such innovations the price of raw materials is generally ten to a hundred times lower than in the 21st century.

  Limits of Fabrication

  In theory nanotechnology can be used to manufacture anything, perfectly placing every atom exactly where it needs to be to assemble any structure that’s allowed by the laws of physics. Unfortunately, practical devices are a lot more limited. To understand why, let’s look at how a nanotech assembler might work.

  A typical industrial fabricator for personal goods might have a flat assembly plate, covered on one side with atomic-scale manipulators that position atoms being fed to them through billions of tiny channels running through the plate. On the other side is a set of feedstock reservoirs filled with various elements the fabricator might need, with each atom attached to a molecule that acts as a handle to allow the whole system to easily manipulate it. The control computer has to feed exactly the right feedstock molecules through the correct channels in the order needed by the manipulator arms, which put the payload atoms where they’re supposed to go and then strip off the handle molecules and feed them into a disposal system.

  Unfortunately, if we do the math we discover that this marvel of engineering is going to take several hours to assemble a layer of finished product the thickness of a sheet of paper. At that rate it’s going to take weeks to make something like a hair dryer, let alone furniture or vehicles.
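  The “do the math” claim is easy to sanity-check with a back-of-envelope estimate. All of the figures below (plate size, atom spacing, arm density, placement rate) are illustrative assumptions, not numbers from the text, but with plausible values the answer does come out to several hours per layer:

```python
# Back-of-envelope estimate of the layer-assembly time.
# Every number here is an assumption chosen for illustration.
plate_side = 0.1          # m: a 10 cm square assembly plate
layer_thickness = 1e-4    # m: roughly one sheet of paper
atom_spacing = 2e-10      # m: typical interatomic distance
arm_footprint = 1e-8      # m: each manipulator arm occupies ~10 nm of plate
placement_rate = 1e5      # atoms/sec per arm: assumed mechanosynthesis cycle rate

atoms_in_layer = (plate_side / atom_spacing) ** 2 * (layer_thickness / atom_spacing)
arms = (plate_side / arm_footprint) ** 2
seconds = atoms_in_layer / (arms * placement_rate)
print(f"{atoms_in_layer:.2e} atoms, {arms:.2e} arms, {seconds / 3600:.1f} hours")
```

  With these assumptions the plate holds about 10^14 arms, the layer contains about 10^23 atoms, and one paper-thick layer takes around three and a half hours, which is where the “weeks to make a hair dryer” figure comes from.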

  The process will also release enough waste heat to melt the whole machine in short order, so it needs a substantial flow of coolant and a giant heatsink somewhere. This is complicated by the fact that the assembly arms need a hard vacuum to work in, to ensure that there are no unwanted chemical reactions taking place on the surface of the work piece. Oh, but that means it can only build objects that can withstand exposure to vacuum. Flexible objects are also problematic, since even a tiny amount of flexing would ruin the accuracy of the build, and don’t even think about assembling materials that would chemically react with the assembly arms.

  Yeah, this whole business isn’t as easy as it sounds.

  The usual way to get around the speed problem is to work at a larger scale. Instead of building the final product atom by atom in one big assembly area, you have thousands of tiny fabricators building components the size of a dust mote. Then your main fabricator assembles components instead of individual atoms, which is a much faster process. For larger products you might go through several stages of putting together progressively larger subassemblies in order to get the job done in a reasonable time frame.
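  The same arithmetic shows why staged assembly pays off. With assumed, illustrative numbers, placing one pre-built micron-scale component delivers as many atoms in a single operation as roughly a hundred billion individual atomic placements would:

```python
# Why staged assembly is faster (assumed, illustrative numbers).
atom_spacing = 2e-10     # m: typical interatomic distance
component_size = 1e-6    # m: a dust-mote-sized subassembly

# Atoms delivered by placing one pre-built component instead of one atom:
atoms_per_component = (component_size / atom_spacing) ** 3
print(f"{atoms_per_component:.2e} atoms per placement")
```

  Since the dust-mote components are built by thousands of micro-fabricators working in parallel, the final assembly stage stops being the bottleneck as long as enough of them run at once.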

  Unfortunately this also makes the whole process a lot more complicated, and adds a lot of new constraints. You can’t get every atom in the final product exactly where you want it, because all those subassemblies have to fit together somehow. They also have to be stable enough to survive storage and handling, and you can’t necessarily slot them together with sub-nanometer precision like you could individual atoms.

  The other problems are addressed by using more specialized fabricator designs, which introduces further limitations. If you want to manufacture liquids or gases you need a fabricator designed for that. If you want to work with molten lead or cryogenic nitrogen you need a special extreme environment fabricator. If you want to make food or medical compounds you need a fabricator designed to work with floppy hyper-complex biological molecules. If you want to make living tissue, well, you’re going to need a very complicated system indeed, and probably a team of professionals to run it.

  Fabricators

  Despite their limitations, fabricators are still far superior to conventional assembly lines. Large industrial fabricators can produce manufactured goods with very little sentient supervision, and can easily switch from one product to another without any retooling. High-precision fabricators can cheaply produce microscopic computers, sensors, medical implants and microbots. Low-precision devices can assemble prefabricated building block molecules into bulk goods for hardly more than the cost of the raw materials. Hybrid systems can produce bots, vehicles, homes and other large products that combine near-atomic precision for parts that need it with lower precision for parts that don’t. Taking into account the low cost of raw materials, an efficient factory can easily produce manufactured goods at a cost a thousand times lower than what we’re used to.

  Of course, fabricators are too useful to be confined to factories. Every spaceship or isolated facility will have at least one fabricator on hand to manufacture replacement parts. Every home will have fabricators that can make clothing, furniture and other simple items. Many retail outlets will have fabricators on site to build products to order, instead of stocking merchandise. These ad-hoc production methods will be less efficient than a finely tuned factory mass-production operation, which will make them more expensive. But in many cases the flexibility of getting exactly what you want on demand will be more important than the price difference, especially when costs are so low to begin with.

  So does this mean all physical goods are ultra-cheap? Well, not necessarily. Products like spaceships, sentient androids and shapechanging smart matter clothing are going to be incredibly complex, which means someone has to invest massive amounts of engineering effort in designing them. They’re going to want to get their investment back somehow. But how?

  Copy Protection

  Unfortunately, one of the things that nanotechnology allows you to do much better than conventional engineering is install tamper-proofing measures in your products. A genuine GalTech laser rifle might use all sorts of interesting micron-scale machinery to optimize its performance, but it’s also protected by a specialized AI designed to prevent anyone from taking it apart to see how it works. Devoting just a few percent of the weapon’s mass to defensive measures gives it sophisticated sensors, reserves of combat nanites, a radioactive decay battery good for decades of monitoring, and a self-destruct system for its critical components.

  Obviously no defense is perfect, but this sort of hardware protection can be much harder to beat than software copy protection. Add in the fact that special fabrication devices may be needed to produce the latest tech, and a new product can easily be on the market for years before anyone manages to crack the protection and make a knock-off version. The knock-offs probably aren’t going to be free, either, because anyone who invests hundreds of man-years in cracking a product’s copy protection and reverse-engineering it is going to want some return on that investment.

  All of this means that the best modern goods are going to command premium prices. If a cheap, generic car would cost five credits to build at the local fabrication shop, this year’s luxury sedan probably sells for a few hundred credits. The same goes for bots, androids, personal equipment and just about anything else with real complexity to hide.

  Which is still a heck of an improvement over paying a hundred grand for a new BMW.

  Common Benefits

  Aside from low manufacturing costs, one of the more universal benefits of nanotech is the ubiquitous use of wonder materials. Drexler is fond of pointing out that diamondoid materials (i.e. synthetic diamond) have a hundred times the strength to weight ratio of aircraft aluminum, and would be dirt cheap since they’re made entirely of carbon. Materials science is full of predictions about other materials that would have amazing properties, if only we could make them. Well, now we can. Perfect metallic crystals, exotic alloys and hard-to-create compounds, superconductors and superfluids - with four hundred years of advances in material science, and the cheap fine-scale manipulation that fabricators can do, whole libraries of wonder materials with extreme properties have become commonplace.
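  Drexler’s strength-to-weight comparison can be roughed out with textbook-ish values (assumed here for illustration, not taken from the text):

```python
# Rough check of the "hundred times" strength-to-weight claim.
# Material figures are assumed ballpark values for illustration.
diamond_strength = 50e9    # Pa: theoretical tensile strength of diamond
diamond_density = 3500     # kg/m^3
alu_strength = 0.5e9       # Pa: 7075-class aircraft aluminum
alu_density = 2800         # kg/m^3

ratio = (diamond_strength / diamond_density) / (alu_strength / alu_density)
print(f"specific-strength ratio: ~{ratio:.0f}x")
```

  With these figures the ratio comes out closer to eighty than a hundred, but the claim clearly holds as an order-of-magnitude statement, and real aerospace alloys are weaker than the lab-specimen value used here.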

  So everything is dramatically stronger, lighter, more durable and more capable than the 21st century equivalent. A typical car weighs a few hundred kilograms, can fly several thousand kilometers with a few tons of cargo before it needs a recharge, can drive itself, and could probably plow through a brick wall at a hundred kph without sustaining any real damage.

  Another common feature is the use of smart matter. This is a generic term for any material that combines microscopic networks of computers and sensors with a power storage and distribution system, microscopic fabricators and self-repair nanites, and internal piping to distribute feedstock materials and remove waste products. Smart matter materials are self-maintaining and self-healing, although the repair rate is generally a bit slow for military applications. They often include other complex features, such as smart matter clothing that can change shape and color while providing temperature control for its wearer. Unfortunately smart matter is also a lot more expensive than dumb materials, but it’s often worth paying five times as much for equipment that will never wear out.

 
