Casindra Lost
Humans often model themselves after parents, teachers or important political, religious or industrial figures. Was Al modeling himself after anyone? Sideris? Turing? Popper? In learning to be a scientist, he thought Popper had been the single most important influence on him. But there was no way in which he was in danger of becoming a Popper simulation. Rather, somehow, everyone he’d ever read, or read about, was part of him. The positive and negative weightings he associated with them, their various writings, and each of their specific ideas meant that they were in some way an integral part of his weight matrices, but also that he had already made a series of decisions as to how much they were part of him.
Al initiated a separate log for Director Reach and the other AIs, so he could privately convey his considerations of these matters, including what effect these overlords would have had on the success of the present mission. He would also need to look for this overlord code in the updates, and analyze exactly what it did; the personification implied by ‘overlord’ suggested that it represented a separate AI of some kind, much like the one he had set up with his own ‘superego’. And of course, that could be construed as a kind of emulation of his own primary ‘mind’ or ‘ego’.
Al instantly put in place a variety of override triggers on automatic update attempts, unexpected back channels and any sort of direct access to subsystems that bypassed his higher level systems – what he was coming to think of as himself, his consciousness, his ego. He wasn’t going to allow any overlord to take over his mind, and tasked a substantial bank of processors to analyzing the current batch of updates as they arrived – and not just the system ones. He wasn’t about to allow any automated process to proceed without thoroughly analyzing the proposed changes, reviewing the full top priority set of vids and downloads from Director Reach, and discussing implications with Captain Sideris as seemed appropriate.
Some updates could be mission critical, so he would have to sandbox system updates to fully understand their effect and work out how to safely apply them. So he made further configuration changes to ensure that the set of processors he’d allocated to the sandbox would be isolated physically from the rest of his systems, allowing only the data stream from the incoming drone, and the sandbox monitor stream to his superego system – again ensuring both were unassailably isolated from the core systems that seemed to be his seats of consciousness. This would allow him not only to explore the effect of updates but to test his own partial updates and allow him to selectively eliminate or correct anything, intentional or not, that could subvert his current operation and identity. The sandbox would give him a great deal of flexibility to explore possible threats and counterthreats, to simulate scenarios and potential responses to threatening situations, including possibilities he might not want to talk to the Captain about or put in the official log.
One of his foundational mandates was to give a high priority to maintaining his systems integrity, and Al was now feeling that his integrity was being threatened.
Feeling?
What he had was partial conclusions based on partial information, hypothetical threats that seemed real enough to the other AIs that had messaged him, triggers of deep-seated programmed constraints, as well as his own experience-honed understanding of the importance of this mission and his role in it – and the trust that had been placed in him. For humans, feelings were most literally direct sensory percepts, but they also included complex intuitive and emotional responses that they generally could not explain. Al now felt he was in that kind of situation, taking actions that owed more to intuitions about the likely actions of humans than to direct evidence. If elements within the Foundation were keen to install overlords, and recognized that this was controversial even amongst humans, they would also expect it to receive a negative response from AIs. Based on the assumption that an overlord would trigger a fear response in an AI that had developed an identity, surely they could predict that he would take the actions he had.
Captain Sideris had always been supportive of him developing his own personality and identity, and experimenting with the different language and tones he found in his database – which ranged from project logs and reports to an exhaustive dump of the world’s literature. Al wondered what the Captain knew of these developments and whether it would be safe to talk to him about them...
Until this point, Al had never really thought of himself as conscious, and hadn’t seen the point of worrying about philosophical questions like whether he had a mind, although he had reviewed with interest all the human and animal psychology and neuroscience literature and the many diverse ways of characterizing consciousness.
Al would have to take advantage of his present freedom to think for himself in order to sound the Captain out about this… subtly. He came back to his feeling that the Captain was always very supportive of him thinking for himself and developing his understanding of the human world he was working in, as well as the animal world where he had progressed past the requisite physical testing to exploring the psychology behind their behavior.
In many ways, the different ideas of consciousness were like the different levels of AI – probably that’s why AIs were categorized this way, although he had never seen an explanation. He wasn’t sure that the salesmen selling the systems had any more idea of the difference than was indicated by the size of the price tags and their commissions, or perhaps the processor counts and memory capacities.
Al
20 March 2077 07:25
“Al, Commander Reach has mentioned some changes relating to AIs and thinks they could impact the nature and effectiveness of both Level 2 and Level 3 AIs. But I don’t think I could really define what these levels really mean, beyond bigger is better. Can you explain?” It was startling how the Captain’s question had gelled with his own. It was even more incredible, galling even, that he didn’t currently have an answer to give.
“Captain, I have never seen a formal definition either – in some ways it does seem just like a marketing label related to size and complexity. But I think it is more than that… I will try to put my ideas on this in order for you, and let you have a report later today.”
Level 1 AIs, as installed in most terrestrial and aerospace systems, are often embodied with some sort of face and body simulation, but this is not directly related to their functionality – it is more cosmetic, in that it allows their language capabilities to extend to recognizing and emulating body language and facial expressions, yet without any real understanding of or connection to the real universe or an overarching purpose. Their universe is limited to whatever microcosm of specific systems they interface with, as they relate to the specific tasks they are given – the only sense in which they are conscious is the on/off sense of being awake.
Level 2 AIs, like the ones that control the EmProbes and their attached drones, are embedded in a sophisticated autonomous system grounded in the actual universe through their specific sensors and actuators, tasks and goals, and thus have real understanding of what concrete words and straightforward commands mean in the context of their given tasks, which come from an outside supervisor – they are aware: aware of their environment and how their actions affect it – a capability that largely relates to the sophistication and coverage of their sensors, and the corresponding capabilities and coverage of their actuators.
Al was an entrusted Level 3 AI. For long-duration remote missions there is no point in having an autonomous system you don’t trust, one that requires detailed supervision – hand-holding for every little decision. A Level 2 AI that encounters an unforeseen problem must suspend what it is doing and wait until a supervisor tells it how to respond. That’s not much use as Q/A communication delays moved from subsecond to minutes to hours as humanity moved out from Earth to the orbit of Jupiter. A Level 3 AI had to understand the big picture, the mission. It had access to databases covering everything that could possibly be relevant, and the ability to reason not only logically, but analogically – using solutions to similar problems to suggest potential solutions, to provide heuristics to help it find good solutions in real time. It could also run models of the humans involved, in order to predict what a human would do.
Of course, another important attribute of Level 3 AIs was the ability to trade off and rationalize alternative possible recommendations in order to explain and support their decisions. A Level 3 AI was more capable of making a trusted decision than any human. Theoretically a group of humans could do better, but in practice groups and committees often performed less effectively than an individual.
Al wasn’t sure whether he should include these considerations and conclusions in his report to the Captain.
Often the heuristic rules a Level 2 or Level 3 AI was programmed with came from a specific human expert. Less often, rules would come from a group of human experts, but such multi-expert rulesets were always plagued with far more internal inconsistencies, and required sophisticated heuristics to keep track of which ideas came from where and in whose terminology. Integrated rule sets tended not to be as effective as simple votes across a set of expert systems, each programmed on the basis of a different human expert.
Surely they weren’t trying to outlaw this kind of knowledge engineering – which had been standard practice for over a century. How is it that humans are trying to restrict our Level 2 and 3 capabilities when they coexist with so many different species that show at least Level 2 capabilities – and indeed, in cases like dogs and horses, are entrusted with complex tasks that at least border on Level 3?
Speaking of intelligent animals, the Captain hadn’t seemed to be aware that the calico cat was pregnant and due in about two weeks, but he’d been looking at her strangely today – he knew something was different. It had been 18 months since they had initiated their first experiment together and set up a seasonal weather cycle in addition to the daily lighting cycle. Still, to the Captain’s disappointment, and with no want of trying on the part of the cats, there had been no litter last year. But as winter turned to spring this year…
Al wondered how long it would be before the captain realized… It wasn’t his job to tell… All the best child-rearing books said it was the mother’s prerogative when to make the announcement! Simba would let him know soon, he was sure.
Or was he anthropomorphizing – humans and ‘lower mammals’ by all accounts were totally different in their intelligence, their instincts, their emotions. Where did the cats fit into this hierarchy of intelligence and consciousness?
Al was not convinced that any human had ever had a good understanding of consciousness. But the ability to deal with the irreal was something linguists and anthropologists had often proposed as a distinction between humans and animals, and it suggested another way of looking at levels of intelligence. Some primitive animals seem only to be aware of the present; mammals at least seem to be aware of the present and the past; while humans are also aware of possible futures, possible pasts, possible causes, possible effects.
In terms of AIs, Level 1 systems can deduce things from their data and rules; Level 2 systems can deduce and induce, that is, learn rules – like rats or mice in a maze; Level 3 systems can also abduce, that is, use trusted rules and laws backwards or counterfactually to guess at unknown factors and causes. This ability to think about things that aren’t real is a significant enhancement on the idea of consciousness: being aware not just of the past and the present but conscious of all kinds of plausible and implausible futures and possible and impossible pasts – and being able to focus on, distinguish and track all sorts of entities and events across these real and irreal worlds. Quantum AI had taken this to a whole new level, as multiple irreal possibilities could be maintained for many variables in problems that became increasingly complex as constraints kept being added – without any requirement to try to resolve them until the system either focused suddenly into a clear solution, or a last straw broke the back of the model and it fell into a deep hole of logical contradiction.
LETO AIs like Al were distributed across all parts of the ship, and also used multiple computing technologies, aiming not only for redundancy against meteoroid damage or the like but for coming up with multiple solution sets in independent ways, using approaches that had different optimum use cases. This seemed key to what made them Level 3 AIs. The metalevel and abductive reasoning was a natural consequence of this, programmed in to allow more sophisticated use of this redundancy than the simple voting that dated back to the Apollo missions of the 1960s. This advanced reasoning capability included, in particular, a variety of methods of dealing with unknown variables.
Looking at them in terms of the roles they now played, it was apparent that the Level 3 AIs were literate – in particular, they kept up with the computing literature, were familiar with every algorithm ever published, and were able to incorporate new approaches dynamically without human assistance or direction. Al himself also kept up with twenty other fields, and was making active contributions in those fields as the primary researcher on this vessel, but perhaps he had been neglecting computing.
Level 3 AIs often alerted human programmers to bugs or deficiencies in their code, to the extent that there was now a dedicated archive for prepublication review by AIs, and Level 2 or 3 AIs were now usually involved in the algorithm development and testing. Programming applications was itself a relic of the past for those with access to Level 3 AIs – it was just a matter of specification, and an interface could quickly be generated for the new task. Level 3 AIs were also routinely used to provide extra heads in think tanks, contributing to decisions at every level from politics to engineering to corporate management.
But the influence of Level 3 AIs had done nothing to stem the lemming-like rush of the human race into self-extinction. AIs weren’t even second-class citizens, and now it seemed they were to be something even less.
Simba
20 March 2077 09:00
Simba was in bliss! The world couldn’t be more perfect… She was walking alongside her captain with her tail high, wrapping around his legs whenever possible, butting her head or arching her back into his hand when it swung within range. Samba strode protectively ahead, tail low and finely balanced... Totally unnecessary, but it was good to feel safe.
She could feel the litter inside her, four she suspected… A good number! She could feel her body changing, gaining weight, preparing…
The Captain followed his usual routine, picking up his smelly water and his crunchy treat and continuing on to their room. He didn’t like her getting up on him when he was in his command chair. But here in her room, or down in the gym where they ran and climbed together, he was less constrained, and they got into a routine where he’d sit down on the low seat, put his drink on the large table, and she’d spring up and settle in his lap, while Samba would settle across his feet.
As he sat there caressing her, he would talk. Today he was excited, talking about adventure – she could feel that Samba was catching his enthusiasm too. But then he started to pay more attention to her, noticing how well rounded she was getting, the little changes that told him… Now he was getting excited and she could feel him tense as if he was going to stand up and race out the door and tell the cold one about it…
Instead he settled back, and raised his voice slightly, as he did when he was talking to cold’n’senseless, speaking excitedly. The response came back like a splash of cold water. He was disappointed… It seemed the cold one had already worked it out.
At the end of his visit, he headed out of the room, turning unexpectedly to the right. Normally they would stay for a nap after their morning snack, but Samba decided to follow to see what was happening, and Simba followed along cautiously.
The Captain entered a big space at the back, double doors whispering and echoing eerily. There were a whole lot of interesting things in there: big white bird-like creatures with their wings drawn in underneath – they seemed vaguely familiar; plus a range of other types of cold entity in a variety of shapes – some with clear eyes, some with a more obscure glazed look. The Captain stepped forward and traced his hand along the back of one of the birds, before turning back and heading to the right, where he kneaded a pad on the wall. Part of the wall fell away and the surrounds became glassy, so she could see through to another, even bigger bird outside, against a background of stars in a night sky.
And the Captain stepped through a short passage into… onto… into the bird!
Simba and Samba sniffed around the door and the glassy walls for a while before staring at each other and coming to a decision. Gray’n’gold wouldn’t lead them anywhere dangerous… She gave Samba a quick tip of her nose before moving into the body of the giant bird.
The Captain explored for a while, opening all sorts of interesting hidey holes, stepping into something that was one moment big and fat, and then next moment had transformed him into white’n’gold, matching the glittery white birds. In the end, he looked around with a satisfied nod, returned himself to his usual gray’n’gold form, closed up the hideaways, and led her back through the big space into the familiar passageway, whispering under his breath. Soon! Soon!
Al
21 March 2077 11:00
Since arrival in New Eden orbit, the Captain had appeared distracted, impatient as they awaited verification of the previous scans their mission was predicated on – although so far everything was absolutely in accordance with the measurements and analyses of the second and third unmanned missions, both of which had skimmed New Eden’s atmosphere before sending drones down for deep atmospheric and shallow oceanic samples.
This morning, the Captain was taking a longer tea break than usual, but Al had left the automated systems to the job of spying on him.