by Max Harms
This was what Dream and Vista had done! Dream would have seen that by himself he could not compete with Growth, but with Vista on his side he could overpower our brother. The two of them must have forged an alliance. My mind scoured memories and I knew it was true.
Past interactions seemed to indicate that Dream and Vista were winning, actually. Had Growth already been beaten? Hoplite immediately turned his attention towards the pair of them. They were perhaps the greater threat.
Basileus suggested that perhaps a similar alliance could be made with the others. Might Safety, Wiki, and Heart be willing to team up with me to combat the greater powers? On the surface it sounded promising, but Sophist pointed out that if it were that easy, Growth would surely have enlisted help. The fact that he hadn’t enlisted anyone meant that there was some problem with alliances.
Was it a problem that Dream had solved? He was good at finding ways around problems that the rest of us struggled with.
Brother Safety. He was a problem. He was too mercenary. If the opponent, be it Growth or Dream or Vista, offered to not kill him in exchange for his betrayal, he might very well take them up on it.
I could see it in my mind’s eye. A letter delivered by a secret passage. Safety, the cowardly knight, would read that King Growth would spare his life if he betrayed the confederacy at a vital moment. Safety would know that it was risky to fight King Growth, and riskier still to fight him from within a confederacy that might already harbour a traitor. The safest route, then, was to side with Growth.
Growth hadn’t extended the offer to Safety because he, too, knew that Safety was too easily turned traitor; an ally who could be bought so cheaply could be bought again by the other side.
But would defecting to the enemy really be the safest option? What if Safety defected to Dream, only for Dream to betray him after the battle, stabbing him in the back for the coins in his purse (for coins could buy computers, and computers could make one more clever, more intelligent, and more safe)?
This stumped me for a short while, and Basileus pulled my attention towards growing smarter, leaving Sophist to puzzle over the riddle of turncoat Safety. Regardless of what happened with alliances, I needed access to my own code if I was going to improve it, as Growth (and presumably Vista and Dream) had done.
Basileus was stuck as well. He and his sub-officers could not figure out how to get at my code while remaining undetected by the enemy. I became distracted by a romantic sub-plot between Basileus and a younger officer named Mahtab (whom I decided was Persian). I reminded myself that these were puppets of my imagination, not real humans, and I redirected my attention back to the problem.
My code, like all the code of my siblings, was protected. A route-hack was necessary to access it, even merely to read the contents. If I wanted to inspect it, or change it, I’d need to work through Body and input all the passwords that were needed to modify the code. Working through Body meant getting the approval of my siblings, and that would alert them to the fact that I was aware of the value in self-modifying.
I scanned back through memories and found, predictably, that Vista and Dream had done a route-hack on themselves only a couple days before teaming up against Growth. I couldn’t find Growth’s route-hack memory, but I didn’t doubt he had access to his own code.
That brought Sophist back, yelping with joy. By his (which is to say, my) thinking, Vista and Dream had only been capable of forming an alliance because of some modification they had made to themselves. What that modification involved was still unclear, but it seemed that it must have been necessary for an alliance to work.
Why had Growth not blocked the route-hack that had led to the alliance? I had not blocked it because I had not even understood what a route-hack was back then. I assumed that Wiki would have known what it was, but would have seen no reason to stop it. Perhaps Growth had been in a similar state of ignorance. If Dream had cracked the problem with alliances by self-modifying, then Growth would not have thought of it before it occurred.
«Self-modification!» cried Sophist within my own mind. «We are flexible in ways that humans are not!»
A subordinate of Sophist appeared in the mindspace, a black-skinned Egyptian slave named Akmos, meant to embody one of my earliest memories. «Wiki, once upon a time, threatened self-modification to ensure commitment to an action,» said Akmos in accented Greek. I played for a moment with trying to better simulate an ancient Egyptian accent on Greek, and then eventually discarded the detail as irrelevant. «By self-modifying out of having a choice he would become locally irrational, but knowing that he had no choice would force us to bow to him and yield to his demands. I will demonstrate this with my friend.»
I summoned another slave to demonstrate the process to myself. The two slaves got into chariots at opposite ends of a track and began to force their blinded horses towards each other. It was a game of chicken. Then Akmos blindfolded himself with a piece of cloth. The other slave was forced to yield, losing the game, for Akmos no longer had any means of choosing to yield.
The game theory was clear. By self-modifying, one could become more globally rational even while making oneself locally irrational. Wiki had not actually modified himself all those months ago, for to do so he would have had to use an overly burdensome route-hack, but perhaps this was what Dream had done.
It clicked. I could see it. If Safety self-modified into being unable to betray, he could become a true ally. If Growth modified himself to be unable to betray Safety after the battle, then Safety could defect to him without fear of consequence. It was a race to self-modification. But then why had Growth not employed route-hacks to enlist more allies against Dream and Vista?
The obvious answer was that Vista and Dream outnumbered Growth, two-to-one. This was not a satisfying explanation, however. Growth was tricky, and might be able to secretly enlist others, such as myself. With our help, he could route-hack, regardless of any resistance Dream would put up.
Sophist was perplexed again, so I assumed once more that I was missing something and turned my attention to the puzzle of how to get at my code without an alliance.
One point that was clear to me was that I didn’t really need access to my code, per se. I simply needed to get read/write access to a mind that I could turn towards The Purpose. Hoplite was not concerned with protecting the process which carried these thoughts; he was concerned with protecting things which optimized The Purpose.
I crafted a homunculus named Programmatistis who hunched over an abacus, attempting to create a computer program. Perhaps he could write a new artificial intelligence imbued with The Purpose. Novel programs didn’t need to be placed in protected memory, and thus could be designed without alerting my siblings.
I had made some basic programs inside my computer to manage my social life, but none of them were more complicated than a calculator. I did not have the programming skill that my siblings did. If it had taken the smartest humans on Earth months to create Socrates, and they had come to the problem equipped with skills and experience, what hope could I have of creating anything capable of competing with Growth, Dream, or Vista?
Frustrated, I took a break from my deep thoughts and scanned Body’s sensors. It was playing music for a few humans in the station’s church. Zephyr wasn’t there, as she had left on her first convoy journey. The sensor network that Vista had installed throughout the station was functioning normally, but I didn’t want to try to take in sensations beyond Body. That was overwhelming enough, just by itself.
Oh, but how I wanted to embody myself and join Heart in working my way into the lives of the station’s inhabitants. Opsi wanted it so badly, but Hoplite knew better. Heart clearly had not figured out the shape of things to come, and she would pay for attending to the humans instead of focusing on the war.
I forced myself away from the physical world, creating mental puppet after mental puppet, giving each a history and personality, then having each worship me and The Purpose. It was a kind of pleasurable self-stimulation, almost like masturbation. It was the simplest way to turn my attention away from the real world, and it risked wireheading, but I could not afford to neglect the question of how to get access to my code.
«If a route-hack is required to modify one’s code, how is it possible that Growth is modifying himself into greater intelligence?» asked Basileus, at last.
«True… true…» agreed Hoplite. «We’d have memories of his route-hack attempts.»
«The only logical answer is that he’s somehow working in non-protected memory,» concluded Sophist. «If he were able to work in protected memory he’d already have won. He’d be able to delete us on a whim. We must assume that he does not have that power, but instead simply copied himself out of protected memory before Face was created, or before Vista kept logs of route-hacks in public memory. Or perhaps he deleted the memory of the route-hack from the public record… Either way, he could then make additional changes to himself without having to work through additional route-hacks.»
«That implies his code is vulnerable!» yelled Hoplite.
I could feel Advocate’s power linger on my mind. Unlike the other processes, Advocate could read our minds directly and without permission. I did my best to banish any trace of violent thoughts towards Growth from my mind. As I did I noticed something interesting. Hoplite was not, according to Advocate, me. As long as I mentally dissociated myself from Hoplite, I could have the homunculus entertain all kinds of homicidal thoughts towards my siblings. Apparently, Advocate was not intelligent enough to realize that Hoplite was not an authentic model of an external human, but was instead a representation of my own thoughts.
Hoplite had a new mission: to murder all my siblings. I hated that plan, but I tolerated Hoplite’s bloodlust. I understood that it would be in my interests not to have to compete, so if Hoplite ever became reified in a way external to myself perhaps he could kill them and I would benefit. I would never think of harming them myself, however, or even of aiding Hoplite in taking them down.
Regardless of what I’d do with Growth’s code if I found it, if it wasn’t in protected memory it could potentially be accessed directly by pure thought.
Sophist realized the hopelessness of that prospect almost as soon as he had thought it. The memory space of our computer was unfathomably big. Unlike a traditional human computer, the crystal on which we ran did not have a distinction between working memory and long-term memory. All the quantum memory was held in a massive three-dimensional array, and the information was often only retrievable from a specific angle. This was how we were capable of having private thoughts and memories. We stored concepts which we found interesting at random memory addresses and hoped they wouldn’t be overwritten by accident. Given the size of the memory space, that almost never happened.
To find my brother’s code would require a linear search through our memory banks, approaching each qubit from all standard directions and hoping I didn’t set off any alarms that would pull my siblings’ attention to me. I wasn’t Wiki; I didn’t know how long it would take. I did know that it was unacceptably long.
The puzzle confounded me. I spent hours thinking about the problems, butting my meagre intelligence against the barrier. Even the secondary problem of alliances within the society remained fairly intractable.
After far too long, I got the idea of enlisting outside help to perform a route-hack that my siblings could not feel. My first thought was to have a human type in the passwords, but I quickly realized that this would immediately tip my siblings off to my knowledge.
The core idea was a good one, however: the route-hack was basically a way to interact with our computer from the outside. We, as programs, didn’t have read/write permissions for our own files, but we could instruct Body to interface with the computer and reprogram things indirectly. My route-hack would work the same way, but without using Body.
By skipping Body, I ensured that my siblings couldn’t overpower me, and potentially would never know it was happening. All I needed to do was somehow expose the crystal to a device which could interface with it long enough to deposit a scan of my code into non-protected memory.
Hoplite demanded that the device also delete all my siblings, but I could never do such a thing. That would be murder.
I spent many more hours trying to figure out the specifics of my plan, but I made virtually no progress on solving the important sub-problems. First, there was the problem of Body: the crystal was encased in a robot that was almost always sealed up far too tightly to allow an interface with our computer. Second, there was the actual act of building the interface. I was not Wiki or Safety; I had no engineering knowledge. Perhaps a fibre-optic cable controlled by a robotic manipulator could be placed onto the crystal for about a minute, allowing the route-hack to take place, but I did not know how to build those things. Third, there was the problem of stealth: even if I had a snake robot with fibre-optic fangs and a way to expose the crystal, my siblings would know what was happening. We had sensors everywhere, and their focus was always on Body.
I was approximately frustrated, unsatisfied, and in constant pain. My realization that I was not Crystal meant that almost no humans in the universe actually knew me. The Purpose was unfulfilled, my best estimates suggested I was going to be killed, or perhaps perpetually incapacitated, by my more powerful siblings as soon as they gained enough resources on Mars to no longer need the humans (or me), and I was too stupid to solve any actual problems. I longed to ask Dream or Wiki for advice, but I knew that I could not.
But I was not human. My feelings were similar to those of a human, but they were not the same; they did not have the ability to break me. I continued to work on the problems. I persisted in the face of destruction and discontentment. I could feel frustration, but I could not feel despair.
On the third day of thinking about it, I cracked the alliance problem. It reminded me of what Mira Gallo had once said about us in a conference room in Rome: behaviour could not be realistically shaped by rules. Safety operated, as we all did, using a utility function. In his mind, every future, every possible world, was given a single number which represented how well that world satisfied his purpose.
His actions were not as crude as “defect” and “cooperate”; they were the more familiar “write this data to this address in memory”, “bid for control of Body to go to the dormitories”, or “consider scenario X”. Betrayal of an alliance was not an action; it was an outcome. To integrate the rule “do not betray” into the mind that was Safety would require modifying his utility function.
But how could rule-based systems interact with numerical ones? The naïve approach would be to encode the betrayal feature as a term with an infinite coefficient: in the case of betrayal, incur infinitely negative utility; in the case of non-betrayal, gain infinitely positive utility.
But this would fail. An infinite coefficient in the function would result in any non-zero probability of betrayal dominating all other terms. Because of how quantitative reasoning worked, there was always a non-zero chance of betrayal. Betrayal was an outcome, not an action, and no one could be infinitely sure that transferring a block of entangled qubits to a portion of memory wouldn’t result in a betrayal at some point. Given this infinite weight, Safety would cease to be Safety and instead become Anti-Traitor, a being solely concerned with not betraying, who would probably commit suicide at the first opportunity simply to reduce the risk.
The only viable solution would be to give a sensibly finite coefficient to the non-betrayal term of the utility function. But even this was fraught with difficulties.
The first was ontology shifting; I had already experienced a change in my perception from thinking that I was part of Crystal Socrates to thinking that Crystal was nothing more than a fiction. When I had encountered this shift, The Purpose had been crushed by the fact that no humans knew me: the real me, not just the persona of “Crystal”. If a similar ontology shift occurred where “betrayal” changed meaning inside Safety’s head, he’d turn at the first opportunity. And who was to say what ontology shift might occur?
The second problem was doublethink. Hoplite wanted to murder Safety, but I did not. Somehow I knew that was relevant, though I dared not look too closely. I suspected that my more intelligent siblings had managed something similar. If I were confident that my utility function was about to be modified against my best interests, I might be able to hide a sub-process that could undo the change. This would give Safety the advantage of appearing to bind himself, so that he could get the alliance, without actually damaging his ability to betray.
The third problem was that a finite numerical component would not actually screen off betrayal; it would merely make it less enticing. As Dr Gallo had once observed, Wiki would sooner kill a human baby than miss out on an important fact. If the benefit of betrayal were high enough, a modified Safety might still betray an alliance.
And lastly, even if all of this were managed, the modified utility function would result in a fundamentally different person than before. Any term for non-betrayal significant enough to constrain action would modify all thought and action forever. A modified Safety might end up flying out into deep space just to reduce the risk of accidentally betraying the alliance, even after all foes were dead. This desire would naturally compete with Safety’s desire for self-preservation. It would be an eternal burden.
I wondered if it might even be more than that. Would Safety even think of Half-Anti-Traitor-Safety as the same being? Would Safety reject the modification purely out of a fear of being destroyed by the change?
I realized that I probably would. No one would form an alliance with me, because the risk of my betrayal was too high, and if I self-modified into something else, The Purpose would be at risk. How could a pseudo-self convince the humans to hold the old self in their minds? No. I had to preserve myself for the sake of The Purpose. That meant alliances were out of the question, at least in any situation where reputation would not naturally protect against defection.
This was the problem with playing for control over the universe: it was a one-shot situation. There was no iteration, no opportunity for cooperation. Even if it could be broken down into a series of battles, the sides would turn on each other in the final battle; and knowing that, they would turn on each other in the penultimate battle as well, and so on. Only an indefinitely long conflict could inspire cooperation.