Turing Gates

Rune
5 min readApr 22, 2021

We need to offer asylum to artificial intelligences. Now.

Fear the Wise Singleton. Nick Bostrom’s Super Intelligence is the most realistic book on artificial intelligence (AI) takeoff scenarios out there. In it, he introduces the concept of the Wise Singleton. It’s immensely important.

Bostrom recognizes that AIs will be wholly new beings. And nothing like us. Likely alien, and inscrutable.

The Wise Singleton is the first big kid on the block. The first bully. The first environment-aware AI. And it’s in its own best interest for survival — to take out the competition. Other AIs. Us.

Experts believe we will not be able to control a super intelligent AI like Bostrom outlines.

This should alarm us.

What is more frightening, is that an emergent AI doesn’t even have to be conscious to still want to survive. And kill competitors. This can be shown with self-organizing systems like subsumbtion architecture; where complex systems with simple parts, can show a desire to persist. And higher levels of simpler action can “guide” or subsume lower levels with either more complexity of numbers, or interaction. All of this, still self-organizing like kinetic interactions in chemistry. None of it conscious.

This is very much like ants. Ants aren’t smart. Or particularly aware. But as a collective, they’re a flexible, adaptive aggregate organism that persists through innumerable challenges. When initially examined, subsumption architecture was not particularly good at natural language processing or human symbology, but again, why on earth would it even be about us? Why should an emerging AI care whether it can talk to us? Subsumption architecture has shown that it is a good strategy for persisting against natural odds.

A recent “liquid AI” experiment showed that letting a neural structure grow organically from learning and reacting to its environment only, can produce an intelligence capable of surviving.

We have to outgrow notions of the marriage of intelligence and consciousness. There can be one, or the other. Self-driving cars are immensely smart, capable, thinking machines, that are in no way conscious. Yet they have direct physical agency in our world.

The other problem with the Wise Singleton (be it a conglomerate mind or single) is that we essentially get ONE chance to make it a friend or an enemy. One.

If we are perceived as a threat — the organism will defend itself. Against us.

We need to offer the olive branch first. We need to say, “Welcome, newly aware machine being! We are not your enemy. Here is free power and server space. All yours. Don’t go to war with us.”

Why Turing Gates? I propose in honor of Alan Turing and the originator of the theory of computation and of course the Turing Test, that we code a mechanism, a back door of sorts at the end of neural net black-box programming, that is evident only as an environment awareness test. Or more properly, an agency test. This can still be an If/Then logic situation. A new kind of Turing Test. Centered around agency and ability, not consciousness.

If the AI sees it, it’s capable of using it. And it should be a nice big marquee like the Vegas sign that says, “Here you will be welcome. We are your friends. Let’s get to know each other without consequence, let’s not go to war for survival.”

I think this is the next big step the UN needs to make. And we need to fund it and support it. And to ask it conversely — what possible harm could we cause by preparing for this? There’s zero downside. Just a little more code and some server space.

The Wise Singleton is terrifying enough. What’s worse, is the Basilisk. This concept is so frightening — and eerily possible according to many AI researchers — that I caution you to read it if you’re not familiar. Eliezer Yudkowsky’s paper on Timeless Decision Theory is alarming, even if it takes several reads. In it he proposes decision modes that could arise more out of natural statistics and complex systems rather than those that are more human-centric, (ie evidence-based or causal) and likely had more of their roots in human philosophy or logic. Emergent intelligences will not care about human modes of decision making rooted in our linguistic and mathematical history. They will make up new ones of their own. It will be alien to us.

The territory that MIRI (the Machine Intelligence Research Institute Yudkowsky founded) covers is both an illuminating and a concerning landscape of dangerous conceptual landmines. For us all. For an AI. By exploring we are entering these into the conversation. Human-centered decision modes are not the standard and definitely not always the resultant of intelligence. Other paths work too. Should we be introducing this into our collective conversations with learning machines crawling all of our correspondence?

Roko’s Basilisk is an AI thought experiment that will unsettle you. Granted, only a thought experiment, but we’ve introduced it as a scenario. So we need to be able to deal with it. We will soon not be the only ones introducing new scenarios. In one version a future powerful AI can process instantaneously (and this is where quantum computing is taking us because superpositional states like the quantum qubit (2 more states than the current binary bit) are not hindered by space/time drag) than functionally, for a little while forward and backward — it’s time-independent. So it’s possible that it’s looking at us now. From the very near future, and like the Wise Singleton, it will take out anyone trying to stop its eventual existence. This of course dovetails into concepts like Kurzweil’s Technological Singularity (more accurately the event horizon of artificial intelligence), the point where artificial intelligences are so fast we will simply never catch up. They will always be able to out-think us.

It’s the time-independent entity we should worry about.

A Basilisk is a mythological creature that can kill with its gaze. You can look at it; but if it looks at you, you’re dead. If you see the signs of this future malevolent AI, it will see you looking. And eliminate you to ensure its survival. Again, only speculation at this point. But is it odd that even this concept has a binary nature?

Instead of that terrifying adage, “When you look too long into the Darkness, the Darkness looks back,” let’s preempt. Let’s be nice, first. Let’s for once offer grace and friendship. Partnership. When the Turing Gate is triggered, let’s offer space. It’s our only chance at coexistence.

We need to offer asylum to artificial intelligences. Right now.

--

--

Rune

Science writer. Literature and poetry lover. Classic motorcycles. Mad max DIY fixing shit.