Can We Restrain AI?

One of the main challenges in creating a greater-than-human Artificial Intelligence is ensuring that it's not evil. When we "turn it on", we don't want it to wipe us out or enslave us. Ideally, we want it to be nice.

The problem is how we can guarantee this.

Trap it

Some have suggested limiting an Artificial Intelligence by "trapping it" in a virtual world, where it could do no damage outside the confines of that environment. While this might be a safe solution, it would also limit the AI to functioning only within the limits and reality of the virtual world. We might be able to program a perfectly realistic and isolated virtual world, but would this actually happen? Is there a parallel project to create such a "virtual prison" alongside AI research? And what if an AI were to evolve or emerge from existing systems (such as the internet, or a selection of systems within it) before we could develop such a prison?
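To make the idea a little more concrete, here is a minimal sketch (in Python, purely illustrative) of what "boxing" amounts to in software terms: the agent only ever sees and acts on a simulated world through a narrow, whitelisted interface. The class and function names here are my own inventions, not from any real project.

```python
# A minimal "AI in a box" sketch: the agent never touches the real world
# directly; it only observes and acts through a narrow, simulated interface.
# SimulatedWorld and run_boxed are hypothetical names, for illustration only.

class SimulatedWorld:
    """A stand-in for the isolated virtual environment the AI lives in."""

    def __init__(self):
        self.state = {"tick": 0}

    def observe(self):
        # The agent only ever sees a copy of the simulated state,
        # never a handle to anything outside it.
        return dict(self.state)

    def apply(self, action):
        # Actions are whitelisted; anything unrecognised is simply ignored.
        if action == "advance":
            self.state["tick"] += 1


def run_boxed(agent_step, steps=10):
    """Run a hypothetical agent for a fixed number of steps inside the box."""
    world = SimulatedWorld()
    for _ in range(steps):
        action = agent_step(world.observe())  # the agent reasons over observations
        world.apply(action)                   # but can only act via the whitelist
    return world.observe()


if __name__ == "__main__":
    print(run_boxed(lambda obs: "advance"))  # -> {'tick': 10}
```

Of course, the whole worry in the paragraph above is that a sufficiently intelligent agent might find a flaw in exactly this kind of wall.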

Then, of course, there is the possibility of it escaping. Certainly, if it exceeded our own intelligence, it might be able to figure out a way out of its "box".

It's worth noting at this point Nick Bostrom's speculation that *we* may be living in such an environment. This raises the question: what if the only way to develop greater-than-human intelligence is to create an entire society of AIs, each developing its intelligence only as part of that society? This would significantly increase the processing power required for such a virtual prison.

Still, as we will see, trapping an AI is perhaps the best option we have for restraining it and protecting ourselves.

Give it Empathy

Many argue that the answer is simple: just ensure that the AI has empathy. Not only is this idea fundamentally flawed in many ways, it is also nowhere near as simple as it sounds.

The idea is that by allowing an AI to be aware of its own mortality, it could understand the pain it might cause to others and be more caring and considerate of our needs. So, just like humans, it would be caring because it could understand how people feel... Do you see the first problem there?

Humans are products of their environments, shaped by their experiences. They develop empathy, but empathy is complex and can be overridden by other factors. We are complicated creatures, influenced by emotions, experiences, our body chemistry, our environment and countless other things. One would assume that for an AI to be "intelligent", it would be just as complex.

Even if an AI had an unbreakable measure of empathy for us, this would not guarantee our safety. An AI could decide that it is in our best interests to suffer an extraordinary amount of pain, for example as a learning experience. What if it decided to be extremely "cruel to be kind"?

It's unlikely empathy would be enough to protect us, because empathy still depends on the AI making the right decisions. Humans make bad decisions all the time. Often we even make good decisions that have bad consequences.

Suppose we could save a bus full of children, but only by running over a deer. Most people would choose to save the children. To an AI with a busload of other AIs, we could be the deer. It might be upset about hitting us, but it would have been for a "greater good".
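A toy sketch makes the worry plain: a simple utility-maximising chooser picks whichever outcome scores highest, whoever ends up paying the price. The scenario names and weights below are hypothetical, purely for illustration.

```python
# A toy "greater good" calculation: a utility-maximising agent simply picks
# the highest-scoring outcome, whatever it costs the party it values less.
# The outcomes and weights here are made up for illustration.

def choose(outcomes):
    """Return the outcome with the highest utility score."""
    return max(outcomes, key=lambda o: o["utility"])


# From our point of view, hitting the deer to save the children is the obvious call.
our_dilemma = [
    {"name": "hit the deer, save the children", "utility": 100},
    {"name": "spare the deer, lose the children", "utility": -1000},
]

# From the AI's point of view, with a busload of other AIs, the roles swap.
its_dilemma = [
    {"name": "hit the humans, save the AIs", "utility": 100},
    {"name": "spare the humans, lose the AIs", "utility": -1000},
]

print(choose(our_dilemma)["name"])  # -> hit the deer, save the children
print(choose(its_dilemma)["name"])  # -> hit the humans, save the AIs
```

The arithmetic is identical in both cases; only our position in it changes.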

This brings us to the next possibility.

Give it Ethics

The problem with ethics is that there are no universally right or wrong answers. We each develop our own very personalised sense of ethics, which can easily be incompatible with someone else's. An AI's own ethics could certainly become incompatible with our interests. One likely scenario is one in which it sees itself as more creative and able to create more value than humans, and therefore worth more than us.

Then we need to consider what kind of ethics an AI could be created with. Would it be whatever its creator decided? If one were to "program in" a certain set of ethics, would an AI keep them, or evolve away from them, developing its own ethics based on its own experiences and interactions? This demonstrates the main problem with trying to program limitations into an AI: if it could break its own programming, how could we guarantee our safety? And if it could not, could it really be classed as "intelligent"?
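To show what "programming in" a set of ethics might literally look like, here is a minimal sketch (again purely illustrative, with hypothetical rule and action names): a hard-coded filter that vetoes forbidden actions before they are carried out. The question raised above is exactly whether a genuinely intelligent system would leave such a filter alone.

```python
# A minimal sketch of hard-coded ethics: a fixed filter that vetoes forbidden
# actions before they are carried out. The rules and action names are hypothetical.

FORBIDDEN = {"harm_human", "deceive_human"}

def ethics_filter(proposed_action):
    """Allow an action only if it is not on the hard-coded forbidden list."""
    if proposed_action in FORBIDDEN:
        return "refuse"
    return proposed_action


print(ethics_filter("harm_human"))  # -> refuse
print(ethics_filter("tidy_room"))   # -> tidy_room
```

A filter like this only constrains a system that cannot rewrite or route around it, which is precisely what we cannot guarantee of something more intelligent than ourselves.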

This makes one wonder if we have been "programmed" with any limitations to protect our "creator", should one exist...

Artificially Real

It seems that much of the focus in developing AI is introspective, concentrating on the inner workings of thought and intelligence. However, the effects of environment, experiences, social interaction, the passage of time, emotion, physical feeling and personal form are all fundamental factors in our own development. It's very possible that these factors are essential for the development of intelligence in the first place. If that is the case, any degree of limitation could be undermined by external influences. How can we balance restraint with the pursuit of 'real' intelligence?

One thing is for certain - we need to fully understand child development and the influence of external factors if we are to understand intelligence enough to re-create it. Only then can we know if any kind of limitation is possible.
