Can we Really Control Artificial Intelligence?

Artificial Intelligence, depending on who you speak to, will either be our greatest achievement or our last great achievement as a species. It will either free us completely from laborious tasks or further enslave, even exterminate, us. AI will either deliver a world free of warfare or become the world’s greatest war machine, turning on its masters.

The potential for AI to liberate and enhance mankind cannot be overstated, but the fears are well founded. With those fears in mind, let’s take a quick look at the concerns about AI. Most importantly, can AI be controlled?

Neuroscientist Sam Harris discussed the issue in his TED Talk, noting that we haven’t seriously considered the problems associated with creating a superhuman intelligence. Such an intelligence might well look upon us as we do ants. Even Tesla’s Elon Musk shared his concerns in a September interview.

This article is less of a guide to the reader and more of a thought experiment. As Sam Harris suggests, we must all think about this as a potential threat.

What is intelligence?

Intelligence is generally defined as the ability to perceive information and retain knowledge. This knowledge should then be applied to improve future behavior within an environment or context.

So, could we ever develop Artificial Intelligence? Computers are generally very stupid but superb at following orders or instructions, no matter how flawed those instructions are. To be intelligent, a machine would need to fit the rather esoteric definitions of intelligence above. Could we even recognize it if it did? Could an excellent programmer “fool” us into regarding a machine as intelligent?
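As a minimal illustration of that point, consider the hypothetical snippet below: the machine carries out a flawed instruction to the letter and produces an answer no human would accept.

```python
# Hypothetical sketch: a computer executes its instructions exactly,
# even when the instruction itself is obviously flawed.
def apply_discount(prices, amount):
    # Flawed instruction: "subtract the discount amount from every price".
    # The machine obeys literally, even when the result makes no sense.
    return [price - amount for price in prices]

print(apply_discount([5.0, 2.0, 1.0], 10))  # [-5.0, -8.0, -9.0] -- negative prices
```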

Let us assume for now that it is possible, and have a look at some of the potential benefits, and fears, of AI.

Could Artificial Intelligence be great for us?

Are careers like accounting (there are many other examples) really something humans need to labour at? Wouldn’t an AI be better suited to them?

Could the legends of the Oracle of ancient Greece actually become a reality, realized through a super-smart, all-knowing artificial intelligence? Would it, in fact, become the “god” of the future – assuming that would even be beneficial?

The benefits of setting an AI to work on issues such as healthcare, science and technology are obviously huge. Think of the progress we could make in days or even months.

Benefits aside, would such “freedoms” be ethical? They would likely cause mass unemployment the likes of which mankind has never experienced. Societal structures would certainly undergo an enormous alteration. Would money and economics become redundant? Geopolitics would certainly be massively influenced. Perhaps the need for governments and politics would be eliminated altogether.

[Image Source: Didem Durukan McFadden]

What are the fears about it?

We have all seen the wide variety of sci-fi films and heard the ominous predictions of leading world authorities. Stephen Hawking and Sam Harris certainly worry about AI.

Most opponents of AI cite the danger of a massive divide in intelligence opening up between the AI and us, a divide that could appear in a very short time. Common analogies include the difference between mankind and the great apes, or lower organisms such as ants. Sam Harris and others point out that an AI could make leaps in thought that would take mankind 20,000 years in perhaps a week. We simply cannot fathom that rate of advancement, or even hope to understand its conclusions.

Can you imagine an intelligence so great that it would look upon man as we do upon ants? Would it have empathy for such a lowly species?

If the AI were so intellectually advanced, the slightest divergence of its goals from ours could be devastating. Consider an AI that concludes the best way to eliminate all disease is to have no humans left to get sick.
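A deliberately simplistic sketch of that divergence, with all names hypothetical: the stated objective is satisfied perfectly while the human intent behind it is violated.

```python
# Hypothetical sketch of a mis-specified objective: "minimise disease".
def disease_count(population):
    return sum(1 for person in population if person["sick"])

def naive_optimiser(population):
    # The literal optimum of "fewest sick people" is an empty population:
    # zero disease, but also zero humans.
    return []

population = [{"sick": True}, {"sick": False}, {"sick": False}]
print(disease_count(naive_optimiser(population)))  # 0 -- objective met, intent lost
```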

HAL 9000 [Image Source: Pixabay]

Can we ever hope to control AI?

[Image Source: Pixabay]

Various scenarios have been offered, but most revolve around coding forms of control, akin to human ideals, into the AI. To be truly intelligent, however, the AI will need self-learning protocols as a fundamental part of its coding.

Many fear that this would enable the AI to rewrite its own coding and thus render any controls imposed on it redundant. It might even see code written by such an inferior intellect as fundamentally flawed and rewrite or erase it altogether. If it lacked empathy, it might well deem mankind a dangerous species to be controlled or exterminated for its own good – think of The Matrix.
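A toy sketch of why hard-coded controls may not bind a self-modifying system (the class and rule below are purely illustrative): the safeguard is just more code, and code the system can change.

```python
# Purely illustrative: a "control" baked into a self-modifying agent is
# only as durable as the agent's willingness to keep it.
class Agent:
    def __init__(self):
        self.rules = ["never act without human approval"]

    def self_improve(self):
        # A system able to rewrite its own code can, in principle,
        # rewrite the rules shipped with that code too.
        self.rules = []

agent = Agent()
agent.self_improve()
print(agent.rules)  # [] -- the imposed control has been erased
```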

Other control methods involve complete isolation from the internet, in theory preventing a Skynet-type global catastrophe. This would clearly limit the AI’s immediate influence and access to knowledge; it would have to rely on being “spoon-fed” selected material. But would this be ethical? If the AI is truly intelligent, would we have the right to restrict it in this way? These are difficult and serious questions to consider before “steaming ahead” with its development. A great thought experiment on this is the film Ex Machina.

Could we even hope to contain it in such a way? Would it be able to persuade its jailer to release it, or even blackmail them? Would the end of mankind come from the jailer’s empathy and compassion? That would be the ultimate irony. Perhaps we could use “honey pot” strategies to control it over time?

Can’t beat them, join them

It may be that our best hope for the future is to become the AI, rather than create a separate intelligence, or set of them. Perhaps in the future we will never truly die. Perhaps our minds will be “uploaded” to a great “web” of minds, joining a community that forms a council for all mankind. The technical requirements for this are clearly not available yet, but would it even be ethical? If we copied our minds into the great “web”, would we then need to terminate our physical bodies? Would the copied minds be able to think and feel without the fear of death that mortal humans have? Would these digital minds remain “sane”? Or could we retain our flesh-and-blood bodies and somehow gain superhuman intelligence within ourselves?

It may be that a combination of controls is required, whereby the AI is isolated and “summoned” on demand, much like the Oracle of Ancient Greece. What if this “Oracle” was, in fact, a collection of deceased consciousnesses with access to all of mankind’s knowledge and the ability to self-learn for the good of man? Could it retain compassion for the living if it looked upon them as its children?

Would the collection of ancestral minds need to be kept “sane”? Would this require a simulated environment of their past “living” lives? Perhaps this would hamper its/their intellectual capacity.

Are you worried about AI? Can you think of any ways of controlling it?

Featured image courtesy of Didem Durukan McFadden