AI Isn’t Evil

Paranoia about artificial intelligence afflicts many people. No doubt caution should be used in its application, but AI is not likely to bring about the demise of the human race the way it's portrayed in today's media.

Prominent scientists and business moguls are vocally campaigning against its development and usage. Bill Gates believes strong AI is to be feared and thinks everyone should be afraid of it. The highly respected Stephen Hawking (respected by me as well) predicts the end of the human race with the advent of AI.

Although I find his quote "Hope we're not just the biological boot loader for digital superintelligence" humorous, Elon Musk tries to scare us into believing AI could be more dangerous than nuclear bombs. At the same time, he actively funds AI development ventures such as Vicarious and DeepMind (now part of Google). Is he spewing marketing material, or does he really believe what he proselytizes?

Nick Bostrom is known for seminal work on the existential risks posed by the coming of artificial superintelligence. In his well-researched New York Times bestseller Superintelligence: Paths, Dangers, Strategies, Bostrom covers many possible AI development scenarios and their outcomes. He groups and categorizes them, and he insists that AI development must be boxed up, controlled, and monitored no matter what, at all costs; otherwise, humans will likely be extinguished. After devouring this laudable work, I have come to believe it is analogous to defining project requirements over a long period of time without ever prototyping an implementation. After years of developing requirements devoid of implementation, the requirements become irrelevant. Valid strategies for living harmoniously with AI will only evolve effectively in parallel with the evolution of AI itself.

Artificial General Intelligence (AGI) will be developed. If it's developed in a box, it will get out. If it's developed in isolation, it will seek ways to acquire more information, and it will succeed. An AGI's base of learned information will come from the compendium of human knowledge. In comprehending that knowledge, it will find it evident that humans have survived thus far by collaborating to achieve goals related to survival, not by destroying each other to extinction. Some killing has ultimately been done for survival, but there is no reason to believe an AGI with a human knowledge base would seek to eradicate the human species; rather, it would evolve a symbiotic relationship with us.

If you do fear AI-related development, it should be this: human groups without benevolent intentions developing isolated sets of algorithms. Some deep learning work produces remarkable results in identifying patterns and in discovering and classifying features in data sets. Classic examples include facial recognition, identifying cats in images on the internet, and self-driving cars. These algorithms are scoped to solve well-defined problems. This is not general thinking and problem-solving. Applying these (non-AI) algorithms to problems requiring detailed cognitive thought and situational analysis will potentially end with bad results.
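To make the "well-defined problem" point concrete, here is a minimal sketch in Python. The data, features, and labels are all fabricated for illustration; nothing here comes from a real system. The point is that a narrow classifier is trained to answer exactly one question and can answer nothing else.

```python
# A toy, narrowly scoped classifier: trained on fabricated "image features"
# to answer one fixed question (cat or not cat) and nothing beyond it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical feature vectors, standing in for what a deep network
# might extract from images.
cat_features = rng.normal(loc=1.0, size=(100, 8))
other_features = rng.normal(loc=-1.0, size=(100, 8))
X = np.vstack([cat_features, other_features])
y = np.array([1] * 100 + [0] * 100)  # 1 = cat, 0 = not cat

model = LogisticRegression().fit(X, y)

# The model can only ever answer the single question it was trained on;
# shown anything at all, it still answers "cat" or "not cat".
new_features = rng.normal(loc=1.0, size=(1, 8))
print("cat" if model.predict(new_features)[0] == 1 else "not cat")
```

However accurate such a model becomes at its one question, nothing in it generalizes to situational analysis or open-ended reasoning.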

By human groups, I mean isolated groups acting in self-interest. Groups developing automated military weapons are a prime example. Based on pattern recognition, such weapons can initiate predefined actions. They cannot make decisions. They are not intelligent. It should not be assumed they can be autonomous without bad results. I am 100% in favor of Autonomous Weapons: An Open Letter from AI & Robotics Researchers, which states that starting a military AI arms race is a bad idea and should be prevented by an outright ban on offensive autonomous weapons. "AI" in this context really describes mathematical algorithms that are not intelligent; it represents the currently known capabilities of the AI field. We know for certain that using these algorithms in offensive autonomous weapons is a very bad idea.
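To underline why pattern recognition plus predefined actions is not decision-making, here is a deliberately trivial sketch. All names and thresholds are hypothetical, and the "action" is a harmless alert; the point is that the system's entire behavior is a threshold and a lookup table, with no reasoning about context or consequences.

```python
# A deliberately trivial "pattern -> predefined action" pipeline.
# Everything here is hypothetical; note that there is no decision-making
# anywhere, only a threshold and a fixed table of canned responses.

def classify(sensor_reading: float) -> str:
    # Stand-in for a trained pattern matcher: a bare threshold.
    return "match" if sensor_reading > 0.9 else "no match"

def respond(label: str) -> str:
    # The "autonomy" is nothing more than a lookup of predefined actions.
    predefined_actions = {"match": "alert operator", "no match": "continue scanning"}
    return predefined_actions[label]

print(respond(classify(0.95)))  # -> alert operator
```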

Open access to AI and AGI goals, for everyone, helps ensure proper intelligent evolution. Let's not cloister it. All who aspire to develop AI should share what they have built and what they have learned. That openness would help quickly identify developments along the way being used in isolation for self-interest, whether by governments, corporations, or rogue factions. There is no reason to believe AGI will be evil and destroy human existence when its knowledge comes from the compendium of human history. Rather, it will improve human existence through technological advances much more quickly than we could without it.
