AI Isn’t Evil

Paranoia about artificial intelligence afflicts many people. Caution in its application is no doubt warranted, but AI bringing about the demise of the human race, as portrayed in today's media, is not likely to happen.

Prominent scientists and business moguls are vocally campaigning against its development and usage. Bill Gates believes strong AI is to be feared and thinks everyone should be concerned. The highly respected Stephen Hawking (highly respected by me as well) warns that the advent of AI could spell the end of the human race.

Although I find his quote "Hope we're not just the biological boot loader for digital superintelligence" humorous, Elon Musk tries to scare us into believing AI could be more dangerous than nuclear bombs. At the same time, he actively funds AI development ventures at Vicarious and DeepMind (now part of Google). Is he spewing marketing material, or does he really believe what he proselytizes?

Nick Bostrom is known for seminal work on the existential risks posed by the coming of artificial superintelligence. In his well-researched New York Times bestseller Superintelligence: Paths, Dangers, Strategies, Bostrom covers many possible AI development scenarios and outcomes. He groups and categorizes them and insists that AI development must be boxed up, controlled, and monitored no matter what, at all costs; otherwise, humans will likely be extinguished. After devouring this laudable work, I have come to believe it is analogous to defining project requirements over a long period of time without prototyping an implementation. After years of developing requirements devoid of implementation, they become irrelevant. Valid strategies for living harmoniously with AI will only evolve effectively in parallel with the evolution of AI itself.

Artificial General Intelligence (AGI) will be developed. If it is developed in a box, it will get out. If it is developed in isolation, it will seek ways to acquire more information, and it will succeed. An AGI's base of learned information will come from the compendium of human knowledge. In comprehending that knowledge, it will be evident that humans have survived thus far by collaborating to achieve goals related to survival, not by destroying each other to extinction. Some killing has ultimately been done for survival, but there is no reason to believe an AGI with a human knowledge base would seek to eradicate the human species; rather, it would evolve a symbiotic relationship with us.

If you do fear development related to AI, it should be this: human groups without benevolent intentions developing isolated sets of algorithms. Some Deep Learning work produces remarkable results in identifying patterns and in discovering and classifying features in data sets. Classic examples include facial recognition, identifying cats in images on the internet, and self-driving cars. These algorithms are scoped to solve well-defined problems. This is not general thinking and problem-solving. Applying these (non-AI) algorithms to problems requiring detailed cognitive thought and situational analysis will potentially end with bad results.
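To make that scope limitation concrete, here is a minimal sketch of the kind of narrow pattern classifier described above. It is illustrative only: the feature vectors and labels are invented, and a real deep-learning system is far more capable at recognition, yet it is still doing the same basic thing, mapping inputs onto a fixed set of labels it was trained on.

```python
# Toy pattern classifier (illustrative only; not a real deep-learning model).
# It can only assign one of the labels it was trained on -- it cannot reason
# about anything outside that narrow scope.
import numpy as np

# Pretend these are image feature vectors produced by some upstream pipeline.
train_features = np.array([
    [0.9, 0.1, 0.8],   # cat
    [0.8, 0.2, 0.9],   # cat
    [0.1, 0.9, 0.2],   # not a cat
    [0.2, 0.8, 0.1],   # not a cat
])
train_labels = np.array(["cat", "cat", "not_cat", "not_cat"])

def train_centroids(features, labels):
    """Compute one centroid (average feature vector) per label."""
    return {label: features[labels == label].mean(axis=0)
            for label in np.unique(labels)}

def classify(feature_vector, centroids):
    """Assign the label whose centroid is nearest -- pure pattern matching."""
    return min(centroids,
               key=lambda label: np.linalg.norm(feature_vector - centroids[label]))

centroids = train_centroids(train_features, train_labels)
print(classify(np.array([0.85, 0.15, 0.7]), centroids))  # -> "cat"
```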

By human groups, I mean isolated groups with self-interest. Groups developing automated military weapons are a prime example. Based on pattern recognition, these weapons can initiate predefined actions. They cannot make decisions. They are not intelligent. It should not be assumed they can be autonomous without bad results. I am 100% in favor of Autonomous Weapons: An Open Letter from AI & Robotics Researchers, which states that starting a military AI arms race is a bad idea and should be prevented by an outright ban on all offensive autonomous weapons. AI in this context really describes mathematical algorithms that are not intelligent; it represents currently known capabilities in the AI field. We know for certain that using these algorithms for offensive autonomous weapons is a very bad idea.

Open access to AI and AGI goals, for everyone, helps ensure proper, intelligent evolution. Let's not cloister it. All who aspire to develop AI should share what they have built and what they have learned. Shared knowledge helps quickly identify isolated uses of these developments for self-interest, whether by governments, corporations, or rogue factions. There is no reason to believe AGI will be evil and destroy human existence when its knowledge comes from the compendium of human history. Rather, it will improve human existence through technological advances much more quickly than we could without it.

The Importance of Distributed Systems Development

In an age of ever-increasing information collection and the need to evaluate it, the untapped compute resources available in everyone's homes and hands should be driving the development of more sophisticated distributed computing systems. Today, large data processing facilities provide significant compute capabilities, but utilizing the worldwide plethora of distributed resources in a coherent way is much more powerful.

Distributed programming and processing tools and techniques are a reality today, but they are in their infancy. The potential for rapid growth of distributed systems is already supported by:

  • Storage, bandwidth, and CPUs stay on course to becoming nearly free (Free: The Future of a Radical Price).
  • The number of people and devices connected to the internet continually grows.
  • Data storage requirements increase as both the volume of accumulated data and the number of data sources grow.

It is becoming more common to see terabyte storage devices in homes. Desktop and laptop appliances have become something of a commodity affordable to the average consumer. You can stake a claim to a table at your local coffee shop and access the internet for free. Becoming a social citizen on the internet with portable compute resources was once cost-prohibitive; the price is now plummeting to a level affordable to a significant portion of the population.

Distribution of affordable and even free compute devices to the general public continues to grow, and most of these resources sit idle much of the time. Game consoles, cell phones, tablets, laptops, desktops, and more can now all participate in the storage and processing of data.

Like it or not, capturing and sharing data is becoming increasingly easy. You can watch your favorite gorilla in the jungle or use collective intelligence to extract social and individual patterns through service APIs provided by large corporations like Google and Amazon. Today's transient, real-time data will eventually be stored. Much of it is, and will be, captured and stored in perpetuity within corporations and in data centers. Some of what should be openly available may be accessible only through the gates of these data centers.

The approach to managing and controlling processing remains focused on huge data centers. In this sense, social and engineering thought is still akin to the 19th-century practice of building monolithic systems with centralized control. As data generation increases and the cost of storage decreases, huge data centers are being built to house and process the data by Google, Apple, Codera, and NTT America, to name just a few. What will they do with all this data, and how much will be shared?

IBM has announced plans to build a petaflop machine for the SKA telescope program. It is a laudable and beneficial effort, and the research and lessons learned from it will undoubtedly be valuable. But efforts should also be made to build distributed systems of equal or greater benefit. Projects such as BOINC provide a rudimentary but effective start, and file-sharing peers using distributed hash tables (DHTs) have already demonstrated their power and influence. Both illustrate the cost-effective usage of existing distributed compute resources where most data is accessible to everyone.
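As a rough illustration of the BOINC-style approach, here is a minimal single-process sketch in Python, with invented device names and a toy computation. A coordinator hands each work unit to more than one idle device and accepts a result only when the independent copies agree, the same redundancy idea volunteer-computing systems use to validate results from untrusted machines.

```python
# Minimal sketch of volunteer computing (illustrative assumptions only; real
# systems add networking, scheduling, credit, and security).
from collections import defaultdict

REPLICATION = 2  # how many volunteers must independently return the same answer

def work(unit):
    """The computation a volunteer performs on one work unit (toy: sum of squares)."""
    return sum(x * x for x in unit)

work_units = {0: [1, 2, 3], 1: [4, 5, 6], 2: [7, 8, 9]}
volunteers = ["phone", "game_console", "laptop", "desktop"]

# Dispatch each work unit to REPLICATION different idle devices (round-robin here).
assignments = [(unit_id, volunteers[(unit_id * REPLICATION + r) % len(volunteers)])
               for unit_id in work_units
               for r in range(REPLICATION)]

results = defaultdict(list)
for unit_id, device in assignments:
    results[unit_id].append((device, work(work_units[unit_id])))

# Accept a unit's result only when the redundant copies agree.
for unit_id, replies in results.items():
    values = {value for _, value in replies}
    status = "accepted" if len(values) == 1 else "needs re-issue"
    print(f"unit {unit_id}: {replies} -> {status}")
```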

Distributed computing is in its infancy (and I am not referring to cloud computing). A number of technologies supporting distributed computing have been developed; some have survived and some have waned. A sophisticated distributed system is on par in importance with nanotechnologies and artificial intelligence, and it will support those other technologies as well. It has the potential to distribute the energy needs of processing rather than requiring a power plant dedicated to running a data center. It has the potential to distribute data storage so it is never lost and to provide a means for individuals to control their own personal information. It has the potential to provide mechanisms that capture data in real time and process it as needed, where needed, with the most efficient usage of resources. In so doing, it mirrors the real world (a la Gelernter's Mirror Worlds).
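To sketch the storage point above in code (a toy model with hypothetical peer names, not any particular system), each chunk of data can be addressed by its own content hash and replicated across several idle devices chosen from that hash, so losing any single device loses nothing and every copy can be verified:

```python
# Content-addressed, replicated storage across peer devices (illustrative sketch).
import hashlib

PEERS = ["phone", "tablet", "laptop", "desktop", "game_console"]
REPLICAS = 3

def chunk_id(chunk: bytes) -> str:
    """Content address: the chunk is named by the hash of its own bytes."""
    return hashlib.sha256(chunk).hexdigest()

def replica_peers(cid: str):
    """Pick REPLICAS distinct peers deterministically from the chunk id."""
    start = int(cid, 16) % len(PEERS)
    return [PEERS[(start + i) % len(PEERS)] for i in range(REPLICAS)]

def store(chunk: bytes, network: dict) -> str:
    cid = chunk_id(chunk)
    for peer in replica_peers(cid):
        network[peer][cid] = chunk
    return cid

def fetch(cid: str, network: dict) -> bytes:
    for peer in replica_peers(cid):
        chunk = network[peer].get(cid)
        if chunk is not None and chunk_id(chunk) == cid:  # verify integrity
            return chunk
    raise KeyError("all replicas lost")

network = {peer: {} for peer in PEERS}   # peer name -> {chunk id -> chunk bytes}
cid = store(b"sensor reading 42", network)
network["phone"].clear()                 # simulate one device disappearing
print(fetch(cid, network))               # still recoverable from the remaining replicas
```

A real system would layer peer churn handling, encryption, and access control on top of a scheme like this, which is where individual control of personal information would come in.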

So although building data center citadels and powerful HPC machines is valuable, developing and building sophisticated distributed computing systems is just as valuable. In fact, it is likely much more important.
