Friendly AI Research
New Omohundro Article

Steve Omohundro has posted an early draft of “Rationally-Shaped Artificial Intelligence,” the article he has submitted to Springer’s The Singularity Hypothesis. Abstract:

Systems with the computational power of the human brain are likely to be cheap and ubiquitous within the next few decades. As technology becomes more intelligent, we need to ensure that it remains safe and beneficial. This paper describes a rational framework for analyzing intelligent systems and a strategy for developing them safely. The analysis is based on von Neumann’s model of rational economic behavior. We introduce the “Rationally-Shaped Minds” model of intelligent systems with bounded computation. We show that as computational resources increase, there is a natural progression through stimulus-response systems, learning systems, reasoning systems, self-improving systems, to fully rational systems. We show that rational systems are subject to “drives” toward self-protection, resource acquisition, replication, goal preservation, efficiency, and self-improvement. Several of these drives are anti-social and need to be counteracted with analogs of human cooperativeness and compassion. We analyze the three basic strategies for controlling the behavior of intelligent systems. We describe the “Safe-AI Scaffolding” strategy which builds intentionally limited but safe systems to use in the construction of more powerful systems.

The piece builds on his earlier papers “The Nature of Self-Improving Artificial Intelligence” (2007) and “The Basic AI Drives” (2008); the latter is cited in the latest edition of Russell and Norvig’s standard AI textbook, Artificial Intelligence: A Modern Approach.

Friendly AI Papers from AGI-11

The conference proceedings from the Artificial General Intelligence 2011 conference (AGI-11) have been published. See the intelligence explosion bibliography for a full list of their contents. The proceedings include several papers relevant to Friendly AI, including Yudkowsky’s “Complex Value Systems in Friendly AI” (confusingly titled elsewhere as “Complex Value Systems are Required to Realize Valuable Futures”).

Yudkowsky will be giving a talk called “Open Problems in Friendly Artificial Intelligence” at the 2011 Singularity Summit in NYC.

Upcoming Friendly AI Publications

Academic publisher Springer has commissioned an edited, peer-reviewed volume devoted to the technological singularity. The Singularity Hypothesis will probably include chapters that discuss Friendly AI.

The January 2012 issue of the Journal of Consciousness Studies will be dedicated to responses to David Chalmers’ article The Singularity: A Philosophical Analysis. Some of these responses may discuss Friendly AI.

This post reports that Nick Bostrom is writing a book, Intelligence Explosion: Groundwork for a Strategic Analysis, which should discuss Friendly AI at length.

Friendly AI for Beginners

Brief introductions to the Friendly AI research program:

Details of the Friendly AI research program are given in this September 2011 interview with Luke Muehlhauser.

The concept of Friendly AI is linked to the notion of an “intelligence explosion,” so many relevant works and researchers are listed at IntelligenceExplosion.com.

The key researcher in the field is the Singularity Institute’s Eliezer Yudkowsky. See especially his Artificial Intelligence as a Positive and Negative Factor in Global Risk.

Friendly AI is mentioned in many other publications, including David Chalmers’ The Singularity: A Philosophical Analysis.