The proceedings of the Artificial General Intelligence 2011 conference (AGI-11) have been published; see the intelligence explosion bibliography for a full list of their contents. The proceedings include several papers relevant to Friendly AI, among them Yudkowsky’s “Complex Value Systems in Friendly AI” (confusingly titled elsewhere as “Complex Value Systems are Required to Realize Valuable Futures”).
Academic publisher Springer has commissioned The Singularity Hypothesis, an edited, peer-reviewed volume devoted to the technological singularity. It will probably include chapters that discuss Friendly AI.
This post says that Nick Bostrom is writing a book called Intelligence Explosion: Groundwork for a Strategic Analysis, which should discuss Friendly AI at length.
Brief introductions to the Friendly AI research program:
- Details of the Friendly AI research program are given in this September 2011 interview with Luke Muehlhauser.
- The concept of Friendly AI is linked to the notion of an ‘intelligence explosion’, so many relevant works and researchers are listed at IntelligenceExplosion.com.
- The key researcher in the field is the Singularity Institute’s Eliezer Yudkowsky; see especially his “Artificial Intelligence as a Positive and Negative Factor in Global Risk.”