Ensuring Smarter-than-Human Intelligence has a Positive Outcome

Nate Soares gives a talk at Google on the problem of aligning smarter-than-human AI with its operators' goals. The talk was inspired by Eliezer Yudkowsky's "AI Alignment: Why It's Hard, and Where to Start," and serves as an introduction to the subfield of AI alignment research.
