Statement on AI Risk

In May 2023, the Center for AI Safety organized a simple, one-sentence Statement on AI Risk. The statement, in its entirety:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Hooray for short, narrow, to-the-point consensus-building mechanisms like this!

Notably, Eliezer Yudkowsky signed this one (after declining to sign the FLI Pause Letter, which he argued understated the danger and asked for too little).