Yudkowsky Critiques OpenAI's Stated Goals
Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute, argues unaligned AI may cause human extinction and calls for strict international bans to prevent catastrophe.
- Eliezer Yudkowsky, in his new book with Nate Soares, argues that near-future AGI will cause global Armageddon and calls for an international ban enforced even at the risk of nuclear retaliation; of OpenAI's launch he said, “That was the day I realized that humanity probably wasn’t going to survive this.”
- Eliezer Yudkowsky cofounded the Singularity Institute for Artificial Intelligence, later renamed the Machine Intelligence Research Institute, to avert catastrophic AI scenarios, and he helped define the alignment problem.
- High-profile tech figures such as Sam Altman have praised Yudkowsky, who has produced extensive fan fiction, including the 1.8M-word Harry Potter and the Methods of Rationality, reflecting his unconventional output.
- OpenAI's Superalignment team took up alignment ideas inside a multi-billion-dollar company propping up the US economy, while David Krueger of the University of Montreal warns that superhuman AI will kill everybody.
- A personal tragedy, the 2004 death of his brother Yehuda, led Yudkowsky to donate $1,800 to the Machine Intelligence Research Institute and to cite a 99.5% catastrophe risk in his longtermist calculus.
20 Articles
How AGI became the most consequential conspiracy theory of our time
The idea that machines will be as smart as—or smarter than—humans has hijacked an entire industry. But look closely and you’ll see it’s a myth reminiscent of more explicitly outlandish and fantastical schemes.
The guru of the AI apocalypse
After two decades influencing some of the world’s most powerful people through blogging and fanfiction, writing a mainstream (if not airport) book seems unnecessary. Eliezer Yudkowsky was already one of the key figures providing the intellectual underpinnings of the artificial intelligence industry that is the sole thing keeping the US economy from recession. Every breathless editorial that makes any intelligent person feel like we’re living thr…
Will AI save us or destroy us?
For Eliezer Yudkowsky, the day OpenAI launched, the world ended. “That was the day I realized that humanity probably wasn’t going to survive this,” he said on a recent podcast with Ezra Klein. For the uninitiated, Yudkowsky is no fringe voice. He founded the Machine Intelligence Research Institute. He helped define the “alignment problem” — how to make sure superintelligent systems share human values. Wherever you land on the AI spectrum — doome…
The end is nigh – or is it?
When most people start screaming that the sky is falling, they can safely be ignored. But Eliezer Yudkowsky is not most people. He was one of the first to take the idea of superintelligent AI – artificial intelligence that greatly surpasses humanity – seriously. He played a role in introducing the founders of Google DeepMind
Coverage Details
Bias Distribution
- 45% of the sources are Center