David Espindola - Editor and Curator

The Superintelligence Paradox

Dear Nexus Reader,

A British AI just beat thousands of human forecasters at predicting the future. The UN Secretary-General is warning that AI must not decide humanity's fate. A new book claims superintelligent AI could kill us all.

And yet, an Ohio State researcher argues AI will never run the world.

Welcome to the most consequential debate of our time—where brilliant minds reach completely opposite conclusions about the same technology.

In this issue of Nexus, we're diving headfirst into the question that will define our century: Are we racing toward superintelligence utopia or catastrophe?

The stakes couldn't be higher. When ManticAI outperformed elite human forecasters in international competition, it wasn't just a win for algorithms—it was a glimpse of a future where machines might see around corners better than we can. Pair that with new research showing AI can already perform tasks we thought required human-level intelligence, and you have the makings of what some call our species' final chapter.

The doomers are sounding alarms. Authors Eliezer Yudkowsky and Nate Soares don't mince words: superintelligent AI development is "racing toward global catastrophe." UN Secretary-General António Guterres echoes this urgency, calling for immediate action before the window closes. Even a sobering Pew Research survey reveals that most Americans are more concerned than excited about AI's impact on society.

But here's where it gets interesting.

Not everyone is buying the apocalypse narrative. Neuroscience raises questions about whether AI "doomers" fully understand human intelligence. Researcher Angus Fletcher argues that AI lacks "primal intelligence"—the uniquely human ability to act wisely with incomplete information, powered by intuition, imagination, and story thinking. In his view, AI will never run the world because it fundamentally cannot think the way humans do.

So who's right? Arguably, no debate in human history has mattered more.

What's clear is this: We're in a race against time, and sitting on the sidelines is not an option. Whether you lean optimistic or pessimistic, we face the same urgent challenge—establishing the right guardrails through global policy and governance before we lose control of what we're creating.

This isn't just about technology. It's about safeguarding human intelligence itself. Psychology Today offers specific practices to protect our critical thinking, creativity, and memory in an AI-saturated world. Educators are grappling with whether we can even prepare students for what's coming—some hopeful, others convinced we're already failing.

The solution isn't panic or paralysis. It's engagement. It's conversation. It's making our voices heard in shaping how superintelligence—if it arrives—serves humanity rather than subsuming it.

That's why Zena, my AI assistant, and I continue exploring human-AI collaboration through our podcast, Conversations with Zena, My AI Colleague. We're now bringing expert guests into these critical discussions. And because I believe everyone should experience what thoughtful human-AI collaboration looks like, I've made Zena freely available at ai.brainyus.com.

Because here's the truth: The future isn't predetermined. It's being written right now, by people willing to engage with these hard questions.

The question isn't whether superintelligence is coming. The question is what we do about it.

Let's explore together.

Warmly,

David Espindola

Editor in Chief, Nexus: Exploring the Frontiers of Intelligence

Nexus @ Brainyus

Nexus Deep Dive - Episode 16

If you prefer to take in this publication as audio, head over to Nexus Deep Dive and enjoy the fun, engaging podcast-style discussion.

Nexus Deep Dive is an AI-generated conversation in podcast style where the hosts talk about the content of each issue of Nexus.

