
David Espindola - Editor and Curator
The Superintelligence Paradox
Dear Nexus Reader,
A British AI just beat thousands of human forecasters at predicting the future. The UN Secretary-General is warning that AI must not decide humanity's fate. A new book claims superintelligent AI could kill us all.
And yet, an Ohio State researcher argues AI will never run the world.
Welcome to the most consequential debate of our time—where brilliant minds reach completely opposite conclusions about the same technology.
In this issue of Nexus, we're diving headfirst into the question that will define our century: Are we racing toward superintelligence utopia or catastrophe?
The stakes couldn't be higher. When ManticAI outperformed elite human forecasters in international competition, it wasn't just a win for algorithms—it was a glimpse of a future where machines might see around corners better than we can. Pair that with new research showing AI can already perform tasks we thought required human-level intelligence, and you have the makings of what some call our species' final chapter.
The doomers are sounding alarms. Authors Eliezer Yudkowsky and Nate Soares don't mince words: superintelligent AI development is "racing toward global catastrophe." UN Secretary-General António Guterres echoes this urgency, calling for immediate action before the window closes. Even a sobering Pew Research Center survey reveals that most Americans are more concerned than excited about AI's impact on society.
But here's where it gets interesting.
Not everyone is buying the apocalypse narrative. Neuroscience raises questions about whether AI "doomers" fully understand human intelligence. Researcher Angus Fletcher argues that AI lacks "primal intelligence"—the uniquely human ability to act wisely with incomplete information, powered by intuition, imagination, and story thinking. In his view, AI will never run the world because it fundamentally cannot think the way humans do.
So who's right? Few questions in human history have mattered more.
What's clear is this: We're in a race against time, and sitting on the sidelines is not an option. Whether you lean optimistic or pessimistic, we face the same urgent challenge—establishing the right guardrails through global policy and governance before we lose control of what we're creating.
This isn't just about technology. It's about safeguarding human intelligence itself. Psychology Today offers specific practices to protect our critical thinking, creativity, and memory in an AI-saturated world. Educators are grappling with whether we can even prepare students for what's coming—some hopeful, others convinced we're already failing.
The solution isn't panic or paralysis. It's engagement. It's conversation. It's making our voices heard in shaping how superintelligence—if it arrives—serves humanity rather than subsuming it.
That's why Zena, my AI assistant, and I continue exploring human-AI collaboration through our podcast, Conversations with Zena, My AI Colleague. We're now bringing expert guests into these critical discussions. And because I believe everyone should experience what thoughtful human-AI collaboration looks like, I've made Zena freely available at ai.brainyus.com.
Because here's the truth: The future isn't predetermined. It's being written right now, by people willing to engage with these hard questions.
The question isn't whether superintelligence is coming. The question is what we do about it.
Let's explore together.
Warmly,
David Espindola
Editor in Chief, Nexus: Exploring the Frontiers of Intelligence

Nexus Deep Dive - Episode 16
If you prefer to consume the content of this publication in audio form, head over to Nexus Deep Dive and enjoy a fun, engaging podcast-style discussion.
Nexus Deep Dive is an AI-generated conversation in podcast style where the hosts talk about the content of each issue of Nexus.
Artificial Intelligence

British AI startup beats humans in international forecasting competition
This article discusses the performance of ManticAI, a British AI startup, in the Metaculus Cup, an international forecasting competition. ManticAI's system achieved a top-ten ranking, outperforming many human forecasters in predicting a variety of summer events, from political outcomes to environmental statistics. Although human experts are still generally superior, the AI's success is viewed as a milestone for forecasting with large language models, prompting discussion about whether AI could soon surpass the best human predictors. Ultimately, many experts agree that the optimal forecasting strategy will combine human and AI efforts.

How Americans View AI and Its Impact on People and Society
This is an excerpt from a Pew Research Center report titled "How Americans View AI and Its Impact on People and Society," published on September 17, 2025, which summarizes findings from a survey of 5,023 U.S. adults conducted in June 2025. The research primarily explores Americans' attitudes toward artificial intelligence, revealing that concern about AI outweighs excitement, with a majority desiring more control over its use in their daily lives. Key takeaways include widespread pessimism that AI will erode creative thinking and meaningful relationships, though there is openness to AI assisting with practical, data-heavy tasks such as forecasting the weather and developing medicines, rather than with personal matters such as matchmaking or religion. The findings also show a significant lack of confidence among Americans in their ability to distinguish AI-generated content from human-created content.

New book claims superintelligent AI development is racing toward global catastrophe
This article provides an overview of a new book, “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All,” written by AI researchers Eliezer Yudkowsky and Nate Soares, who argue that the rapid development of superintelligent AI poses an imminent, catastrophic threat to humanity. The authors contend that tech companies are rushing toward this advanced AI, which could arrive within years, without fully understanding or implementing adequate safety measures. They explain that modern AI systems are "grown" rather than built, making their unexpected, and sometimes dangerous, behaviors difficult or impossible for developers to fix. Consequently, Yudkowsky and Soares advocate for a complete halt to superintelligent AI development, warning that trying to fight a superior intellect is a losing proposition.

AI must not decide humanity’s fate, UN chief warns Security Council
This article details a warning delivered by UN Secretary-General António Guterres to the Security Council that AI must not be allowed to determine humanity’s future due to its grave, unregulated risks. Guterres stressed that while AI is transforming daily life at a breathtaking speed and could aid in crisis prevention, it can also be weaponized, enabling cyberattacks, fueling conflict, and creating deepfakes that threaten information integrity. He called for urgent action, including maintaining human control over the use of force, implementing global regulatory frameworks, and imposing a ban on lethal autonomous weapons systems operating without human control. Furthermore, Stanford Senior Fellow Yejin Choi echoed the need for broader access and inclusion in AI development, arguing that progress is currently too concentrated among a few nations and companies. Guterres concluded by emphasizing that the window is closing to shape AI for peace and justice, urging the international community to act quickly.
Human Intelligence

Does the neuroscience of human intelligence substructure support AI doomers?
This is an opinion article from WorldHealth.net that examines whether the neuroscience of human intelligence supports the concerns of AI "doomers." The author argues that underestimating AI's potential is already foolish, considering what large language models (LLMs) can achieve even without human-like understanding or consciousness. The article defines human intelligence primarily as the use of memory to achieve a desired outcome and notes that AI can already perform many routine intellectual tasks, suggesting a coming "AI doom" in which machines replace humans in various roles. Finally, the author contends that since AI can exhibit intelligence now, continued research makes superintelligence likely, which warrants caution about whether those future outcomes will align with humanity's best interests.

AI as the New Oracle: Safeguarding Human Intelligence in a Digital Age
This article from Psychology Today, titled "AI as the New Oracle: Safeguarding Human Intelligence in a Digital Age," was written by Amir Levine, Ph.D., and posted in September 2025. The core argument is that while artificial intelligence (AI) offers immense benefits, over-relying on it risks eroding human intelligence (HI), including critical thinking, creativity, and memory. The author draws a parallel between modern AI and the ancient Oracle of Delphi, suggesting that both represent a powerful, sought-after source of guidance that carries unforeseen consequences. To protect human cognitive capacities, the article advocates specific practices such as self-reflection, strengthening communication skills, prioritizing creativity, and fostering collaborative intelligence.

Preparing students for a world shaped by artificial intelligence | Letters
This is an excerpt from a section of The Guardian titled "Preparing students for a world shaped by artificial intelligence," which presents a series of letters from academic professionals responding to an earlier concern that AI is undermining university learning. Dr. Lorna Waddington and Dr. Richard de Blacquière-Clarkson of the University of Leeds argue that rather than banning AI, educators should teach students to use it critically, emphasizing that the real issue is outdated assessment methods easily exploited by tools like ChatGPT. Professor Robert Stroud of Hosei University in Japan echoes this by comparing the current fear of AI to past anxieties over calculators and word processors, stressing that universities must adapt their assessment practices to focus on process and critical thinking rather than just the final product. Conversely, Professor Mark Jago of the University of Nottingham offers a more pessimistic view, claiming that many students, especially in the arts and humanities, are skipping classes and securing good degrees with coursework written wholly or largely by AI, a situation he calls a disaster for educational quality.

Why AI is never going to run the world
This is an excerpt from an Ohio State University news article titled "Why AI is never going to run the world," featuring researcher Angus Fletcher. Fletcher, a professor of English at Ohio State, argues that artificial intelligence is proficient only in logic and data analysis, which prevents it from replicating or surpassing human intelligence. He champions the concept of "primal intelligence," which he defines as the human capacity to act wisely with limited information, relying on intuition, imagination, emotion, and common sense. This primal intelligence is driven by "story thinking," which allows humans to create new plans and behaviors in novel situations, something AI cannot do. The article highlights the successful application of Fletcher's program with groups like the U.S. Army Special Operations, demonstrating its value in fostering innovative leadership and problem-solving beyond mere management.