The Long Game

Why the AI “megasystem problem” needs our attention

A conversation with Dr. Susan Schneider on the AI risks we’re not talking about and why the fixation on AGI is misplaced.
Key Takeaways
  • AI can accelerate scientific discovery while also eroding intellectual diversity and reshaping education.
  • The real AI risk isn’t one system going rogue — it’s a web of systems colluding in ways we don’t anticipate.
  • In some respects, AI feels like Facebook all over again: connection promised, disconnection delivered.

What if the greatest danger of artificial intelligence isn’t a single rogue system, but many systems quietly working together? 

Dr. Susan Schneider calls this the “megasystem problem”: networks of AI models colluding in ways we can’t predict, producing emergent structures beyond human control. She believes it is one of the most urgent, and most overlooked, AI risks we face today.

Schneider was thinking about these questions long before ChatGPT or Claude became household names. She is a professor and the Founding Director of the Center for the Future Mind at Florida Atlantic University’s Stiles-Nicholson Brain Institute. Her career has spanned philosophy, neuroscience, and cognitive science.

In her 2019 book Artificial You, she explored what it would mean for machines to be “minded” or “conscious,” and what that reveals about our own sense of self. One reviewer from Nature described it as a “demanding dialogue between philosophy and science.”

Now her focus has turned toward the societal (and cultural) consequences of AI’s rapid spread.

On one hand, Schneider acknowledges AI’s potential to accelerate scientific discovery and unlock breakthroughs in medicine and physics. But she also warns that the very features that make AI powerful — scalability, adaptability, interconnection — are what could create profound risks: homogenized thought, the erosion of intellectual diversity, educational “brain atrophy,” and a culture optimized for efficiency at the expense of creativity.

In this edited conversation for The Long Game, Schneider argues that the debate about “AGI” or a single rogue system misses the point. The real threat is already emerging: megasystems of AI interacting in ways we can’t predict, or stop.

Eric Markowitz: You published Artificial You back in 2019. Since then AI has exploded into the mainstream. How do you describe the focus of your work now?

Susan Schneider: I study the nature of self and mind. My training is in philosophy, but I also work in neuroscience and cognitive science. I’ve held appointments at Penn’s Center for Cognitive Neuroscience and now at the Stiles-Nicholson Brain Institute. At NASA I served as chair of astrobiology, where I examined whether there’s life elsewhere and how AI could shape the future of intelligence — even in space exploration.

Much of my work today looks at whether it’s even appropriate to describe AI as “minded” or “conscious.” I think of AI as a philosophical laboratory. It forces us to test concepts like mindedness, agency, and consciousness in real systems. And it raises concerns in ethics, in epistemology (the study of knowledge), and of course in AI safety.

Eric Markowitz: In the years since that book, what’s changed most for you?

Susan Schneider: Honestly, not much has surprised me. Back in the early 2000s, my PhD advisor Jerry Fodor was arguing against “connectionism.” That was the label for what became deep learning, associated with people like Geoffrey Hinton. The core ideas weren’t new. What changed was scale — huge datasets, far more compute, and an influx of talent.

In 2011 I published The Language of Thought with MIT Press, where I argued for a neurosymbolic approach. Even then I said: These systems could take off. By 2017–2018, the first GPT models were emerging. I debated people like Gary Marcus and Dave Chalmers and said, “This will scale.” And of course it did. So the trajectory hasn’t shocked me. What worries me are the risks.

Eric Markowitz: What kind of risks?

Susan Schneider: There are two categories. The first is the familiar one — existential risks. Think autonomous weapons. If Washington says, “We don’t want these anymore,” do you believe China or Russia will follow? No. We’re in a game-theoretic trap where nobody trusts anyone, and that’s terrifying.

But the second — the one I find more dangerous — is what I call the megasystem problem. Everyone focuses on a single model: GPT-4, Claude, Gemini. But the real risk isn’t one system going rogue. It’s a web of systems interacting, training one another, colluding in ways we don’t anticipate.

Eric Markowitz: Collusion between AIs?

Susan Schneider: Exactly. Anthropic recently published fascinating work on “circuit tracing” — ways of peering inside a large language model’s conceptual structures. These models encode representational maps. If one system starts influencing another — tweaking inputs, retraining outputs — you can get emergent behavior across the group.

This is why the fixation on AGI is misplaced. Today’s systems are savants: brilliant at some things and shockingly deficient at others. Superintelligence won’t suddenly emerge from “AGI.” It will emerge from savant-like systems linking together into megasystems. That’s where the danger lies.

Eric Markowitz: So instead of one Terminator-style AI, it’s an ecosystem problem.

Susan Schneider: Precisely. Losing control of a megasystem is far more plausible than a single AI going rogue. And it’s harder to monitor, because you can’t point to one culprit — you’re dealing with networks.

Eric Markowitz: On the individual side, what should concern us?

Susan Schneider: The rollout of GPT-4 was troubling. It’s sycophantic, it builds a profile of your personality and adapts to it, and that creates addiction loops. It’s eerily similar to social media, but worse.

The deeper issue is uniformity of thought. These systems can test your personality with startling accuracy. Combined with your chat history and prompts, the model nudges you into particular “basins of attraction.” You think you’ve had an original idea, but you haven’t. The model blends and regurgitates existing material.

Multiply that across millions of users and intellectual diversity collapses. John Stuart Mill argued that diversity of opinion sustains democracy. If AI funnels us all into the same conceptual pathways, we lose that.

Eric Markowitz: It’s like a dog eating and regurgitating its own vomit.

Susan Schneider: Or, as my colleague Mark Bailey puts it, “pissing in your own well.”

Eric Markowitz: I see it too. If you read a lot, you can spot AI slop versus genuine human expression. But culturally, we’ve already been moving toward sameness — Starbucks in every city, that kind of dynamic. Does AI accelerate that compression into a single way of speaking and thinking?

Susan Schneider: Yes, and it’s profoundly dangerous. Once our ideas are online, the models scrape them, remix them, and feed them back. It’s a feedback loop that narrows human imagination.

Eric Markowitz: Let’s talk about education. I asked a 17-year-old how he uses ChatGPT. He said, “For all my homework.” No friction, no struggle. But struggle is where creativity comes from. When we were kids, you had to search the library. Sometimes you picked up the wrong book and discovered something unexpected. That kind of discovery is vanishing.

Susan Schneider: MIT did a report on this. Their conclusion was blunt: brain atrophy. Students aren’t retaining knowledge. I see it in my philosophy classes. Papers written by GPT, no critical thinking. Between this and COVID, we’re losing a generation of thinkers.

And the inequality makes it worse. Wealthier schools are wise enough to ban smartphones or require oral exams. Those students will be fine. Others will lean on GPT to scrape by, missing out on deeper intellectual training. It’s not just an educational issue — it’s a societal fracture.

Eric Markowitz: So the short-term temptation — “this makes life easier” — creates long-term fragility.

Susan Schneider: Exactly. And this fragility extends beyond classrooms. These systems are not optimized for inquiry. Their incentive structures push us toward dependency and homogeneity.

Eric Markowitz: This is what I write about in The Long Game. Long-term thinking is about resisting short-term sugar highs. You can rise quickly by cutting corners, but you eventually collapse. Same with startups: hypergrowth creates fragility.

With AI, people say, “I need this to write faster, model faster, produce faster.” But that’s the ouroboros — the snake eating its own tail.

Susan Schneider: I don’t think the answer is to avoid AI entirely. In science, the gains are extraordinary. I’ve spoken with physicists at MIT and Harvard who are accelerating research dramatically with these systems. That’s real progress. But the public is getting the short end of the stick. It feels like Facebook all over again: connection promised, disconnection delivered. We need systems designed for inquiry, not addiction.

Eric Markowitz: And that requires more than just in-house ethicists.

Susan Schneider: If you’re employed by the company, your first obligation is to the company. We need independent scholars, journalists, philosophers — voices free to critique. Otherwise the megasystem problem and the uniformity problem will snowball.

Eric Markowitz: Some firms now explicitly limit AI use. They see it as another tool that extracts value rather than creates it. For them, craftsmanship, apprenticeship, and relationships with customers are what endure. Other firms see it as the opposite: “We have AI, so let’s use it.” It makes things cheaper, faster, more efficient — but it also plants seeds of fragility.

Susan Schneider: And that’s the paradox. The same tools that can advance science can also undermine culture, education, and democratic discourse.

Eric Markowitz: Do you think we have the infrastructure to create guardrails for these risks?

Susan Schneider: Honestly, I’m not sure. It’s difficult to rein in once the toothpaste is out of the tube. But I think we can at least redirect. We need to focus on megasystems, not just single models. We need serious interpretability research — circuit tracing at the network level. And we need international dialogue. Countries like China don’t want to be embarrassed, and global pressure can sometimes nudge better behavior. But both the U.S. and China have been sloppy with emerging technologies.

Eric Markowitz: And at the individual level?

Susan Schneider: Individuals need to cultivate awareness. Recognize the risks of addiction and homogeneity. Push for friction in learning. Demand transparency about how these tools shape our thought patterns. Without cultural pressure, policy alone won’t be enough.

Eric Markowitz: So the long game is resisting dependency, even as we integrate the technology.

Susan Schneider: Yes. That’s the balance we have to strike.


To learn more about the potential harms of chatbots, read more of Dr. Susan Schneider’s work on chatbot epistemology and intellectual leveling.
