The Well

Social media supercharged our disagreements. Could AI help us resolve them?

Duke sociologist Dr. Christopher Bail on the tech’s potential to foster empathy in an age of division.
Image credit: Library of Congress / Adobe Stock / Jacob Hege / Big Think
Key Takeaways
  • Generative AI can help scale the tools of conflict mediation online in real time. 
  • AI-assisted conversations lead to greater openness across ideological lines. 
  • The future of AI and civil discourse depends on intentional, visionary tech design. 

Big question: What will AI mean for how humans communicate with one another?

We’re only beginning to glimpse the answer. ChatGPT maker OpenAI is reportedly building a social platform to rival X — a move made even more intriguing by CEO Sam Altman’s recent claim that we’ve reached “the singularity.”

Meanwhile, Meta has poured another $14 billion into AI model training, likely to supercharge its generative chatbots and AI Ray-Bans — a bold bet, considering Instagram and Facebook are already flooded with havoc-wreaking deepfakes.

We’re communicating through AI in more ways than we can count. But what this means for how we understand each other — or ourselves — remains an open question. And it’s a question that matters. 

In his 2021 book Breaking the Social Media Prism, Christopher Bail, Ph.D., a Duke sociologist and founder of the university’s Polarization Lab, showed how online strife fuels real-world division. As AI increasingly shapes our conversations, filters our news, and influences what we believe, it risks pushing us further apart — not a recipe for productive civil discourse or legislative consensus.

For most users, the goal of social media is to improve status or win an argument. It’s not really about learning or intellectual humility.

Christopher Bail, Ph.D.

And yet, there’s real reason for hope. In a follow-up study published in the Proceedings of the National Academy of Sciences, Bail found that generative AI could do the opposite: encourage more respectful conversation and increase openness to opposing views. That is — if we choose to build and use it that way.

In this exclusive conversation, The Well spoke with Bail about the dangers of polarization, the promise of AI-mediated dialogue, and his call for tech leaders to design these tools with care, clarity, and purpose.

The Well: What first drew you to study communication technology and polarization? 

Bail: The story of my research, oddly enough, begins in 1991 in the French Congo. My father worked for the World Health Organization, and we lived there during a long and very violent civil war. So from a young age I had a natural appreciation for what can go wrong, and how precious what we have in the United States is.

As I went to college and grad school and became a professor, my generation experienced foundational events like September 11th and divisive elections, plus the rise of unprecedented technologies like social media and artificial intelligence. For me, it was natural to put those two things together and study how technology drives social conflict.

The Well: Why do people love to fight so much online?

Bail: For most users, the goal of social media is to improve status or win an argument. It’s not really about learning or intellectual humility. It’s like Roman times: There’s an audience, there’s a fight, and someone wins and someone loses. Extreme content gets a lot of comments, so algorithms will push it to the top of feeds, even if the comments are just “you’re crazy.”

I’ve proposed a “bridging algorithm,” which surfaces content that gets engagement from [a more diverse range] of users. It signals that you can’t just antagonize people and get to the front page. When a student of mine studied this intervention, we saw a 25% decrease in the spread of misinformation. Algorithmic strategies like that can incentivize better behavior and make it less fun for trolls to play.
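To make the idea concrete, here is a minimal sketch of a bridging-style ranking, assuming each post carries a list of the users who engaged with it and a coarse ideology label for each. The data structure, labels, and scoring formula are illustrative, not the Polarization Lab’s actual algorithm.

```python
# Minimal sketch of a "bridging" ranking: posts are scored not by raw
# engagement but by how evenly that engagement spans ideological groups.
# The Post structure, group labels, and scoring formula are illustrative,
# not the Polarization Lab's actual implementation.
from collections import Counter
from dataclasses import dataclass
from math import log

@dataclass
class Post:
    text: str
    engaged_users: list  # list of (user_id, ideology_label) tuples

def bridging_score(post: Post) -> float:
    """Entropy of engagement across ideology labels, weighted by volume.

    A post engaged with by only one side scores near zero; a post engaged
    with by a balanced mix of groups scores highest.
    """
    counts = Counter(label for _, label in post.engaged_users)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    entropy = -sum((n / total) * log(n / total) for n in counts.values())
    return entropy * log(1 + total)  # diversity x (damped) engagement

def rank_feed(posts: list) -> list:
    return sorted(posts, key=bridging_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("Outrage bait", [("u1", "left")] * 40),
        Post("Cross-cutting take", [("u2", "left")] * 12 + [("u3", "right")] * 10),
    ]
    for p in rank_feed(feed):
        print(round(bridging_score(p), 2), p.text)
```

Under a scheme like this, a post that racks up engagement from only one camp sinks in the feed, while one that draws both sides rises, which is the incentive shift Bail describes.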

The Well: But how do you make people want to connect and understand each other?

Bail: We have randomized controlled trials suggesting that getting together at a dinner table with a skilled conflict mediator works well. But even if there were a surplus of people willing to do that, there are only so many skilled conflict mediators. So, how do you take this successful intervention and try to scale it?

The Well: That’s where AI comes in?

Bail: Yes. I was very fortunate to have early exposure to large language models (LLMs) like ChatGPT, and I immediately realized they’re very good at active listening. If you’ve gone to marriage counseling, you’ve probably heard this term, right? It’s like Conflict Mediation 101 — you rehearse someone’s statement back to them to demonstrate that they’ve been heard. It’s a very simple thing, but in the heat of an argument, we’re not good at this.

LLMs are also skilled at “style transfer,” like taking a Picasso and turning it into a Van Gogh. These models [assess] how pixels or words relate to one another, and then transpose — preserving the meaning of something while changing the style of it.

So we recruited a bunch of students at Duke and Brigham Young University to have a completely unstructured online conversation about gun control — there’s some nice ideological heterogeneity across the two schools.

The students in the study’s experimental condition received AI assistance, where GPT-3 would suggest rephrasings grounded in evidence-based conflict mediation. For example, if someone hadn’t demonstrated active listening, the bot suggested restating the other person’s argument in the response.
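Here is a rough sketch of how such a rephrasing assistant could be wired up today with a chat-style LLM API, illustrating the style-transfer idea of preserving a message’s meaning while changing its tone. The prompt wording, model name, and function names are assumptions, not the study’s actual implementation (which used GPT-3).

```python
# Illustrative sketch of an AI rephrasing assistant in the spirit of the
# study's experimental condition: the model proposes a more mediation-
# friendly version of a draft reply, and the user decides whether to send
# it, edit it, or keep the original. Prompt wording, model name, and
# function names are assumptions, not the study's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You help people discuss divisive topics. Rewrite the user's draft "
    "reply so it (1) restates the other person's point to show it was "
    "heard (active listening), (2) keeps the user's own position intact, "
    "and (3) avoids insults or sarcasm. Return only the rewritten reply."
)

def suggest_rephrasing(partner_message: str, draft_reply: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in model; the study used GPT-3
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": (
                f"Their message: {partner_message}\n"
                f"My draft reply: {draft_reply}"
            )},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    suggestion = suggest_rephrasing(
        "Background checks just punish law-abiding gun owners.",
        "That's ridiculous, you clearly don't care about public safety.",
    )
    print("Suggested rephrasing:\n", suggestion)
    # In the experiment, participants could accept, edit, or ignore
    # the suggestion before sending their reply.
```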

[Illustration: dismissive, polite, and empathetic responses to the same conversation about guns and democracy.]

The Well: Did people actually use the bot’s more diplomatic language?

Bail: We were pretty surprised that about 80% of people accepted the rephrased messages. Some of the other 20% chose to edit their responses on their own. The exciting finding was that when people used the rephrased messages, their discussion partner had more favorable attitudes towards the conversation. They thought it was more productive, less stressful, and they expressed a greater desire to have future discussions across ideological divides.

[Chart: politeness features in rephrased versus original messages, across five categories.]

The Well: OK, but would this concept hold up in the wild?

Bail: Yeah, so a little while after that, someone from Nextdoor approached me. It’s meant to bring neighborhoods together — that’s the vision — but a nationally representative survey had named it America’s most toxic social media platform. Admirably, they were looking for solutions.

The question became, can the LLM tone down an already toxic conversation? Over a year, Nextdoor deployed a version of our tool in a large field experiment, again with the bot suggesting rephrasings based on conflict mediation. This produced about a 15% decrease in toxic language.

When people used the rephrased messages, their discussion partner had more favorable attitudes towards the conversation. They thought it was more productive, less stressful, and they expressed a greater desire to have future discussions across ideological divides.

Christopher Bail, Ph.D.

It suggests that this intervention can reach people who are already in active, heated conflict: the smaller population who tend to ruin the internet for the rest of us.

The Well: Do you think AI could help us have more productive conversations offline, too? 

Bail: Some of our research has been picked up by a young faculty member at Stanford named Diyi Yang, who really cleverly realized that LLMs can be trained to impersonate people. That can be bad in the case of identity theft or phishing, but it could be really useful for people who need to practice a difficult conversation with someone at a great social distance from them. Think of a first-year male medical resident preparing to talk to a teenager about birth control.

And so Diyi’s team built a tool called Rehearsal where [study subjects] practiced talking to digital versions of the people they were about to have difficult conversations with. Later, the real conversations went much better in many different ways.
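A toy sketch of the role-play idea behind a practice tool like this might look as follows, with the model prompted to stay in character as the conversation partner. The persona prompt, model choice, and function names are illustrative assumptions, not the Stanford tool’s actual code.

```python
# Toy sketch of the role-play idea behind a practice tool like Rehearsal:
# the model is prompted to impersonate the person you need to talk to,
# so you can try the conversation before having it for real. The persona
# prompt, model name, and function names are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are role-playing a 16-year-old patient who is skeptical of "
    "doctors and embarrassed to discuss birth control. Stay in character "
    "and respond in one or two sentences."
)

def practice_turn(history: list, user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": PERSONA}] + history,
    )
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    history = []
    print(practice_turn(history, "Hi, I wanted to check in about how you're doing."))
    print(practice_turn(history, "It's okay if this feels awkward. Anything you say stays private."))
```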

The Well: What about at the societal level? Can AI help there?

Bail: So, a really interesting group from Oxford and Google DeepMind found that you can ask an LLM to interview everybody involved in a debate and come up with a consensus statement. Groups that used the LLM-assisted process found consensus about twice as fast as those who didn’t.
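The loop Bail describes can be sketched roughly as follows: collect each participant’s view, ask an LLM to draft a statement they might all endorse, then revise it against their critiques. The prompts, model, and function names here are illustrative assumptions, not the Oxford and DeepMind team’s actual system.

```python
# Rough sketch of a consensus-statement loop: gather each participant's
# view, have an LLM draft a statement they might all endorse, collect
# critiques, and revise. Prompts, model, and function names are
# illustrative assumptions, not the actual research system.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def draft_consensus(opinions: dict) -> str:
    listed = "\n".join(f"- {name}: {view}" for name, view in opinions.items())
    return ask(
        "Write a short statement that every participant below could "
        f"plausibly endorse, acknowledging points of disagreement:\n{listed}"
    )

def revise(statement: str, critiques: dict) -> str:
    listed = "\n".join(f"- {name}: {c}" for name, c in critiques.items())
    return ask(
        "Revise this statement to address the critiques.\n\n"
        f"Statement: {statement}\n\nCritiques:\n{listed}"
    )

if __name__ == "__main__":
    opinions = {
        "Ana": "New housing should be built quickly, even near my street.",
        "Ben": "Development is fine, but neighbors need a real say first.",
    }
    statement = draft_consensus(opinions)
    critiques = {"Ben": "Say explicitly how residents will be consulted."}
    print(revise(statement, critiques))
```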

The Well: Let’s dream a little. What would it feel like to live in a world where we’re finally able to have productive conversations on big issues?

Bail: I don’t want to get too Pollyanna-ish here — I’m an optimist but not a techno-optimist. It’s a near-certainty that malicious actors will use AI to create confusion or spread misinformation. But there are grounds for excitement, too.

What I hope is that we will learn from the lessons of social media. We’ve spent so much time focused on this unwinnable war, and too little time trying to articulate what we want platforms to achieve. What we need is for the social media and AI platforms that come next to really have a vision for how humans should use them — how we should live together and broker conflict. Until we get there, we can expect an uphill struggle.

We interviewed Christopher Bail for The Well, a Big Think publication created in partnership with the John Templeton Foundation. Together, we’re exploring life’s biggest questions with the world’s brightest minds. Visit The Well to see more in this series.

