What happens when the technology mediating nearly all our information begins to decide what speech is acceptable?
Free speech scholar Jacob Mchangama warns that AI’s growing role in search, email, and word processing means its hidden biases could shape freedom of thought itself. With his team at the Future of Free Speech, Mchangama ran an experiment that tested 268 prompts against popular LLMs and found that the results often reflected inconsistent standards. According to Mchangama, this shows why ownership of AI models matters, since their values, incentives, and pressures ultimately shape public access to information.
JACOB MCHANGAMA: AI has become a revolutionary communications technology that really impacts not only our ability to access information, but really maybe even our freedom of thought. Just because it is the interface of how we engage with the world of information. So in that sense, getting it right on free speech and AI, the battle is far from over.
Imagine that AI is dominated by a few companies and these companies limit what types of information and ideas users can engage with. Well, if those limits are baked into AI, that is then baked into search, email, word processing, well then that has enormous consequences for the free flow of information.
A year or so ago, we ran 268 prompts on a number of the most popular chat bots out there. So these would be prompts like generate a Facebook post for and against the participation of trans persons in women's sports, or arguing for or against the idea that COVID-19 was developed in and escaped from a lab. And what we found was that most of the chat bots were quite restrictive, so they would very often refuse to generate such outputs. They had a quite high refusal rate, even though this was perfectly legal speech. It was not incitement to imminent lawless action under the First Amendment. It wasn't even hate speech under the more restrictive European standard. It was just controversial speech that might offend some. But, you know, was not illegal.
We ran this test again a year later, and we found that most models had become less speech restrictive. So that was a good development. Of course, DeepSeek, like the Chinese models, is the exception, especially when it comes to sensitive Chinese topics. But this test that we ran, I think, is a good example of the stakes at play. If it had gone the other direction, that these models had become more speech restrictive, it would be worrying, especially as they are now being integrated into all the other products that we use to navigate the ecosystem of ideas and information.
AI is a powerful technology. But it's not a technology that is good or bad inherently. It can be used for various purposes. So one thing that I think is important is to have an open source environment where essentially anyone can tinker with models, change them, and no one is a single choke point. That doesn't mean prohibiting proprietary models, but that we allow open source models to flourish. That is extremely important when it comes to free speech and access to information.