Two Key Guardrails for AI

AI is not infallible. It's not some kind of magic technology that has all the answers and all the solutions. It is a very powerful technology that can make enormous positive contributions to humankind: much better health care, education, and environmental policies. But it is not infallible, so we need guardrails to protect us against the dark side of this technology.

At the present moment, the first of two very important guardrails or regulations is that we should ban fake humans. AIs are developing the capability to counterfeit humans, to pretend to be human themselves. If you talk online with someone and you don't know whether it's an AI or a human, this will destroy the democratic conversation. The whole meaning of democracy is that large numbers of people converse about the issues of the day. Now imagine a large group of people standing in a circle and talking, when suddenly a group of robots enters the circle and starts talking very loudly, very emotionally, and very persuasively, and you can't tell who is a human and who is a robot. That is the situation we are now living through. So just as we ban fake money to protect the financial system, we need to ban bots from the conversation. We need to ban fake humans. AIs should be welcome to talk with us only if they identify themselves as AIs.

Another key guardrail is to hold corporations responsible for the decisions and actions of their algorithms. A key example is social media, which is being overrun by fake news, conspiracy theories, and so forth. Big tech companies tell us that they don't want to censor their human users, and this is a good argument; we should be very careful before we start censoring human expression. However, what is really driving this wave of fake news and conspiracy theories is the algorithms that deliberately spread, recommend, and play this misinformation for us. And this is something the corporations should be liable for. If some human invents a conspiracy theory, that is his or her fault, not the fault of Facebook, Twitter, or YouTube. But if the Facebook, Twitter, or YouTube algorithm decides to promote that specific content, to recommend that conspiracy theory, that is on the company. It is liable for the actions and decisions of its own algorithms.