Using AI Ethically

10 lessons • 57 mins

1. Embracing the AI Advantage (08:20)
2. Our Inevitable Future with AI (06:28)
3. Four Guiding Principles for Using AI (07:00)
4. Getting Started with AI (04:52)
5. Prompting AI (04:36)
6. Dealing with Hallucinations (05:51)
7. Using AI as a Sounding Board (03:54)
8. Using AI as a Coach (03:59)
9. Using AI Ethically (05:46)
10. How to Lead with AI (06:58)

Plagiarism

When you think about content, originality, and plagiarism, I think you need to consider both how the AI is trained and what it outputs. On the training side, there is reason for concern, and there are ongoing lawsuits about these issues, because the AI is trained on the Internet. That said, the AI does not directly reproduce copyrighted material in most cases. When it learns something, it learns statistical patterns between words, not the actual words themselves.

Then there’s the question of outputs. AI produces different content every time. It’s not taking in content and spitting it back out like a database or an Internet search; it’s creating new content each time by drawing on those statistical patterns between words. There are open debates about whether LLM writing can be copyrighted, and you cannot reliably tell programmatically what was written by an AI and what wasn’t. I would be concerned about the ethical standards of how we train AI. I would be less concerned that my output from an LLM is going to violate copyright in some way.
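The idea that a language model stores patterns between words rather than the documents themselves can be made concrete with a toy sketch. The bigram model below is a deliberately simplified illustration (real LLMs are vastly more complex): it keeps only word-to-word transition counts from its training text, and "generates" by sampling from those transitions, so its outputs recombine patterns rather than replay the source. All names here (`corpus`, `generate`) are made up for the example.

```python
import random
from collections import defaultdict

# Toy training text; a real model would see billions of words.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# The "model" is just a table of which words follow which --
# the original sentence is never stored as a whole.
counts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a fresh word sequence from the transition table."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = counts.get(words[-1])
        if not options:  # dead end: no observed successor
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Different seeds produce different sequences from the same table, which loosely mirrors the point in the transcript: the output is assembled anew from statistical patterns each time, not retrieved verbatim.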

Bias

Bias is another complicated issue because it enters the system in multiple ways and comes out in multiple ways. First, these models are trained on data. That data tends to be collected from the Internet, tends to be English-language data, and tends to be data that would appeal to or be connected to the West Coast firms that gather it. And that data has biases in it. Then, in an attempt to de-bias the LLM’s results and improve them, there is a second stage of training called reinforcement learning from human feedback (there are other techniques as well), in which humans rate the quality of various answers to try to remove bias, improve answer quality, and so on. This removes some biases but adds others. On top of that, the systems have different prompts that can also push them in different biased directions.

There is a paper showing, for example, that if you ask GPT-4 to write a recommendation letter for a woman, it will more often mention that she is warm, while a recommendation letter for a man will more often mention that he is competent. It is the same kind of bias we see among humans in these cases, and so it can be very hard to recognize. On top of that, these biases change depending on how you interact with the AI. If you speak to the AI in Korean, for example, there’s a paper showing that it answers personality tests much more like a Korean would; if you speak to it in English, you get a much more American-style response. Bias is a very complicated issue because it is not always obvious and it runs quite deep in these systems. The result is that AIs are by no means unbiased, but their biases can often be quite subtle.

Considering the Implications

There are layers of concern we need to have about these systems, about how we use them, and about what they mean. They have fairly large environmental footprints. They’re owned by a few companies. How we think about these issues is a big deal. But they’re the kinds of issues we’ve confronted in tech many times before, and people have tended to close their eyes to them; I hope people are being more considerate now. Individually, we have to be ethical in our use. I am surprised at how many people use AI to write performance reviews. That makes me a little nervous, because a performance review is supposed to come from you. Are you ethically disclosing that you’re using AI? So I think we each have to have an individual sense of ethics alongside legal policy, which isn’t easy. The only way to use this ethically is to think about the implications of your decision-making: if everyone used AI the way I do, would the world be a better place? We are in the early days of a new technology, and modeling good behavior and use is really important.