Encourage open use
So I spoke to somebody who wrote the policy banning ChatGPT use at a bank, and she used ChatGPT to write it. So there’s a question of like, is that harmful? Is that bad? Is that good? A lot of companies have either blanket bans or completely unclear policies. And what that’s doing is driving use underground, because everyone’s using AI anyway. I don’t wanna go into work and handwrite essays anymore. Like, that would be ridiculous. If I have an AI that can do some of the work for me, I’m gonna wanna use that tool. And if I have to use it secretly on my phone, I’ll use it secretly on my phone. Companies have to be realistic about what their expectations of employees are. Issuing guidelines like, you know, you could be punished, you could not, just encourages people to not tell you they’re using it. They all become secret cyborgs.
So for organizations to be effective at using AI, they have to take advantage of the one thing an organization has that startups don’t have, which is people. AI works best at the individual level. It works best when people figure out what works well for them and what doesn’t, when they figure out the jagged frontier for themselves. As a result, it’s really hard for a central authority, a centralized committee, to decide how to use AI inside your company, because you’re missing the chance for individuals to use it in the best possible way. And we already talked about how there are secret cyborgs, people who are already automating their work in tons of different ways. How do you get the secret cyborgs to tell you what they’re doing? For that to work, you need the right kind of incentives. That means clear policies to operate with, but also positive incentives. Why would someone share how they’re using AI with you if they think they’re going to get fired for it? You may have to say, we are not going to fire anyone because of generative AI. We are going to use it to make your job better. Here is a hundred thousand dollars in cash at the end of every month for whoever comes up with the best prompt to automate their job.
Companies that already have good cultures, that already have sharing cultures and cooperative cultures, are going to be in much better shape than companies with competitive cultures, where I am going to hoard how I use AI and not tell anyone about it.
Set clear data privacy rules
It depends on what model you use and your approach, but I see a lot of companies saying, we’re not gonna use AI because of privacy concerns. Privacy concerns with AI can be real, but they’re often exaggerated and misunderstood. It is subtle but kind of important to understand how privacy works in AI. The way people think about privacy in AI is that they think about ChatGPT or Gemini or, you know, Bard or whatever as an entity, as something that is watching you. If it sees something in one place, it also knows about it when it sees it somewhere else. We don’t think about Dropbox the same way. We do not think that when I upload something to Dropbox, Dropbox now knows that thing. We think we have our own Dropbox, and unless something goes very wrong, nobody else will see that Dropbox. That is actually how AI works too. It is a process running on a server, and your conversation stays your own unless the company uses it as training data. If the company uses your data as training data to train the AI model, then, yes, your data is being used in a way that, you know, might violate some of your proprietary rights. But all the AI systems, or at least a lot of them, have the ability to say, don’t train on my data. And if you do that, the AI is not learning from your information.
Privacy concerns are still very real. You wanna make sure you’re technically compliant and legally compliant, and every country and region and industry has its own legal compliance issues. But I think people misunderstand the privacy issue. They assume there’s a fundamental problem, that anything I tell the AI, it can learn from and repeat back to other people, and that’s not actually how the privacy situation works. It’s much more nuanced than that, and there are lots of ways of getting private versions. In the US, you can get versions of ChatGPT that legally work with healthcare data or legally work with financial data. It just requires a different set of approaches. So that’s not an absolute limit, but it is a concern. You have to make choices about how you’re using AI and what kind of AI you’re using. There are lots of ways of getting AI to work with your data that don’t violate your proprietary rights and don’t get that data out into the world.
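To make that pattern a bit more concrete, here is a minimal sketch of what a "private by configuration" setup can look like: use a business or API tier whose terms say your prompts are not used for training, and strip obvious identifiers before anything leaves your systems. This is an illustration under assumptions, not a recommendation of any vendor; the redact() helper and the model name are hypothetical choices, and you should verify your provider's current data-use terms (and, for regulated data, agreements like a HIPAA BAA) with your legal team.

```python
# Illustrative sketch only. Assumes a business/API tier whose contract says
# prompts are not used for model training by default -- verify your vendor's
# current terms before sending anything sensitive.
from openai import OpenAI  # pip install openai; any comparable SDK has the same shape

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def redact(text: str, identifiers: list[str]) -> str:
    """Hypothetical helper: strip known identifiers before text leaves your systems."""
    for item in identifiers:
        text = text.replace(item, "[REDACTED]")
    return text


note = "Customer Jane Doe (account 4411) reports a duplicate charge in March."
prompt = redact(note, identifiers=["Jane Doe", "4411"])

response = client.chat.completions.create(
    model="gpt-4o-mini",  # whichever model your contract and compliance review cover
    messages=[{"role": "user", "content": f"Summarize this support note: {prompt}"}],
)
print(response.choices[0].message.content)
```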
Model the behavior you want to see
As a leader, one of the most powerful things you can do is model the behavior you want to see. So if you hope people will use AI to automate part of their job, you should show how you are doing that yourself, but in a responsible way. Are you citing that information? How are you describing it? You might want to talk about your failures too. Like, I tried to make AI work here, and I could not make it work successfully. Can anyone help me figure out how to do this? That way you are showing a willingness to say, I do not know the answer to this question. Nobody knows the answer. We are all working it out together. Classic psychological safety technique. So I think you need to be thinking about these big-picture views and about modeling behavior in an obvious way. One thing I’ve noticed is that a lot of leaders don’t actually use AI themselves. They wait for a report to give them information about how to use AI. You really need to put in the ten hours yourself. You need to bring AI to the table and start working with it.