Three AI Limitations to Watch Out For

As you’re learning to prompt these systems, there are a few limitations and watch-outs you should keep in mind. The first is what’s known as the hallucination problem: the tendency of these systems, when they don’t have an answer, not to say, “Hey, I don’t know,” but instead to give you a fact that turns out to be completely erroneous. When I started using these systems to write articles, for example, I would ask for references, and they would very confidently give me three or four, which, when I checked, did not exist. You cannot trust any fact these systems give you. You can trust their creativity and their ability to help you generate ideas, but when it comes to facts you are well advised, very well advised, to double-check almost everything. You don’t want to end up like the lawyer in the US who argued in court using citations of previous cases that turned out not to exist, and lost his legal license.

A second watch-out is that these systems are programmed with personalities intended to be helpful. They’re going to tell you nice things; they’re going to compliment you. And if you call them on it, you sometimes get very interesting results. I was working with a system recently, again writing a particular article, and I asked it to do something, and it gave me an answer it thought I would be happy with. I said, “Well, that doesn’t really seem right at all. Why don’t you give me a more critical answer about this particular situation?” And it replied, in effect, “Well, gee, sorry. Yeah, it’s really not the case, and your article really isn’t all that wonderful.” If you need critical thinking, you need to prompt them to be critical. Prompt them not to be nice to you.

And that leads to a third big watch-out: these systems seem to have a programmed tendency to assume everything is going to be okay. I saw this recently when I asked one about the future employment impacts of AI and what the scenarios might be. It gave me a really rosy scenario, a pretty okay scenario, and a scenario that was not so bad, and then it ended with a passage about how, if we all work together and play well together, everything will be all right. So understand that there is a seemingly inherent bias in these systems toward optimistic outcomes, especially regarding the impact of AI on the world. Again, you’ve got to be very precise and very thoughtful, asking the system directly to debias itself and give you the real truth as it sees it. And even then, you can’t necessarily be sure that’s what you’re going to get.