Three Techniques for Becoming a Prompt Engineer



Today, the way we mostly interact with AI is through what’s known as prompting, which essentially boils down to asking the AI, either in writing or verbally, to answer certain kinds of questions for us. One of the paradoxes of AI is that it has enormous capability but virtually no user’s manual for how to tap into it effectively. The ability to prompt well has turned out to be a huge differentiator in how much value people actually get out of these systems. So I advise everyone to focus on becoming what’s sometimes called a prompt engineer, which basically means being able to craft prompts — and not just one prompt, but a series of prompts — that leverage the tremendous power of these systems to produce the answers and insights you really need.

As you think about learning to be a great prompter of AI, there are three basic techniques I’d like you to keep in mind. The first is how you set up the system from the outset. This is known as role setup or persona setup. A simple example would be “Act as a financial analyst.” As soon as you give the system that instruction, every answer it gives you from that point on is framed from that point of view.
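In code, the persona technique looks something like the following sketch. It uses the widely shared chat-message format (role/content dictionaries with a “system” turn); the helper name and the example question are illustrative, and the actual call to a model is omitted here since we are only showing how the prompt is set up.

```python
# Role/persona setup: prefix the conversation with a system message so every
# later answer is framed from that point of view. The with_persona helper and
# the sample question are illustrative, not from any specific API.

def with_persona(persona: str, question: str) -> list[dict]:
    """Build a conversation whose first turn sets the model's persona."""
    return [
        {"role": "system", "content": f"Act as {persona}."},
        {"role": "user", "content": question},
    ]

messages = with_persona(
    "a financial analyst",
    "What should I look for in this quarterly earnings report?",
)
```

From here, the `messages` list would be passed to whatever chat model you are using; the key point is that the persona instruction sits in front of everything else.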

That brings us to the second big technique: do you start broad or do you start narrow? It’s a bit counterintuitive, because it’s often quite different from how you would interact with another human being. Another human understands the context and the emotional resonances of what you’re talking about; if you ask a broad question, they can typically narrow it down enough to at least begin a conversation. AI systems have no such understanding of the questions you’re asking. What this usually means is that you need to be much more precise about what you’re asking from the outset. This is especially true with the reasoning models, because the reasoning these systems do is a chain of steps of thinking, and if you start at the wrong point, it will rapidly head off in directions that aren’t very useful. Lots of context and lots of specificity is absolutely the way to go. The caveat here is that with the generative models, as opposed to the reasoning models, you can sometimes have a very interesting conversation by asking a fairly broad question initially and just seeing what the system gives you.
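One way to make “start narrow” concrete is to pack the context and constraints into the first prompt rather than leaving them implicit. The sketch below contrasts a broad prompt with a context-rich one; the field names (`task`, `context`, `constraints`) and the financial figures are illustrative assumptions, not a prescribed template.

```python
# "Start narrow": assemble a first prompt that already carries the context
# and constraints a human colleague would otherwise infer. The structure
# shown here is one illustrative convention, not a standard.

def build_specific_prompt(task: str, context: str, constraints: list[str]) -> str:
    """Combine task, context, and constraints into one precise prompt."""
    bullets = "\n".join(f"- {c}" for c in constraints)
    return f"Task: {task}\nContext: {context}\nConstraints:\n{bullets}"

broad = "Tell me about our margins."

narrow = build_specific_prompt(
    task="Explain why gross margin fell last quarter",
    context="SaaS company, Q3 revenue $12M, hosting costs up 30%",
    constraints=["Cite only the figures given", "Answer in under 150 words"],
)
```

The broad version invites the model to guess what you mean; the narrow version starts the chain of reasoning at the right point.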

And then the third big technique to think about is the ability to craft a series of prompts that progressively narrow and focus the system. If you ask one of the generative systems a broad question, you’re likely to get a whole lot of material, some of it useful and some of it not. You may hit gold right at the outset, but you probably won’t. Being able to ask the follow-on questions that bring increasing precision is absolutely critical. So if you ask the system to be a financial analyst, you’re then hopefully asking it a question a financial analyst would be good at answering, and then, of course, asking the follow-on questions that you need.
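The progressive-narrowing technique can be sketched as a prompt chain: each follow-on question is appended to the same conversation so the model keeps the earlier context while you zero in. In this illustration the replies are placeholders standing in for a real model call, and the specific questions are assumptions for the financial-analyst scenario above.

```python
# Progressive narrowing: a chain of prompts in one conversation, each
# follow-on more focused than the last. Replies are placeholder strings;
# in practice each would come from a call to your chat model.

def ask(history: list[dict], question: str, reply: str) -> list[dict]:
    """Append a user question and the model's reply to the conversation."""
    return history + [
        {"role": "user", "content": question},
        {"role": "assistant", "content": reply},
    ]

chain = [{"role": "system", "content": "Act as a financial analyst."}]
chain = ask(chain, "Summarize this earnings report.", "<broad summary>")
chain = ask(chain, "Focus on the margin trends only.", "<margin detail>")
chain = ask(chain, "Which single cost line drove the change?", "<one cost line>")
```

Each turn builds on everything before it — the persona from the first technique, the context you supplied, and the earlier answers — which is what lets the later, narrower questions land precisely.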