Five Principles for Designing Human-AI Hybrid Systems

Perhaps one of the most interesting dimensions of what’s going on with AI is how artificial intelligence and humans are going to be combined in what you might think of as hybrid intelligence systems, and what role the AI and the human will each play. Now, this is not a new problem; people and calculating machines have been working together for a long time. What’s different now is the far greater capability these systems have and the rapidity with which they are developing. The role of people in such a system is largely to oversee and regulate it, and to ensure that it is performing and doing what it needs to do.

From a practical point of view, leaders need the ability to guide the process through which AI-human hybrid systems are designed and implemented. There are five basic principles you need to adopt to do that. The first principle is to be very clear about the use case and about the mixture of AI capability and human capability needed to get superior results. Take the example of a customer service chatbot: the AI needs to be able to answer simple, basic queries efficiently and effectively, and it also needs to be able to judge when what the customer is asking is beyond its scope and to refer those requests to a human, who in turn needs to be trained to handle the hard cases.
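To make that first principle concrete, here is a minimal Python sketch of the routing decision such a customer service hybrid might make. The topic list, confidence threshold, and function names are illustrative assumptions, not any particular product’s design.

```python
from dataclasses import dataclass

# Illustrative scope and threshold; real values come out of the use-case analysis.
KNOWN_TOPICS = {"billing", "password_reset", "order_status"}
CONFIDENCE_THRESHOLD = 0.80


@dataclass
class Query:
    text: str
    topic: str


def ai_confidence(query: Query) -> float:
    """Placeholder: a real system would use the model's own confidence estimate."""
    return 0.9 if query.topic in KNOWN_TOPICS else 0.3


def route(query: Query) -> str:
    """Decide whether the AI answers or the request is escalated to a trained human."""
    if query.topic in KNOWN_TOPICS and ai_confidence(query) >= CONFIDENCE_THRESHOLD:
        return "AI handles this simple, in-scope query"
    return "Escalate to human agent: out of scope or low confidence"


print(route(Query("How do I reset my password?", "password_reset")))
print(route(Query("My order arrived damaged and I want compensation", "complaint")))
```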

That brings us to the second principle, which is being clear about roles and responsibilities and about the boundary between what the AI system does on one hand and what the human does on the other. It has to be absolutely unambiguous what the AI system is there to do and, critically, not do, and what the human is there to do and, critically, not do.

The third principle is to ensure that the system, both the AI side and the human side, is learning, improving, and delivering high-quality results. Over time the system will accumulate experience with, to return to the customer service example, the range of inquiries that customers make. The system should be built to learn from those inquiries, on the AI side and on the human side, and potentially also to shift the roles and responsibilities to some degree. As the AI learns more, it may become capable of handling inquiries that previously were sent to the human, so you really need that adaptability built into the system.
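One way to express the second and third principles in a design is to write the role boundary down explicitly and let it move only as evidence accumulates. The sketch below assumes a hypothetical review log in which humans approve or reject AI-drafted answers on their categories; a category is promoted from the human side to the AI side only after the AI has demonstrated sustained accuracy on it. The categories and thresholds are invented for illustration.

```python
from collections import defaultdict

# Explicit role boundary: which categories the AI owns versus the human agent.
AI_CATEGORIES = {"order_status", "password_reset"}
HUMAN_CATEGORIES = {"refund_dispute", "complaint"}

# Illustrative thresholds for promoting a category from human to AI handling.
MIN_SAMPLES = 200
MIN_ACCURACY = 0.95

# Hypothetical log: how often AI-drafted answers on human-handled categories
# were approved unchanged by the human reviewer.
review_log = defaultdict(lambda: {"approved": 0, "total": 0})


def record_review(category: str, approved: bool) -> None:
    """Humans review AI drafts on their categories; outcomes feed the log."""
    review_log[category]["total"] += 1
    if approved:
        review_log[category]["approved"] += 1


def maybe_expand_ai_scope() -> None:
    """Move a category to the AI side only once the AI has earned it."""
    for category in list(HUMAN_CATEGORIES):
        stats = review_log[category]
        if stats["total"] >= MIN_SAMPLES and stats["approved"] / stats["total"] >= MIN_ACCURACY:
            HUMAN_CATEGORIES.discard(category)
            AI_CATEGORIES.add(category)


# Example: 200 approved reviews on "complaint" promote it to the AI side.
for _ in range(200):
    record_review("complaint", approved=True)
maybe_expand_ai_scope()
print(AI_CATEGORIES)
```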

The fourth principle is to have the right governance and guardrails in place, and this is an area where you absolutely have to have humans in the loop. These systems, as wonderful as they are, are capable of being biased, of making unethical decisions, and of doing surprising things that are not necessarily so wonderful.

The fifth principle is to anticipate where the technology is going and to ensure you’ve built expansion possibilities into the system. AI is so dynamic, and its capability is increasing so rapidly, that a system that seems state of the art today may seem a little tired tomorrow and utterly obsolete the day after. The key is to avoid the very common mistake of designing for the current state of the technology without thinking about where it is likely to go. By anticipating that trajectory, you can build points into the system where increasing capability can be slotted in fairly fluidly as it develops, so long, of course, as you are doing the right kind of testing and upgrading the governance and guardrails of the system in parallel.
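Here is a rough sketch of how the last two principles might show up in code, under two stated assumptions: a hypothetical `ModelBackend` interface and a deliberately simplified policy check. Guardrail checks sit outside the model so that flagged outputs go to a human reviewer rather than to the customer, and the model sits behind a narrow interface so that a more capable backend can be slotted in later without redesigning the surrounding system.

```python
from typing import Protocol


class ModelBackend(Protocol):
    """Narrow interface so future, more capable models can be swapped in (principle five)."""

    def generate(self, prompt: str) -> str: ...


# Illustrative guardrail; a real one would involve policy checks, bias audits, and review.
BLOCKED_TERMS = {"guaranteed refund", "legal advice"}


def violates_policy(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKED_TERMS)


def respond(model: ModelBackend, prompt: str) -> str:
    """Governance layer: flagged drafts are held for a human instead of being sent (principle four)."""
    draft = model.generate(prompt)
    if violates_policy(draft):
        return "HELD FOR HUMAN REVIEW"
    return draft


class CurrentModel:
    """Today's backend; tomorrow's can replace it as long as it offers generate()."""

    def generate(self, prompt: str) -> str:
        return f"Draft answer to: {prompt}"


print(respond(CurrentModel(), "Can I get a guaranteed refund?"))
```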