‘Lessons Learned’ is not my favorite phrase. Although it implies reflection, it says nothing about what you are going to do in the future. For me, ‘Lessons Forward’ works much better and is much more relevant. The phrase captures two fundamental aspects of learning: contemplation (what happened, a backward and historical reflection) and projection (a forecast of what you are going to do with the newfound knowledge). With that, allow me to share a quick backstory.
For the past 18 months, I have been working on a project that came to be highly informed by artificial intelligence (AI). The project involved creating an innovative process to help leaders manage the wicked problems they face at work. Using the concept of polarity management, we wanted to help leaders see which problems truly have a solution and which they will have to keep managing over time. And, taking our own lessons forward, we asked how AI might help them better address these key dilemmas.
The question was how to take 4,000 different binomial polarities, each with 8–16 additional qualifiers, and organize them in an easily accessible database built around six critical polarities, one that would allow for integrative reasoning and deliver reliable, valid, and credible answers to an infinite number of wicked problems.
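To make that organizing task a little more concrete, here is a minimal sketch of how such a catalog of polarities might be modeled and searched. Everything in it, from the placeholder names of the six critical polarities to the fields and the lookup function, is an illustrative assumption rather than the project's actual schema.

```python
from dataclasses import dataclass, field

# Placeholder names for the six critical organizing polarities (assumptions, not the project's list).
CRITICAL_POLARITIES = [
    "stability_vs_change",
    "individual_vs_team",
    "short_term_vs_long_term",
    "centralized_vs_decentralized",
    "task_vs_relationship",
    "candor_vs_diplomacy",
]

@dataclass
class Polarity:
    """One of the ~4,000 binomial polarities, tagged to a critical polarity."""
    pole_a: str
    pole_b: str
    critical_polarity: str  # one of CRITICAL_POLARITIES
    qualifiers: list[str] = field(default_factory=list)  # the 8-16 additional qualifiers

def find_relevant(catalog: list[Polarity], keyword: str) -> list[Polarity]:
    """Naive lookup: return polarities whose poles or qualifiers mention the keyword."""
    keyword = keyword.lower()
    return [
        p for p in catalog
        if keyword in p.pole_a.lower()
        or keyword in p.pole_b.lower()
        or any(keyword in q.lower() for q in p.qualifiers)
    ]

# Example use with a single invented entry.
catalog = [
    Polarity("candor", "diplomacy", "candor_vs_diplomacy",
             qualifiers=["feedback", "trust", "difficult conversations"]),
]
print(find_relevant(catalog, "feedback"))
```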
While addressing our own wicked problem, we learned a few things about AI. Here are five lessons forward from our still-ongoing journey:
- AI is not the answer. But it can be a very good beginning. When we first began working with AI, we thought it was going to provide the answers we needed. It took a few tries to realize that AI was giving us different ideas and an expanded perspective, but that it was a starting point, not an ending point. AI can pull together facts and arrange them in ways we might not have considered. That different perspective allowed us to see the challenge differently and at times altered our thinking about the very problems we wanted AI to address.
- AI requires human interface and engagement. After a few weeks of working with AI, we learned that we were still very important. We realized the role and importance of our subject matter expertise, intuition, and decision-making. The lesson we took forward was that the team needed to do the first 10% of the work: deciding what information we wanted and crafting the prompt so we would receive it in a form we could use quickly. AI would then develop an answer to that prompt. We then had to do the next 40% of the work: checking the information, honing the writing (AI uses a lot of adjectives), and ensuring that what we received met our product and client needs.
- AI does not offer the same answer twice. A key part of our process was developing valid, reliable, and credible information to feed our information process; if the process received similar questions, clients should receive similar answers. That simply did not happen. We could put the same prompt into AI and receive different answers each time. Although some of the answers overlapped, they differed enough that we could not determine how reliably AI would address our prompt (a rough way to quantify that overlap is sketched after this list). Our lesson forward was that AI, on its own, is neither reliable, valid, nor credible over time, and without reliability, both credibility and validity come into question. So we set out to find a way to craft prompts and assign information searches that would increase credibility and trust in the product. This lesson forward is still ongoing as I write.
- AI is flawed. Tough words, but true. When we began our process, we were not aware of the different biases that are a normal part of AI algorithms. We had to learn the hard lessons about the difference between ‘Narrow AI’ and ‘General AI.’ We had to learn the challenges of algorithmic bias (how the information is coded and recorded) and content bias (the source and worldview of the content). Our lesson forward here was to engage with the ethical use of AI (whose information is this, and how are we getting it?) and the security of AI (how do we protect our information so that we get credit and others can check it for accuracy?). This lesson forward is also ongoing, and we remain at a logjam trying to find a way to get the best information while honoring the information we are receiving.
- AI is amazing. This was our deepest revelation. We had originally planned to take two years for definitions, coding, application, and testing. Because of the lessons learned and applied from working with AI, we were able to do most of this in one year. And the real lesson forward is that as we go forward and find solutions to some of the challenges stated above, we should be able to cut that time by 75% (because we will have the process and calculations completed to do it).
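As a rough illustration of the reliability issue in the third lesson, one simple way to gauge how much repeated answers to the same prompt overlap is a word-overlap (Jaccard) score across runs. The sketch below is purely hypothetical: ask_model is a placeholder for whatever AI service is being queried, and the sample answers are invented.

```python
import itertools

def word_overlap(a: str, b: str) -> float:
    """Jaccard similarity of the word sets in two answers (0 = no shared words, 1 = identical vocabulary)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def consistency_score(answers: list[str]) -> float:
    """Average pairwise overlap across repeated answers to the same prompt."""
    pairs = list(itertools.combinations(answers, 2))
    return sum(word_overlap(a, b) for a, b in pairs) / len(pairs)

# ask_model() is a placeholder for the AI service being called, e.g.:
# answers = [ask_model(prompt) for _ in range(5)]
answers = [
    "Manage the polarity by naming both poles and the early warning signs.",
    "Name both poles, then track warning signs that one pole is over-used.",
    "Focus on the upside of each pole and watch for over-correction.",
]
print(f"Consistency across runs: {consistency_score(answers):.2f}")
```

A low score across repeated runs is one signal of the variability described in the third lesson.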
In sum, the lessons forward from 18 months of working with AI far outweigh the lessons learned. This is a dynamic process, AI-informed and human-confirmed. And perhaps that is our greatest learning: AI may yet take over the world, but not today. Today it is a new tool, and like all tools, it has specific applications for specific work. And it still requires the human element, with its action, intuition, and knowledge.