A perspective on using AI for lessons learned analysis.
Across industries, professionals are under pressure to make sense of ever-growing volumes of lessons learned. For many, the lessons are captured in spreadsheets that stretch to hundreds of entries. Faced with this challenge, it is tempting to paste the entire dataset into a tool such as ChatGPT and ask it to “summarize the lessons and provide recommendations.”
At first glance, this may appear to be a quick and cost-free solution. But in practice, this approach introduces significant problems in terms of quality, security, and data integrity.
The first challenge lies in the data itself. Most spreadsheets of lessons include a proportion of low-quality entries: lessons without a clear problem, root cause or recommendation. Others may never have been reviewed, or may duplicate existing entries. These provide little analytical value, yet when dumped into ChatGPT they still influence the output and distort the results.
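As a rough illustration of the kind of pre-filtering that helps before any AI analysis, the sketch below scores each lesson on whether its key fields are substantively filled in and drops exact-duplicate problem statements. The field names ("problem", "root_cause", "recommendation") and the 20-character threshold are illustrative assumptions, not a standard.

```python
def quality_score(lesson: dict) -> int:
    """Count how many of the key fields are meaningfully filled in.
    Field names and the 20-character minimum are illustrative choices."""
    required = ("problem", "root_cause", "recommendation")
    return sum(
        1 for field in required
        if len(lesson.get(field, "").strip()) >= 20
    )

def filter_lessons(lessons: list[dict], min_score: int = 2) -> list[dict]:
    """Keep lessons with at least `min_score` substantive fields,
    dropping exact-duplicate problem statements along the way."""
    seen, kept = set(), []
    for lesson in lessons:
        key = lesson.get("problem", "").strip().lower()
        if key and key not in seen and quality_score(lesson) >= min_score:
            seen.add(key)
            kept.append(lesson)
    return kept
```

Even a simple screen like this prevents empty, one-word, or duplicated entries from diluting whatever analysis comes next.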
More seriously, many lessons contain sensitive information: internal project issues, client project details (potentially subject to dispute), or proprietary processes. Entering this data into unsanctioned AI tools risks breaching confidentiality agreements or exposing corporate knowledge. For many enterprises this is a clear compliance red line, yet Excel databases of lessons learned are nearly impossible to control.
Large Language Models (LLMs) such as ChatGPT operate within a fixed context window, the maximum amount of text they can consider at one time. While these limits are growing, pasting hundreds of lessons often exceeds them. The result is that lessons beyond the limit are never analyzed at all: they are truncated or dropped before the model considers them.
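A back-of-the-envelope check makes the scale problem concrete. The sketch below uses the common rule of thumb of roughly four characters per token for English text; the 128,000-token window and the output reserve are illustrative figures that vary by model, and a real tokenizer would give exact counts.

```python
# Rough heuristic: ~4 characters per token for English text. This is a
# rule of thumb, not an exact tokenizer; the context window size and the
# output reserve below are illustrative and vary by model.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW_TOKENS = 128_000

def estimated_tokens(text: str) -> int:
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(lessons: list[str], reserved_for_output: int = 4_000) -> bool:
    """Check whether a batch of lessons, joined into one prompt, leaves
    room for the model's response within the context window."""
    prompt = "\n\n".join(lessons)
    return estimated_tokens(prompt) <= CONTEXT_WINDOW_TOKENS - reserved_for_output

# 500 lessons of ~1,000 characters each is roughly 125,000 tokens of
# input alone, already brushing against the illustrative limit.
```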
Even when the text fits, ChatGPT is being asked to compress a long, unstructured text into a shorter, meaningful output. AI perception differs from human perception: an LLM will not evaluate a theme the way a human analyst or a clustering algorithm would. It has no inherent way to distinguish between recurring themes and isolated outliers, and may give a one-off incident the same weight as a systemic issue.
This is because an LLM such as ChatGPT works on statistical patterns in the text, paying disproportionate attention to surface features such as length, tone and emphasis. This introduces two common distortions: verbose or emphatic lessons attract more weight than they deserve, while tersely written but important lessons are under-represented.
When a Large Language Model is given hundreds of lessons in a single prompt, it does not perform a structured thematic analysis. Instead, it processes the text as a flat sequence and generates a compressed summary, which introduces further issues of its own.
A project manager loading lessons into ChatGPT hopes the tool will analyze trends and provide insights. In practice, the results are rarely satisfactory, and attempts to tweak and improve the output are hampered by the probabilistic nature of the LLM, as explored below.
Large Language Models are inherently probabilistic: the same input can produce different outputs from one run to the next. When hundreds of lessons are submitted in a single prompt, this variability becomes even more pronounced.
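The run-to-run variability can be illustrated with a toy sampler. A real LLM picks each next token by sampling from a probability distribution; this miniature version does the same over an invented three-word vocabulary, so the same "prompt" can yield different outputs across runs, and only fixing the random seed restores repeatability.

```python
import random

# Toy illustration of probabilistic generation. Like an LLM sampling its
# next token, we draw from a weighted distribution; the vocabulary and
# weights are invented purely for illustration.
NEXT_WORD = [("late", 0.5), ("delayed", 0.3), ("on-time", 0.2)]

def generate(rng: random.Random, length: int = 5) -> list[str]:
    words, weights = zip(*NEXT_WORD)
    return rng.choices(words, weights=weights, k=length)

run_a = generate(random.Random())   # unseeded: output varies run to run
run_b = generate(random.Random())   # may or may not match run_a

seeded_a = generate(random.Random(42))
seeded_b = generate(random.Random(42))
assert seeded_a == seeded_b         # fixing the seed restores repeatability
```

Most chat interfaces expose no such seed to the user, which is why regenerating the same analysis twice rarely produces the same result.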
For lessons learned to inform governance, risk reduction and planning, the outputs must be defensible. When outputs change unpredictably or cannot be regenerated, an organization cannot rely on them with confidence.
We apply a combination of machine learning and AI techniques implemented specifically for lessons learned analysis.
Lessonflow ensures that lessons are pre-processed, scored and filtered before entering our semantic workflow. This means that quality lessons are grouped using advanced techniques that identify similarity in meaning, not just in wording.
A full analysis of themes and topics is performed in multiple steps before a generative AI produces focused summaries and recommended actions, grounded in the themes that genuinely recur across the data.
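The staged approach can be sketched in miniature. This is not Lessonflow's actual implementation: real semantic grouping relies on learned embeddings, whereas the `embed` stand-in below builds a simple word-count vector purely to make the grouping step runnable. The point is the shape of the pipeline, with similar lessons grouped before any summarization happens.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in for a sentence-embedding model: a bag-of-words vector.
    A real system would use learned embeddings that capture meaning."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def group_lessons(lessons: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Greedy clustering: attach each lesson to the first group whose
    seed lesson is similar enough, otherwise start a new group."""
    groups: list[list[str]] = []
    for lesson in lessons:
        for group in groups:
            if cosine(embed(lesson), embed(group[0])) >= threshold:
                group.append(lesson)
                break
        else:
            groups.append([lesson])
    return groups
```

Each resulting group can then be summarized separately, so a recurring theme is summarized from its whole cluster while a one-off outlier stays visibly alone, rather than everything competing for attention in a single prompt.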
Using ChatGPT directly on a spreadsheet of lessons may serve as a quick experiment, but it is neither reliable nor secure. For organizations that take knowledge management and performance improvement seriously, it introduces more risk than value.
By applying structured machine learning and AI methods, it is possible to achieve analysis that is not only more accurate and actionable, but also compliant with the standards expected in modern organizations. The outcome is a process that delivers real insights while protecting the integrity of the data.
Explore how structured AI analysis can transform lessons into real organizational learning.