The Three Buckets Framework: Bucket #1 - Personal Productivity Tools
As a keynote speaker, I have recently contributed to several company-internal events where the leadership team announced that they are rolling out Copilot to all employees. This always piques my curiosity about their expectations for ROI and productivity growth.
Introduction: AI Adoption and Expectations
In the article 'The board's recipe for constructive AI discussions', I outlined a simple framework to help boards and C-suites structure their discussions around AI in a meaningful way. "AI" is not one single technology, but rather a large collection of technologies with a wide spectrum of characteristics, capabilities, risks, and requirements. Value-creating and constructive discussions therefore require that the technology in question be assessed based on the relevant attributes.
In this article, I will dive deeper into Bucket #1 in the framework, i.e. personal productivity tools. For the purpose of a board-level framework, I have limited the scope to chatbots and related knowledge management tools, such as ChatGPT, Claude, Gemini, Copilot, Perplexity, Mistral, and NotebookLM. Agentic tools, such as Claude Code, OpenAI Codex, Microsoft Copilot for Sales, and Salesforce Agentforce, plus all enterprise-specific agents, belong to Bucket #2 (automation and AI agents) and will be discussed in more detail in a later article.
Chatbots vs. Agents: A Necessary Distinction
Why do we need to differentiate between chatbots and agents? In daily conversations, these terms are often used interchangeably, interspersed with assistants and workflows. The key difference, from a top-level perspective, is the degree of autonomy.
From this perspective, chatbots and AI assistants are tools where the user must initiate and drive the interaction through prompts. Agents, on the other hand, are autonomous entities: they are triggered by events (such as a scheduled time, new incoming data, or an explicit user command), they can interact with their environments, and they can operate autonomously within defined scopes. Both the potential ROI and the governance requirements differ significantly between the two.
Three Questions Boards Should Ask
For now, let's return our focus to the chatbots. The big questions are 1) what do boards need to understand about chatbots, 2) what do they need to do with respect to chatbots, and 3) which financial returns may be expected from the adoption of chatbots?
1) What your board needs to understand about chatbots
A chatbot is technically a mathematical model of language (Large Language Model, or ‘LLM’) surrounded by a massive software architecture and laced with a dialogue interface. The mathematical model is based on the probabilities of words, phrases, and meanings belonging together. The surrounding architecture keeps track of meaningful dialogue structures, search queries and fact-checking, long-term memory, and context. It's a machine, not magic. It can be extremely powerful and useful, but it can also be incredibly stupid and completely useless. The degree of uselessness is gradually going down as the technology matures, and the degree of usefulness increases with the user's ability to select the right tool and prompting approach for the right use case.
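To make "probabilities of words belonging together" concrete, here is a deliberately tiny Python sketch of the statistical idea using bigram counts over a toy corpus. This is an illustration of the principle only, not how production LLMs are built or trained; the corpus and word choices are invented for the example.

```python
# Toy illustration: the core of a language model assigns probabilities
# to possible next words given the words so far. Here we count which
# word follows which (bigrams) in a tiny invented corpus; real LLMs
# learn far richer versions of these probabilities at vast scale.
from collections import Counter, defaultdict

corpus = "the board asks questions the board sets policy the board reviews risk".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Return the probability of each word that follows `word` in the corpus."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))    # 'board' follows 'the' with probability 1.0
print(next_word_probs("board"))  # 'asks', 'sets', 'reviews' each with probability ~0.33
```

The point of the sketch is that the model has no concept of truth, only of likelihood; everything else a chatbot does well comes from the surrounding architecture and the quality of the user's prompting.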
Not all chatbots are equal, and trying to rate them is like counting children in identical puddle suits playing in the nursery playground. They all move all the time, in different directions.
2) What your board needs to do with respect to chatbots
Keep in mind that several regulations come into play when an employee puts company information into a chatbot. Some are European, such as GDPR and the AI Act, whilst others are national or industry-specific. The most important questions are which information is used for which purposes, where that information ends up, and whether the processing could lead to information security breaches.
There are three key actions that the board should take with respect to chatbots:
Make sure there is a policy in place, including how to handle personal data, how to handle confidential information, and restrictions on the use of personal accounts for company information.
Make sure the company holds enterprise-level licenses. Free or personal licenses have neither data processor agreements nor sufficient information security to be used for most professional purposes.
The AI Act mandates AI literacy. That does not mean teaching all employees to prompt. On the contrary, it means teaching employees cyber hygiene principles, critical thinking, how to avoid cognitive offloading, and how to ensure regulatory compliance through everyday actions. AI literacy has to start from the top, and learning by doing is highly effective, so invite AI to your own table on a daily basis.
3) What your board should expect from company-wide adoption of chatbots
Accumulated across the company, not much. On an individual level, however, the effects will vary. Some employees will become faster. Some will change how they approach their tasks. Some will gain access to information they did not previously have. Others will produce more, but at mediocre quality. Some will use it for everything. Others will refuse to use it at all.
This variability is exactly the point. The impact is real, but it is uneven, difficult to measure, and highly dependent on individual behavior.
Let's take Excel as an example. Many people depend heavily on Excel in their professional roles. Yet I have never met a single CFO who could provide a convincing ROI case for the use of Excel. They don't need to, either, because Excel and similar spreadsheet applications have become a hygiene factor. Chatbots are following the same trajectory. The question is therefore not what the ROI of chatbot usage is, but what the cost of not governing it will be. If the company does not provide enterprise licenses and clear policies, employees will default to their own tools, and shadow AI becomes the real risk.
Closing Reflection: Leadership Starts at the Top
As an end note, I'd like to point out the importance of board directors being part of both the policy regime and the license regime. Board directors typically handle a lot of highly confidential information. You do not want them to thoughtlessly load company secrets and HR documents into a free version of ChatGPT.
I frequently speak with board directors who present their own use of ChatGPT as a demonstration of their AI competence. That's when I know that they have no idea. The world of mathematics and data is infinitely bigger than that.
If your board or leadership team would benefit from more constructive and informed discussions about AI, I work with organisations through keynotes and advisory engagements to create clarity and shared understanding.
Feel free to contact me to continue the conversation.
About the Author
Elin Hauge is a keynote speaker, AI strategist, and trusted advisor to business leaders and boards. She specialises in helping organisations make sense of artificial intelligence beyond the hype, connecting technology to strategy, governance, and real-world value. With a multidisciplinary background in physics, mathematics, business, and law, Elin brings both analytical rigour and practical perspective. Her talks and advisory work empower leaders to ask better questions, make wiser decisions, and navigate AI with confidence.
FAQs
What are personal productivity tools in this framework?
In this framework, personal productivity tools refer mainly to chatbots and related knowledge management tools such as ChatGPT, Claude, Gemini, Copilot, Perplexity, Mistral, and NotebookLM. These are tools that support individual work through prompting and dialogue. They are different from agents, which operate with a higher degree of autonomy.
Why is it necessary to distinguish between chatbots and AI agents?
Because the distinction affects both governance and expectations. Chatbots depend on the user to initiate and guide the interaction. Agents, by contrast, can be triggered by events, interact with their surroundings, and act autonomously within defined limits. That difference matters when discussing both risk and value creation.
What does a board need to understand about chatbots?
A board needs to understand that a chatbot is not magic. It is a language model surrounded by software that manages dialogue, context, search, memory, and other supporting functions. These tools can be very useful, but they can also be unreliable. Their value depends on the quality of the tool, the use case, and the judgement of the user.
What should a board do with respect to chatbots?
First, make sure there is a policy in place covering personal data, confidential information, and the use of personal accounts for company work. Second, make sure the company provides enterprise-level licences rather than leaving employees to rely on free or private versions. Third, take AI literacy seriously. That starts at the top and should include critical thinking, cyber hygiene, and responsible everyday practice.
What ROI should boards expect from company-wide adoption of chatbots?
Boards should be careful not to expect a neat or uniform ROI. The effect of chatbots is real, but it is uneven and difficult to measure. Some employees will work faster. Some will change how they approach tasks. Some will produce more, but not necessarily better. In many organisations, chatbots are becoming a hygiene factor rather than a standalone business case.
What is the real risk of unmanaged chatbot use?
The real risk is unmanaged use. If the company does not provide proper licences and clear guidance, employees will often turn to their own tools. That is when shadow AI becomes a governance problem, with consequences for confidentiality, compliance, data protection, and information security.