The board's recipe for constructive AI discussions
AI is a thoughtlessness enabler. The higher up in the organisation, the greater the potential for thoughtlessness. Luckily, thoughtlessness can be cured through knowledge, attention, and deliberate reflection.
Leaders have in general acknowledged that AI is here to stay, in some form. The big question which non-executive boards and C-suites currently grapple with is how to move forward from the general acknowledgement of AI to actual and sustainable value for their business or organisation. Having spoken with many of these leaders, I have come to realise that the main hurdle is the classical challenge of talking past each other. This means that for boards and C-suites to be able to create real progress with AI, the first move should be to ensure a common understanding of which characteristics, capabilities, risks and options belong together.
Leaders have in general acknowledged that AI is here to stay, in some form. The big question which non-executive boards and C-suites currently grapple with is how to move forward from the general acknowledgement of AI to actual and sustainable value for their business or organisation. Having spoken with many of these leaders, I have come to realise that the main hurdle is the classical challenge of talking past each other. This means that for boards and C-suites to be able to create real progress with AI, the first move should be to ensure a common understanding of which characteristics, capabilities, risks, and options belong together.
I recently did this particular exercise with an executive team of an engineering company. For pedagogic purposes, I termed the categories "buckets". In this article, I have summarised these buckets. Each bucket represents different strategic importance and different levels of business and compliance risks. The role of the non-executive board or C-suite should be different for each of the buckets. My hope is that after reading this, you will be able to help your company to navigate more wisely and efficiently through the AI landscape.
Bucket 1: Personal Productivity Tools
Personal productivity tools in this context refer to chatbots such as ChatGPT, Copilot, Claude, Perplexity and Gemini. A chatbot is essentially a Large Language Model (LLM) presented through a dialogue interface, allowing users to communicate directly with the machine through written or spoken human language. The adoption of these tools has been remarkably fast. As of early 2026, independent estimates suggest ChatGPT has roughly 800–900 million weekly active users worldwide [1], while Google reports that Gemini had around 650 million monthly users [2] at the same time.
Chatbots exist in many variations, from general-purpose assistants to highly customised versions tailored to specific organisations or tasks. Regardless of form, they are rapidly becoming commodity tools. For comparison, a large share of the workforce would struggle to get through a normal workday without Excel, Word or PowerPoint. These tools are essential, but they are hygiene factors rather than sources of strategic differentiation. Chatbots are now moving into a similar position in our lives, both privately and professionally.
Chatbots can be extremely powerful and incredibly useful, but they can just as easily be strongly manipulative, utter nonsense, or completely useless. Whether the outcome is valuable and useful depends on several factors, such as context, quality of the prompts, and how we humans balance the output from the chatbot with our human expertise. From a board or C-level perspective, the key is to ensure clear and pragmatic policies which minimise the risk of shadow AI and information security breaches. Financially, the question is really only about the cost of licenses. Whether this is a topic for the board or the C-suite typically depends on the size of the company.
From a value creation perspective, the board and C-suite should consistently invite AI to the table. Only through personal experience will the top-level leadership fully understand the strengths, weaknesses, and risks of these personal productivity tools.
Bucket 2: AI Agents
An AI agent is a piece of software wrapped around an LLM, typically connected to external tools through protocols such as MCP (Model Context Protocol). This makes an AI agent an autonomous software program with the ability to follow instructions in human language, create its own instructions, plan tasks and sequences, chunk tasks into smaller workloads, utilise external tools, and make decisions in ambiguous contexts.
The business value of AI agents is first and foremost automation of multi-step workflows that previously required human judgment and tool use, enabling organizations to scale knowledge work without proportional headcount increase. The most fascinating aspect, though, is the speed and relative ease of creating agents. Coding skills are no longer needed, which is a significant difference from conventional IT development. This means that the development of AI agents can easily be done by almost anybody in your organisation with the relevant system permissions. It is also a potential governance nightmare.
The main categories of risks surrounding AI agents are:
Decision authority and control:
The agent needs boundaries. Who defines what actions it can take? What requires human approval? When agents interact, errors compound—one agent's output becomes another's flawed input. And without explicit limits, you're delegating authority you don't understand to systems you can't fully predict.
Accountability and liability:
When an agent discriminates, leaks data, or costs money, who's responsible? The developer who wrote the code? The business unit that deployed it? The executive who approved the budget? Your organisational hierarchy, contracts, insurances, and legal frameworks weren't written for autonomous systems making cross-jurisdictional decisions based on probabilistic models.
Security, privacy and trust:
Agents create attack surfaces that traditional security can't address. Prompt injection lets attackers manipulate agents through carefully crafted instructions. Agents cross data boundaries in ways GDPR wasn't designed to handle. And when agents fail, you need audit trails that probably don't exist.
For more about the AI agent governance challenge, check out my last article: AI agents - a potential governance nightmare
From a board and C-level perspective, the most important aspects to handle are the governance and information security risks. The value realisation mainly happens at operational level. Even though AI is often touted as innovation and completely different from the traditional risk and control approach, the introduction of AI agents underpins the board’s responsibility for risk assessment and mitigation. From a financial perspective, AI agent deployment is a business case issue, primarily operational. In most companies, the size of the investment will be below board/C-suite level.
Bucket 3: Enterprise Solutions and Digital Twins
The third bucket contains a wide range of technologies. The common denominator is that the real strategic value lies in applying AI tools and methods to the business’ own data and business processes to solve business-specific needs. We can divide this bucket into two compartments:
Solutions built on foundation models:
Foundation models are mostly language models (Large Language Models - LLMs, Small Language Models - SLMs). On top of these, companies build their own applications to solve specific needs or develop new products or services to their clients.
Applications of AI tools and methods to digital assets:
In today’s world, many physical and operational assets have digital representations. When these are sufficiently detailed and continuously updated, they are referred to as “digital twins”. A digital twin is a living digital counterpart of a physical asset, process, or system, reflecting how it behaves over time.
By applying AI methods to these digital twins, organisations can simulate scenarios, predict outcomes, and test decisions before acting in the real world. This enables forecasting, optimisation, and better decision-making across operations, logistics, and strategy.
In most cases, the real strategic value for the business lies in this bucket. However, in order to identify the opportunities, understand the implications and risks, and wisely align the AI initiatives with the business strategy, the board / C-suite needs significant competence on business processes, data piping, information management, information security, and probability-based decision-making. Enterprise solutions are likely to require significant investments in several areas, in particular on data engineering.
Conclusion
The value of this framework lies in its application. Before your next board discussion on AI, assign each agenda item to a category. Policy questions related to Copilot, ChatGPT etc. belong in category one. Risk and governance questions related to AI agents belong in category two. Strategic investment questions related to innovation and development belong in category three. If an item spans multiple categories, split it.
This discipline prevents the common failure mode: conflating chatbot policies with enterprise AI strategy, or treating agent governance as an IT matter rather than a board accountability issue.
One practical starting point: ask your executive team which AI agents are currently operating in your organisation, who authorised them, and what access they have. If the answer is unclear, you have identified your first governance gap.
If your board or leadership team would benefit from more constructive and informed discussions about AI, I work with organisations through keynotes and advisory engagements to create clarity and shared understanding.
Feel free to contact me to continue the conversation.
About the Author
Elin Hauge is a keynote speaker, AI strategist, and trusted advisor to business leaders and boards. She specialises in helping organisations make sense of artificial intelligence beyond the hype, connecting technology to strategy, governance, and real-world value. With a multidisciplinary background in physics, mathematics, business, and law, Elin brings both analytical rigour and practical perspective. Her talks and advisory work empower leaders to ask better questions, make wiser decisions, and navigate AI with confidence.
References
¹ ChatGPT User Statistics — Exploding Topics. https://explodingtopics.com/blog/chatgpt-users
² Gemini 3 Announcement and Usage Notes — Google Blog. https://blog.google/products-and-platforms/products/gemini/gemini-3/#note-from-ceo