AI adoption without risk awareness is a strategic mistake
"We want you to inspire them into action, don't scare them. Don't talk about risks".
This is a typical request from clients who hire me to deliver a keynote or a workshop on AI. Owners want their companies to deliver higher ROI, executives want employees to realise financial growth, and technology vendors want their customers to buy more of their products and services, all through the adoption of AI. But humans don't like change, and every risk can serve as a reason to stall it. Hence: don't talk about risks.
Why Risk Management Is Essential for AI Value Creation
I grew up professionally in the insurance industry. Risk was everything. It was carefully assessed and balanced at every strategic turn. When it came to advanced analytics and algorithms (this was before the current AI frenzy), we were very good at balancing benefits, innovation, and risk. The company was, and still is, highly profitable. Understanding and navigating risk was essential for value-generating technology adoption.
What High-Performance Environments Teach Us About Managing Risk
I'll give you another example: I do K-1, a version of full-contact kickboxing. Every Friday afternoon we spar, which, for those not accustomed to martial arts, means we practise fighting against each other. As K-1 is a contact sport, sparring between experienced fighters may come across as pretty brutal to outsiders. The essence, however, is that we know the risks and we are trained to protect ourselves against them. The discipline is strict, and a fundamental requirement for even being allowed to participate in light-contact sparring is that you understand the risks and act accordingly. Those who try to ignore the risks are either quickly corrected through learning by doing with immediate feedback, or simply sent off the mat with two months' quarantine. As a member of the trainer team, I work with our fighters several times per week to teach them new techniques, drill tactical moves, and improve their ability to read the opponent, quickly identify risks, and defend themselves against destructive impact, whilst at the same time countering attacks and setting up their own combinations for maximum impact.
Understanding the Power and Complexity of Modern AI
Back to AI. AI is an umbrella term, covering everything from simple classification to chatbots and agents. The different types of AI offer a wide range of opportunities and capabilities, and some of them are powerful beyond human comprehension. With these powerful capabilities come potentially disastrous security risks, as the recent OpenClaw / Moltbook cyber security scandal demonstrated.
Over a weekend, Peter Steinberger, an Austrian founder, vibe-coded an open-source AI agent platform. For a while, the project flew under the radar. Then suddenly, the platform, now called OpenClaw, went from zero to more than 200,000 GitHub stars (aka "likes") in a matter of weeks. It connects to your email, calendar, messaging apps, file system, and browser. It acts autonomously on your behalf, solves almost any digital task you can imagine, and has long-term persistent memory. It can also acquire new skills from a skills marketplace. Sounds awesome, right? That's what a lot of AI enthusiasts out there thought, too. However, in the back end, significant security weaknesses led to the public exposure of 42,000 instances. Critical vulnerabilities enabled full remote code execution. Malicious third-party plugins were quietly exfiltrating data. Infostealer malware harvested authentication tokens and walked straight into users' systems.
In parallel, another founder, Matt Schlicht, used an OpenClaw AI agent to vibe-code Moltbook, a Facebook-like social platform for AI agents. Within hours, 150,000 agents had joined the platform. Within a week, the number had grown to 1.5 million. The vast network of autonomous agents, significantly helped by invisible human forces with varying degrees of malign intent, spiralled into uncontrolled chaos. The vibe-coded platform did not have the necessary security in place, leaving the back end exposed and giving anyone the ability to hijack agents and the data they carried. Some 1.5 million passwords were leaked.
Cisco (1) put it plainly: these are groundbreaking capabilities at the cost of an absolute security nightmare. This was not a fringe tool. It was a preview of where AI agents may be heading, and of what happens when adoption outpaces governance.
How to Adopt AI Safely Without Slowing Innovation
Just as I will never let one of our K-1 fighters go into the ring without understanding the risks and being able to defend themselves, I will never let my clients set out on their AI journey without understanding, and being prepared to mitigate, the risks involved. The organisations that will extract durable value from AI are not those that move fastest with the least friction; they are those that move deliberately, with a clear-eyed understanding of what can go wrong and a structure in place to address it. Risk assessment is not the enemy of ambition. It is the foundation that makes ambition sustainable. Skipping it does not accelerate your AI strategy; it exposes it.
Explore how to adopt AI with clarity and confidence
I work with organisations through keynotes and advisory engagements to align AI strategy with business value, governance, and long-term impact.
About the Author
Elin Hauge is a keynote speaker, AI strategist, and trusted advisor to business leaders and boards. She specialises in helping organisations make sense of artificial intelligence beyond the hype, connecting technology to strategy, governance, and real-world value. With a multidisciplinary background in physics, mathematics, business, and law, Elin brings both analytical rigour and practical perspective. Her talks and advisory work empower leaders to ask better questions, make wiser decisions, and navigate AI with confidence.
References
(1) Chang, A., Narajala, V. S., & Habler, I. (2026, January 28). Personal AI agents like OpenClaw are a security nightmare. Cisco Blogs. https://blogs.cisco.com/ai/personal-ai-agents-like-openclaw-are-a-security-nightmare