Looking ahead: the role of AI in 2026

We are in the first days of January 2026, and news media and LinkedIn are overflowing with posts on projections for the new year. Unsurprisingly, a fair share of these projections are about Artificial Intelligence. It is against my nature to jump on the bandwagon of fashionable topics. However, a colleague challenged me to write what I really think about the role of AI in 2026, without the veneer of political correctness. 

Here we go. Brace yourself.

The AI snow globe vs everyday life

Both my husband and I live our professional lives in the digital tech space. Our families, though, do not. Our family members are doctors, public servants, actors, journalists, kindergarten teachers, construction machine operators, truck drivers, and construction workers. They have tested chatbots, like ChatGPT, and some of them use genAI tools more actively for work purposes. However, none of them worry about being replaced by AI. 

During the Christmas holiday, my husband and I stayed with my family for four days. During these days, we were completely immersed in their world, where life is about family values, a financially stable future, a fair and democratic society, and concerns about the looming shadows of war across Europe. Of course, as my professional life takes place in the digital tech space, I can easily argue that AI may indeed be both a catalyst and a risk in all these aspects. However, to my family, the role of AI is of little significance. It's there, yes. It offers new capabilities and poses new risks, yes. Nevertheless, beyond being a part of the digital toolbox, they couldn't care less. Just like Snapchat is fun, but hardly a question of significant societal importance. 

Are our families ignorant and naïve? Perhaps. But most of all, I think they are representative of the majority of people in our societies. They perform important work, and they are drivers and holders of our democratic values. The nature of their work suggests that even though certain tasks may be automated, the core is not going to change dramatically in the foreseeable future. Spending time with them is a great reminder that life is about so much more than digital technologies and AI. It is also a great reminder that even if everything feels like it is changing extremely fast in the tech landscape, outside this snow globe-like tech landscape, human inertia suppresses both the speed and the effects. Everything and nothing changes. I believe 2026 will continue along the same path. 

Outside the snow globe, and back inside it

And then we return home… and we are thrown back into the digital tech hamster wheel: some entity has created and published an AI-generated video "showing" a Ukrainian drone attack on Putin's residence, probably with the intention of building more internal support for Putin in Russia and further straining the relationship between Donald Trump and Europe. A few days into the new year, and the internet is flooded with a fake image of Maduro being captured by American DEA agents (although the real image was not too different). That AI has been, is, and will continue to be a toolbox for geopolitical power players is well known among those who are engaged in this space. Humans have always engineered and executed plots to manipulate and persuade fellow humans. The internet and social media have simply provided efficient means for targeted and rapid information attacks. The AI toolbox puts these capabilities on steroids.

The current geopolitical climate, combined with the personal incentives of the tech oligarchs, sets the stage for unprecedented manipulation campaigns by and against nations on all continents, fuelled by AI technologies. In 2026, the lines between facts and alternative facts are going to become blurrier, even invisible. Staying educated and informed will require increasing effort, in a world where most people lean unconsciously towards consumption, convenience, and comfort. 2026 will probably, and hopefully, force us to learn to question and verify information not just once or twice, but thrice.

From AI hype to hard choices in 2026

When AI competence reaches the boardroom

In 2023 and 2024, the typical keynote request I received was "explain what AI is". In 2025, the big question was typically "explain how to create value with AI". Whereas early projections from the large consulting companies after the launch of ChatGPT in November 2022 were steep productivity growth hockey sticks and "AI everything", numerous recent studies indicate limited ROI from AI initiatives. 

In 2026, I foresee more boards and C-suites asking for help with pragmatic and informed utilisation of the AI toolbox in strategy execution. However, wise adoption of AI tools means understanding and mitigating the inherent risks. Strategy and risk assessment have to go hand-in-hand. GDPR and the AI Act provide formal frameworks, but regulatory compliance is not a guarantee against stupidity. AI tools are prediction models. Their outputs come with probabilities of being correct, which means that they also come with probabilities of being wrong. Wise adoption of AI tools is about setting realistic and accountable ambitions and understanding the cost of being wrong. I hope 2026 will be the year when boards and C-suites decide to strengthen the required competence around the table, and thus enable themselves to have the right discussions.

From science fiction to clarity and realism

I have a confession to make: when I worked in the IT industry, I was part of the commercial push towards cramming any type of advanced analytics, optimisation, and maths-based automation into the AI term. We did it because the term “AI” was hot and triggered customer attention. Now, the AI umbrella term needs to be untangled back into advanced analytics, optimisation, operational research, and the other related mathematical disciplines involved in using data for diagnostic, descriptive, predictive, and prescriptive analytics.

LLMs have demonstrated that mathematical modelling of language, and of the meaning embedded therein, is possible, and this particular technology is indeed powerful. However, we humans are, as always, slow at adapting. In the process of adapting, we confuse ignorance with godlike beliefs and make very stupid choices. 2025 was a year of massive friction, where the AI toolbox was applied to just about all aspects of life, society, and business, with massive failures as the outcome. These failures were not surprising. They were expected. And still, under the rhetorical “innovation” banner, the tech oligarchs pushed their narratives and became even richer and more powerful than ever. I hope, with all my heart, that 2026 will be a year of clarity and realism, and that we as humans stop confusing technologies we don't fully understand with science fiction and magic.

Sustainability can no longer be ignored

Digital technologies in general and AI in particular are not sustainable per se. Anything digital requires several electronic hardware components, manufactured with precious metals, large volumes of water, and lots of electricity. In the light of recent events, I also have to mention that access to these precious metals is an important driver in the current geopolitical power game. Trump wants Greenland. Greenland has massive precious metal resources.

Training of AI models requires massive computation power in large data centres. All use of any cloud-based digital technology, including your precious chatbot, also requires computation power in large data centres. Data centres are extremely power and water hungry, they take up massive areas of land, and the noise created by the fans is highly anti-social. 

I repeat: digital technologies in general, and AI in particular, are not sustainable per se. However, we readily trade away our engagement for the environment when faced with the option to consume. I am not claiming that this is a choice between data centres or no data centres, but rather making a call for holistic assessments and planning, where national resources, societal needs, and business and personal needs are balanced. The market is starting to wake up to the unsustainable aspects of data centres and electronic hardware, and there are enough critical voices out there to force developers and authorities to take the sustainability issues seriously in 2026.

The return of human authenticity

The phrase "AI slop" was born in 2025 as a description of mediocre output from genAI tools. More than half of the content on the internet is now created by genAI, and when AI models train on data created by AI models, which in turn were trained on data created by AI models, [...repeat], the output ends up as a mediocre average of mainstream content without nuance or depth. The academic term is model collapse. It's hard to predict where, and how, this slide towards grey mud is going to stop. 

What is already evident, though, is that humans are quickly developing a prominent distaste for AI slop. Instead, authenticity is seeing a renaissance. To what extent we are willing to pay for authenticity remains to be seen, but we can safely assume that 2026 will bring strengthened attention to human authenticity, in text, visuals, and music alike.

Choosing courage over convenience

When Nazi Germany attacked Norway on April 9th, 1940, the government was paralysed by cowardice and naïvety. In the absence of clear orders, one single courageous colonel, Birger Eriksen, ordered the guns at Oscarsborg Fortress to fire on the incoming heavy cruiser Blücher just south of Oslo, sinking the warship and causing a significant delay of the invasion. Because of that delay, the King, the Government, and the state gold all left the country safely before the Nazi regime clamped its claws around Norway.

You may be wondering why I bring this up all of a sudden. This is not a story about hero worship. It is a reminder that democracy and freedom often hinge on individuals who choose responsibility over convenience. To protect our democracy and freedom, you - yes, you - need to choose to stand up for what is right, rather than to lean comfortably into convenience.

Do I see hope? As a writer and speaker, it would probably be better for my business if I displayed a solidly positive outlook, selling the pretty idea that everything will be ok as long as we continue to innovate with technology. There is hope, if each and every one of us takes the responsibility of thinking thrice, questioning convenient "truths", and making the responsible choices for our societies even when they come at a personal inconvenience.

Unfortunately, I believe most people will choose consumption, convenience, and comfort over discipline and courage. I believe that power and greed will trump (pun intended) democracy and freedom. Still, I sincerely hope you will prove me wrong.

I wish you a courageous 2026 filled with learning, growth, and wisdom. 


I do not sell AI optimism. I approach data and AI from a perspective grounded in reality, risk, and responsibility. If you are looking for a speaker or board advisor with that perspective, I would be glad to connect. 


About the Author

Elin Hauge is a business and data strategist, pragmatic futurist, and eternal rebel. With a unique background spanning physics, mathematics, business, and law, she brings fresh and thought-provoking insights into artificial intelligence and other data-driven technologies. Her work connects today’s realities with tomorrow’s possibilities, focusing on demystifying AI, harnessing data power, managing algorithmic risk, and guiding leaders through digital transformation.
