2025's Hard Lessons and 2026's Reality Check
Let’s end the year with something more useful than “AI is changing everything”.
Here is what I saw up close in 2025, what I think is coming in 2026, and one thing I believe every serious leader should do about it.
Three things 2025 made brutally clear
1. Many leaders were set up to fail with AI
A lot of you told me some version of this privately:
“My execs want instant AI ROI, tiny budget, no extra people, and somehow my existing workload still magically gets done.”
On paper, AI “adoption” looks impressive. One recent read of McKinsey’s latest AI survey: most organisations are experimenting or piloting AI, but nearly two-thirds have not scaled it across the enterprise.
Another summary of the same data: roughly 88 percent of organisations use AI somewhere, yet only around a third report any P&L impact, and only a tiny slice qualify as genuine high performers.
In other words:
- AI use is high.
- AI value is patchy.
- Expectations are often delusional.
That disconnect showed up everywhere this year:
- Executives assuming AI is a feature you “turn on”, not a capability you build.
- AI projects launched with no clear owner, no real outcome, and a “see how we go” budget.
- Teams forced to squeeze AI on top of a full time job, then blamed when it does not magically transform the P&L.
The emotion underneath what you told me was not fear. It was frustration.
Frustration that decisions about AI were being made by people who did not actually understand what they were approving, or what it would really take to make it work.
And yes, the data backs this up. One 2025 report found almost all companies are investing in AI, but only 1 percent believe they are at full maturity, and the biggest barrier is leadership, not staff.
My translation: the people with the signatures are still catching up.
2. The world moved faster than your AI business cases
While organisations were still running “prompting 101” sessions and forming AI steering committees, the tech sprinted ahead.
New models, new agentic capabilities, new tools every other week.
By the time some business cases got to the steering committee, the underlying assumptions were already stale.
At the same time:
- Cyber incidents kept climbing. Australia’s own cyber agency reported an 11 percent increase in incidents in FY2024-25, with more low-level but persistent attacks hitting organisations.
- Globally, phishing and social engineering surged, with roughly 82 percent of organisations reporting such incidents.
- Threat actors started openly adding AI to their toolkit, with reports of nation-state groups using AI to speed up intrusion and make attacks harder to spot.
So while many leadership teams were still debating whether AI is “ready”, bad actors quietly decided it was ready enough.
On the AI side, a new buzzword arrived: agentic AI.
In simple terms, agentic AI refers to systems that can pursue goals and take actions with limited supervision. They do not just answer questions. They plan, decide, and do things, often by chaining multiple tools together.
That means:
- Less “type a prompt, get an answer”.
- More “tell the system what outcome you want, and it figures out the steps and executes them”.
Great for productivity. Terrifying if you are not across the risk.
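For readers who like to see the mechanics, here is a minimal sketch of that plan-act-observe loop, written in Python purely for illustration. It is not any specific product's API: run_agent, plan_next_step, Step and the tools dictionary are placeholder names, and in a real system the planner would be an LLM call wrapped in proper guardrails.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    """One decision from the planner (in a real system, an LLM call)."""
    tool_name: Optional[str] = None       # which tool to use next, if any
    tool_input: str = ""                  # what to pass to that tool
    final_answer: Optional[str] = None    # set once the goal is achieved

def run_agent(goal, plan_next_step, tools, max_steps=10):
    """Goal in, actions out: the basic shape of an agentic loop."""
    history = []  # everything the agent has tried and observed so far
    for _ in range(max_steps):
        # 1. Plan: decide the next step from the goal plus the history so far.
        step = plan_next_step(goal, history)
        if step.final_answer is not None:
            return step.final_answer      # goal met, stop acting
        # 2. Act: call a tool (search, a database, email, an internal API ...).
        observation = tools[step.tool_name](step.tool_input)
        # 3. Observe: feed the result back in and go around again.
        history.append((step, observation))
    return "Stopped: step limit reached before the goal was met."
```

The governance point is right there in the loop: every pass is the system deciding and acting with limited human review, which is exactly why data access, permissions, and step limits matter.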
And this is the uncomfortable truth from 2025:
AI capability is advancing faster than leadership literacy. That means risk is growing faster than benefits in many organisations.
3. I overestimated how well people understood AI
This one is on me.
Coming into 2025, I assumed most senior people had a decent grip on the basics:
- Roughly how large language models work.
- What context and data actually mean in practice.
- The difference between “chatting to a model” and building a proper solution.
I was wrong.
In many rooms, very smart executives were sitting quietly, nodding along, and not asking the questions they really had.
And again, the data backed up what I was seeing:
- One survey in 2025 found that 82 percent of C-suite executives and 79 percent of workers admitted they have pretended to know more about AI at work than they really do.
- Another study found executives were more than twice as likely as frontline staff to believe employees were enthusiastic about AI, a belief the employees themselves did not share.
So if you have felt a bit lost or behind on AI this year, you are not alone. Most people have been winging it.
That realisation forced me to change how I work.
This year I started deliberately including fundamentals in almost everything I did:
- Simple explanations of transformers, GPT, LLMs and why they matter.
- How prompting actually works, beyond “type better”.
- Why context and data shape everything.
- Detailed real-world use cases, not just headline slides.
A lot of the feedback I received was some version of, “Thank you, no one has actually explained it to me like that before.”
If I had to summarise my biggest lesson from 2025:
Never assume the literacy is there, even if the title is senior and the company is big.
Two things I believe are coming in 2026
1. The good: clarity and focus, finally
Here is the upside.
I think 2026 will be the year most organisations finally get past the AI swirl, and into something more useful.
- Less “we need an AI strategy because everyone else has one!”.
- More “here are the two or three business problems we will solve with AI, properly funded and resourced”.
What will unlock that clarity?
A few things:
- Real examples. Seeing peers in your industry actually succeed with specific AI use cases. Things like reducing claim processing times, speeding up decision support, or automating gnarly manual work in ways that demonstrably work, not just look good in a case study.
- Time to organise programs properly. The scramble of 2024 has passed. Boards and executives are starting to accept that AI is not a side project. It needs planning, budget, and owners like any other serious change initiative.
- Agentic AI getting practical. As agentic tools become more usable, you will see more of the “AI as a framework” pattern: agents that can own a repeatable workflow end-to-end, not just spit out content. The organisations that have their data, access controls, and guardrails in order will benefit first.
I am not saying it will be easy. But I do think:
Early movers who pick a small number of meaningful use cases and do them properly in 2026 will compound advantages very quickly.
You will see them hiring better, delivering faster, and quietly widening the gap.
2. The bad: AI-powered attacks will grow faster than your training program
Now the part everyone likes less.
The same capabilities that make agentic AI powerful for good are also incredibly attractive for bad actors.
You have already seen hints:
- Nation-state groups using AI to accelerate reconnaissance and intrusion.
- Tooling that can script, test, and refine attacks at machine speed, not human speed.
In 2026, I think we will see:
- More sophisticated phishing and social engineering that feels eerily personal.
- Faster exploitation of vulnerabilities, especially around identity, cloud, and misconfigured tools.
- “AI as a service” for crime: off-the-shelf kits that lower the bar for less skilled attackers.
Here is the part I want to underline.
The weakest link in many organisations will not be the firewall or the tooling.
It will be AI literacy at the executive level.
Leaders who:
- Do not really understand how these systems think and act.
- Underestimate the creative ways AI can be used to probe, manipulate, and exploit.
- Assume “the tech team has it covered” while signing off on risky experiments and integrations they do not fully grasp.
You cannot defend well against something you do not understand.
And you definitely cannot govern it.
One action I think every serious leader should take in January
If I could ask you to do one thing in January 2026, it would be this:
Make executive-level AI education your first “project”.
Not a vendor demo. Not a motivational keynote. Not another panel.
I mean a focused session (or series) where your senior leaders get a practical understanding of:
- How modern AI actually works in plain language:
  - Models, data, context, prompting
  - How solutions are composed from these building blocks
- What agentic AI is really about:
  - Systems that can plan and act, not just chat
  - Where they can safely help and where they can do real damage
- How to think clearly about risk and value:
  - What is genuinely high-value in your context
  - Where you are exposed technically and organisationally
  - How to spot nonsense in vendor claims and avoid overpaying for “AI washing”
Why am I so insistent on this?
Because literacy changes the questions leaders ask.
Once executives understand the mechanics at a reasonable depth, they:
- Stop asking for “a chatbot like that other company”, and start asking for outcomes.
- Stop assuming AI is a magic add-on and start planning for data, change, and capability.
- Stop being paralysed by hype and fear, and start making clear, confident decisions.
If you get this right early in 2026, everything else becomes easier:
- Your AI strategy becomes grounded.
- Your cyber and risk posture becomes more realistic.
- Your conversations with vendors become less “shiny objects” and more “show me the value and the downside”.
If you do not, you are effectively outsourcing your future to whoever shouts the loudest in the room.
A closing thought
2025, for me, was the year of watching very capable leaders quietly admit:
“I am not sure I understand this well enough to bet my organisation on it.”
If that is you, you are not behind. You are honest.
My hope for 2026 is simple:
- We panic less.
- We learn more.
- We choose a few important problems and solve them properly, using AI as a real lever, not a gimmick.
And if you want help turning that one January action into something real for your executive team, you know where to find me.
In the meantime, wishing you and your loved ones a safe and happy holiday period. Get some rest… 2026 will be a big one!
Switching off work over the holidays is healthy. Switching off your curiosity isn't.
If you want to stay connected with people who care about AI and leadership without getting dragged back into email or the usual social scroll, the AI Leadership Academy is the place to be.
Dip in, explore, and keep your finger on the pulse without it feeling like work.