[Image: a mystic eye, symbolising foresight and predictions about the future of AI]


My predicted AI trends for 2026 (and what they mean beyond the hype)

Clare Knight

Commercial Operations Lead

If you listen to conference keynotes and vendor roadmaps, AI is already reshaping how work gets done. But when I speak with humans in my orbit – school parents, finance managers, or frontline staff – the picture is very different.

Many people are experimenting and getting on just fine, especially when AI is used for personal wins: organising a family shopping list, planning a holiday, or getting advice on interior design.

In the workplace, though, the experience is different: according to Reuters, enterprises are still waiting to see the promised impact on their ROI.

Putting my seasonal Panto hat on, the AI genie is out of the bottle: we’re all on a journey and there are no U-turns. So instead of rehashing whether AI will change the enterprise, this article predicts how organisations will respond to the AI challenge, seizing the opportunities while navigating the risks.

TL;DR?

The AI hype of 2025 didn’t result in the anticipated ROI. In 2026, value will come from targeted use, clear policies, training, and patience – not blanket rollouts or over-ambitious ROI expectations.

1: AI becomes the default feature – but it won’t necessarily get used

As I said earlier, the AI genie’s bottle is no longer something organisations actively choose to open. It is increasingly something they need to control.

At Microsoft Ignite this year, the most-announced enhancements for 2026 centred on AI: extensions to Copilot, the introduction of AI agents, and new tools for managing and governing AI all featured heavily.

This pattern is not unique to Microsoft. Across enterprise software, AI is now a default part of product roadmaps rather than an optional add-on. 

The practical implication is that AI capabilities are increasingly available inside the tools people already use – but that does not mean they are widely switched on, enabled, or actively used.

Case in point from inside our own business: we work with several platforms where the vendors have invested heavily in AI enhancements, and yet the usage statistics reveal very low take-up from customers!

Hesitation in making ‘AI wishes’ across the industry seems to come down to a mix of factors: lack of training or confidence, concerns about cost and licensing models, and a broader fear of the unknown. Keep reading.

2: Productivity gains from AI will come from targeted – not blanket – use

Despite confident claims about productivity, the results vary widely between organisations, teams, and individuals. In practice, AI tends to work best where processes are already reasonably clear, information is reliable, and people feel confident using the tools available to them. 

Where these conditions are not in place, productivity gains are often modest or inconsistent. AI can speed up individual tasks such as drafting, summarising, or searching for information, but it rarely improves end-to-end workflows on its own. In many cases it simply accelerates existing ways of working, for better or worse. More than once this year I have abandoned an AI feature in an existing platform because it just wasn’t delivering the output I needed, and carried on with my own approach instead.

I believe this helps explain why reported productivity figures differ so much. Some organisations are seeing meaningful benefits in specific roles or activities, while others struggle to move beyond basic use. For most, AI has yet to change how work is structured or decisions are made. 

In 2026, this variability is unlikely to disappear. Productivity gains from AI will continue to depend on process clarity, data quality, and human confidence rather than the sophistication of the underlying technology alone.

Organisations that create space for experimentation, provide practical guidance, and allow teams to build confidence over time are more likely to see steady progress than those chasing headline productivity claims.  See also prediction 7.

3: In 2026, chasing AI ROI too early will hold organisations back

In the summer of 2025, MIT released research suggesting that 95% of AI projects so far have failed to produce any ROI. On the surface this figure sounds alarming, but it reflects how early most organisations still are in their use of AI, rather than outright failure.

Unpacking the research, an interview with lead author Aditya Challapally suggests that one of the reasons for the figure is that “generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows.” This aligns with the figures from IBM and EY which suggest that AI is not being used to its full potential to fundamentally reshape processes.  

In many cases, AI delivers value at an individual level before it delivers value at a system or organisational level. 

Time savings, reduced friction, or improved quality of output are real, but they are hard to pin down and even harder to tie directly to financial outcomes. Generic tools also tend to sit alongside existing workflows rather than adapting to them, which limits their measurable impact.  

I think this is unlikely to change significantly in 2026. ROI will remain uneven and difficult to prove conclusively, particularly where AI is being layered onto existing processes rather than used to redesign them.

Organisations risk drawing the wrong conclusions if they judge success or failure too quickly. A more realistic approach is to treat early AI investment as capability building, with clearer ROI emerging only once usage matures and processes evolve. 

4: Content governance becomes critical to AI success

As AI becomes embedded into tools such as intranets, search, and collaboration platforms, it increasingly acts as a front door to organisational knowledge. When employees ask questions, AI responds with summaries, extracts, and recommendations drawn from existing content. 

The quality of those responses depends entirely on the quality of the underlying information. Out-of-date documents, duplicated guidance, unclear ownership, or overly permissive access can quickly undermine trust in AI outputs. When AI surfaces inaccurate or sensitive content confidently, users tend to lose faith in the tool rather than question the source material. 

[Image: a stylised Pac-Man-like shape consuming envelope, calendar, and chat-bubble icons, with the text “Improve the quality of what you feed Copilot” – input items turning into structured documents and charts]

In 2026, content governance will move from being a background concern to a visible dependency for AI adoption. Organisations that invest in clear ownership, lifecycle management, regular review, and sensible archiving will get far more value from AI than those that do not. In this context, AI does not create content problems; it exposes the ones that already exist. 

“Rubbish in, rubbish out” is an old saying in technology circles, but it still holds true for AI projects. We think this is where content governance will really come into focus: content needs to be managed properly through its lifecycle, with:

  • Clear ownership and oversight
  • Automated reviews at regular intervals
  • Archiving policies in place
  • Approval workflows for changes where necessary.
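To make the “automated reviews at regular intervals” point concrete, here is a minimal Python sketch that flags documents whose last review is older than an agreed cadence. The document fields and the 180-day interval are illustrative assumptions, not a reference to any specific platform’s API.

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=180)  # assumed review cadence for illustration

def overdue_for_review(docs, today):
    """Return the titles of documents whose last review is older than the interval."""
    return [
        d["title"]
        for d in docs
        if today - d["last_reviewed"] > REVIEW_INTERVAL
    ]

docs = [
    {"title": "Expenses policy", "last_reviewed": date(2025, 1, 10)},
    {"title": "Onboarding guide", "last_reviewed": date(2025, 11, 1)},
]

print(overdue_for_review(docs, today=date(2025, 12, 1)))  # → ['Expenses policy']
```

In a real deployment the same check would run on a schedule against your document store’s metadata and notify the named content owner, closing the loop with the ownership point above.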

See also our earlier article on getting value out of Copilot.

5: AI agents will enter the workplace, but be kept on a tight leash

AI agents capable of carrying out tasks on behalf of users and making autonomous decisions – often referred to as agentic AI – are beginning to appear across enterprise platforms. In theory, they promise greater automation and efficiency by handling routine or multi-step activities with minimal human input. 

In practice, we think most organisations will approach agentic AI carefully in 2026. Use of agents raises questions around accountability, oversight, and risk – particularly where they are allowed to act autonomously or interact with multiple systems.

Many early use cases will remain assistive rather than fully automated, with humans retaining control over decisions and outcomes – which is critical in my humble opinion.  

For most organisations, the value of AI agents in 2026 will come from learning rather than large-scale automation. Pilots and controlled experiments will help teams understand where agents genuinely save time and where they introduce unnecessary complexity. Clear governance, defined boundaries, and realistic expectations will matter more than the number of agents deployed.  

Trust is critical, and agentic AI needs to be cautiously approached, with careful pilots and testing alongside guardrails and governance. The concept of ‘Move fast and break things’ does not apply here. 

6: AI will heighten cybersecurity risks

Generative AI is making social engineering attacks more convincing and easier to scale. Phishing emails, fraudulent messages, and impersonation attempts are becoming harder to spot, increasing the likelihood of successful attacks that rely on human error rather than technical weaknesses. New threats such as deepfake videos impersonating senior leaders sound far-fetched, but they are already here. 

This does not change the fundamentals of cybersecurity, but it raises the stakes. Controls, monitoring, and configuration remain important, but awareness and behaviour play an increasingly central role. Employees are more likely to be targeted with believable, personalised attacks that exploit trust, urgency, or authority. 

In 2026, organisations will need to ensure that cybersecurity training keeps pace with how AI is being used by attackers as well as employees. AI guidance and security awareness should be closely aligned, reinforcing good judgement and safe practices rather than treating AI as a separate or purely technical risk. 

It will also be more important than ever to follow security best practices where you can across Microsoft 365 and SharePoint, to reduce threats and minimise their potential impact. 

7: In 2026, AI success will hinge on the people, not the technology

Our final, and possibly biggest, prediction for 2026 is that the organisations starting to see bottom-line value from AI will be those that invest in learning, support, and AI usage policies as much as in AI technology.

Advice from Microsoft states that “to drive successful AI adoption, treat it as a people-first transformation, not just a technology deployment.”  We agree.

Clear communication, practical examples, peer learning, and visible leadership backing will all help create the conditions for successful adoption.

Crucially, organisations will also need clear guardrails in place (being in the West Country, we like to call them ‘pig boards’*) so employees understand things like:

  • Where boundaries apply, including agreed rules on ethical AI use
  • How to get the most out of the AI tools they have access to
  • How to avoid the many risks associated with AI use (not least running up excessive charges by going over your monthly AI subscription points).

*A pig board is a lightweight, durable panel used by farmers and pig handlers to guide, sort, and move pigs by directing their movement, preventing them from going where unwanted, and protecting handlers from being bitten.

We’ll be writing an article on AI Acceptable Usage Policies shortly – make sure you sign up for our newsletter to stay in the loop.

Conclusion – Preparing for AI in 2026

By 2026, AI will be embedded across most workplace platforms, whether organisations actively pursue it or not. What will differentiate outcomes is not how much AI is deployed, but how well the underlying foundations are prepared. 

For most organisations, progress will remain incremental. Productivity gains will vary, ROI will be hard to isolate, and newer capabilities such as AI agents will need careful oversight. This does not indicate failure but reflects the reality of introducing a general-purpose technology into complex organisations. 

Those that see the greatest benefit will focus on practical readiness. That includes content governance, clear boundaries for use, and ongoing support to help people build confidence. Treating AI adoption as a change in how work gets done, rather than a one-off rollout, will matter far more than chasing the latest features. 

AI is not something to ignore, but it is also not something to rush. A measured, deliberate approach will deliver more value than speed alone.  

Want AI to deliver real value to your enterprise?

Our enterprise AI readiness review will show you how ready you are for Copilot, assessing data quality, governance, and user training so you can deploy AI with confidence.