The BI and analytics conversation has swung hard toward AI over the past two years.
In 2026, it will swing back – not away from AI, but toward the unglamorous foundations required to make it useful. Expect fewer grand promises and more investment in semantic structure, reporting infrastructure, and operational workflows.
1. “Chat with your data” loses momentum
Conversational analytics was always a narrow value proposition: ask a question, get an answer quickly. It saves time, but only if you already know what to ask. You lose peripheral visibility into related metrics, struggle to explore context, and depend heavily on prompt quality.
Unsurprisingly, everyone built it: BI vendors, internal data teams, and even the data platforms themselves, such as Snowflake and Databricks. As a result, it has been rapidly commoditised. What was once positioned as a category-defining capability is becoming a feature.
There is also a more fundamental problem: quality. Without well-defined, semantically annotated data, conversational answers are unreliable. And large language models are inherently poor at numerical reasoning. They are statistical language predictors – treating numbers as tokens in text rather than as quantities – not systems designed for aggregation, calculation, or financial accuracy.
This doesn’t mean conversational interfaces disappear. It means vendors that made them their core value proposition will struggle.
2. The semantic layer finally wins
Data quality has always been imperfect, yet businesses have managed to operate successfully regardless. The real bottleneck is not raw data quality, but business interpretation.
In customer conversations, almost nobody comes in asking for a semantic layer. What they come in with are familiar symptoms: teams disagreeing on numbers, dashboards breaking when schemas change, and a growing fear of “fixing” anything because of what it might break elsewhere.
A recurring pattern in demos is that the semantic layer only becomes visible once these problems surface. Questions shift from how fast dashboards can be built to how definitions propagate, whether logic is reusable, and how much confidence teams can have that changes won’t ripple unpredictably through the system.
Over time, trust erodes not because data is wrong, but because it is interpreted differently in too many places. Dashboards are copied instead of fixed. Metrics are redefined locally to unblock requests. Validation replaces decision-making.
What ultimately matters is the ability to define metrics once, govern them centrally, and apply them consistently across dashboards, reports, and applications — so change becomes safe again.
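As a rough illustration of the "define once, govern centrally" idea – using a hypothetical in-house `Metric` registry, not any particular vendor's API – a minimal sketch in Python might look like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str         # canonical business name
    sql: str          # the one approved calculation
    owner: str        # team that governs changes
    description: str

class MetricRegistry:
    """Hypothetical central registry; every consumer resolves metrics here."""

    def __init__(self) -> None:
        self._metrics: dict[str, Metric] = {}

    def register(self, metric: Metric) -> None:
        self._metrics[metric.name] = metric

    def get(self, name: str) -> Metric:
        # Dashboards, reports, and agents all call this, so a definition
        # change propagates everywhere instead of diverging per copy.
        return self._metrics[name]

registry = MetricRegistry()
registry.register(Metric(
    name="net_revenue",
    sql="SUM(amount) - SUM(refunds)",
    owner="finance",
    description="Revenue net of refunds; the only approved definition.",
))
```

The point is structural: every dashboard, report, and agent resolves `net_revenue` through one governed definition, so changing the calculation becomes a single, visible edit rather than a hunt across copies.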
In 2026, this stops being a BI hygiene issue. The semantic layer becomes foundational infrastructure: the shared contract that restores trust for humans today and provides the consistency that AI systems will require tomorrow.
3. BI & Analytics for agentic AI
Agentic AI is still largely experimental in enterprise settings, but the direction is clear. Agents won’t replace human decision-making in the near term — they will replace the repetitive analytical work around it.
In practice, early agents are responsible for monitoring metrics, detecting deviations, triggering predefined actions, and preparing structured decision inputs. They do the work humans already do manually: checking dashboards, reconciling numbers, watching thresholds, and escalating issues. What remains human is interpretation, prioritisation, and judgment.
For this to work, agents need the same thing humans do: trusted, consistent access to business metrics. This is where the semantic layer becomes critical. It effectively acts as a headless dashboard — a machine-readable source of truth that both humans and agents can consume without ambiguity.
What’s holding this back today isn’t model capability, but data structure. Inconsistent metric definitions, fragmented access patterns, and read-only BI architectures make agent behaviour brittle or unsafe. Without a governed semantic layer, agents either act on the wrong numbers or require so much hard-coding that they fail to scale.
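To make the monitoring loop concrete, here is a minimal sketch of such an agent. The `query_metric` and `escalate` functions are placeholders invented for illustration; in a real deployment they would call a governed semantic layer and an alerting system.

```python
THRESHOLD = 0.15  # escalate if a metric deviates more than 15% from plan

def query_metric(name: str) -> dict:
    # Placeholder: a real agent would fetch this from the semantic layer
    # (e.g. over MCP or a REST endpoint), so it reads the same governed
    # numbers as every dashboard.
    return {"actual": 0.82, "plan": 1.0}  # stubbed values for illustration

def escalate(message: str) -> None:
    # Placeholder: open a ticket or notify the metric's owner.
    print("ESCALATE:", message)

def check(metric_name: str) -> None:
    m = query_metric(metric_name)
    deviation = (m["actual"] - m["plan"]) / m["plan"]
    if abs(deviation) > THRESHOLD:
        # The agent detects and reports; a human interprets and decides.
        escalate(f"{metric_name} is {deviation:+.1%} vs. plan")

check("net_revenue")
```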
I don’t expect widespread production deployments in 2026, but strategically the shift is decisive. Standardised access to semantic layers and even dashboards – whether via MCP or equivalent interfaces – will become table stakes for any organisation serious about agentic workflows.
4. Scheduled reporting to PDF, PowerPoint and Excel makes a comeback
Static reporting once felt obsolete: stale data, no interactivity, little flexibility. In practice, it never went away.
Executives still need reports that can be read linearly. Organisations still need consensus-building around a fixed, point-in-time view of the numbers. Compliance, audit trails, and historical records remain non-negotiable.
What’s changed is not the format, but the tolerance for duplication. In customer conversations, reporting repeatedly surfaces as a pain point — not because PDFs or PowerPoint are undesirable, but because they are disconnected. Dashboards are one system, board decks another, compliance exports a third. The same numbers are rebuilt, revalidated, and re-explained every cycle.
Because scheduled reporting was never “sexy”, it was rarely modernised. It worked well enough. That stops working once dashboards, analytics, AI-driven insights, and formal reporting are all expected to coexist — and yet draw from different logic.
In 2026, reporting quietly becomes strategic again. Not as a separate tool, but as the same governed analytics rendered in a frozen, distributable form. Warehouse-native, “what you see is what you get” reporting becomes the mechanism that restores trust at the edge of decision-making.
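As a sketch of what "the same governed analytics, frozen and distributed" can mean mechanically, the following uses the openpyxl library to render a point-in-time snapshot to Excel. The metric values are stubbed here; in practice they would be fetched from the semantic layer.

```python
from datetime import date
from openpyxl import Workbook

# Stubbed snapshot; in a real pipeline these rows would come from the
# semantic layer, so the report shares definitions with every dashboard.
snapshot = [
    ("net_revenue", 1_240_000),
    ("gross_margin", 0.42),
]

wb = Workbook()
ws = wb.active
ws.title = f"Snapshot {date.today().isoformat()}"
ws.append(["Metric", "Value"])  # header row
for name, value in snapshot:
    ws.append([name, value])

# The saved file is the frozen, auditable artefact: same logic,
# point-in-time form.
wb.save(f"board_report_{date.today().isoformat()}.xlsx")
```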
5. From spreadsheet-first BI to workflow-first analytics
The appeal of spreadsheet-first BI was obvious. Excel struggles with cloud-scale data, governance, and collaboration. Spreadsheet-inspired BI tools promised familiarity with fewer limitations.
In practice, fully replicating the flexibility, power, and universality of Excel has proven unnecessary and unrealistic. Excel remains one of the most successful technologies ever built, and with live connectivity to modern cloud warehouses and semantic layers now becoming standard in Excel and Google Sheets, the case for a “better spreadsheet” as the centre of the BI stack weakens significantly.
This is not lost on vendors like Sigma, which are increasingly repositioning around workflows and data applications rather than spreadsheets alone.
6. Writeback, data apps, and operational BI become mandatory
The terminology is still unsettled – writeback, data apps, workflows – but the direction has been clear for years.
In customer conversations, many of the most important analytics use cases are already operational. Forecasts are adjusted, plans are revised, statuses are updated, and numbers are approved before they are final. The only thing missing is that these actions typically happen outside the BI tool – in spreadsheets, emails, or bespoke processes.
The ability to write data back from the same interface where it is analysed closes that loop. It enables planning, budgeting, and operational workflows to move out of fragile Excel processes and into governed environments that still retain analytical context.
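As an illustration of what governed writeback implies structurally – append-only changes with an audit trail – here is a minimal sketch using sqlite3 as a stand-in for a cloud warehouse; the table and column names are invented for the example.

```python
import sqlite3
from datetime import datetime, timezone

# sqlite3 stands in for a cloud warehouse; the schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE forecast_adjustments ("
    "  metric TEXT, period TEXT, new_value REAL,"
    "  adjusted_by TEXT, adjusted_at TEXT)"
)

def adjust_forecast(metric: str, period: str, new_value: float, user: str) -> None:
    # Append-only writeback: every change is a new row recording who and
    # when, so the adjustment stays auditable instead of vanishing into
    # a spreadsheet or an email thread.
    conn.execute(
        "INSERT INTO forecast_adjustments VALUES (?, ?, ?, ?, ?)",
        (metric, period, new_value, user,
         datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

adjust_forecast("net_revenue_forecast", "2026-Q1", 1_300_000, "analyst@example.com")
```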
This matters because many of these workflows sit in an awkward middle ground. They are too small or too dynamic for enterprise planning systems, but too critical to remain manual. Traditional workflow tools, meanwhile, struggle with analytical depth and context.
Incumbent BI platforms such as Qlik, Tableau, and Power BI were designed for an on-premises world where data extracts were necessary. That architectural choice locked them into read-only paradigms that are difficult to escape.
Warehouse-native platforms – including Astrato – are betting on bi-directional analytics as a first-class capability. In 2026, this shifts from differentiation to necessity for organisations that want to operate on data, not just observe it.
Agentic AI will require the same capabilities – but that story probably belongs to 2027 and beyond.
Conclusion: less AI magic, more AI readiness
AI will not be doing everything – at least not yet. In 2026, it will mostly amplify what already exists.
In organisations with fragmented metrics, brittle dashboards, and manual workflows, AI will accelerate confusion. In organisations with governed definitions, operational analytics, and clear ownership, it will accelerate decisions.
This is why the focus is shifting from AI features to AI readiness. The hard work is no longer experimentation, but standardisation. The competitive advantage will belong to teams that invest in the unglamorous foundations that allow humans and machines to work from the same version of the truth.
The time to invest is now – before automation makes inconsistency impossible to ignore.