Modern BI

From Legacy Dashboards to AI-Native BI: A Migration Plan

A realistic BI migration plan with 6–18 month timelines, five phases, team requirements, and a small/medium/large scenario rubric.

Nikola Gemeš
May 15, 2026
12 min read

You've decided to migrate. Leadership wants a plan. The vendor your CTO has been talking to said "we can have you fully cut over in 90 days." Your gut says that's wrong, and you're trying to put numbers behind your gut before the next steering committee meeting.

This article is for you. It's the project plan you wish someone had given you before you started scoping — phases, timelines, team composition, the stall points everyone hits, and a three-scenario rubric you can take to your CFO. 

It assumes you've already settled the "should we migrate" question. If you haven't, start with our legacy BI vs. cloud-native BI guide — that's the conceptual primer.

If your environment is compliance-heavy (HIPAA, SOC 2 with deep audit requirements, regulated financial services, FedRAMP-adjacent), our article on replacing legacy on-prem BI with AI-native analytics covers the five architectural decisions you'll need to add on top of what's here. 

Either path lands you back at this article eventually, because the project shape is the same in both worlds — compliance just adds decisions, not phases.

This article gives you a defensible project plan you can present internally, with realistic numbers your CFO won't laugh at.

TL;DR

A realistic BI migration runs 6 to 18 months depending on dashboard count, integration complexity, and team experience. 

Vendors who promise faster timelines are usually skipping the inventory step or the parallel run. 

The project breaks into five sequenced phases:

  • foundation
  • inventory and triage
  • parallel build
  • user enablement
  • sunset

It needs a core team of 3 to 7 FTE across six named roles. Five stall points show up reliably: undocumented business logic, license overlap costs, change management resistance, IT/security review delays, and scope creep. The timeline rubric at the end gives you small, medium, and large scenarios you can paste into a deck.

The honest timeline: 6 to 18 months, not 90 days

The "90-day migration" claim exists because it's good marketing, not because it's good project management. Migration partner agencies sell against it. New BI vendors lean on it during procurement to make a switch feel painless. And the claim is technically defensible for one narrow case: a small mid-market team with fewer than 25 dashboards, a single cloud warehouse, no embedded analytics, no row-level security complexity, and a BI team that already knows the destination tool. That team can move in a quarter.

You are probably not that team.

The realistic ranges, based on what migrations actually take in practice:

  • Small (fewer than 50 dashboards, single warehouse, mid-market): 4 to 6 months
  • Medium (50–200 dashboards, multi-source, mid-enterprise): 8 to 12 months
  • Large (200+ dashboards, complex integrations, enterprise): 12 to 18 months
  • Very large (1,000+ dashboards, multiple business units, federated governance): 18 to 30 months and regularly longer

The larger the project's scope — more dashboards, more business units, more compliance overhead, more embedded customers depending on the output — the longer it runs. None of this is unusual. The migrations that come in under these ranges either started with an unusually clean baseline (a recent dashboard audit, a centralized BI team, a single source of truth in the warehouse) or skipped a phase that came back to bite them.

The compression pressure you'll feel from leadership is real. Your CFO wants the cost saving sooner. Your CTO wants the AI features in production this quarter. The vendor's account exec keeps showing you reference customers who "went live in 14 weeks." What's usually missing from those reference stories is how much business logic those customers carried forward from their legacy tool versus how much they quietly left behind — and how many of their dashboards are still running in the old system a year later because no one had time to migrate them.

Hold the line on the range. A project that's honestly estimated at 12 months and delivered in 11 is a win. A project that's promised in 4 months and delivered in 9 is a credibility problem you'll spend the next year recovering from.

Phase 1 — Foundation (4–8 weeks)

Phase 1 is where you turn the procurement decision into a working environment. The vendor is chosen. Contracts are signed. Now you're building the substrate everything else depends on.

What you deliver:

  • New tool environment stood up in production and a non-prod tier
  • Identity integration configured — SAML or OIDC for SSO, SCIM for user provisioning
  • Initial security review completed (vendor's SOC 2 Type II, DPAs, sub-processor list, network architecture review)
  • Network and access policies in place — IP allow-listing, VPC peering or PrivateLink where relevant, warehouse network policies updated
  • Warehouse connection configured with appropriate service accounts and role hierarchies
  • One pilot dashboard built end-to-end as proof of concept — chosen specifically because rebuilding it stresses the parts of the architecture you most need to validate

Team you need: Migration lead (full time), data engineer (half time), security or IT reviewer (quarter time, but with veto power), warehouse admin (quarter time). Three to four people for four to eight weeks.

Where teams stall: Security review. You scoped Phase 1 at six weeks. Your security team's vendor questionnaire process is a 12-week SLA. If you don't get the security review started in week one — before the contract is fully signed if your procurement allows — Phase 1 turns into a three-month phase that blocks everything downstream. Start the vendor risk assessment paperwork the moment the vendor enters serious consideration, not after the contract is signed.

Where warehouse-native architecture changes the phase: Identity integration is the longest sub-task in most legacy migrations because the legacy tool talked to on-prem Active Directory and the new tool talks to your cloud IdP. Warehouse-native platforms like Astrato support SAML 2.0, OIDC, and SCIM out of the box, and inherit row-level security from your warehouse so you don't rebuild access controls in the BI layer. If you've already done the work to define RLS in Snowflake or BigQuery — and if you haven't, the row-level security article is worth reading before Phase 2 — Phase 1's access work shortens meaningfully.
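As a toy illustration of what "inherit row-level security from the warehouse" buys you, here is a minimal Python sketch. The roles, regions, and default-deny behavior are hypothetical stand-ins for a real Snowflake or BigQuery row access policy; the point is that the filter is defined once, against the data, and every client that queries through it gets the same answer.

```python
# Conceptual sketch of warehouse-side row-level security. A real policy
# would be a CREATE ROW ACCESS POLICY in the warehouse; the roles and
# regions below are illustrative.
POLICY = {  # role -> predicate evaluated per row
    "finance_analyst": lambda row: True,                  # sees everything
    "regional_manager": lambda row: row["region"] == "EMEA",
}

ROWS = [
    {"region": "EMEA", "revenue": 1_200},
    {"region": "AMER", "revenue": 3_400},
]

def query(role: str):
    # Unknown roles get nothing: default deny, the safe posture.
    allowed = POLICY.get(role, lambda row: False)
    return [r for r in ROWS if allowed(r)]

print(len(query("finance_analyst")), len(query("regional_manager")))  # 2 1
```

Because the BI layer queries through the same policy, there is nothing to rebuild in the tool itself — which is exactly the Phase 1 work that shortens.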

Switch RCM is the cleanest example of this. They run a behavioral health revenue cycle management business — HIPAA-covered, security-sensitive, audited regularly. Their migration target was Snowflake as the governance substrate, with Astrato as the access and action layer. The Phase 1 work concentrated on getting the security review through procurement; once that cleared, the identity work itself was straightforward because the warehouse already held the access model.

Customer evidence · Phase 1 anchor

Switch RCM: HIPAA-grade security through the warehouse

Behavioral health revenue cycle management. Migrated from Qlik Sense onto Snowflake + Astrato, with the warehouse as the governance substrate.


HIPAA-grade security

Access controls inherited from the warehouse meant Phase 1's identity work shortened — the bottleneck moved from architecture to security review timing.

“Astrato is a game changer. It integrated directly into our Data Cloud. Security and data privacy are critical for our work with behavioral health, addiction, and recovery support providers. Astrato allows us to maintain our high security in the Snowflake Data Cloud while opening more insights to more levels of care.”

Melissa Pluke
Co-Founder & Chief Analytics Officer, Switch RCM
Read the full story →

Why it matters here: security-sensitive Phase 1 setup is where most migrations stall. Inheriting access controls from the warehouse shortens the path through review.

Don't skip the pilot dashboard. It looks like ceremony — pick something small, rebuild it, show leadership a screenshot. The reason it matters is that the pilot stress-tests the architecture in ways your vendor's POC didn't. Your POC ran on demo data with a clean schema. Your pilot runs against your warehouse, with your governance, in front of your security review. The bugs you find here are the bugs you'd otherwise find six weeks into Phase 3 when you have ten dashboards in flight at once.

Phase 2 — Dashboard inventory and triage (4–6 weeks)

Phase 2 is the phase migrations are most likely to skip — and the phase where the value of doing it shows up clearly six months later, when the migrations that skipped it are stalled and the migrations that did it are on track.

The work is unglamorous. You audit every dashboard in the legacy tool. For each one, you document: who owns it, what data sources it touches, what business logic it contains, what its access pattern looks like (how many users, how often, at what time of day), and what it's used for in the operational workflow. Then you categorize:

  • Critical — runs the business. People will call you on a Saturday if it breaks. Rebuild as a first priority.
  • Important — used regularly, drives meaningful decisions. Rebuild in the second wave.
  • Orphaned — was built three years ago for someone who left, hasn't been opened in 12 months. Sunset with stakeholder sign-off.
  • Replaceable — duplicates another dashboard or solves a problem that no longer exists. Sunset.

What you deliver: A complete inventory in a spreadsheet or lightweight tool (Airtable, Notion, an Excel sheet — the format matters less than the discipline of finishing). A triage decision for every dashboard. A rebuild prioritization for Phase 3. An owner sign-off on the sunset list.

Team you need: Migration lead (full time), BI developer (half time, doing the technical inventory), business champions across each function (a quarter to a half day per week each, signing off on what's critical versus orphaned). Two to four people, plus distributed time from the business.

Where teams stall: The inventory consistently reveals 30 to 50 percent more business logic than expected. Calculated fields nobody documented. SQL queries with hardcoded business rules written in 2019 by someone who's no longer at the company. Scheduled jobs that quietly power a monthly report that, in turn, feeds the board pack. Every migration finds these. The teams that handle them best treat the inventory phase as a logic-extraction project, not a UI cataloging exercise.

Where the phase quietly saves the project: A third of what's in the legacy tool typically shouldn't be rebuilt at all. Sunsetting that third before Phase 3 starts means you're rebuilding 70 dashboards instead of 100, and your timeline drops accordingly. Teams that skip this phase rebuild everything and then sunset the orphans afterward — same outcome, except they paid for the rebuild.

Where warehouse-native architecture changes the phase: The inventory will surface a recurring pattern — business logic that lives in dashboard expressions instead of in the warehouse:

  • A Tableau workbook with a calculated field called Active_Customer.
  • A Power BI report with a DAX measure that defines Net_Revenue.
  • A Qlik script with a transformation that sets Inactive_Account_Threshold = 180.

Each one of these is a metric definition trapped inside a dashboard. In a warehouse-native model, these definitions move down into the warehouse — into dbt models, into the semantic layer, into views with documented business owners. The inventory phase is where you decide which definitions migrate as-is and which get refactored on the way down. Refactoring them takes longer in Phase 2 and Phase 3, but you only do it once.

Migration scoping · Approach decisions

Two decisions you make before Phase 3 starts

Big-bang versus phased rollout, and internal-led versus consultant-led. Neither has a universal right answer — but each has a defensible one for your situation.

Decision 1 · Rollout pattern

Big-bang cutover (hard switch)

Rebuild everything, validate it, cut over the whole organization on a single date.

Pros

  • Shorter total parallel-run window — lower license overlap cost
  • One training event, one governance reset, one change-management push
  • Forces inventory and sunset discipline before the date
  • Cleaner project close — no long tail of unmigrated dashboards
  • Easier to defend a single hard timeline to leadership

Cons

  • Every dashboard has to be ready on day one — one slip moves everything
  • Power users see a worse tool, all at once, with no fallback
  • Validation defects discovered post-cutover hit production immediately
  • Higher executive blast radius if anything goes wrong
  • Not viable for large or compliance-heavy environments

Fits when: small migration (<50 dashboards), centralized BI team, clean inventory, no embedded customers depending on the output.

Phased by function (default)

Roll out function by function (finance → sales → operations → ...) with parallel run for each.

Pros

  • Each function gets focused training and champion support
  • Defects surface in one function before they hit the others
  • Adoption metrics per function give you go/no-go signal
  • Power users in late-phase functions learn from earlier ones
  • The default for medium and large migrations — lower execution risk

Cons

  • Longer total parallel run — more license overlap cost
  • Cross-function dashboards have to work in both worlds during transition
  • Project fatigue — the team stays in migration mode for 12+ months
  • Late-phase functions resist as the new tool's flaws become visible
  • Requires more sustained executive sponsor engagement

Fits when: 50+ dashboards, multiple business units, regulated environment, or embedded analytics where customers depend on continuity.

Decision 2 · Who runs the project

Internal team-led (default)

Your BI and data engineering team owns all five phases. Vendor support is involved; an external partner isn't.

Pros

  • Institutional knowledge stays inside the company after sunset
  • Business-logic decisions made by people who understand the business
  • Cheaper — no consultant day rates on top of license costs
  • Direct relationship with the vendor — faster support feedback loop
  • The team that owns the platform after migration is the one that built it

Cons

  • Slower start if no one on the team has run a migration before
  • Day jobs don't stop — team capacity stretches across BAU and project
  • Inventory phase takes longer without outside pattern recognition
  • Estimation accuracy weaker on first migrations
  • Burnout risk on the migration lead if the project runs long

Fits when: team has prior migration experience, dashboard count is in scope for the existing team, or the political cost of an outside consultant is high.

Consultant-led (outside partner)

An external migration partner runs the project, with your team embedded for knowledge transfer and ongoing ownership.

Pros

  • Pattern recognition from prior migrations — faster Phase 2 inventory
  • Dedicated bandwidth — not splitting attention with day jobs
  • Estimation accuracy stronger because they've seen the surprises
  • Brings tooling for legacy-tool inspection (Tableau scrape, Qlik script parse, etc.)
  • Defensible to leadership when internal team is stretched

Cons

  • Significant day-rate cost on top of license costs
  • Knowledge walks out the door when the engagement ends
  • Business-logic decisions made by people who don't know your business
  • Vendor-aligned partners may push capabilities you don't need
  • Long-tail handover work often understated in the SOW

Fits when: first migration at this scale, no prior migration experience on the team, or dashboard volume exceeds what the internal team can absorb.

Phase 3 — Parallel build (3–9 months)

Phase 3 is the longest phase and the one where the bulk of the project's calendar lives. You're rebuilding critical and important dashboards in the new tool while the legacy tool keeps running in parallel. Validation happens here. Net-new capabilities — data apps, writeback, AI features — get built here, where they earn their place.

What you deliver:

  • All critical dashboards rebuilt in the new tool, with output validated against the legacy version
  • All important dashboards rebuilt, in priority order
  • Semantic layer definitions moved into the warehouse (dbt models, governed views, metric definitions) where they previously lived in the BI tool
  • A validation log showing each dashboard's old-vs-new outputs reconciled to the row
  • Net-new capabilities built where they earn their place — typically the data apps, writeback workflows, or AI features that the legacy tool couldn't support
  • Documentation for each rebuilt dashboard: data sources, business logic, owner, validation date

Team you need: Migration lead (full time), BI developers (1 to 3 full time, depending on dashboard count), data engineer (full time for semantic layer refactoring), business champions (half day per week for validation sign-off). This is the heaviest team phase — four to six FTE across three to nine months.

Where teams stall: Validation. You rebuild a revenue dashboard. The new version shows $4.2M for last quarter. The legacy version shows $4.18M. The difference is real, it's small, and it has a cause — a calculated field handles null values differently, or a date filter uses a different timezone, or a deduplication step was implicit in the legacy tool's data prep and explicit (or missing) in the new one. Finding the cause is forensic work. Some teams have a rule: no dashboard goes live until the new version matches the old to within an acceptable tolerance, with the cause of any difference documented. Skip this rule and you'll spend the rest of the project debating whose numbers are right.
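One way to keep that rule enforceable is a validation log generated by script rather than by eyeball. This is a hedged sketch, not a prescribed tool: the metric names and the 0.1% relative tolerance are placeholders you would replace with your own.

```python
# Hypothetical validation-log check: compare legacy vs rebuilt dashboard
# outputs metric by metric, and flag anything outside a relative tolerance.
def reconcile(legacy: dict, rebuilt: dict, rel_tol: float = 0.001):
    issues = []
    for metric, old in legacy.items():
        new = rebuilt.get(metric)
        if new is None:
            issues.append((metric, "missing in rebuild"))
        elif abs(new - old) > rel_tol * abs(old):
            issues.append((metric, f"off by {new - old:+,.0f}"))
    return issues

# The $4.18M vs $4.2M example from the text: a small, real difference
# that a tolerance check catches and a glance at two dashboards won't.
legacy  = {"q4_revenue": 4_180_000, "q4_orders": 12_430}
rebuilt = {"q4_revenue": 4_200_000, "q4_orders": 12_430}
print(reconcile(legacy, rebuilt))  # q4_revenue fails the 0.1% tolerance
```

Run it per dashboard, file the output in the validation log, and the "whose numbers are right" debate becomes a ticket with a root cause instead of a meeting.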

Where warehouse-native architecture changes the phase: Three things shorten Phase 3 meaningfully if you pick a warehouse-native destination.

First, no extract rebuild. Legacy migrations to other legacy tools mean rebuilding extracts — Hyper files, QVD layers, tabular models — alongside the dashboards. Live-query architecture means you skip that work entirely; the warehouse is the data layer.

Second, the semantic layer accelerates over time. The first dashboard you rebuild takes the longest because you're defining the metrics in the warehouse for the first time. The tenth dashboard is faster because half its metrics already exist. The hundredth dashboard is dramatically faster because the semantic layer is now mature. This compounding effect is why teams on warehouse-native platforms often finish Phase 3 in the lower half of the range.

Third, validation is easier when both tools query the same warehouse. With a live-query target, you can run the legacy dashboard and the new dashboard against the same warehouse at the same moment, isolate differences to the BI layer rather than the data layer, and debug from there. PetScreening, which embeds Astrato in front of more than 24,000 property management firms, ran their migration this way against Snowflake — every customer's data lived in the warehouse, the BI layer queried it live, and the parallel build phase validated tenant by tenant. The cost outcome — 75% reduction versus their legacy embedded BI bill — is meaningful, but the structural point is that live-query made the validation tractable for a multi-tenant SaaS migration.

Customer evidence · Phase 3 anchor

PetScreening: 75% cost reduction in the parallel build

Multi-tenant SaaS embedded analytics across more than 24,000 property management firms. Live-query against Snowflake made tenant-by-tenant validation tractable.


75% cost reduction

Versus the legacy embedded BI bill. The headline number matters, but the structural point is that live-query validation made a multi-tenant migration practical at all.

“Providing our customers with Astrato’s self-service embedded dashboarding is a complete game-changer for our business. Astrato is helping us win new customers as a result, and we are on target to double the number of units this year.”

Beau Dobbs
Director of Business Intelligence & Operations, PetScreening
Read the full story →

Why it matters here: Phase 3's parallel build is the longest phase and the heaviest cost driver. Live-query architecture made validation a per-tenant exercise instead of a per-extract rebuild.

One specific decision to make in Phase 3: which net-new capabilities you build now versus defer to after sunset. Writeback, data apps, and AI features are tempting to build during the rebuild because the team is in the tool every day. The risk is scope creep — every "while we're at it" capability extends the timeline and dilutes the validation work. A defensible rule: in Phase 3, you build only the net-new capabilities that the business sponsor explicitly named in the original project scope. Everything else goes on a Phase 6 roadmap that starts after sunset. Our data products article is the right reference if you're scoping which net-new capabilities to prioritize after migration.

Phase 4 — User enablement (2–3 months, overlapping with Phase 3)

Phase 4 is the phase that decides whether the migration actually lands. The dashboards are rebuilt. The data is right. None of that matters if your business users won't open the new tool.

Phase 4 starts inside Phase 3 — somewhere around month three of the parallel build, when the first wave of critical dashboards is validated — and runs until the legacy tool is sunset. It overlaps with Phase 3 deliberately; you don't wait for the rebuild to finish before you start training, because by then your power users have already decided the new tool is a downgrade.

What you deliver:

  • BI developer training completed for everyone who'll build dashboards in the new tool
  • Business user training organized by function and seniority (the CFO doesn't need the same session your AP clerks need)
  • Business champions identified and equipped in each major function — the people other users will go to first
  • Documentation: how-tos for common workflows, video walkthroughs for the dashboards your users hit most, a feedback channel for fast-turnaround fixes
  • A new governance model documented — who owns which dashboards, who approves new ones, how change requests get triaged

Team you need: Migration lead (quarter time), BI developers (quarter to half time for training delivery and documentation), business champions (full day per week during their function's rollout), executive sponsor (occasional, for the kickoff and to break ties).

Where teams stall: Power user resistance. Your most experienced legacy-tool users have years of muscle memory. They know which workbook to open at 8 a.m. on Monday. They know the keyboard shortcut for refreshing the regional pivot. They have a folder of saved filters. The new tool is faster, cheaper, and more capable — and it's also unfamiliar, which costs them time they don't think they should be spending. The fix is to put them on the design side of the rebuild, not the receiving side. The power user who helps design the rebuilt revenue dashboard becomes its evangelist. The one who first sees it in a generic training session becomes its critic.

Where warehouse-native architecture changes the phase: Self-service capability changes the shape of training. When the tool's design lets non-technical users build their own analyses against governed data, you don't have to train every user on every dashboard — you train them on the patterns. Doctena, a European healthcare scheduling platform that runs across six countries, made this work explicitly. Their data analytics team sits under marketing, and they used the migration to push self-service down to the business.

Customer evidence · Phase 4 anchor

Doctena: from two-week cycles to fifteen minutes

European healthcare scheduling platform across six countries. The number isn’t a software claim — it’s a workflow change enabled by self-service plus the right training in Phase 4.


2 weeks → 15 minutes

The data analyst stopped being a bottleneck. Business teams open the tool and self-serve. That outcome is a Phase 4 outcome — it requires training, champions, and a working governance model.

“What used to take two weeks now happens in 15 minutes.”

Melanie Menkes
Chief Revenue Officer, Doctena
Read the full story →

Why it matters here: Phase 4 is the phase that decides whether the migration lands. Self-service plus champions plus governance equals the data team no longer being the bottleneck.

The two-week-to-15-minute number isn't a software claim. It's a workflow change: their data analyst no longer fields the dashboard request, builds it manually, sends it back, and iterates. The business team opens the tool and self-serves. That outcome is a Phase 4 outcome — it requires the training, the champions, and the governance model to land in place. The article on self-service BI on Snowflake covers the four-tier self-service spectrum if you're scoping which functions get self-service first.

The metric to watch through Phase 4 is the ratio of new-tool logins to legacy-tool logins by function. When it crosses 80/20 in a function, that function is ready for sunset. When it's stuck at 30/70 three months in, you have a Phase 4 problem you need to address before Phase 5.
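If your tools expose login events, the 80/20 readiness check is simple to automate. A sketch, with the event format and the 0.8 threshold as assumptions; your telemetry will look different, but the per-function ratio is the signal either way.

```python
# Hedged sketch: compute the new-tool vs legacy login ratio per function
# from a login event log, and flag functions ready for sunset (>= 80/20).
from collections import defaultdict

def sunset_readiness(events, threshold: float = 0.8):
    counts = defaultdict(lambda: {"new": 0, "legacy": 0})
    for function, tool in events:          # tool is "new" or "legacy"
        counts[function][tool] += 1
    readiness = {}
    for function, c in counts.items():
        total = c["new"] + c["legacy"]
        share = c["new"] / total if total else 0.0
        readiness[function] = (round(share, 2), share >= threshold)
    return readiness

# Finance has crossed 80/20 and is sunset-ready; sales is stuck at 30/70.
events = [("finance", "new")] * 85 + [("finance", "legacy")] * 15 \
       + [("sales", "new")] * 30 + [("sales", "legacy")] * 70
print(sunset_readiness(events))
```

A function stuck below the threshold three months in is the Phase 4 problem the text describes — the number tells you where to send the champions.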

Phase 5 — Sunset (1–3 months)

Phase 5 is the phase teams rush, and rushing it is the phase's signature mistake. The last 10% of the migration takes longer than it should. The dashboards you didn't rebuild in Phase 3 — the orphans, the long-tail reports, the ones that only run quarterly — all surface here. So does every license and infrastructure cost that's still running on the legacy side.

What you deliver:

  • Final disposition for every dashboard in the inventory — rebuilt and live, sunset with sign-off, or archived for retention
  • Hard cutover date for the legacy tool, communicated with at least 90 days of warning
  • Legacy licenses decommissioned and contracts wound down
  • Legacy infrastructure archived for whatever retention window your compliance framework requires
  • Final knowledge transfer — runbooks for the new tool's day-to-day operations, on-call rotation, vendor support process
  • Project retrospective — what worked, what to do differently, written down before everyone forgets

Team you need: Migration lead (half time), data engineer (quarter time for archive work), procurement and legal (quarter time each for license wind-down), executive sponsor (occasional, for the formal sunset decision).

Where teams stall: License overlap costs. Most legacy BI licensing is per-seat and renews annually. If your renewal date is October 1 and your sunset date is December 1, you've just paid for a year of licenses you'll use for two months. Some teams discover this in Phase 5. The ones that discover it in Phase 1 build their phase plan around the renewal date — either to align sunset with renewal or to negotiate a partial-year exit clause during the renewal conversation. The Snowflake spend article has the broader four-lever cost framework if you're modeling the warehouse-side spend changes alongside the BI license math.
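The renewal-date arithmetic is worth writing down explicitly when you build the phase plan. A sketch using the October/December example from above; the annual license figure is made up, and the model assumes a simple annual per-seat contract with no partial-year exit.

```python
# Illustrative overlap math: if the legacy renewal lands mid-migration,
# aligning sunset with the renewal date avoids paying for unused months.
from datetime import date

def overlap_cost(renewal: date, sunset: date, annual_license: float) -> float:
    """Cost of legacy license months paid for but unused after sunset.

    Assumes one annual renewal with no partial-year exit clause.
    Returns 0 if sunset lands on or before the renewal date.
    """
    if sunset <= renewal:
        return 0.0
    months_used = (sunset.year - renewal.year) * 12 + sunset.month - renewal.month
    wasted_months = 12 - months_used
    return annual_license * wasted_months / 12

# Renewal Oct 1, sunset Dec 1: two months used, ten months paid for nothing.
print(overlap_cost(date(2026, 10, 1), date(2026, 12, 1), 240_000))  # 200000.0
```

Running this in Phase 1, with the real renewal date and the real contract value, is what turns "align sunset with renewal" from advice into a line item.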

Where warehouse-native architecture changes the phase: Per-seat versus capacity-based pricing changes the license-overlap arithmetic. Per-seat models lock you in for a year of seats whether or not anyone logs in. Capacity-based models like Astrato's are sized to your usage envelope, which gives you more flexibility in the overlap window — you're paying for capacity you can actually use rather than seats that may sit dormant.

Don't skip the retrospective. Migration teams routinely lose institutional knowledge the moment the project closes — the team disbands, the migration lead moves to the next project, the documentation never gets finished. Six months later, when a new dashboard breaks for a reason the migration team would have recognized immediately, no one knows where to look. Spending two days on a retrospective and a runbook at the end of Phase 5 saves the next year of operational pain.

The team you actually need

Migration projects don't run on roles; they run on people with names. But you can scope the headcount from the role list. The realistic core team for a medium-scale migration:

Migration lead. Owns the project. Runs the steering committee, manages the timeline, resolves cross-team friction. Typically a senior BI manager, BI architect, or analytics director with prior migration experience. Active in all five phases. Realistic commitment: 0.5–0.75 FTE across the project. Below 0.5, the project drifts; above 0.75, the lead burns out.

BI developers. Build the dashboards, refactor the metrics, validate the outputs. The largest seat count on the team, and the staffing decision that drives total project hours. Active most heavily in Phase 3, present in Phase 1 (pilot dashboard), Phase 2 (technical inventory), and Phase 4 (training delivery and champion enablement). Realistic commitment: 1–3 BI developers at 1.0 FTE during Phase 3, scaling down outside it.

Data engineer. Refactors business logic from the BI tool down into the warehouse — dbt models, governed views, semantic layer definitions. Owns the warehouse-side of identity and access. Active in Phase 1 (warehouse configuration), Phase 3 (the heaviest workload, parallel with the BI developers), and Phase 5 (archive). Realistic commitment: 1–2 data engineers at 0.5–1.0 FTE depending on how much business logic moves down.

Business champion. Distributed across functions — one per major business unit (finance, sales, marketing, operations, customer success). Signs off on what's critical versus orphaned in Phase 2, validates dashboard rebuilds in Phase 3, leads adoption in their function in Phase 4. Not a full-time role; each champion gives 0.1–0.25 FTE. The migrations that under-resource champions land badly in Phase 4 every time.

Project manager. Runs the operational mechanics — Jira board, weekly status, dependency tracking, vendor relationship management on the BI vendor side. In smaller migrations, the migration lead covers this; in larger ones, you need a dedicated PM. Realistic commitment: 0.25–0.75 FTE depending on scale.

Executive sponsor. Usually the VP of Data, CTO, or CFO. Owns the budget. Breaks the ties the migration lead can't. Defends the project externally to the rest of the C-suite when timelines slip. Active mostly at Phase 1 kickoff, Phase 3 milestone reviews, and the Phase 5 sunset decision. Realistic commitment: 0.05–0.10 FTE — but the sponsor's visibility matters more than their hours.

Add it up and a medium-scale migration needs 3–5 core FTE during the heaviest phases, with another 2–3 FTE of distributed business champion time and occasional sponsor involvement. Small migrations run lean — 2–3 core FTE. Large migrations need 5–7 core FTE plus a dedicated PM. Anyone telling you a 200-dashboard migration is a side project for a two-person BI team is selling you optimism, not a plan.
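If you want the headcount math in your deck to be reproducible, the add-up is trivial to script. The midpoint FTE values below are my illustrative reading of the ranges above, not figures from any specific migration; business champions are excluded because their time is distributed, not core.

```python
# Hypothetical peak-phase headcount sketch using midpoints of the FTE
# ranges in the text. Adjust the values to your own scenario.
ROLES_PEAK_FTE = {
    "migration_lead": 0.6,    # 0.5-0.75 FTE
    "bi_developers": 2.0,     # 1-3 at 1.0 FTE during Phase 3
    "data_engineer": 1.0,     # 1-2 at 0.5-1.0 FTE
    "project_manager": 0.5,   # 0.25-0.75 FTE
    "exec_sponsor": 0.1,      # 0.05-0.10 FTE
}

core = sum(ROLES_PEAK_FTE.values())
print(f"core team at peak: {core:.1f} FTE")  # lands inside the 3-5 FTE range
```

Swapping in your own role commitments makes the "3 to 7 FTE" claim auditable rather than asserted.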

The five most common stall points

These are the five places migrations stall most reliably. Naming them in the project plan upfront — and budgeting for them — is the difference between a project that lands on time and one that doesn't.

1. Undocumented business logic in legacy dashboards. Years of accumulated calculated fields, custom SQL, scheduled scripts, and tribal knowledge that no one wrote down. The fix is to invest in inventory and semantic-layer refactoring before you start rebuilding. Treat the migration as a logic-extraction project first, a UI rebuild second.

2. License overlap costs during parallel run. Per-seat licenses don't pause while you migrate. If your legacy renewal lands in the middle of your migration, you'll pay for a year of overlap unless you've negotiated otherwise. Build your phase plan around the renewal date — and start the renewal conversation early so partial-year exits are on the table.
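The renewal-date arithmetic is worth running before you lay out the phase plan. A minimal sketch — all dates and dollar figures are hypothetical, and it assumes a simple full-year renewal with even monthly proration, which real contracts complicate:

```python
from datetime import date

def wasted_legacy_spend(renewal: date, cutover: date, annual_license: float) -> float:
    """Legacy license dollars paid but unused: assumes a full-year renewal
    with no partial-year exit negotiated -- the scenario this section warns about."""
    if cutover <= renewal:
        return 0.0  # you cut over before the renewal lands; nothing is wasted
    months_used = (cutover.year - renewal.year) * 12 + (cutover.month - renewal.month)
    months_wasted = max(0, 12 - months_used)
    return annual_license / 12 * months_wasted

# Hypothetical example: renewal lands in March, cutover in November,
# $120k/year legacy contract -> four months of paid-but-unused licenses.
print(f"${wasted_legacy_spend(date(2026, 3, 1), date(2026, 11, 1), 120_000):,.0f}")
# -> $40,000
```

Running this for a few candidate cutover dates makes the case for phasing the project around the renewal, rather than the other way round.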

3. Change management resistance from power users. Your most experienced legacy-tool users will resist the new tool by default. Pull them into the design phase of the rebuild — Phase 2 and Phase 3 — so they become evangelists rather than critics. Power users who help design the new dashboards adopt them faster than ones who only see the rebuilt version.

4. IT and security review delays for the new vendor. Your security team's vendor questionnaire process is slow by design. Start it as early as your procurement allows — ideally before the contract is fully signed — and budget eight to twelve weeks even when the vendor's SOC 2 and DPA are clean. Security review timing is the single biggest schedule risk in Phase 1.

5. Scope creep from "while we're at it." Every migration accumulates wish-list capabilities — net-new dashboards, AI features, writeback workflows, data apps. Build only what was in the original scope during Phase 3. Everything else goes on a Phase 6 roadmap. The migration's job is to land the new platform; expanding the platform's footprint comes after.

If your project plan addresses all five stall points explicitly — even with a single sentence each — you've already pulled ahead of most migrations.

What changes when you pick a warehouse-native platform

Most of this article is paradigm-neutral. The five-phase structure, the team composition, the stall points — all of that applies whether you're migrating to a warehouse-native platform like Astrato, a cloud-modernized incumbent like Power BI Service or Tableau Cloud, or a hybrid stack. But a few project-shape factors do change depending on which side of the cloud-native line you land. Worth naming them honestly.

The semantic layer moves down, not across. Legacy migrations to other legacy tools mean rebuilding the semantic layer inside the BI tool. Warehouse-native migrations mean moving the semantic layer into the warehouse — dbt models, governed views, metric definitions sitting in Snowflake, BigQuery, or Databricks. The work is similar; the destination is different. The compounding benefit is that subsequent migrations (or BI tool changes) get cheaper because the semantic layer outlives the BI tool. This shapes Phase 2 and Phase 3 specifically.

Live-query simplifies validation. Phase 3's validation work — matching new-tool output against legacy output — is easier when both tools query the same warehouse. Differences isolate to the BI layer rather than the data layer. Extract-based architectures put extracts between the BI tool and the warehouse, which means validation has to reconcile two different data snapshots before it can reconcile the dashboards. Live-query takes that step out.
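To make that concrete, here is a hedged sketch of the Phase 3 validation loop — the metric names and figures are invented for illustration. Because both tools query the same warehouse, any divergence the check flags is a BI-layer calculation difference, not a data difference:

```python
import math

def validate(legacy: dict[str, float], rebuilt: dict[str, float],
             rel_tol: float = 1e-6) -> list[str]:
    """Return the metric names whose values diverge beyond tolerance,
    including metrics present in one dashboard but missing from the other."""
    mismatches = []
    for metric in legacy.keys() | rebuilt.keys():
        a, b = legacy.get(metric), rebuilt.get(metric)
        if a is None or b is None or not math.isclose(a, b, rel_tol=rel_tol):
            mismatches.append(metric)
    return sorted(mismatches)

# Hypothetical KPI exports from the legacy and rebuilt versions of one dashboard.
legacy_kpis  = {"revenue": 1_204_331.50, "orders": 8_912, "aov": 135.14}
rebuilt_kpis = {"revenue": 1_204_331.50, "orders": 8_912, "aov": 135.17}
print(validate(legacy_kpis, rebuilt_kpis))  # -> ['aov'] -- only the BI-layer calc differs
```

With extract-based architectures, a mismatch like this could also mean the two tools were looking at different data snapshots — which is exactly the reconciliation step live-query removes.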

Writeback unlocks net-new capabilities the legacy tool couldn't support. Legacy BI was read-only by design. The decisions your dashboards surfaced — a budget adjustment, an approval, a forecast revision — all moved into a spreadsheet or an email to actually act on. Warehouse-native platforms with native writeback (Astrato's data apps and workflows being one example) let you bring those decisions back inside the governed platform during the migration. That's a Phase 3 scoping decision: which spreadsheet workflows you retire as part of the migration versus which you defer to a Phase 6 roadmap.

Capacity-based pricing changes the overlap math. Per-seat licensing locks you in for the seats whether or not you use them. Capacity-based pricing sizes to your usage envelope, which gives you more flexibility in Phase 5's license-overlap window. This isn't a universal advantage — for some workloads per-seat is cheaper — but it changes the arithmetic, and you should run the numbers both ways before committing to a destination.
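Running the numbers both ways is a ten-line exercise. A sketch with placeholder prices — substitute your actual quotes, and note that real contracts add tiers, minimums, and usage overages this ignores:

```python
# Back-of-envelope comparison of per-seat vs. capacity pricing.
# All prices here are hypothetical placeholders, not vendor quotes.

def annual_per_seat(seats: int, price_per_seat_month: float) -> float:
    return seats * price_per_seat_month * 12

def annual_capacity(monthly_capacity_fee: float) -> float:
    return monthly_capacity_fee * 12

seats = 250
per_seat = annual_per_seat(seats, price_per_seat_month=40.0)  # $40/seat/month
capacity = annual_capacity(monthly_capacity_fee=8_000.0)      # flat capacity tier
print(f"per-seat: ${per_seat:,.0f}  capacity: ${capacity:,.0f}")
# -> per-seat: $120,000  capacity: $96,000

# The crossover point: below this seat count, per-seat wins on raw price.
breakeven_seats = 8_000.0 / 40.0
print(f"break-even at {breakeven_seats:.0f} seats")  # -> break-even at 200 seats
```

The overlap-window advantage is that the capacity figure can be sized down during Phase 5 while seats are still contractually locked on the legacy side.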

AI features inherit the warehouse's AI infrastructure. If you pick a warehouse-native platform that supports in-warehouse LLMs (Snowflake Cortex, BigQuery Gemini, Databricks Mosaic AI) or multi-LLM routing, the AI capabilities you build in Phase 3 inherit your warehouse's data residency, audit logging, and access controls. If you pick a platform that calls external LLMs without that integration, the AI architecture is a separate compliance review on top of the BI vendor review. This matters more for compliance-heavy environments — the AI chat analytics article covers the four failure modes worth knowing about — but it affects every migration where AI features are part of the scope.

None of these factors make warehouse-native the universal answer. Some migrations land at Power BI in commercial cloud because the Microsoft 365 estate makes it effectively free. Some land at Tableau Cloud because the existing investment is too deep to abandon. Some land at Looker because the BigQuery integration is too tight to ignore. The point is that the project shape changes in known, predictable ways depending on the destination — and the changes show up most visibly in Phase 1's identity work, Phase 3's parallel build, and Phase 5's license arithmetic.

A realistic timeline rubric

How long will your BI migration take? Realistic ranges by dashboard count and complexity — pick the scenario that matches your environment and take it to your CFO.

Small (mid-market): < 50 dashboards, single warehouse, one business unit, no embedded analytics.
Timeline: 4–6 months · Core team: 3–4 FTE during Phases 3 & 4 · Approx. total hours: ~2,500
What's true here: a focused team can run this lean. The migration lead doubles as PM. Skip nothing in Phase 2, even at this scale.

Medium (most common): 50–200 dashboards, multi-source, 3–5 business units, light compliance overhead.
Timeline: 8–12 months · Core team: 5–6 FTE, dedicated PM recommended · Approx. total hours: ~7,500
What's true here: the most common shape, and the one where vendor "90-day" claims fail. Plan the renewal date and the parallel run carefully.

Large (enterprise): 200+ dashboards, complex integrations, multiple business units, embedded analytics, regulated industry.
Timeline: 12–18 months · Core team: 6–7 FTE plus a dedicated PM · Approx. total hours: ~14,000
What's true here: security review and license overlap dominate the schedule risk. Inventory triage typically removes 30–50% of the rebuild scope.

The core-team figures are FTE during the heaviest phases (mostly Phase 3 and Phase 4) and include the migration lead, BI developers, data engineer, and project manager. Business champions, executive sponsor time, and security review time sit on top of these numbers as distributed effort. Total hours include the long tail of Phases 1, 2, and 5 but not the post-sunset Phase 6 roadmap work.

FAQ

How long does BI migration take?

Four to eighteen months for most migrations, depending on dashboard count, integration complexity, and team experience. Small mid-market migrations (under 50 dashboards) run 4–6 months; mid-enterprise migrations (50–200 dashboards) run 8–12 months; large enterprise migrations (200+ dashboards) run 12–18 months. Very large enterprises with thousands of dashboards across multiple business units regularly run 18–30 months. Vendors who quote 90-day timelines are almost always describing a narrow case — small dashboard count, single warehouse, no embedded analytics, no row-level security complexity — that may not match your situation.

How many people do we need on a BI migration team?

Three to seven core FTE during the heaviest phases (Phase 3 parallel build and Phase 4 user enablement). Six named roles: migration lead, BI developers (1–3), data engineer (1–2), business champion (one per major business unit, distributed time), project manager, executive sponsor. Small migrations run on the lower end; large migrations need a dedicated project manager and more BI developers. Underestimating the team — particularly the business champions — is the most common cause of Phase 4 stalls.

When can we decommission the legacy tool?

When the new-tool-to-legacy-tool login ratio crosses 80/20 in every function, when every critical and important dashboard has been rebuilt and validated, and when the orphaned dashboards have signed-off sunset decisions. Plan at least 90 days of warning between the announcement and the hard cutover date. Archive the legacy environment for whatever retention window your compliance framework requires; budget the storage cost separately from the license decommission savings.

What's the most common reason BI migrations fail?

Skipping the inventory and triage phase. Teams that go straight from vendor decision to dashboard rebuild discover hidden business logic in Phase 3, watch their timeline slip, and lose leadership confidence. Treat Phase 2 as a logic-extraction project — document every calculated field, every custom SQL block, every scheduled script — before you start rebuilding. The migrations that come in on time are the ones that spent four to six weeks on inventory and found the surprises early.

Should we hire a migration consultant?

Maybe. Consultants help most in two cases: the team has never run a migration of this scale before, or the dashboard count and complexity exceed what the internal team can absorb on top of their day jobs. Consultants help least when the work is straightforward and the team has prior migration experience — in those cases, the consultant overhead slows the project down. A defensible compromise: use a consultant for Phase 2 (inventory and triage) where outside experience accelerates the work, and run Phases 3 through 5 internally where institutional knowledge matters most.

Ready to experience next-gen analytics?

See how Astrato runs natively in your warehouse.