Google Deindexed My Pages After Migration: How to Get Them Back
You migrated your site. Traffic was holding. Then — somewhere between week two and week four — the Coverage report in Google Search Console started showing a declining Valid page count. Or you searched for a page you know should be indexed and it was gone. Or you checked your organic traffic and something was clearly wrong.
Pages disappearing from Google after a migration is one of the most stressful SEO problems to deal with. The organic traffic impact is immediate and compounding. Every day a page stays deindexed is a day it earns zero organic traffic from the rankings it previously held.
The good news: deindexing after migration is almost always caused by one of four identifiable problems. Once you identify the root cause, recovery is systematic — not a mystery. This post walks through each of the four causes, how to diagnose which one you are dealing with, and how to execute a priority-based reindexing recovery plan to get your pages back into search.
Quick Checklist
- Check robots.txt for accidental Disallow rules carried from staging
- Search for noindex meta tags in your production HTML (view source, not just inspect)
- Use Search Console's URL Inspection Tool on your top 20 pages
- Verify all old URLs redirect to new pages (not to the homepage or 404)
- Check for thin content: are rebuilt pages substantially similar to the originals?
- Submit individual URLs for reindexing via Search Console (priority pages first)
- Monitor Coverage report daily until Valid page count recovers
Why Pages Get Deindexed After a Migration (It's Almost Always One of These 4 Things)
When Google removes pages from its index, it is responding to something it found — or did not find — when it crawled. Unlike a manual penalty (which requires a deliberate violation of search guidelines), post-migration deindexing is almost always an unintentional configuration error. The site told Google not to index those pages, or Google could not reach them, or the content Google found did not meet its threshold for indexing.
Before you can recover, you need to know which of these four root causes applies to your situation:
1. Accidental noindex tags from staging — noindex meta tags or robots.txt Disallow rules that were appropriate for the staging environment were carried into production without being removed.
2. Redirect gaps — old URLs that previously ranked no longer redirect to their new equivalents. Google arrives at the old URL, finds a 404, and removes the page from the index. The new URL equivalent, without redirects or internal links pointing to it, takes time to be discovered and indexed.
3. Thin content signals on rebuilt pages — the new frontend renders pages that are technically live but contain substantially less content than their WordPress originals. Google crawls them, evaluates them as thin or low-quality, and excludes them from the index.
4. Canonical tag misconfiguration — pages on the new frontend have canonical tags pointing to the wrong URL (often the WordPress backend URL or a staging domain), signaling to Google that another URL is the preferred version. Google indexes the canonical target instead of the page it crawled.
Each of these requires a different fix. Diagnosing which one you have is the first step.
The Accidental Noindex Problem (Staging Robots Carried to Production)
This is the most common root cause of mass deindexing after a migration — and the most avoidable. During development and testing, the staging environment should block crawlers. A Disallow: / in robots.txt or a <meta name="robots" content="noindex"> tag in the site template keeps Googlebot out of the staging site. Standard practice, correctly applied.
The error happens when the migration goes live and those staging configurations are not removed. The production site inherits the staging robots.txt or the noindex meta tags. Googlebot crawls the live site, reads "noindex," and removes the affected pages from the index over the following days. By the time someone notices the Coverage report drop, Google may have already processed and acted on the noindex signal across the entire site.
How to diagnose this:
First, view source (not browser inspect — view source shows the raw HTML, not the DOM after JavaScript execution) on a page you suspect is deindexed. Look for this in the <head>:
<meta name="robots" content="noindex">
Or any variant: "noindex, nofollow", "noindex, follow", or "none" (shorthand for "noindex, nofollow"). Any of these blocks indexing.
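For more than a handful of pages, this check is easy to script. The following is a minimal sketch using only the Python standard library: it parses raw HTML (the same thing view source shows, before any JavaScript runs) and flags any robots meta directive that blocks indexing. The example.com URL in the usage comment is a placeholder.

```python
from html.parser import HTMLParser

class RobotsMetaScanner(HTMLParser):
    """Flag a page whose <meta name="robots"> directives block indexing."""
    BLOCKING = {"noindex", "none"}  # "none" is shorthand for noindex, nofollow

    def __init__(self):
        super().__init__()
        self.blocked = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        if a.get("name", "").lower() == "robots":
            directives = {d.strip().lower() for d in a.get("content", "").split(",")}
            if directives & self.BLOCKING:
                self.blocked = True

def page_has_noindex(html: str) -> bool:
    scanner = RobotsMetaScanner()
    scanner.feed(html)
    return scanner.blocked

# Usage sketch (requires network; example.com is a placeholder):
# import urllib.request
# html = urllib.request.urlopen("https://example.com/some-page/").read().decode()
# print(page_has_noindex(html))
```

Run this against every URL in your sitemap and you have the site-wide noindex audit in a few minutes.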
Second, check your robots.txt directly by visiting yourdomain.com/robots.txt in a browser. Look for Disallow: / or Disallow: /*. A blanket disallow rule blocks Googlebot from crawling any page.
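The same check can be automated with the standard library's robots.txt parser. This sketch takes the robots.txt body as a string so the logic is testable offline; fetching the live file is left as a commented usage note, and example.com is a placeholder for your production domain.

```python
import urllib.robotparser

def crawlable(robots_txt: str, paths, agent="Googlebot"):
    """Parse a robots.txt body and report which paths the agent may crawl."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {path: rp.can_fetch(agent, path) for path in paths}

# A staging file accidentally carried to production typically looks like this:
staging_rules = "User-agent: *\nDisallow: /\n"
print(crawlable(staging_rules, ["/", "/blog/some-post/"]))

# Usage sketch (requires network):
# import urllib.request
# body = urllib.request.urlopen("https://example.com/robots.txt").read().decode()
# print(crawlable(body, ["/", "/blog/some-post/"]))
```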
Third, use the URL Inspection Tool in Search Console on a sample of your deindexed pages. The inspection results will show if the page is excluded due to a noindex tag or robots.txt rule.
How to fix it:
Remove the noindex tags from the production site template immediately. Update the robots.txt to allow crawling. Then submit your sitemap and use the URL Inspection Tool's "Request Indexing" feature on your highest-priority pages to accelerate recrawling.
Redirect Gaps: When Google Can't Find Your New Pages
A redirect gap is a URL that existed in the old site structure and ranked in Google — but in the new site, that URL either 404s or redirects somewhere unhelpful (like the homepage). The ranking equity built up in that specific URL cannot transfer to the new URL because there is no redirect for Google to follow.
Redirect gaps often appear because the redirect map used during migration was incomplete. Common causes: old blog post URLs that used a different slug structure than the new site, paginated archive pages that are no longer part of the new architecture, tag or category pages that were removed or restructured, and any URL that was in Google's index but was not included in the migration's redirect planning.
How to diagnose this:
Export a complete list of URLs currently in Google's index from the Coverage report. Run a crawl of these URLs against the new site (using Screaming Frog or a similar tool) to check the HTTP response for each one. Any URL returning 404 that was previously in the index is a redirect gap.
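If you do not have a crawler handy, a stdlib-only script covers the same check. This is a sketch, not a full crawler: classify() is the testable part that buckets each status code, and status_of() makes the request with redirects disabled so each URL reports its own status rather than its destination's. The old_urls.txt file in the usage comment is a hypothetical one-URL-per-line export.

```python
import urllib.request
import urllib.error

def classify(code: int) -> str:
    """Bucket the HTTP status an old URL serves into a migration verdict."""
    if code == 404:
        return "redirect gap"
    if code in (301, 308):
        return "permanently redirected (good)"
    if code in (302, 307):
        return "temporary redirect (should be a 301)"
    if code == 200:
        return "still live at the old URL (check canonicals)"
    return f"status {code}: review manually"

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from following redirects so we see the raw status code."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def status_of(url: str) -> int:
    opener = urllib.request.build_opener(NoRedirect)
    try:
        req = urllib.request.Request(url, method="HEAD")
        return opener.open(req, timeout=10).status
    except urllib.error.HTTPError as e:
        return e.code

# Usage sketch (requires network):
# for line in open("old_urls.txt"):
#     url = line.strip()
#     print(url, "->", classify(status_of(url)))
```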
Also check the Coverage report's "Not Found (404)" section directly. URLs that Google tried to crawl and found missing will appear here. These are your redirect gaps.
How to fix it:
For each gap, identify the correct new URL equivalent and implement a 301 redirect from the old URL to the new one. Redirects should be single-hop — old URL goes directly to the new URL, not through an intermediate page. Redirect chains (A to B to C) reduce the equity transferred and slow down Googlebot's processing.
After implementing redirects, submit the affected URLs through the URL Inspection Tool to prompt recrawling. It may take several weeks for Google to process the redirects and fully transfer indexing status to the new URLs.
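The single-hop requirement can also be verified programmatically. The sketch below keeps the hop-tracing logic separate from networking: fetch is supplied by the caller and must return (status_code, location_or_None), so the example runs against a stubbed site. The domain names in the stub are hypothetical.

```python
REDIRECT_CODES = {301, 302, 307, 308}

def trace_redirects(fetch, url, max_hops=10):
    """Follow redirects hop by hop via a caller-supplied fetch function.

    fetch(url) must return (status_code, location_or_None). Returns the
    list of (url, status) hops; two or more redirect statuses in the
    result mean a chain (A to B to C) that should collapse to one 301.
    """
    hops = []
    for _ in range(max_hops):
        status, location = fetch(url)
        hops.append((url, status))
        if status in REDIRECT_CODES and location:
            url = location
        else:
            return hops
    return hops  # gave up after max_hops; almost certainly a redirect loop

# Stubbed example simulating a two-hop chain on a hypothetical site:
site = {
    "https://old.example.com/post": (301, "https://www.example.com/post"),
    "https://www.example.com/post": (301, "https://www.example.com/blog/post"),
    "https://www.example.com/blog/post": (200, None),
}
hops = trace_redirects(site.get, "https://old.example.com/post")
chain_length = sum(1 for _, status in hops if status in REDIRECT_CODES)
print(hops, "->", chain_length, "redirect hops")
```

Any old URL whose trace shows more than one redirect hop should be re-pointed directly at its final destination.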
Thin Content Signals on Rebuilt Pages
A migrated page that is technically live and indexed can still lose its ranking if the rebuilt version contains substantially less content than the original. This happens more than teams expect — particularly when WordPress posts relied on content injected by plugins, dynamic sidebars, or template-level content that was not part of the main post body.
When the page is rebuilt in Next.js, the developer replicates the visual template but the CMS content migration pulls only the main post content. Plugin-injected content, FAQ sections, comparison tables, and callout blocks that were part of the rendered WordPress output are left out. Google crawls the new page and finds a page with half the content it previously had. The evaluation drops, and the page may be moved from "Valid" to "Crawled — currently not indexed."
How to diagnose this:
Compare the page word count and content structure between the old WordPress page (via a cached version or the Wayback Machine) and the new Next.js page. Look for: missing sections, missing headings, missing FAQ content, missing comparison data, or any content that was part of the page's value but is not present in the rebuilt version.
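Word counts are crude but effective for flagging candidates for manual review. This is a stdlib-only sketch: it extracts visible text (skipping script and style blocks) and counts words. The Wayback Machine URL pattern and the 70% threshold in the usage comment are illustrative assumptions, not fixed rules.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Count visible words in HTML, skipping script and style content."""
    def __init__(self):
        super().__init__()
        self.words = 0
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.words += len(data.split())

def word_count(html: str) -> int:
    extractor = TextExtractor()
    extractor.feed(html)
    return extractor.words

# Usage sketch (requires network; URLs and threshold are illustrative):
# old = word_count(fetch("https://web.archive.org/web/2023/https://example.com/post/"))
# new = word_count(fetch("https://example.com/post/"))
# if new < old * 0.7:
#     print("possible thin content: review this page manually")
```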
Also check the Coverage report for pages classified as "Crawled — currently not indexed." This is Google's signal for a page it can reach but chooses not to index — often a thin content indicator.
How to fix it:
Restore the missing content. Pull the complete original content from the WordPress export and ensure all substantive sections are present in the new CMS and rendered correctly in the Next.js frontend. For pages that were heavy on plugin-generated content, this may require recreating that content in the headless CMS as structured content blocks.
After restoring content, request reindexing via the URL Inspection Tool. Thin content recoveries take longer than technical configuration fixes — Google may need 2–4 weeks to recrawl, re-evaluate, and restore indexing.
How to Recover: The Priority-Based Reindexing Plan
Once you have identified the root cause or causes, the recovery plan follows a priority sequence. Not all deindexed pages are equal in value. Recovering your highest-traffic pages first limits the organic traffic impact while you work through the full recovery.
Step 1: Triage by traffic value. Pull your pre-migration organic traffic data (from Analytics or Search Console's Performance report). Sort your deindexed pages by their historical organic traffic. The top 20% of pages by traffic should be your priority queue.
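The Step 1 triage is a simple sort once the data is exported. A minimal sketch, assuming a hypothetical pages.csv export from the Performance report with url and clicks columns:

```python
import csv

def priority_queue(rows, top_fraction=0.2):
    """Sort (url, clicks) pairs by historical clicks and return the top slice."""
    ranked = sorted(rows, key=lambda row: row[1], reverse=True)
    cutoff = max(1, round(len(ranked) * top_fraction))
    return ranked[:cutoff]

# Usage sketch: pages.csv is a hypothetical Search Console export.
# with open("pages.csv") as f:
#     rows = [(r["url"], int(r["clicks"])) for r in csv.DictReader(f)]
#     for url, clicks in priority_queue(rows):
#         print(url, clicks)
```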
Step 2: Fix root causes for priority pages first. Apply the correct fix (remove noindex, implement redirect, restore content, correct canonical) to your priority pages before working through the full list. This concentrates your recovery effort where it has the highest impact on organic traffic restoration.
Step 3: Use URL Inspection Tool for priority reindexing requests. After fixing each priority page, use the URL Inspection Tool's "Request Indexing" function. Google Search Console allows a limited number of reindexing requests per day — use them on your highest-traffic pages first.
Step 4: Submit an updated sitemap. After fixes are in place across the full URL list, submit an updated XML sitemap. The sitemap should include all pages that should be indexed at their correct new URLs. This gives Googlebot a complete signal of what the site structure should look like.
Step 5: Monitor the Coverage report daily. Recovery is not instantaneous. Google recrawls pages based on its own schedule, which can range from days to weeks. Watch the Valid page count in the Coverage report for upward movement. Pages should begin recovering within 1–3 weeks of fixes being implemented, assuming the root cause has been addressed correctly.
Using the URL Inspection API for Bulk Validation
For sites with large numbers of deindexed pages, the URL Inspection Tool in Search Console is too slow for page-by-page manual checking. The Google URL Inspection API provides programmatic access to the same data — allowing you to query the indexing status of hundreds of URLs automatically.
The API returns the same information as the Search Console UI: whether the page is indexed, the canonical URL Google has selected, the last crawl date, and any indexing issues. By running the full list of deindexed URLs through the API, you can categorize them by issue type at scale — identifying which pages have noindex tags, which have redirect issues, and which are excluded for other reasons.
This is particularly useful for large migrations where the deindexed page count is in the hundreds. Manual URL inspection of 300 pages is not practical. API-based validation of 300 pages takes minutes and produces a structured output you can use to prioritize and assign the recovery work.
The API requires Search Console property ownership and authentication. For teams without technical staff to implement the API integration, a structured manual triage of the top 50 priority pages combined with a full crawl analysis covers most of the recovery work effectively.
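A sketch of what the bulk triage can look like. The API call itself (via google-api-python-client) needs OAuth credentials and property ownership, so it is shown as a commented usage note; the field and enum names follow the published URL Inspection API schema but should be verified against the current documentation. categorize() is the pure part that buckets each result by likely root cause.

```python
def categorize(index_status: dict) -> str:
    """Bucket one indexStatusResult payload into a recovery category.

    Field names (robotsTxtState, indexingState, googleCanonical, ...) are
    taken from the Search Console URL Inspection API response schema;
    confirm them against the current API reference before relying on this.
    """
    if index_status.get("robotsTxtState") == "DISALLOWED":
        return "blocked by robots.txt"
    if index_status.get("indexingState") == "BLOCKED_BY_META_TAG":
        return "noindex meta tag"
    google = index_status.get("googleCanonical")
    user = index_status.get("userCanonical")
    if google and user and google != user:
        return "canonical mismatch"
    if index_status.get("coverageState") == "Crawled - currently not indexed":
        return "possible thin content"
    return "other / review manually"

# Hypothetical call sketch (pip install google-api-python-client):
# from googleapiclient.discovery import build
# service = build("searchconsole", "v1", credentials=creds)
# resp = service.urlInspection().index().inspect(
#     body={"inspectionUrl": url, "siteUrl": "https://example.com/"}
# ).execute()
# print(url, categorize(resp["inspectionResult"]["indexStatusResult"]))
```

Running every deindexed URL through this loop produces the issue-type breakdown described above, which then maps directly onto the priority-based recovery plan.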
What a Reindexing Recovery Timeline Looks Like
Recovery is not linear, and it is not fast. Here is a realistic expectation for each root cause:
Accidental noindex recovery: Fastest. Once the noindex tags are removed and reindexing is requested, priority pages typically recover within 1–2 weeks. Googlebot is ready to index them — it just needed the noindex removed.
Redirect gap recovery: Moderate speed. Once redirects are implemented, Google follows them on the next crawl cycle. The old URL exits the index. The new URL, if it has good content and proper internal links pointing to it, typically enters the index within 2–4 weeks of the redirect being in place.
Thin content recovery: Slowest. Content quality signals accumulate over time. After content is restored, Google needs to recrawl, re-evaluate, and determine that the page now meets its quality threshold. Expect 3–6 weeks for meaningful recovery, and longer for competitive queries.
Canonical tag recovery: Variable. Once corrected canonical tags are deployed, Google needs to recrawl both the incorrectly canonicalized page and the correct page. If Google has been indexing the canonical target (the wrong URL) for several weeks, it may take additional time to update its preference to the correct URL.
The SEO-safe migration checklist covers how to avoid these issues during the migration — but if you are already post-launch and dealing with deindexing, the recovery framework above is the applicable path forward.
For situations where traffic dropped more broadly and deindexing is just one potential factor, the traffic recovery plan after redesign covers the full diagnostic picture.
Preventing Deindexing in Future Migrations
The four root causes described in this post are all preventable with a structured pre-launch validation process. Before any migration goes live:
Verify robots.txt on the production domain allows crawling. Verify that no noindex meta tags exist in the production HTML (a site-wide crawl searching for noindex in <head> takes minutes and catches this before it becomes a problem).
Validate every redirect in the redirect map. A redirect is not implemented until it is tested. Each old URL should return a 301 with the correct destination URL — not a 302, not a redirect chain, not a 404.
Compare content between old pages and new pages for your top 20 pages by traffic. If the new page has significantly fewer words, fewer headings, or is missing sections that were present in the old version, fix it before launch.
Validate canonical tags on every page template. Check the canonical URL output for at least one page of each type: homepage, category page, post page, and any custom templates. Confirm every canonical points to the correct frontend URL.
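Canonical validation is also scriptable per template. A stdlib-only sketch that pulls the canonical href from rendered HTML; the production-domain check in the usage comment assumes example.com as a placeholder.

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Capture the href of the first <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link" and self.canonical is None:
            a = dict(attrs)
            if a.get("rel", "").lower() == "canonical":
                self.canonical = a.get("href")

def canonical_of(html: str):
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical

# Usage sketch: fetch one rendered page per template type and confirm the
# canonical points at the production frontend, not staging or the CMS:
# assert canonical_of(html).startswith("https://example.com/")
```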
This pre-launch validation is exactly what the SEO Parity Audit covers — and why it exists as the starting point for every migration engagement.
FAQ
How quickly does Google deindex pages after a migration?
The timeline varies based on how frequently Google was crawling the site before migration and how quickly it recrawls after. High-authority, frequently crawled pages may be deindexed within days of a misconfiguration going live. Lower-authority pages may take 2–4 weeks to be removed from the index. This is why monitoring the Coverage report in the first 7 days of launch is critical — the early signals often precede large-scale deindexing by 1–2 weeks.
Can deindexed pages recover their original rankings?
In most cases, yes — with caveats. Pages deindexed due to technical configuration errors (noindex, robots.txt, canonical misconfiguration) typically recover their original rankings once the technical issue is fixed and Google reprocesses the page. Pages deindexed due to thin content may not fully recover their original rankings if the content quality on the rebuilt page is permanently lower than the original. And during the recovery period — which can take weeks — competitors may have moved up in the rankings for those queries, making full recovery to the original position less certain.
What's the fastest way to get deindexed pages back into Google?
Fix the root cause first — that is non-negotiable. Then use the URL Inspection Tool's "Request Indexing" function on your highest-priority pages. Make sure those pages are included in your XML sitemap. Improve internal linking to the deindexed pages from high-authority pages on the site. And make sure the content on those pages is complete and substantive — thin pages will be re-excluded even if the technical issue is resolved.
Does submitting a page to Search Console guarantee reindexing?
No. "Request Indexing" in Search Console signals to Google that you want the page recrawled — it does not guarantee that Google will index it. Google still evaluates the page's content, canonical tags, and overall quality before deciding to index it. If the root cause of deindexing has not been fixed, requesting indexing will result in Google recrawling the page, finding the same issue, and continuing to exclude it.
How do I know if my pages were deindexed vs just dropped in rankings?
Check the Coverage report in Google Search Console for the specific URLs in question. If the pages appear under "Excluded" with a specific reason (noindex, canonical, not found, etc.), they have been deindexed — Google is aware of them but has chosen not to index them. If the pages appear under "Valid" in the Coverage report but your rankings for those pages have dropped, that is a ranking decline rather than deindexing. The recovery approach differs significantly between the two.
Next Steps
Deindexing after a migration is recoverable. The root causes are known. The fix sequence is systematic. But the recovery window matters — every day a high-value page stays deindexed is a day of lost organic traffic that you are not getting back. The faster you identify the root cause, the faster you can execute the fix and begin the recovery.
If you are dealing with post-migration deindexing now and you need a clear diagnosis and fix list quickly, the Emergency Audit delivers exactly that in 72 hours.