Redirect Maps for Site Migrations: 301 vs 308, Chains, QA, and What Google Actually Expects
A site migration without a complete redirect map is one of the most reliable ways to destroy years of accumulated SEO authority in a single afternoon. Redirects are the mechanism by which Google understands that your content has permanently moved — that the signals, links, and trust built around your old URLs should be transferred to your new ones. Get this right and your migration is nearly invisible to search engines. Get it wrong and you are looking at months of traffic recovery work.
The redirect map is not a nice-to-have. It is the single most important SEO deliverable in any migration project. This guide covers everything you need to build one correctly: the difference between redirect types, the step-by-step process for building a comprehensive map, the redirect chain problem and how to prevent it, QA validation before and after launch, and how long to keep redirects active.
This is the operational detail that the full migration checklist summarizes — if you are running a WordPress to Next.js migration and want a complete reference, start there and use this post as the deep-dive companion for the redirect work.
Quick Checklist
- Export all URLs from your current site using a crawler (Screaming Frog, Sitebulb)
- Map every URL to its exact new destination — 1:1, no wildcard assumptions
- Use 301 (permanent) redirects for all standard migrations
- Eliminate all redirect chains — every redirect must point to the final URL
- Handle trailing slashes consistently (pick one convention, redirect the other)
- Test every redirect rule in staging before production launch
- Validate with a post-launch crawl — confirm zero 404s on previously indexed URLs
- Keep redirects active indefinitely (Google recommends minimum 1 year)
Why Redirects Are the #1 Make-or-Break Factor in Any Migration
When Google indexes a URL, it builds a record of everything associated with that URL: the content it contains, the links pointing to it from other sites, the engagement signals it has accumulated, and the topical authority it contributes to the site as a whole. This is not attached to the content itself — it is attached to the URL as an identifier.
When you change a URL without a redirect, Google sees two things: an old URL that has stopped returning content (a 404), and a new URL that appears to be a brand-new page with no history. The old URL's authority does not automatically transfer to the new one. You have, in effect, deleted the old page from Google's perspective and published a new one.
A properly implemented 301 redirect signals to Google: "This URL has permanently moved. Please transfer all signals — PageRank, indexing priority, and external link equity — to the new destination." According to Google's guidance on redirects, this transfer is not instant, but it does occur over subsequent crawl cycles, typically within a few weeks to a couple of months.
For a site with hundreds or thousands of URLs, the cumulative SEO value at stake is enormous. Even a 10–15% gap in redirect coverage — URLs that were previously indexed but now return 404 — can produce measurable traffic losses within 30 to 60 days of launch.
301 vs 308 — Which to Use and When
The HTTP specification defines two permanent redirect codes: 301 (Moved Permanently) and 308 (Permanent Redirect). The practical difference matters primarily for how browsers handle the HTTP method of the original request.
A 301 redirect allows the browser to change the HTTP method — a POST request to the old URL may be reissued as a GET request to the new URL. A 308 redirect preserves the original HTTP method, meaning a POST remains a POST after the redirect.
For SEO purposes, both 301 and 308 signal permanent moves, and Google treats both as equivalent from a PageRank-passing standpoint. The choice between them should be made based on your site's technical requirements, not SEO considerations.
In practice, for nearly all content migrations — blog posts, landing pages, product pages, category pages — use 301. The HTTP method consideration is irrelevant for standard GET requests to content pages, and 301 has universal support across servers, CDNs, and crawlers. Use 308 only if you have a specific technical requirement to preserve POST method behavior across a redirect.
One redirect code to avoid in a migration context is 302 (Found, a temporary redirect). A 302 tells Google the move is temporary and that the old URL should stay indexed. Google may eventually treat a long-lived 302 as permanent, but you are leaving signal consolidation to chance and delaying it in the meantime. If you find 302 redirects in your configuration for URLs that should be permanently moved, convert them to 301 immediately.
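Before deploying, it is worth scanning the redirect map for exactly this mistake. A minimal sketch, assuming the map lives in a CSV with hypothetical `source,destination,status` columns (adjust the column names to your own file):

```python
import csv
import io

def find_temporary_redirects(csv_text):
    """Return (source, status) pairs for every rule that is not a 301."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        status = row["status"].strip()
        if status != "301":
            flagged.append((row["source"], status))
    return flagged

# Hypothetical redirect map for illustration.
redirect_map = """source,destination,status
/old-post/,/blog/old-post/,301
/summer-sale/,/promotions/,302
/about-us/,/about/,301
"""

print(find_temporary_redirects(redirect_map))  # [('/summer-sale/', '302')]
```

Any rule the scan flags should either be converted to 301 or documented as an intentional temporary redirect.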
How to Build a Redirect Map (Step-by-Step)
A redirect map is a structured document — usually a spreadsheet — that records every old URL and its corresponding new destination URL, along with the redirect type and any QA status information. Building it correctly requires three distinct phases.
Export Your Full URL Inventory
Start by crawling your existing site with a tool like Screaming Frog or Sitebulb. Configure the crawl to follow all internal links and export the full list of URLs the crawler finds. This becomes your source of truth for what exists on the current site.
Do not rely on your XML sitemap alone. Sitemaps are frequently incomplete — they miss paginated pages, tag archives, parameter-based URLs, and older content that may have been removed from the sitemap but is still indexed by Google. Your crawler will find pages that your sitemap omits.
After the crawler export, cross-reference it against two additional sources: Google Search Console's Coverage report (which shows all URLs Google has indexed, including ones not in your sitemap) and your server access logs (which show URLs actually being requested, including URLs that may not be linked internally anymore but still receive direct traffic or backlinks). The union of these three sources is your true URL inventory.
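Combining the three exports is mechanical once the URLs are normalized to a comparable form. A sketch of that union step, using made-up example URLs (crawler and Search Console exports give absolute URLs, access logs give paths):

```python
from urllib.parse import urlsplit

def normalize(url):
    """Reduce a URL to a comparable path: drop scheme/host and fragments,
    keep the query string, lowercase the path."""
    parts = urlsplit(url)
    path = parts.path.lower() or "/"
    return path + ("?" + parts.query if parts.query else "")

# Hypothetical exports from the three sources.
crawl_export = {"https://www.example.com/About/", "https://www.example.com/blog/post-1/"}
search_console = {"https://www.example.com/blog/post-1/", "https://www.example.com/old-landing/"}
server_logs = {"/blog/post-1/", "/legacy-page?id=7"}

inventory = sorted(
    {normalize(u) for source in (crawl_export, search_console, server_logs) for u in source}
)
print(inventory)
# ['/about/', '/blog/post-1/', '/legacy-page?id=7', '/old-landing/']
```

Every URL in the resulting inventory needs a row in the redirect map, even the ones that only appeared in one source.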
Map Old → New (Pattern Rules vs Individual Rules)
With your inventory complete, begin mapping each old URL to its new destination. The goal is a 1:1 mapping where every old URL points to the single most relevant new URL — not to the homepage, not to a category page, but to the page that best represents the same content and intent.
For large sites, you can use pattern-based redirect rules to handle groups of URLs that follow a consistent naming convention. For example, if all your old WordPress blog posts followed /year/month/slug/ and the new site uses /blog/slug/, you can write a single regex rule that maps the pattern rather than listing every post individually.
Pattern rules are efficient, but they require careful testing. A poorly written regex can catch unintended URLs or fail to match URLs it should catch. Always validate pattern rules against your full URL inventory before deploying them — run every URL in your export through the rule and confirm the output is the correct new destination.
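The validation loop for a pattern rule is straightforward to script. A minimal sketch using the `/year/month/slug/` to `/blog/slug/` example from above (the rule and inventory URLs here are illustrative):

```python
import re

# Pattern rule: old WordPress date-based permalinks -> flat /blog/ paths.
PATTERN = re.compile(r"^/(\d{4})/(\d{2})/([^/]+)/$")
REPLACEMENT = r"/blog/\3/"

def apply_rule(url):
    """Return the mapped URL, or None if the rule does not match."""
    new_url, count = PATTERN.subn(REPLACEMENT, url)
    return new_url if count else None

# Run every URL in the inventory through the rule and inspect the output.
inventory = ["/2021/06/launch-announcement/", "/2023/11/pricing-update/", "/services/"]
for url in inventory:
    print(url, "->", apply_rule(url))
# /2021/06/launch-announcement/ -> /blog/launch-announcement/
# /2023/11/pricing-update/ -> /blog/pricing-update/
# /services/ -> None
```

URLs that come back as `None` either belong to a different pattern rule or need an individual redirect; URLs that come back with an unexpected destination mean the regex needs tightening.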
For URLs that do not follow a consistent pattern — legacy pages with arbitrary slugs, campaign landing pages, old product names — you will need individual redirect rules. There is no shortcut here. Work through them one by one, and for each one make a deliberate decision about the most relevant new destination.
Handle Edge Cases (Query Params, Trailing Slashes, Pagination)
These three categories catch teams off guard more often than any other redirect complexity.
Query parameters. If your old site used URLs like /products?id=123&color=blue, those parameter-based URLs may be indexed by Google if they were linked to from other sites or appeared in your sitemap. Decide upfront whether parameter-based URLs will exist in the new site, and if not, write redirect rules that strip the parameters and redirect to the canonical version of the page.
Trailing slashes. Choose one canonical convention — either all URLs have trailing slashes or none do — and implement redirects to enforce it. /services/ and /services should not both return 200 responses. One should be the canonical and the other should 301 to it. Inconsistency here creates duplicate content signals and confuses crawlers about which version to index.
Pagination. Paginated URLs (/blog/page/2/, /blog/page/3/) are often indexed by Google. If your new site handles pagination differently — or eliminates pagination in favor of infinite scroll or a single archive page — each paginated URL needs a redirect to its best equivalent. Pointing all pagination to the first page or the main archive is acceptable; returning 404 is not.
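All three edge cases can be handled in one normalization pass. A sketch under stated assumptions: the new site drops query parameters, enforces trailing slashes, and replaces `/blog/page/N/` pagination with a single `/blog/` archive (your own conventions may differ):

```python
import re
from urllib.parse import urlsplit

def redirect_target(url):
    """Map an old URL to its canonical new destination under the
    assumed conventions above."""
    path = urlsplit(url).path              # drop the ?query entirely
    if not path.endswith("/"):
        path += "/"                        # enforce trailing-slash convention
    # Collapse paginated archive URLs into the main archive page.
    path = re.sub(r"^/blog/page/\d+/$", "/blog/", path)
    return path

print(redirect_target("/services"))         # /services/
print(redirect_target("/products?id=123"))  # /products/
print(redirect_target("/blog/page/3/"))     # /blog/
```

Whatever conventions you choose, encode them in one place like this so the redirect map, the server rules, and the QA crawl all agree.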
Redirect Chains: Why Google Hates Them and How to Prevent Them
A redirect chain occurs when URL A redirects to URL B, which redirects to URL C. Instead of a direct path, Googlebot has to make multiple requests to follow the chain to the final destination.
Google's crawler has a redirect chain limit: its documentation states that Googlebot follows up to 10 redirect hops before giving up. If a URL is buried deep in a chain, Googlebot may never reach the final destination. Even in shorter chains, each hop introduces latency and the potential for signal loss.
Redirect chains are almost always created unintentionally, typically when a site has been migrated more than once. A URL from a 2019 redesign redirects to a URL from a 2022 redesign, which should redirect to the current 2026 URL — but instead of updating the original redirect, someone adds a new one. The result is a three-hop chain.
The rule is simple: every redirect in your map must point to the final destination URL — a URL that returns a 200 response — not to another redirect. Before deploying your redirect map, run every rule through a chain-detection tool or manually verify that the destination URL is not itself a redirect. Screaming Frog's redirect chain report is useful here.
When you find chains, collapse them. Update the original redirect rule to point directly to the final URL, bypassing the intermediate hop. This is a maintenance task you should repeat every time you make changes to URL structure.
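Collapsing chains is easy to automate once the redirect map is loaded as a source-to-destination dictionary. A sketch (the URLs mirror the hypothetical 2019/2022 redesign example above) that also flags loops, where a chain circles back on itself:

```python
def collapse_chains(redirects, max_hops=10):
    """Rewrite every rule in a {source: destination} map so it points at
    the final destination. Returns (collapsed_map, looping_sources)."""
    collapsed, loops = {}, []
    for source in redirects:
        seen, current = {source}, redirects[source]
        while current in redirects:            # destination is itself a redirect
            if current in seen or len(seen) > max_hops:
                loops.append(source)           # circular or absurdly long chain
                break
            seen.add(current)
            current = redirects[current]
        else:
            collapsed[source] = current        # current now returns 200
    return collapsed, loops

# Hypothetical three-hop chain from successive redesigns.
rules = {
    "/2019-page/": "/2022-page/",
    "/2022-page/": "/current-page/",
}
print(collapse_chains(rules))
# ({'/2019-page/': '/current-page/', '/2022-page/': '/current-page/'}, [])
```

Note that the old intermediate rule is kept, just repointed: both `/2019-page/` and `/2022-page/` now redirect directly to the final URL in a single hop.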
QA Your Redirect Map Before Launch
The redirect map is only as valuable as its accuracy. A redirect pointing to the wrong page, a broken regex that mismatches URLs, or a chain that was not caught in review will cause SEO damage that could have been prevented. QA is not optional.
Automated Crawl Validation
Deploy your redirect map to your staging environment and run a full crawl against it. Configure the crawler to start from your URL inventory — the complete list of old URLs — and follow redirects. The output should show: each old URL, the response code at that URL, the redirect destination, and the final response code at the destination.
Every row in this report should show: old URL → 301 → new URL → 200. Any row that shows a 404, a 302, a chain of more than one hop, or a redirect to the homepage when a specific page was expected is a finding that must be resolved before launch.
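The pass/fail logic for each crawl row can be expressed as a small classifier, which is useful when the crawler's export needs triaging in bulk. A sketch assuming each row carries the old URL's status, the final resolved URL, and its status (field names are illustrative, not any specific crawler's export format):

```python
def classify_row(old_url, status, final_url, final_status, expected_url):
    """Return 'pass' or a failure reason for one row of a staging crawl,
    enforcing the old URL -> 301 -> new URL -> 200 rule."""
    if status == 404:
        return "fail: old URL returns 404 (no redirect rule)"
    if status == 302:
        return "fail: temporary redirect, should be 301"
    if status != 301:
        return f"fail: unexpected status {status}"
    if final_status != 200:
        return f"fail: destination returns {final_status}"
    if final_url != expected_url:
        return "fail: redirects to wrong destination"
    return "pass"

print(classify_row("/old-post/", 301, "/blog/old-post/", 200, "/blog/old-post/"))
# pass
print(classify_row("/old-promo/", 302, "/promotions/", 200, "/promotions/"))
# fail: temporary redirect, should be 301
```

Run every inventory row through a check like this and treat anything other than "pass" as a launch blocker.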
Manual Spot-Checks for High-Traffic Pages
Automated crawls catch structural issues but can miss content-level problems — a redirect that technically works but sends users to the wrong page. For your top 20 to 50 pages by pre-migration traffic, manually verify that the redirect destination is the correct, intended page. Load the old URL in a browser, confirm it redirects, and confirm the destination page has the right content.
Pay particular attention to pages that held strong backlinks. These are the URLs where a redirect to the wrong destination causes the most lasting damage, because external link equity is being funneled to a page that does not represent the intended content.
How Long to Keep Redirects Active (Google's Guidance)
A common misconception is that redirects can be removed after a few months once "Google has updated its index." This is incorrect and can cause lasting damage.
Google's official guidance recommends keeping redirects active for at least one year after a migration. The reasoning is practical: external websites that link to your old URLs may not update those links for months or years. Every time Googlebot crawls those external links and follows them to your site, the redirect must be in place to correctly pass link equity to the new URL.
If you remove a redirect after six months and an external site links to the old URL with a valuable backlink, that link now returns a 404. The backlink is effectively lost.
In practice, the maintenance cost of keeping redirect rules in place is minimal — a few hundred lines of configuration in your server or CDN. The risk of removing them is asymmetric: very little to gain, significant potential to lose. The practical recommendation is to keep all migration redirects active indefinitely, or until the old URL structure is so far in the past that no reasonable user or crawler would encounter it.
For sites that care about preserving structured data during migration, the same principle applies: keep the old signals in place until the new implementation is fully validated and confirmed in Search Console.
FAQ
Do redirect chains really hurt SEO?
Yes. Google's crawler follows a limited number of redirect hops before stopping. A URL buried three or four hops deep in a chain may never be crawled to its final destination, meaning the PageRank at the origin is never successfully transferred. Even two-hop chains create unnecessary crawl overhead and can slow the authority transfer process. Keep every redirect as a direct, single-hop rule pointing to a 200-status destination.
Should I use 301 or 308 for a site migration?
Use 301 for all standard content migrations. Both 301 and 308 signal permanent moves and both pass PageRank according to Google. The difference is in how the HTTP method is handled on POST requests, which is irrelevant for content pages requested via GET. 301 has broader support and is the established convention for migrations. Reserve 308 for specific technical scenarios where preserving the POST method across a redirect is required.
How do I handle pagination URLs in a redirect map?
Paginated URLs that are indexed by Google need redirect coverage. If the new site does not have equivalent paginated pages, redirect each paginated URL to the most relevant non-paginated equivalent — typically the first page of the archive or the main category or tag page. Avoid redirecting all paginated URLs to the homepage, as this is too generic and may be treated as a soft 404 by Google.
How long do I need to keep old redirects live?
Google recommends keeping redirects active for at least one year. In practice, keeping them active indefinitely is the safest approach. The cost of maintaining redirect rules is negligible compared to the cost of losing external link equity when those rules are removed prematurely.
What's the difference between a redirect map and a robots.txt file?
A redirect map is a document that specifies where each old URL should redirect to — it governs URL resolution. A robots.txt file governs crawler access — which URLs Googlebot is allowed to request at all. They serve different functions. A redirect in your map tells Googlebot "this URL has moved to here." A robots.txt disallow tells Googlebot "do not request this URL at all." You need both, but they should never conflict: do not disallow URLs in robots.txt that you also have redirect rules for, or Googlebot will not be able to follow the redirect.
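The conflict described above is easy to check for mechanically with Python's standard-library robots.txt parser. A sketch with a hypothetical robots.txt and redirect sources:

```python
from urllib import robotparser

# Hypothetical robots.txt for the migrated site.
robots_txt = """User-agent: *
Disallow: /old-section/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Sources from the redirect map; any URL Googlebot cannot fetch is a conflict,
# because a redirect it is forbidden from requesting can never be followed.
redirect_sources = ["/old-section/page-1/", "/blog/old-post/"]
blocked = [url for url in redirect_sources if not rp.can_fetch("Googlebot", url)]
print(blocked)  # ['/old-section/page-1/']
```

Any URL in the blocked list needs either its disallow rule removed or its redirect reconsidered.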
Next Steps
A complete, QA'd redirect map is the difference between a migration that preserves your traffic and one that destroys it. The work is detailed and requires a full URL inventory as a starting point — which is why getting this right before launch is always faster and cheaper than recovering from it afterward.