Structurally Eliminating Initial Flicker and Double Fetching in SSR Data Grids with HydrationBoundary in Next.js App Router
After migrating a project to the Next.js App Router, I once ran into two unexpected problems simultaneously during data grid initial loading. One was an initial loading flicker — a spinner that briefly flashed on screen the moment the page opened, then disappeared. The other was double fetching — the server had clearly already fetched data, yet the same API was being called again immediately after client mount. I still remember muttering "huh, why is it fetching twice?" while staring at the Network tab.
By prefetching data with prefetchQuery in a Server Component, serializing it with dehydrate, and injecting it into the client cache via HydrationBoundary, you can structurally solve both problems at once. With this single pattern, data exists immediately from the first render and unnecessary network requests disappear entirely. The setup is done once, and the same approach applies to any component you add afterward.
This is aimed at frontend developers running or considering projects based on the App Router. If you're still not sure how to pass data across the Server Component and Client Component boundary, this article should help you get a feel for it. Code examples are based on TanStack Query v5. If you're in the middle of migrating from v4, the official migration guide is a helpful companion.
Core Concepts
Why Both Problems Occur Simultaneously
Even if a Server Component fetched data on the server, the moment that data is passed as props to a Client Component, TanStack Query has no knowledge of it. When useQuery mounts, there's nothing in the cache, so it starts with isLoading: true and immediately fires a network request to the same endpoint. This is double fetching. And in that brief moment, a spinner or skeleton flashes on screen and disappears. This is flicker.
The root cause is simple: the server's data and the client's query cache are not connected.
How the Hybrid Pattern Works
TanStack Query v5 solves this connection problem with dehydrate and HydrationBoundary.
```
Server Component
  ↓ queryClient.prefetchQuery()     ← Fetch data on server and store in cache
  ↓ dehydrate(queryClient)          ← Convert cache to a JSON-serializable snapshot
  ↓ <HydrationBoundary state={dehydratedState}>
  ↓ Client Component
  ↓ useQuery(same query key)        ← Cache hit → no network request
```

The cache snapshot created on the server is serialized into Next.js's RSC payload (the React Server Components streaming format) and delivered to the browser along with the HTML. In the browser, HydrationBoundary merges this snapshot into the client's QueryClient. When useQuery is subsequently called with the same key, the data is already in the cache, so isLoading starts as false and no network request is made.
dehydrate / hydrate: These terms come from the analogy of "dehydrating" server state into JSON for transfer, then "hydrating" it back to life on the client. Conceptually, this is the same as SSR serialization in Redux or Zustand.
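Concretely, the dehydrated state is nothing magical: it is a plain, JSON-serializable object. Below is a simplified illustration (the exact shape is defined by the library's `DehydratedState` type; the fields here are abridged for readability):

```typescript
// Simplified illustration of a dehydrated snapshot (abridged; see the
// library's DehydratedState type for the full definition). It contains
// only plain data, with no functions or class instances, so it survives
// a JSON round trip across the server/client boundary.
const dehydratedState = {
  mutations: [],
  queries: [
    {
      queryKey: ['grid-data', { page: 1, pageSize: 50 }],
      queryHash: '["grid-data",{"page":1,"pageSize":50}]',
      state: {
        data: { rows: [{ id: '1', name: 'Row 1', value: '42' }], totalCount: 1 },
        dataUpdatedAt: 1700000000000,
        status: 'success',
      },
    },
  ],
}

// "Dehydration" works precisely because this serialization is lossless:
const roundTripped = JSON.parse(JSON.stringify(dehydratedState))
```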
Why staleTime Is the Critical Variable
Even if data has been placed in the cache via prefetchQuery, if staleTime is 0 (the default), the data is immediately judged as "stale" right after client mount. Because TanStack Query automatically re-requests stale data in the background, double fetching will recur.
Setting staleTime to longer than the server rendering round-trip time is a prerequisite for this pattern. Since a server rendering round-trip typically takes hundreds of milliseconds, a minimum of 30 seconds and generally 60 seconds is recommended. For highly real-time data, you can set it shorter — but be aware of the tradeoff (the possibility of double fetching recurring) and decide accordingly.
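The mechanics reduce to simple arithmetic. Here is a minimal model of the staleness check (illustrative only, not TanStack Query's actual implementation):

```typescript
// Simplified model of the staleness decision: data is considered stale
// once more time than staleTime has passed since it was written to the cache.
function isStale(dataUpdatedAt: number, staleTime: number, now: number): boolean {
  return now - dataUpdatedAt > staleTime
}

const serverFetchedAt = 0
const clientMountAt = 300 // client mounts roughly 300 ms after the server fetch

isStale(serverFetchedAt, 0, clientMountAt)      // true: background refetch fires (double fetch)
isStale(serverFetchedAt, 60_000, clientMountAt) // false: cache hit, no request
```

With the default `staleTime: 0`, the prefetched data is already stale by the time the client mounts, which is exactly why the double fetch recurs.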
Practical Application
Now let's get into actual code. It's structured in three steps, with dependencies forming in order.
Step 1: Request Isolation — Creating getQueryClient
If you create a QueryClient incorrectly in a server environment, the instance gets shared across multiple users' requests, leading to data contamination. I initially placed new QueryClient() at the top of a module and ended up in a situation where one user's cache was exposed to another. Using React 18's cache() lets you safely create an isolated instance per request.
```ts
// lib/query-client.ts
import { QueryClient } from '@tanstack/react-query'
import { cache } from 'react'

export const getQueryClient = cache(
  () =>
    new QueryClient({
      defaultOptions: {
        queries: {
          staleTime: 60 * 1000, // 60 seconds: prevents re-fetching immediately after client mount
        },
      },
    })
)
```

| Point | Description |
|---|---|
| `cache()` wrapping | Returns the same instance within the same HTTP request; no sharing between requests |
| `staleTime: 60 * 1000` | Keeps data fresh for 60 seconds, blocking background re-requests after mount |
| No module-level singleton | Placing `new QueryClient()` at the top of a module causes it to be shared until server restart |
React `cache()`: a server-only memoization helper introduced in React 18 that caches return values for the same function within the same request lifecycle. It plays a similar role to Next.js's `fetch`-based deduplication, but differs in that it can be applied to arbitrary functions, not just `fetch`. It does not work in the browser and is for Server Components only.
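To build intuition, here is a toy model of what `cache()` provides (illustrative only: React's real implementation scopes the memo to the request lifecycle via internal storage, not a module-level closure):

```typescript
// Toy model of per-request memoization: the first call runs the factory,
// subsequent calls within the same scope return the memoized value.
function cacheModel<T>(factory: () => T): () => T {
  let memo: { value: T } | undefined
  return () => {
    if (!memo) memo = { value: factory() }
    return memo.value
  }
}

// Stand-in for getQueryClient: every caller within the same "request"
// gets the same instance instead of constructing a fresh one.
const getClient = cacheModel(() => ({ createdAt: Date.now() }))

getClient() === getClient() // true: a single shared instance
```

In React's real `cache()`, the memo is discarded when the request ends, which is what prevents one user's QueryClient from leaking into another's request.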
The client-side QueryClientProvider is typically separated into its own file. It requires the 'use client' directive, so it cannot live inside a Server Component file; instead, you extract it into a client file and wrap it around children in the root layout.
```tsx
// app/providers.tsx
'use client'

import { QueryClient, QueryClientProvider } from '@tanstack/react-query'
import { useState } from 'react'

export function Providers({ children }: { children: React.ReactNode }) {
  const [queryClient] = useState(
    () =>
      new QueryClient({
        defaultOptions: {
          queries: {
            staleTime: 60 * 1000, // Match the server setting
          },
        },
      })
  )
  return (
    <QueryClientProvider client={queryClient}>
      {children}
    </QueryClientProvider>
  )
}
```

```tsx
// app/layout.tsx
import { Providers } from './providers'

export default function RootLayout({
  children,
}: {
  children: React.ReactNode
}) {
  return (
    <html lang="ko">
      <body>
        <Providers>{children}</Providers>
      </body>
    </html>
  )
}
```

Step 2: Prefetch on the Server — Filling the Cache Before Sending
A Server Component is an async function, so you can await data fetching, serialize with dehydrate, then wrap with HydrationBoundary to pass to children.
To copy this code and run it immediately, you need to know the signature of fetchGridData, so let's define the API function and types first. This example assumes a public API that doesn't require authentication. If you need authentication cookies or tokens, you'll need to read them with cookies() or headers() in the server component and include them in the fetch request.
```ts
// lib/api.ts
type GridRow = {
  id: string
  name: string
  value: string
}

export type GridData = {
  rows: GridRow[]
  totalCount: number
}

export async function fetchGridData(params: {
  page: number
  pageSize: number
}): Promise<GridData> {
  const res = await fetch(
    `https://api.example.com/grid?page=${params.page}&pageSize=${params.pageSize}`
  )
  if (!res.ok) throw new Error('Failed to fetch grid data')
  return res.json()
}
```

If the query keys are used differently on the server and client, cache hits won't occur. Extracting the key generation function into a shared file prevents this mistake from the outset.
```ts
// lib/query-keys.ts
export const gridKeys = {
  all: ['grid-data'] as const,
  list: (params: { page: number; pageSize: number }) =>
    [...gridKeys.all, params] as const,
}
```

Now apply prefetching in the Server Component.
```tsx
// app/dashboard/page.tsx
import { dehydrate, HydrationBoundary } from '@tanstack/react-query'
import { getQueryClient } from '@/lib/query-client'
import { fetchGridData } from '@/lib/api'
import { gridKeys } from '@/lib/query-keys'
import { DataGrid } from './DataGrid'

export default async function DashboardPage() {
  const queryClient = getQueryClient()

  await queryClient.prefetchQuery({
    queryKey: gridKeys.list({ page: 1, pageSize: 50 }),
    queryFn: () => fetchGridData({ page: 1, pageSize: 50 }),
  })

  return (
    <HydrationBoundary state={dehydrate(queryClient)}>
      <DataGrid />
    </HydrationBoundary>
  )
}
```

HydrationBoundary merges the serialized cache snapshot into the QueryClient within the child tree. When useQuery is called inside DataGrid with the same key, the data is already in the cache, so it renders immediately without a loading state.
Step 3: Starting from a Cache Hit in the Client Component
```tsx
// app/dashboard/DataGrid.tsx
'use client'

import { useQuery, keepPreviousData } from '@tanstack/react-query'
import { fetchGridData } from '@/lib/api'
import { gridKeys } from '@/lib/query-keys'
import { useState } from 'react'

export function DataGrid() {
  const [page, setPage] = useState(1)

  const { data, isFetching, isError, error } = useQuery({
    queryKey: gridKeys.list({ page, pageSize: 50 }),
    queryFn: () => fetchGridData({ page, pageSize: 50 }),
    placeholderData: keepPreviousData, // Retain previous data during page transitions
    // staleTime inherits from the QueryClient default (60 seconds)
  })

  // Error handling — in production, pair this with an error boundary or toast notification
  if (isError) return <div>Failed to load data: {error.message}</div>

  return (
    <div>
      {isFetching && <span aria-live="polite">Refreshing in background...</span>}
      <table>
        <thead>
          <tr>
            <th>Name</th>
            <th>Value</th>
          </tr>
        </thead>
        <tbody>
          {data?.rows.map((row) => (
            <tr key={row.id}>
              <td>{row.name}</td>
              <td>{row.value}</td>
            </tr>
          ))}
        </tbody>
      </table>
      <button onClick={() => setPage((p) => p - 1)} disabled={page === 1}>
        Previous
      </button>
      <button onClick={() => setPage((p) => p + 1)}>Next</button>
    </div>
  )
}
```

| Code Point | Behavior |
|---|---|
| `page: 1` initial state | Matches the key prefetched on the server → cache hit, no loading |
| `page: 2` and above | Cache miss → client fetching kicks in |
| `keepPreviousData` | Retains previous data during page transitions, preventing layout shift |
| `isFetching` | `true` only during background revalidation; `false` on initial render |
| `gridKeys.list(...)` | Shared key factory prevents server/client key mismatch |
Pros and Cons Analysis
Advantages
The biggest thing I noticed after applying this pattern was that prop drilling disappeared. Previously, data fetched on the server had to be continuously passed down as props, but anywhere within the HydrationBoundary subtree, you can pull from the cache using the same key. And since data exists immediately from the first render, the spinner flicker is structurally eliminated.
| Item | Description |
|---|---|
| Eliminates initial flicker | data exists immediately on first render. Skips isLoading: true state |
| Eliminates double fetching | Within staleTime, handled as a cache hit. No network requests |
| Improves SEO and initial LCP | Server delivers completed HTML, so content is visible immediately |
| No prop drilling needed | Cache shared anywhere below HydrationBoundary using the same key |
| Subsequent interactions use client fetching | Pagination, filter changes naturally transition to client-driven fetching |
Drawbacks and Caveats
| Item | Description | Mitigation |
|---|---|---|
| `staleTime` not set | Default of 0 causes re-requests immediately after mount | Specify `staleTime` in the QueryClient defaults or per query |
| Query key mismatch | Cache miss if server/client key structure differs | Manage the key factory function in a shared module |
| Suspense nesting order | Placing HydrationBoundary inside Suspense may render the fallback first | Place HydrationBoundary outside Suspense |
| Server memory | Serialization size grows when prefetching large datasets | Prefetch only the first page; handle the rest with client fetching |
| Authenticated APIs | Endpoints that need cookies/tokens require forwarding them from the server | Include auth information in the fetch using cookies() or headers() |
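For the authenticated-API case, the gist is forwarding the incoming request's cookies inside the server-side fetch. A hedged sketch (the `buildCookieHeader` helper is hypothetical, not part of Next.js or TanStack Query; the pure header-building part is shown runnable, the Next.js wiring only as comments):

```typescript
// Hypothetical helper: join (name, value) pairs read on the server into a
// Cookie header string suitable for an outgoing fetch.
function buildCookieHeader(pairs: Array<{ name: string; value: string }>): string {
  return pairs.map((c) => `${c.name}=${c.value}`).join('; ')
}

// In the Server Component's queryFn you would read the incoming cookies
// (e.g. via next/headers' cookies()) and forward them, roughly:
//
//   const cookieHeader = buildCookieHeader(cookieStore.getAll())
//   await fetch(url, { headers: { cookie: cookieHeader } })

buildCookieHeader([
  { name: 'session', value: 'abc123' },
  { name: 'theme', value: 'dark' },
]) // 'session=abc123; theme=dark'
```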
HydrationBoundary vs Suspense: `HydrationBoundary` is responsible for injecting already-complete data into the client cache, while `Suspense` is responsible for waiting on data that isn't ready yet. Because they serve different purposes, placing `HydrationBoundary` inside `Suspense` can cause the fallback to appear before hydration is complete.
The Most Common Mistakes in Practice
- Double fetching continues because `staleTime` was not set. If you applied `HydrationBoundary` but still see two requests in the Network tab, check `staleTime` first. This was the last mistake I caught myself — I remember spending a long time puzzling over "why is it still making another request even after adding HydrationBoundary?"
- The query key structures on the server and client are subtly different. If you store on the server with `['grid-data', { page: 1 }]` and read on the client with `['grid-data', 1]`, the cache hit won't occur. Putting a key factory function like `gridKeys.list(...)` in a shared file structurally prevents this problem.
- Attempting to use `QueryClientProvider` directly in a Server Component. `QueryClientProvider` requires a client context and must live inside a `'use client'` file. Typically, separating it into `app/providers.tsx` and importing it in `app/layout.tsx` is the clean structure; the code in Step 1 demonstrates this.
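The key-mismatch pitfall can be made concrete with a simplified model of query-key hashing (the real library uses a stable serializer that also sorts object keys, but plain `JSON.stringify` is enough here to show why these shapes diverge):

```typescript
// Simplified model of query-key hashing. TanStack Query matches cache
// entries by a serialized form of the key array, so structure matters.
const hashKey = (key: readonly unknown[]) => JSON.stringify(key)

// Same shared factory as lib/query-keys.ts in the article
const gridKeys = {
  all: ['grid-data'] as const,
  list: (params: { page: number; pageSize: number }) =>
    [...gridKeys.all, params] as const,
}

// Server and client both call the factory → identical hash → cache hit
const serverKey = hashKey(gridKeys.list({ page: 1, pageSize: 50 }))
const clientKey = hashKey(gridKeys.list({ page: 1, pageSize: 50 }))

// A hand-written key with a different shape → different hash → cache miss
const mismatched = hashKey(['grid-data', 1])

serverKey === clientKey  // true
serverKey === mismatched // false
```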
Closing Thoughts
The combination of prefetchQuery + dehydrate + HydrationBoundary is currently the most stable pattern for structurally resolving initial flicker and double fetching in the App Router environment. Setup is done once, and the same approach can be applied each time you add a new component.
Starting in this order will help you get a feel for it quickly.
- Create `lib/query-client.ts` and `app/providers.tsx` to separate `getQueryClient` and `QueryClientProvider`. If your existing project already has a `QueryClient`, start by moving that creation logic into these files.
- Add `prefetchQuery` and `HydrationBoundary` to the Server Component (`page.tsx`) that needs data first. You can verify directly in the browser's Network tab that the duplicate API call after client mount has disappeared.
- For grids with pagination, also apply the `keepPreviousData` option. Instead of a blank screen during page transitions, the previous data is retained, providing a much more natural user experience.
References
Official
- Server Rendering & Hydration | TanStack Query Official Docs
- Advanced Server Rendering | TanStack Query Official Docs
- Prefetching & Router Integration | TanStack Query Official Docs
- Next.js App Prefetching Example | TanStack Query
- Next.js Suspense Streaming Example | TanStack Query v5
- Data Fetching Patterns and Best Practices | Next.js Official Docs
- Set up with React Server Components | tRPC Official Docs
Additional Reading
- Building a Fully Hydrated SSR App with Next.js App Router and TanStack Query | Medium
- Stop the Duplicate Refetch | Medium
- Integrate TanStack Query with Next.js App Router 2025 | storieasy
- Advanced Server Rendering with Next.js App Router | DEV Community
- Using TanStack Query with Next.js | LogRocket Blog