Cutting Monorepo CI from 14 Minutes to 3 Minutes with turbo.json — Real-World Tuning of inputs, env, and GitHub Actions Cache Hit Rate
The configuration change that boosted cache hit rate from 4% to 87% came down to exactly three lines. Simply separating `env` and `passThroughEnv` properly in `turbo.json` was all it took, and CI dropped from 14 minutes to 3.
When you're running a 20-package monorepo, there comes a point where CI gets so slow that opening a PR feels daunting. When teammates started saying "what do we do while waiting for the build?", the thing I really dug into was Turborepo's remote cache and task graph optimization. If you're already using Turborepo, tuning inputs/env to boost cache hit rate right now matters far more than installation or basic setup.
This post walks through how Turborepo accelerates builds from first principles, then covers step-by-step configuration that actually works in GitHub Actions. Simply setting outputs, inputs, and env correctly in turbo.json can cut CI time by 70–80%. I've also summarized the latest breaking changes in Turborepo 2.x and key changes by version.
If you're already familiar with Turborepo basics, feel free to jump straight to Real-World Application.
Core Concepts
Remote Cache: "Why build something we've already built?"
Remote caching in Turborepo boils down to this: if the inputs are the same, the outputs will be the same — so instead of rebuilding, just pull the previously stored result.
Input determination is based on SHA-256 hashing. Turborepo combines source files, package.json dependencies, environment variables, and turbo.json configuration to produce a hash. If the hash matches, it declares a "cache hit" and restores the stored artifacts. Local cache restores in milliseconds; remote cache requires a network download, so it takes hundreds of milliseconds to a few seconds — but that's still tens of times faster than an actual build.
Cache Hit: A state where the input hash matches a previous run, so the stored artifacts are used directly without re-running the build. The opposite is a Cache Miss, in which case the task actually executes.
The real strength of remote caching is that the entire team shares the same cache. If a teammate already ran a build on the main branch, you can pull those artifacts locally when building from the same commit. It's also possible for CI to use what was built on your MacBook, and for your MacBook to use what was built in CI.
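Hooking a local machine up to the shared cache is a one-time step. Assuming the default Vercel-hosted remote cache, the flow looks roughly like this:

```shell
# Authenticate against Vercel, then link this repo to your team's remote cache
npx turbo login
npx turbo link
```

After that, local builds read from and write to the same remote cache that CI uses.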
Task Graph: Everything That Can Run in Parallel, Does
Based on the dependsOn configuration in turbo.json, Turborepo models dependencies between packages and tasks as a Directed Acyclic Graph (DAG). Tasks with no dependencies automatically run in parallel; only those with dependencies run sequentially in order.
```json
{
  "$schema": "https://turborepo.com/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "!.next/cache/**", "dist/**"]
    },
    "lint": {
      "dependsOn": [],
      "outputs": []
    },
    "test": {
      "dependsOn": ["^build"],
      "outputs": ["coverage/**"],
      "inputs": ["src/**/*.ts", "test/**/*.ts"]
    }
  }
}
```

Turborepo 2.0 Breaking Change: The existing `pipeline` field has been renamed to `tasks`. This is the most common breaking change when migrating from 1.x, so it's worth noting.
The `^` prefix in `^build` is the key. It means "all build tasks in every package this package depends on must complete first." When `lint`'s `dependsOn` is an empty array, it runs in parallel immediately, regardless of other tasks.
Topological Dependency (`^`): `^task` means the task runs after the specified task completes across the entire dependency tree of the current package. By contrast, `task` without a caret controls task ordering within the same package.
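To make the contrast concrete, here is a sketch mixing both forms. The `codegen` task name is a hypothetical example, not part of the setup above, and the comments require the `turbo.jsonc` variant supported since 2.5:

```jsonc
{
  "tasks": {
    "codegen": {
      "outputs": ["src/generated/**"]
    },
    "build": {
      // "^build": build must finish in every upstream dependency package
      // "codegen": this package's own codegen task must finish first
      "dependsOn": ["^build", "codegen"],
      "outputs": ["dist/**"]
    }
  }
}
```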
Correctly defining outputs is also essential. Turborepo stores the files defined in outputs as cache artifacts — without this, cache restoration is impossible. I remember spending a long time confused about "why isn't the cache working?" early on because I had left this out.
What --affected Does Internally
The --affected flag operates internally based on git diff. It compares the base and head branches of a PR to identify which files actually changed, then limits execution to only the packages containing those files and their dependents. This is why the comparison itself is impossible without the full git history — which is exactly why fetch-depth: 0 is required in GitHub Actions. Knowing this context in advance makes the common mistakes section much easier to understand.
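If you want to reproduce the same selection locally, the docs describe overriding the comparison range with the `TURBO_SCM_BASE` / `TURBO_SCM_HEAD` environment variables; a sketch:

```shell
# By default, --affected compares the working tree against the main branch
turbo run build --affected

# Override the comparison base explicitly (e.g., to match a PR's base branch)
TURBO_SCM_BASE=origin/main turbo run build --affected
```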
What's Changed in Turborepo 2.x
The 2.x series from 2025–2026 has seen significant performance gains.
| Version | Key Changes |
|---|---|
| 2.5 | Sidecar tasks, turbo.jsonc comment support, $TURBO_ROOT$ variable |
| 2.7 | Task graph visualization Devtools, composable per-package turbo.json configuration |
| 2.9 | time-to-first-task 96% improvement (★), turbo query stabilized, experimental OpenTelemetry support |
★ It's worth distinguishing between two metrics. The 81–91% improvement in task graph calculation speed and the 96% reduction in the delay before the first task starts executing are separate measurements. The former is dependency analysis time; the latter is the perceived speed from typing turbo run to something actually starting. On repos with over 200 packages, the instant response when running turbo run is almost hard to believe at first.
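Since `turbo query` is now stable, it can be used to sanity-check the affected set before committing to a CI run. The GraphQL field names below are an assumption based on the 2.x docs, so verify them against your installed version:

```shell
# List the packages Turborepo considers affected for the current diff
# (field names may differ by version; check your version's docs)
turbo query 'query { affectedPackages(base: "main") { items { name } } }'
```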
Real-World Application
Example 1: GitHub Actions Remote Cache + Affected Strategy
This is the combination that delivers results the fastest. Using Vercel remote cache together with the --affected flag lets you build only the packages changed in a PR while still leveraging previous cache.
```yaml
# .github/workflows/ci.yml
name: CI

on:
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Full history is required for --affected's git diff calculation
      - uses: pnpm/action-setup@v3
        with:
          version: 9
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'pnpm'
      - run: pnpm install --frozen-lockfile
      - name: Build & Test (affected only)
        run: pnpm turbo run build test lint --affected
        env:
          TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
          TURBO_TEAM: ${{ vars.TURBO_TEAM }}
```

| Setting | Role |
|---|---|
| `fetch-depth: 0` | Full git history required for the git diff calculation between PR base and head branches |
| `--affected` | Runs only changed packages and their dependents; the rest are treated as cache hits |
| `TURBO_TOKEN` | Vercel remote cache auth token, stored in GitHub Secrets |
| `TURBO_TEAM` | Remote cache team identifier, stored in GitHub Variables |
With a 20-package workspace, applying just this configuration brought CI from 14 minutes down to 3–5 minutes. In practice, only about 1–3 packages actually change per PR.
Alternative: Self-Hosted Remote Cache on S3
If Vercel remote cache's paid tier is a concern, you can build a self-hosted S3-based server using the open-source ducktors/turborepo-remote-cache. Honestly, it adds infrastructure management overhead, but if your team is large, the cost savings may outweigh that.
Configure your server environment variables as shown below and run the server with Docker Compose or similar. For detailed setup instructions, refer to the ducktors/turborepo-remote-cache README.
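As a sketch, a minimal Docker Compose file might look like the following. The image name and port are taken from the project's README; verify them for the version you deploy:

```yaml
# docker-compose.yml (sketch, not a verified production config)
services:
  remote-cache:
    image: ducktors/turborepo-remote-cache:latest
    ports:
      - "3000:3000"
    env_file:
      - .env  # the storage and token variables shown below
```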
```shell
# Remote cache server environment variables
STORAGE_PROVIDER=s3
STORAGE_PATH=my-turbo-cache-bucket
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=ap-northeast-2
TURBO_TOKEN=my-secret-token
```

Once the server is running, point your project at the custom server address using `.turbo/config.json`. This file is only needed when using a self-hosted server; if you're using Vercel remote cache, environment variables alone are sufficient.
```jsonc
// .turbo/config.json (self-hosted only)
{
  "teamId": "team_my-org",
  "apiUrl": "https://my-cache-server.example.com"
}
```

Boosting Cache Hit Rate Further: Fine-Tuning inputs/env
If cache misses are still too frequent after connecting remote cache, it's time to look at your inputs and env settings. I used to wonder why my cache hit rate was so low, and it turned out that environment variables changing with every CI run were all being included in the hash. This is the area that requires the most precision in real-world usage.
```json
{
  "tasks": {
    "test": {
      "dependsOn": ["^build"],
      "inputs": ["$TURBO_DEFAULT$", "!README.md", "!**/*.md"],
      "outputs": ["coverage/**"]
    },
    "build": {
      "dependsOn": ["^build"],
      "env": ["API_URL", "NODE_ENV"],
      "passThroughEnv": ["SENTRY_AUTH_TOKEN"],
      "outputs": [".next/**", "!.next/cache/**"]
    }
  }
}
```

| Setting | Role |
|---|---|
| `$TURBO_DEFAULT$` | Keeps the default inputs (all source files) while layering additional exclusion rules on top |
| `!README.md` | Excludes README changes from the cache hash |
| `env` | Includes in the hash only the environment variables that actually affect build output |
| `passThroughEnv` | Variables made available to the task at runtime but excluded from the hash |
`passThroughEnv` exists for environment variables like Sentry auth tokens: needed at runtime, but with no effect on the build artifact itself. Since they're not included in the hash, cache hits are maintained even when the token value changes between CI runs. This single setting alone can dramatically improve cache hit rates.
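To check which variables actually end up in a task's hash, you can inspect the dry-run output. `--dry=json` prints each task's resolved configuration without executing anything; the exact field names in the JSON may vary by version:

```shell
# Print the task plan as JSON and look at the environment-related fields
turbo run build --dry=json
```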
Pros and Cons
Advantages
| Item | Details |
|---|---|
| Configuration simplicity | Entire pipeline configured with a single turbo.json; can be adopted into an existing monorepo in under 10 minutes |
| Raw performance | Rust-based engine; task graph calculation 81–91% faster; time-to-first-task up to 96% improved |
| Automatic parallelization | Tasks with no dependencies run in parallel automatically — no extra configuration needed |
| Team cache sharing | Cache shared between CI and local — reuse "already built" artifacts across the team |
| Self-hosting | Can be built on S3, GCS, Azure Blob, and other storage without depending on Vercel |
Drawbacks and Caveats
| Item | Details | Mitigation |
|---|---|---|
| Remote cache cost | Vercel remote cache free plan has limits; costs rise with team size | Consider switching to self-hosted S3 |
| Env var handling (2.0+) | Strict mode is the default since 2.0: only declared variables reach tasks, and anything listed in `env` is included in the hash, so volatile values there cause cache misses | Carefully classify with `env` / `passThroughEnv` |
| `--affected` accuracy | In some edge cases, unchanged packages may be incorrectly identified as affected | Pre-validate the affected set with `turbo query` |
| No distributed CI execution | Unlike Nx Agents, there's no automatic distribution across multiple CI machines | Large monorepos require manual distribution setup |
| Log artifact caution | Console output is stored as a cache artifact | Avoid printing sensitive information in logs |
Most Common Mistakes in Practice
- Not defining `outputs`: Without `outputs` on a build task, Turborepo doesn't know what to cache. This is the most common reason caching doesn't work at all. Be sure to specify build artifact paths such as `.next/**`, `dist/**`, and `coverage/**`.
- Using `--affected` without `fetch-depth: 0`: `--affected` uses `git diff` internally. Without git history, it can't compare the PR base and head, so it treats every package as affected. Remember that `fetch-depth: 0` in `actions/checkout` and `--affected` are a matched pair.
- Not separating environment variables into `env` and `passThroughEnv`: I didn't know about this for a while. Since Turborepo 2.0 made strict environment mode the default, any variable you declare in `env` is included in the hash. If variables whose values differ on every CI run (e.g., auth tokens, deployment URLs) are placed in `env`, cache misses will explode. Keep only what genuinely affects the build output in `env`, and put everything else in `passThroughEnv`.
Closing Thoughts
Simply setting outputs, inputs, and env correctly in turbo.json can cut CI time by more than half. Add remote cache and the --affected strategy on top of that, and you can experience firsthand what it's like to watch a 14-minute CI shrink to 3 minutes.
Here are three steps you can start with right now.
- Start by reviewing `turbo.json`. Check whether `outputs` is missing from your existing `build`, `test`, and `lint` tasks, and add build artifact paths like `.next/**`, `dist/**`, `coverage/**`.
- Connect Vercel remote cache. Add `TURBO_TOKEN` and `TURBO_TEAM` to GitHub Secrets/Variables and include them in your CI script. (If cost is a concern, `ducktors/turborepo-remote-cache` + self-hosted S3 is also a solid option.)
- Add the `--affected` flag and `fetch-depth: 0` to GitHub Actions. Switching from a full build on every PR to running only changed packages yields greater impact the larger your team grows.
Next Post
The next installment will cover how to manage shared configuration across packages (ESLint, TypeScript, Tailwind) in a Turborepo setup using the packages/config-* pattern, linked via workspace references without versioning.
References
- Remote Caching | Turborepo Official Docs
- Package and Task Graphs | Turborepo Official Docs
- Caching | Turborepo Official Docs
- Configuring Tasks | Turborepo Official Docs
- Using Environment Variables | Turborepo Official Docs
- GitHub Actions | Turborepo Official Docs
- Turborepo 2.9 Release Blog
- Turborepo 2.7 Release Blog
- Turborepo 2.5 Release Blog
- Making Turborepo 96% Faster | Vercel Blog
- Remote Caching | Vercel Docs
- ducktors/turborepo-remote-cache | GitHub
- Setting Up Turborepo Remote Cache with S3 and GitHub Actions