How to Self-Host Turborepo Remote Cache Without Vercel — Cut CI Build Time by 40–80% with Docker Compose + MinIO + GitHub Actions
Anyone who has managed a monorepo knows the frustration of watching `pnpm turbo run build` run for 5 or 10 minutes in CI. With more than 10 packages, you end up rebuilding everything even when only two files changed. At first, I spent a long time wondering, "Turborepo uses its cache fine locally — why does CI start from scratch every time?" The answer is simple: the local cache exists only on that machine, so when a CI worker spins up a fresh container each time, it starts with no cache at all.
Attaching a proper Remote Cache changes everything. According to a case study published by Mercari Engineering in early 2026, adopting Turborepo Remote Cache cut their CI task duration by approximately 50%. Depending on your team and scale, you can expect a 40–80% improvement — meaning a 10-minute build could drop to 2–6 minutes. This post covers the entire process of spinning up an open-source cache server via Docker Compose without a Vercel account, using MinIO as storage, and connecting it to GitHub Actions. It also covers how to switch to AWS S3 and integrate with GitLab CI.
This post is written for teams already running a Turborepo-based monorepo who are familiar with the basics of Docker Compose and GitHub Actions. By the end of this post, you'll be ready to apply this starting with your next PR.
Core Concepts
How Remote Cache Works
Before executing each task, Turborepo computes an input hash. It combines source files, `package.json`, environment variables, and the pipeline configuration in `turbo.json` into a single unique hash. It then asks the Remote Cache server whether an artifact exists for that hash.
```
Run turbo run build
↓
Compute input hash for each task (source + dependencies + env vars)
↓
GET /v8/artifacts/:hash → cache server
↓
Cache hit → restore artifacts, skip task
Cache miss → run task, then upload via PUT /v8/artifacts/:hash
```

You might wonder what "restore artifacts" means concretely: it bundles build output folders like `dist/` and `.next/` together with the task execution logs and stores them as an artifact. This is why, on a cache hit, the terminal displays the same logs as the previous run.
Content-addressed hashing: A method of generating hashes based on actual file contents rather than file paths or timestamps. If the code hasn't changed, the same hash is produced across different branches, allowing cache sharing.
The protocol is remarkably simple. Any HTTP server that implements just two endpoints — GET and PUT — can serve as a Turborepo Remote Cache. Vercel published this spec, and the community has built a variety of implementations around it.
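To make the spec concrete, here is a rough sketch of the two requests using curl. The hash, token, and team values are placeholders, and the URL assumes the local server from the Docker Compose setup later in this post; the `teamId` query parameter and `Authorization` header follow the published spec.

```bash
# Placeholder hash for illustration; real hashes are computed by Turborepo
HASH=78af6169abcd1234

# 1) Ask whether an artifact exists for this hash
#    (cache hit → 200 with the artifact body, miss → 404)
curl -i "http://localhost:3000/v8/artifacts/$HASH?teamId=my-team" \
  -H "Authorization: Bearer my-secret-token"

# 2) On a miss, upload the artifact the task produced
curl -X PUT "http://localhost:3000/v8/artifacts/$HASH?teamId=my-team" \
  -H "Authorization: Bearer my-secret-token" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @artifact.tar.gz
```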
Three Required Environment Variables
Turborepo needs three environment variables to recognize a custom cache server.
| Variable | Role | Example |
|---|---|---|
| `TURBO_API` | Base URL of the cache server | `https://cache.internal.mycompany.com` |
| `TURBO_TEAM` | Team identifier for cache namespace separation | `my-team` |
| `TURBO_TOKEN` | Bearer authentication token | Random string |
Bearer authentication: A method of sending a token in the HTTP `Authorization` header as `Authorization: Bearer <token>`. The server validates requests by checking whether this token matches a registered value. Turborepo uses this method for all cache API requests.
Instead of environment variables, you can also hardcode the URL and team name in `.turbo/config.json`. For security, the token should always be kept in an environment variable.
```json
{
"apiUrl": "https://cache.internal.mycompany.com",
"teamId": "my-team"
}
```

The field name `teamId` may be confusing at first since it differs from the environment variable `TURBO_TEAM`. Both hold the same value — the JSON config file follows camelCase convention while environment variables use SCREAMING_SNAKE_CASE. Either approach works the same way.
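If you prefer environment variables only, the equivalent shell setup looks like this (all values are examples):

```bash
export TURBO_API="https://cache.internal.mycompany.com"
export TURBO_TEAM="my-team"
export TURBO_TOKEN="$(cat /path/to/secret)"  # never commit the token itself

pnpm turbo run build  # Turborepo picks up all three variables automatically
```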
Practical Implementation
Local Setup: Docker Compose + MinIO
The recommended first step is to spin up the cache server locally. `ducktors/turborepo-remote-cache` is a community-standard implementation built on Node.js (Fastify), with an official image on Docker Hub that you can use without any custom build.
MinIO: An open-source object storage that is fully compatible with the AWS S3 API. It lets you store and manage data in a private environment using the same approach as S3, making it a popular choice for building S3-compatible infrastructure without cloud dependency.
```yaml
# docker-compose.yml
services:
turborepo-cache:
image: ducktors/turborepo-remote-cache:latest
ports:
- "3000:3000"
environment:
- TURBO_TOKEN=my-secret-token # Recommended: generate with openssl rand -hex 32
- STORAGE_PROVIDER=s3
- S3_ACCESS_KEY=minioadmin
- S3_SECRET_KEY=minioadmin
- S3_BUCKET=turborepo-cache
- S3_ENDPOINT=http://minio:9000
- S3_REGION=us-east-1
depends_on:
minio:
condition: service_healthy
minio:
image: minio/minio
command: server /data --console-address ":9001"
ports:
- "9000:9000"
- "9001:9001" # MinIO web console
environment:
- MINIO_ROOT_USER=minioadmin
- MINIO_ROOT_PASSWORD=minioadmin
volumes:
- minio_data:/data
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 10s
timeout: 5s
retries: 5
volumes:
  minio_data:
```

For `TURBO_TOKEN`, it is recommended to use a random token generated with the following command.
```bash
openssl rand -hex 32
```

If you omit the `--console-address ":9001"` flag, you won't be able to access the MinIO web console at http://localhost:9001. I spent a while confused about why the console wouldn't open the first time I ran it without that flag.
The `healthcheck` + `condition: service_healthy` combination ensures the cache server starts only after MinIO is fully ready. Without this, the cache server may attempt to connect while MinIO is still initializing, causing connection errors that surface as repeated, hard-to-trace cache misses.
| Item | Role |
|---|---|
| `ducktors/turborepo-remote-cache` | Cache server implementing the Turborepo API spec |
| `minio` | S3-compatible object storage, the actual artifact store |
| `TURBO_TOKEN` | Bearer token; recommended to generate with `openssl rand -hex 32` |
| `S3_ENDPOINT` | Custom endpoint for using MinIO instead of AWS S3 |
Once the server is up, you can check its status at http://localhost:3000/health. Then, create the `turborepo-cache` bucket in the MinIO console (http://localhost:9001) in advance.
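Putting it together, a first-run check might look like the following. The `mc` commands are an optional alternative to clicking through the web console; the alias name `local` is arbitrary.

```bash
# Bring the stack up (run in the directory containing docker-compose.yml)
docker compose up -d

# Cache server health check
curl http://localhost:3000/health

# Optional: create the bucket with the MinIO client (mc) instead of the console
mc alias set local http://localhost:9000 minioadmin minioadmin
mc mb local/turborepo-cache
```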
Production Switch: Replacing Storage with AWS S3
Once you've validated with local MinIO, switching to AWS S3 in production is just a matter of changing a few environment variables. The file below is an override layered on top of `docker-compose.yml`, run by specifying both files together like `docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d`.
```yaml
# docker-compose.prod.yml
# Override file layered on top of docker-compose.yml
# Run: docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
services:
turborepo-cache:
environment:
- TURBO_TOKEN=${TURBO_TOKEN}
- STORAGE_PROVIDER=s3
- S3_ACCESS_KEY=${AWS_ACCESS_KEY_ID}
- S3_SECRET_KEY=${AWS_SECRET_ACCESS_KEY}
- S3_BUCKET=my-company-turborepo-cache
- S3_REGION=ap-northeast-2
      # Omitting S3_ENDPOINT uses the AWS S3 default
```

It is recommended to attach a Lifecycle policy to the S3 bucket. Many people only set this up after seeing their storage bill around six months later. Applying the configuration below when you first create the bucket will automatically expire artifacts older than 30 days.
```json
{
"Rules": [
{
"ID": "turborepo-cache-ttl",
"Status": "Enabled",
"Filter": { "Prefix": "" },
"Expiration": { "Days": 30 }
}
]
}
```

GitHub Actions Integration
Once the cache server is ready, connecting to CI takes just three environment variables. Honestly, this part was so simple that at first I suspected I was missing something.
```yaml
# .github/workflows/ci.yml
name: CI
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: pnpm/action-setup@v3
with:
version: 9
- uses: actions/setup-node@v4
with:
node-version: 20
cache: 'pnpm'
- run: pnpm install
- name: Build with Remote Cache
run: pnpm turbo run build test lint
env:
TURBO_API: ${{ secrets.TURBO_API }}
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
        TURBO_TEAM: ${{ vars.TURBO_TEAM }}
```

Register the following in your GitHub repo settings.
| Location | Key | Value |
|---|---|---|
| Settings → Secrets | `TURBO_API` | `https://cache.internal.mycompany.com` |
| Settings → Secrets | `TURBO_TOKEN` | The token configured on the cache server |
| Settings → Variables | `TURBO_TEAM` | `my-team` |
`TURBO_TEAM` is placed in Variables rather than Secrets because the team name is not sensitive information and it's fine if it appears in workflow logs.
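If you prefer the terminal over the web UI, the same values can be registered with the GitHub CLI. This sketch assumes `gh` is installed and authenticated for the repository; the values are the same placeholders as in the table above.

```bash
gh secret set TURBO_API --body "https://cache.internal.mycompany.com"
gh secret set TURBO_TOKEN --body "<token-configured-on-the-cache-server>"
gh variable set TURBO_TEAM --body "my-team"
```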
GitLab CI Integration
For teams using GitLab, register the same three variables under Settings → CI/CD → Variables and inject them in `.gitlab-ci.yml` as shown below.
```yaml
# .gitlab-ci.yml
variables:
TURBO_API: "${TURBO_API}"
TURBO_TOKEN: "${TURBO_TOKEN}"
TURBO_TEAM: "${TURBO_TEAM}"
build:
stage: build
script:
- pnpm install
- pnpm turbo run build test lint
cache:
key: pnpm-$CI_COMMIT_REF_SLUG
paths:
      - node_modules/.cache/pnpm
```

One point worth clarifying about the `cache` block: the cache here is GitLab's built-in cache, used to persist pnpm's package cache so that `pnpm install` completes faster. It is separate from Turborepo Remote Cache — each caches a different layer. GitLab cache reduces package installation time; Turborepo Remote Cache reduces build and test execution time. Used together, they complement each other.
Pros and Cons
Advantages
| Item | Details |
|---|---|
| Reduced build time | Can cut CI build time by 40–80% for large monorepos. Mercari Engineering reported 50% reduction in task duration and 30% reduction in overall CI job time |
| Reduced CI costs | Less runner usage time means lower costs on GitHub Actions paid plans or self-hosted runners |
| Shared cache across teams | Multiple workers and developer machines share the cache based on the same commit. If a colleague has already built before you open a PR, your CI can use their cache directly |
| Vendor independence | Full control over infrastructure without depending on external SaaS like Vercel platform or Nx Cloud |
| Simple API spec | Just two HTTP endpoints: PUT and GET. Easy to implement yourself or add a thin layer on top of an existing object store |
| Storage flexibility | Choose any backend: AWS S3, MinIO, GCS, DigitalOcean Spaces, local disk, and more |
Drawbacks and Caveats
This option isn't right for every team. Here are the honest downsides.
- Infrastructure maintenance burden — You are responsible for the cache server's availability, security patches, and upgrades. If your team lacks a dedicated DevOps person, this option may add more overhead than it saves. Railway one-click deployment is one way to reduce operational burden.
- Cache invalidation complexity — A bad cache hit can result in stale artifacts being deployed. Explicitly specifying the `inputs` field in `turbo.json` and auditing your environment variable list prevents most of these issues.
- Accumulating storage costs — S3 costs grow as artifacts pile up. It is recommended to set a TTL/Lifecycle policy to automatically expire artifacts older than 30 days.
- Initial cold cache — The first run has no cache, resulting in upload overhead only. The first run being slow is expected behavior; you'll feel the benefit starting from the second run.
- Dependency on change scope — Cache hit rates drop on large-change PRs, limiting the benefit. This works best alongside a culture of small, incremental commits.
- `turbo login`/`turbo link` not supported — Custom servers cannot use the `turbo login` or `turbo link` commands. Manual configuration via `.turbo/config.json` and environment variables is required.
Stale cache: A situation where inputs appear identical but have actually changed in a way that goes undetected, causing an outdated result from a previous run to be reused. Accurately specifying the file patterns that affect the build in the `inputs` array of `turbo.json` prevents most occurrences.
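As a concrete example, here is a minimal sketch of pinning build inputs. The glob patterns are illustrative and should match your own project layout; note that the top-level key is `pipeline` in Turborepo 1.x but `tasks` in 2.x.

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "inputs": ["src/**", "package.json", "tsconfig.json"],
      "outputs": ["dist/**", ".next/**"]
    }
  }
}
```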
The Most Common Mistakes in Practice
- Hardcoding
TURBO_TOKENin the code — Even if you put it in a.envfile and added it to.gitignore, writing the value directly in a CI workflow file embeds it in commit history. Always inject it through Secrets. - Exposing the cache server to the public internet over HTTP — Even with a token, plain HTTP is vulnerable to sniffing. If external access is required, always place TLS (HTTPS) in front of it. HTTP is acceptable for internal-only use.
- Not setting an S3 bucket Lifecycle policy — Easy to overlook, and many people only realize this after seeing their storage bill around six months later. It is recommended to apply a 30-day expiration policy when you first create the bucket, as shown in the sketch after this list.
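For that last point, here is a sketch of applying the 30-day policy from earlier with the AWS CLI, assuming the policy JSON is saved as `lifecycle.json` and using the bucket name from the production example:

```bash
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-company-turborepo-cache \
  --lifecycle-configuration file://lifecycle.json
```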
Closing Thoughts
The key takeaway is to never leave the token in your code — always inject it through Secrets. If you remember nothing else from this post, that alone makes it worthwhile. The cache server itself can be started with a single Docker Compose file and three environment variables, and based on the Mercari case study, it can cut CI build time by around 50%, with 40–80% achievable depending on your scale.
Here are three steps you can take right now.
- Copy the `docker-compose.yml` and run `docker compose up -d` to spin up the cache server locally — Confirm the server is running at http://localhost:3000/health, then create the `turborepo-cache` bucket in the MinIO console (http://localhost:9001) and you're ready.
- Validate the cache connection in your local environment — Create `.turbo/config.json` at the monorepo root with `apiUrl` set to `http://localhost:3000` (see the snippet after this list), then run `TURBO_TOKEN=my-secret-token pnpm turbo run build` twice. If you see `FULL TURBO` on the second run, the cache is working correctly.
- Once validated, deploy the server to an accessible URL and register `TURBO_API`, `TURBO_TOKEN`, and `TURBO_TEAM` in GitHub Secrets — Add them to the workflow `env` block as shown in the example above, and you'll feel the difference starting with your next PR.
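For the second step, the local `.turbo/config.json` would look like this, with values matching the local Docker Compose setup above:

```json
{
  "apiUrl": "http://localhost:3000",
  "teamId": "my-team"
}
```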
References
- Remote Caching | Turborepo Official Docs
- GitHub Actions Integration Guide | Turborepo Official Docs
- ducktors/turborepo-remote-cache | GitHub
- Custom Remote Caching | ducktors Official Docs
- Supported Storage Providers | ducktors Official Docs
- ducktors/turborepo-remote-cache | Docker Hub
- brunojppb/turbo-cache-server (Rust implementation) | GitHub
- pkarolyi/garden-snail (NestJS implementation) | GitHub
- Tapico/tapico-turborepo-remote-cache | GitHub
- rharkor/caching-for-turbo (using GitHub Actions built-in cache) | GitHub
- Accelerating CI with Turborepo Remote Cache | Mercari Engineering (2026.02)
- S3 + GitHub Actions Integration | januschung.github.io (2025.05)
- Official Announcement: Vercel Remote Cache Goes Free | Turborepo Blog
- Alternative remote caching hosts | Turborepo GitHub Discussions