Axum + SQLx + PostgreSQL + JWT + Docker Compose Production Stack: From Compile-Time SQL Validation to Container Deployment
I started my Rust backend with Actix-Web before switching to Axum, and honestly, the switch was not the wrong choice. The first thing I thought after making it was "why didn't I do this sooner?" Axum, built and maintained by the Tokio team, reached another level of maturity with the 0.8 release in late 2024, and just how convenient its Tower middleware integration is in practice is something you have to experience firsthand.
This article is aimed at developers who know basic Rust syntax and have experience building backends in other languages. It assumes you've at least encountered the concepts of ownership and lifetimes. The content covers the entire process of building a backend ready for real-world deployment using the Axum + SQLx + PostgreSQL + JWT + Docker Compose stack. The core value of this stack is that it catches SQL query errors and type mismatches at compile time, which drastically reduces database bugs you'd otherwise only discover after deployment.
One thing I'll be upfront about: this stack is not easy to get into. According to the Rust 2025 survey, 45.5% of respondents report using Rust in production, but concerns about complexity have grown alongside that adoption. It's true that Cloudflare, Discord, and Amazon have adopted Axum + Tokio for their backends, but behind that is the time their teams invested in getting comfortable with Rust. If you're willing to pay the learning cost, what this stack gives back is substantial.
Core Concepts
The Decisive Difference Between Axum and Actix-Web
Axum's biggest characteristic is that it didn't build its own middleware system. Instead, it uses the Tower ecosystem as-is. At first, it wasn't obvious to me why that was an advantage, but once you're in production combining CORS, rate limiting, compression, and authentication layers, you feel just how convenient it is to pull in already-battle-tested Tower middleware like plugins. Things that required a custom implementation or hunting for a separate crate in Actix-Web often come down to a single line from tower-http.
The second characteristic is the type-safe Extractor pattern. When you encapsulate the logic for pulling data out of a request using the FromRequestParts trait, Axum automatically extracts and validates it just from the type declaration in your handler's function signature. JWT authentication is the prime example of this pattern in action.
// Just from the handler signature, it's clear "this endpoint requires authentication"
async fn get_profile(
claims: Claims, // JWT extractor — automatically returns 401 if no token
State(state): State<AppState>,
) -> impl IntoResponse {
// claims.sub already has the verified user ID
}
Tower: An abstraction library for Rust async services and middleware. A single `Service` trait lets you compose HTTP servers, clients, and middleware chains consistently. Think of Axum as a thin layer on top of Tower.
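To make that plugin-style composition concrete, here is a minimal sketch of stacking tower-http middleware onto a router. It assumes tower-http with the `trace`, `cors`, and `compression-gzip` features enabled; the route and handler names are illustrative.

```rust
use axum::{routing::get, Router};
use tower_http::{compression::CompressionLayer, cors::CorsLayer, trace::TraceLayer};

async fn hello() -> &'static str {
    "hello"
}

fn app() -> Router {
    Router::new()
        .route("/", get(hello))
        // Each of these is pre-built, battle-tested Tower middleware:
        .layer(TraceLayer::new_for_http()) // request/response logging via tracing
        .layer(CorsLayer::permissive())    // wide-open CORS; tighten for production
        .layer(CompressionLayer::new())    // response compression
}
```

Each `.layer()` call is the "single line from tower-http" mentioned above; in Actix-Web, several of these would have required a dedicated crate or a hand-rolled implementation.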
Why SQLx Is Not an ORM — And Why That's a Strength
SQLx is not an ORM. You write raw SQL. At first you might think "so how is it different from a plain DB driver?", but the key difference is the query!() macro.
// This code actually connects to the DB at compile time to validate the query
let user = sqlx::query_as!(
UserProfile,
"SELECT id, email, name, created_at FROM users WHERE id = $1",
user_id
)
.fetch_one(&pool)
.await?;
The moment you run cargo build, SQLx connects to the DB and checks whether the id, email, name, and created_at columns actually exist in the users table and whether the types match. Column-name typos or type mismatches are caught as compile errors, not runtime errors. When I first experienced this, it felt genuinely novel: SQL errors are usually the kind of thing you "only find out about after deployment."
| Approach | Pros | Cons |
|---|---|---|
| query!() macro | Compile-time validation, automatic type mapping | Requires DB connection at build time |
| query_as!() | Maps directly to a struct | Same build-time DB requirement |
| query_unchecked!() | Can build without DB | Gives up type safety |
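For contrast, here is the same lookup written with the runtime-checked `sqlx::query_as` function (not the macro), which is essentially the "plain DB driver" mode the comparison above alludes to. This is a sketch; it assumes a `UserProfile` struct that derives `sqlx::FromRow`, as defined later in this article.

```rust
use sqlx::PgPool;

// Runtime-checked alternative: this builds without a DB, but a typo in a
// column name only surfaces as an error when the query actually executes.
async fn find_user(pool: &PgPool, user_id: i64) -> Result<UserProfile, sqlx::Error> {
    sqlx::query_as::<_, UserProfile>(
        "SELECT id, email, name, created_at FROM users WHERE id = $1",
    )
    .bind(user_id)
    .fetch_one(pool)
    .await
}
```

The SQL string here is opaque to the compiler, which is exactly the gap the `query!()` family closes.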
Offline Mode — Building Without a DB in CI
The only inconvenience with the query!() macro is that it requires a DB at build time. If spinning up a DB in CI every time is cumbersome, there's a way to save query metadata to a file using cargo sqlx prepare.
# 1. Generate metadata while local DB is running
cargo sqlx prepare
# 2. Include the generated .sqlx/ directory in Git
git add .sqlx/
git commit -m "chore: add sqlx query metadata"
After that, a single SQLX_OFFLINE=true environment variable lets CI build without a DB.
# GitHub Actions example
- name: Build
env:
SQLX_OFFLINE: true
run: cargo build --release
You only need to re-run cargo sqlx prepare locally and commit when you modify a query. If the .sqlx/ directory is missing, offline mode won't work, so it must be included in Git.
Two JWT Authentication Patterns — Which One to Choose
There are two main ways to apply JWT in Axum.
Pattern 1 — Extractor approach: Useful when only specific handlers need authentication. Just declare the Claims type as a handler parameter.
Pattern 2 — Tower middleware approach: Use this when you want to apply authentication to an entire group of routes at once. Wrapping with route_layer() automatically enforces authentication on all endpoints beneath it.
// Pattern 2: Protecting an entire route group with middleware
let protected_routes = Router::new()
.route("/profile", get(get_profile))
.route("/posts", post(create_post))
.route_layer(middleware::from_fn_with_state(
app_state.clone(),
jwt_auth_middleware,
));
let public_routes = Router::new()
.route("/auth/login", post(login))
.route("/auth/register", post(register))
.route("/health", get(health)); // health check — no auth required
let app = Router::new()
.merge(protected_routes)
.merge(public_routes)
.with_state(app_state);
It's useful to build the health check handler to also verify DB connectivity.
async fn health(State(state): State<AppState>) -> impl IntoResponse {
match sqlx::query("SELECT 1").execute(&state.db).await {
Ok(_) => StatusCode::OK,
Err(_) => StatusCode::SERVICE_UNAVAILABLE,
}
}
In practice, you use both patterns together. Use middleware for route groups that need protection, and use an Option<Claims> extractor when you need user info optionally within the same endpoint (personalized response if logged in, default response otherwise). Note that in Axum 0.8, Option<T> extraction goes through the new OptionalFromRequestParts trait, so truly optional behavior requires implementing that trait for Claims rather than relying on the rejection being swallowed.
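The `jwt_auth_middleware` referenced in the route-group example above is never shown in full. Here is a minimal sketch of what it could look like, assuming the `AppState`, `Claims`, and `AuthError` types defined later in this article:

```rust
use axum::{
    extract::{Request, State},
    http::header,
    middleware::Next,
    response::Response,
};
use jsonwebtoken::{decode, DecodingKey, Validation};

// Sketch of the middleware used with route_layer(middleware::from_fn_with_state(...)).
async fn jwt_auth_middleware(
    State(state): State<AppState>,
    mut req: Request,
    next: Next,
) -> Result<Response, AuthError> {
    // Pull "Authorization: Bearer <token>" out of the request headers
    let token = req
        .headers()
        .get(header::AUTHORIZATION)
        .and_then(|v| v.to_str().ok())
        .and_then(|v| v.strip_prefix("Bearer "))
        .ok_or(AuthError::MissingToken)?;

    let data = decode::<Claims>(
        token,
        &DecodingKey::from_secret(state.jwt_secret.as_bytes()),
        &Validation::default(),
    )
    .map_err(|_| AuthError::InvalidToken)?;

    // Make the verified claims available to downstream handlers via extensions
    req.extensions_mut().insert(data.claims);
    Ok(next.run(req).await)
}
```

Because `AuthError` implements `IntoResponse`, returning `Err` here short-circuits the request with a 401 before any protected handler runs.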
Practical Application
Step 1: Project Setup and AppState Configuration
First, add the necessary dependencies to Cargo.toml. This is based on Axum 0.8.
[dependencies]
axum = "0.8"
tokio = { version = "1", features = ["full"] }
# Enable "chrono" but not "time": if both are enabled, the query macros prefer time's types
sqlx = { version = "0.8", features = ["runtime-tokio-rustls", "postgres", "uuid", "chrono"] }
jsonwebtoken = "9"
axum-extra = { version = "0.9", features = ["typed-header"] }
tower-http = { version = "0.6", features = ["cors", "trace"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
dotenvy = "0.15"
chrono = { version = "0.4", features = ["serde"] }
It's good to define the DB schema first. Generate a migration file with sqlx migrate add create_users and add the following DDL.
-- migrations/20240101000001_create_users.sql
CREATE TABLE users (
id BIGSERIAL PRIMARY KEY,
email VARCHAR(255) UNIQUE NOT NULL,
name VARCHAR(100) NOT NULL,
password_hash TEXT NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW() NOT NULL
);
The key is how you design AppState. Since PgPool internally uses Arc, there's no need to wrap it in a separate Arc<AppState>. The reason for using Arc<str> for jwt_secret is so that when AppState is cloned, only the reference count increases rather than copying the string buffer each time. Using String works too, but in an environment where state is cloned per request, Arc<str> is more efficient.
use sqlx::PgPool;
use std::sync::Arc;
#[derive(Clone)]
pub struct AppState {
pub db: PgPool,
pub jwt_secret: Arc<str>, // On Clone, only ref count increments — no string buffer copy
}
impl AppState {
pub async fn new() -> Result<Self, Box<dyn std::error::Error>> {
dotenvy::dotenv().ok();
let database_url = std::env::var("DATABASE_URL")
.expect("DATABASE_URL must be set");
let jwt_secret = std::env::var("JWT_SECRET")
.expect("JWT_SECRET must be set");
let db = sqlx::postgres::PgPoolOptions::new()
.max_connections(10)
.acquire_timeout(std::time::Duration::from_secs(3))
.connect(&database_url)
.await?;
// Automatically apply migrations on app startup
sqlx::migrate!("./migrations").run(&db).await?;
Ok(Self {
db,
jwt_secret: jwt_secret.into(),
})
}
}
| Component | Role | Notes |
|---|---|---|
| PgPool | Connection pool management | Arc-based internally, no Clone cost |
| max_connections | Limits concurrent DB connections | Prevents overload |
| acquire_timeout | Wait time for acquiring a connection | For latency detection |
| sqlx::migrate!() | Auto-applies migrations on startup | No separate init container needed |
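The article never shows the entry point that ties this together, so here is a minimal sketch of a `main` that wires `AppState` into a running server. It assumes the `health` handler from the previous section; the bind address matches the port exposed in the Dockerfile later.

```rust
use axum::{routing::get, Router};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Structured logging via tracing; RUST_LOG controls verbosity
    tracing_subscriber::fmt()
        .with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
        .init();

    let state = AppState::new().await?; // connects the pool and runs migrations

    let app = Router::new()
        .route("/health", get(health))
        .with_state(state);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await?;
    axum::serve(listener, app).await?;
    Ok(())
}
```

Because `AppState::new()` already runs migrations, a freshly started container is fully initialized by the time the listener accepts its first request.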
Step 2: Full JWT Extractor Implementation
Here is the full code for implementing a JWT Claims extractor as FromRequestParts<AppState>. Because it receives AppState directly, it uses the already-parsed jwt_secret rather than reading the environment variable on every request. This matters because calling std::env::var() on every request introduces unnecessary overhead and conflicts with the design intent of storing the secret in AppState.
Native `async fn` in traits is supported since Rust 1.75. You can implement it directly as shown below without the `#[async_trait]` macro. If you're on 1.74 or below, add the `async-trait` crate and attach `#[async_trait]` to each `impl` block for the same behavior.
use axum::{
extract::FromRequestParts,
http::{request::Parts, StatusCode},
response::{IntoResponse, Response},
Json,
};
use axum_extra::{
headers::{authorization::Bearer, Authorization},
TypedHeader,
};
use jsonwebtoken::{decode, DecodingKey, Validation};
use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct Claims {
pub sub: i64, // User ID stored as i64 from the start — no parsing needed in handlers
pub exp: usize,
pub iat: usize,
}
#[derive(Debug)]
pub enum AuthError {
MissingToken,
InvalidToken,
ExpiredToken,
}
impl IntoResponse for AuthError {
fn into_response(self) -> Response {
let (status, message) = match self {
AuthError::MissingToken => (StatusCode::UNAUTHORIZED, "A token is required"),
AuthError::InvalidToken => (StatusCode::UNAUTHORIZED, "Invalid token"),
AuthError::ExpiredToken => (StatusCode::UNAUTHORIZED, "Token has expired"),
};
(status, Json(serde_json::json!({ "error": message }))).into_response()
}
}
// Receives AppState directly to access jwt_secret — no env::var call per request
impl FromRequestParts<AppState> for Claims {
type Rejection = AuthError;
async fn from_request_parts(
parts: &mut Parts,
state: &AppState,
) -> Result<Self, Self::Rejection> {
let TypedHeader(Authorization(bearer)) =
TypedHeader::<Authorization<Bearer>>::from_request_parts(parts, state)
.await
.map_err(|_| AuthError::MissingToken)?;
let token_data = decode::<Claims>(
bearer.token(),
&DecodingKey::from_secret(state.jwt_secret.as_bytes()),
&Validation::default(),
)
.map_err(|e| match e.kind() {
jsonwebtoken::errors::ErrorKind::ExpiredSignature => AuthError::ExpiredToken,
_ => AuthError::InvalidToken,
})?;
Ok(token_data.claims)
}
}
In handlers, you use it like this. Since claims.sub is already i64, no separate parsing is needed, and there's no trap like unwrap_or_default() returning 0 on parse failure.
#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct UserProfile {
pub id: i64,
pub email: String,
pub name: String,
pub created_at: chrono::DateTime<chrono::Utc>,
}
pub async fn get_my_profile(
claims: Claims,
State(state): State<AppState>,
) -> impl IntoResponse {
let result = sqlx::query_as!(
UserProfile,
"SELECT id, email, name, created_at FROM users WHERE id = $1",
claims.sub // Used directly as i64
)
.fetch_optional(&state.db)
.await;
match result {
Ok(Some(profile)) => (StatusCode::OK, Json(profile)).into_response(),
Ok(None) => StatusCode::NOT_FOUND.into_response(),
Err(_) => StatusCode::INTERNAL_SERVER_ERROR.into_response(),
}
}
Step 3: Multi-Stage Dockerfile and Docker Compose
Rust's long build times can be partially mitigated with Docker layer caching. The key is copying Cargo.toml and Cargo.lock before the source code to cache the dependency layer separately. The first build is slow, but when only the source changes, subsequent builds reuse the dependency layer as-is.
musl libc: Statically linking with musl instead of GNU libc produces a self-contained binary with no external library dependencies. The `x86_64-unknown-linux-musl` target exists exactly for this purpose. As a result, you can use `scratch` (essentially an empty image) as the runtime image, bringing the final image size down to the 10–50 MB range.
# ---- Build Stage ----
FROM rust:alpine AS builder
# If you want to pin the version, you can specify something like rust:1.82-alpine
RUN apk add --no-cache musl-dev pkgconfig openssl-dev
WORKDIR /app
# Dependency layer caching: copy Cargo files first to separate dependencies into their own layer
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo 'fn main() {}' > src/main.rs
RUN cargo build --release --target x86_64-unknown-linux-musl
RUN rm -f target/x86_64-unknown-linux-musl/release/deps/api*
# Copy actual source and build
COPY . .
# Uses .sqlx/ metadata so the build needs no DB (a comment cannot trail an ENV instruction on the same line)
ENV SQLX_OFFLINE=true
RUN cargo build --release --target x86_64-unknown-linux-musl
# ---- Runtime Stage ----
FROM scratch
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/api /api
COPY --from=builder /app/migrations /migrations
EXPOSE 3000
CMD ["/api"]
# docker-compose.yml
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
environment:
DATABASE_URL: postgres://user:pass@db:5432/mydb
JWT_SECRET: ${JWT_SECRET} # Injected from .env — never hardcode in plaintext
RUST_LOG: info
depends_on:
db:
condition: service_healthy # Without this line, intermittent connection failures occur
restart: unless-stopped
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: mydb
healthcheck:
test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
interval: 5s
timeout: 3s
retries: 5
start_period: 10s # Grace period for data file initialization during first startup
volumes:
- pgdata:/var/lib/postgresql/data
restart: unless-stopped
volumes:
pgdata:
You must specify condition: service_healthy in depends_on. Simply writing depends_on: [db] only means the PostgreSQL process has started, not that it's actually ready to accept connections. start_period: 10s gives PostgreSQL a grace period during the health check for the time it takes to initialize data files on first startup. I once deployed without this setting and spent a while puzzling over why the connection sometimes failed; it was this.
Pros and Cons Analysis
Here's an honest summary of what I've felt using this stack in production. The advantages are clear, but so are the trade-offs, so I hope this helps you judge "is this right for our team?"
Advantages
| Item | Details |
|---|---|
| Compile-time SQL validation | The query!() macro connects to the DB at build time to verify query correctness, eliminating runtime SQL errors at the source |
| Extreme performance | Rust zero-cost abstractions + Tokio async runtime deliver throughput equal to or better than Go, with dramatically reduced memory usage |
| Tower middleware ecosystem | Rate limiting, CORS, compression, and authentication can be composed from battle-tested components |
| Type-safe state sharing | PgPool is Arc-based, so it can be safely shared via .with_state() without additional wrapping |
| Ultra-small container images | Multi-stage builds compress 1GB+ build images down to 10–50 MB runtime images |
| Memory safety | Ownership model prevents dangling pointers and race conditions at compile time without GC |
Drawbacks and Caveats
The most common issue I hit in production is compile time. No matter how many times I've done it, waiting several minutes for a full build never gets comfortable. Combining sccache with Docker layer caching in CI makes subsequent builds meaningfully faster.
| Item | Details | Mitigation |
|---|---|---|
| Slow compile times | Full initial builds can take several minutes | sccache + Docker dependency layer caching |
| query!() build dependency | Requires a DB connection at compile time, complicating CI setup | Commit .sqlx/ offline snapshot to Git via cargo sqlx prepare |
| JWT secret exposure risk | Storing secrets in plaintext env vars risks exposure in logs or dumps | Docker Secrets or AWS Secrets Manager / Vault integration |
| Learning curve | Ownership and lifetime concepts have a higher barrier to entry than other languages | Recommended to build a foundation with rustlings and The Rust Book first |
| Migration timing | Running migrations on app startup can cause issues during rollback | Consider running separately in a dedicated init container or deployment pipeline |
HS256 vs RS256: The default algorithm in `jsonwebtoken` is HS256 (symmetric key). In environments where multiple services need to verify tokens, such as microservices, RS256 (asymmetric key) is recommended: verifiers only need the public key, so the signing key never has to be shared.
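For completeness, issuing a token with HS256 is a single call to `jsonwebtoken::encode`. The sketch below assumes the `Claims` struct from Step 2; the 24-hour lifetime is an arbitrary choice.

```rust
use chrono::{Duration, Utc};
use jsonwebtoken::{encode, EncodingKey, Header};

// Header::default() selects HS256. For RS256 you would instead use
// Header::new(jsonwebtoken::Algorithm::RS256) together with
// EncodingKey::from_rsa_pem(...) so only the signer holds the private key.
fn issue_token(user_id: i64, secret: &str) -> Result<String, jsonwebtoken::errors::Error> {
    let now = Utc::now();
    let claims = Claims {
        sub: user_id,
        iat: now.timestamp() as usize,
        exp: (now + Duration::hours(24)).timestamp() as usize, // arbitrary 24h lifetime
    };
    encode(&Header::default(), &claims, &EncodingKey::from_secret(secret.as_bytes()))
}
```

A login handler would call this after verifying the password hash and return the string to the client as the bearer token.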
Most Common Mistakes in Practice
- Missing `condition: service_healthy` in `depends_on`: Simply writing `depends_on: [db]` only means the PostgreSQL process has started, not that it's ready to accept connections. Deploying without the health check condition leads to intermittent failures where the app can't connect to the DB and exits.
- Forgetting to prepare offline mode in CI: The `query!()` macro requires a DB during the build phase. Setting `SQLX_OFFLINE=true` without committing the `.sqlx/` directory to Git will cause the build itself to fail. Always run `cargo sqlx prepare` locally and commit the result.
- Hardcoding JWT secrets in `docker-compose.yml`: Writing something like `JWT_SECRET: supersecret` directly in the Compose file leaves it permanently in Git history. Always use a `.env` file + `.gitignore` combination, or an external secret management tool.
Closing Thoughts
Once you actually put this stack into a real service, there's something you come to feel. After SQL typos, type mismatches, and concurrency bugs are all caught before deployment, the pattern of production incidents changes. Instead of debugging "why isn't this query working?" at 3am, you get to focus on business logic-level problems. I won't pretend the barrier to entry is low, but the experience of having the compiler block SQL errors, type mismatches, and race conditions all at once is the kind of thing that makes it hard to go back once you've felt it.
Three steps to get started right now:
- Environment setup: Add the musl target with `rustup target add x86_64-unknown-linux-musl`, and install the SQLx CLI with `cargo install sqlx-cli --features postgres`. You can start by creating a project with `cargo new my-api` and adding the `Cargo.toml` dependencies above.
- Verify local DB connection: Spin up PostgreSQL with `docker run -p 5432:5432 -e POSTGRES_PASSWORD=pass -e POSTGRES_DB=mydb postgres:16-alpine`, and write `DATABASE_URL=postgres://postgres:pass@localhost:5432/mydb` and `JWT_SECRET=dev-secret` to a `.env` file. Apply migrations with `sqlx database create && sqlx migrate run`, run the app with `cargo run`, and verify DB connectivity with `curl http://localhost:3000/health`.
- Containerize: Before adding the multi-stage `Dockerfile` and `docker-compose.yml` above to your project root, first commit the offline metadata with `cargo sqlx prepare && git add .sqlx/`. Thanks to `SQLX_OFFLINE=true` in the Dockerfile, the container build will pass without a DB. Then run `docker compose up --build` and confirm the app starts after the DB health check passes.
References
- JWT Authentication in Rust using Axum Framework 2025 | CodevoWeb
- Getting started with REST API in Rust using Axum, SQLx, PostgreSQL, Redis, JWT, Docker | SHEROZ.COM
- GitHub — sheroz/axum-rest-api-sample
- GitHub — wpcodevo/rust-axum-jwt-rs256
- Rust CRUD Rest API, using Axum, SQLx, Postgres, Docker and Docker Compose | DEV Community
- The Ultimate Guide to Axum: From Hello World to Production in Rust (2025) | Shuttle
- Building Production Web Services with Rust and Axum | DASRoot
- A Guide to Rust ORMs in 2025 | Shuttle
- SQLx GitHub (launchbadge/sqlx)
- SQLx integration in Axum | mo8it.com
- axum/examples/sqlx-postgres — Official Axum Repo
- axum/examples/jwt — Official Axum Repo
- JWT Authentication with Axum | Shuttle Docs
- Building a Custom Authentication Layer in Axum | Leapcell
- Authentication with Axum | mattrighetti (2025-05-03)
- How to Build Tower Middleware for Auth and Logging in Axum | OneUptime
- How to Instrument Rust Axum Applications with OpenTelemetry | OneUptime
- Rust Async in Production: Tokio, Axum, High-Performance APIs in 2026
- Rust 2025 Survey: 45.5% Adoption | ByteIota
- Axum in 2026 — Health Score & Ecosystem Analysis | Stackwise
- GitHub — koskeller/axum-postgres-template
- GitHub — JohnScience/axum-docker-compose-sqlx
- Rust Web Frameworks in 2026: Axum vs Actix Web vs Rocket | Medium