Automatically Syncing EKS Multi-Cluster Secrets Without Vault — AWS Secrets Manager + IRSA + ESO in Practice
If you find yourself exchanging uneasy glances with teammates every time Vault comes up in conversation, you're not alone. HA configuration, automated unseal, backup policies, incident runbooks… the moment Vault itself becomes another critical piece of infrastructure, it's only natural to wonder, "Do we really need this?"
If you're running on AWS, there's a different way to approach this problem. By combining External Secrets Operator (ESO) with AWS Secrets Manager, you can build a production-grade EKS multi-cluster secret synchronization pipeline without Vault. AWS Secrets Manager is an AWS-managed service backed by an SLA, so there are no servers to operate yourself, and by authenticating with IRSA (IAM Roles for Service Accounts), you never have to place a static Access Key anywhere in the cluster. ESO uses a Kubernetes resource called ExternalSecret to automatically sync values from Secrets Manager into native Kubernetes Secret objects. If these terms are unfamiliar, don't worry — we'll break them down step by step below.
In this post, we'll walk through "Why IRSA?", "How do ClusterSecretStore and ExternalSecret connect?", and "What IAM structure should you use in a multi-cluster setup?" If you have experience running EKS, you can follow along right away. If Kubernetes is new to you, reading through the concepts section should give you a clear picture of the overall flow.
Core Concepts
What External Secrets Operator Does
ESO is a Kubernetes operator. It reads values from external secret stores — such as AWS Secrets Manager, GCP Secret Manager, and Azure Key Vault — and automatically syncs them into native Secret objects inside the cluster. The key is that this synchronization is declarative. You declare in YAML which external keys should become which Kubernetes Secrets, and the ESO controller periodically calls the API to keep values in sync.
ESO provides three CRDs:
| Resource | Scope | Role |
|---|---|---|
| SecretStore | Namespace | Store connection for a specific namespace |
| ClusterSecretStore | Cluster-wide | A shared store accessible by all namespaces |
| ExternalSecret | Namespace | Maps external secret keys → Kubernetes Secrets |
In multi-cluster environments, ClusterSecretStore is the primary choice. Configure the store once per cluster, and every ExternalSecret in every namespace can reference it.
CNCF Sandbox Project: ESO is currently a CNCF Sandbox project, rapidly stabilizing in the v0.9–v0.10 range. Recent additions include PushSecret (Kubernetes → external sync direction), Generator resources, and improved metrics.
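The declarative model is worth internalizing before touching any AWS setup. As a toy sketch (not ESO's actual implementation), here is one reconcile pass in Python: `fake_store`, `external_secret`, and `sync` are illustrative stand-ins for Secrets Manager, the ExternalSecret resource, and the controller's sync logic.

```python
import base64

# Toy stand-in for AWS Secrets Manager: secret name -> parsed JSON value.
fake_store = {"prod/my-app/database": {"username": "app", "password": "s3cret"}}

# The "ExternalSecret": a declaration of which remote key/property should
# become which key in which Kubernetes Secret. No secret values live here.
external_secret = {
    "target": "db-secret",
    "data": [
        {"secretKey": "DB_USERNAME", "key": "prod/my-app/database", "property": "username"},
        {"secretKey": "DB_PASSWORD", "key": "prod/my-app/database", "property": "password"},
    ],
}

def sync(declaration, store):
    """One reconcile pass: read remote values, emit a k8s-style Secret body."""
    data = {}
    for entry in declaration["data"]:
        value = store[entry["key"]][entry["property"]]
        data[entry["secretKey"]] = base64.b64encode(value.encode()).decode()
    return {"metadata": {"name": declaration["target"]}, "data": data}

secret = sync(external_secret, fake_store)
print(secret["data"]["DB_USERNAME"])  # YXBw (base64 of "app")
```

The real controller repeats this pass on every refreshInterval, which is what keeps the Kubernetes Secret converged on the external value.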
How IRSA Solves Authentication
Honestly, the first time I encountered IRSA, I thought "Can this actually work?" Assuming an IAM role without an Access Key felt like magic.
Here's how it works. An EKS cluster has an OIDC (OpenID Connect) provider. When you annotate a Kubernetes ServiceAccount with an IAM role ARN, the token issued to pods running under that ServiceAccount is verified by AWS STS via OIDC. Once verification passes, sts:AssumeRoleWithWebIdentity issues temporary credentials.
ESO Pod
└─ ServiceAccount (annotated: eks.amazonaws.com/role-arn)
└─ EKS OIDC Provider signs the token
└─ AWS STS verifies → issues temporary credentials
└─ Secrets Manager API calls are now possible

Once you understand this flow, it becomes clear where the IAM Trust Policy and Permission Policy each belong. Both are shown in code in the examples below.
EKS Pod Identity (2024~): In late 2023, AWS released EKS Pod Identity as the successor to IRSA. It directly associates IAM roles with pods without needing a separate OIDC provider setup. Pod Identity is recommended for new clusters, and ESO officially supports it. This post uses IRSA as it remains the most widely adopted approach, but the structural flow is identical.
Multi-Cluster Sync Flow at a Glance
Let's assume three clusters: dev, staging, and prod. Each is an independent EKS cluster, and secrets are centralized in a single AWS Secrets Manager.
AWS Secrets Manager (central)
├─ prod/my-app/database
├─ staging/my-app/database
└─ dev/my-app/database
EKS-prod EKS-staging EKS-dev
├─ ESO (Helm installed) ├─ ESO ├─ ESO
├─ ClusterSecretStore ├─ ClusterSecretStore ├─ ClusterSecretStore
│ └─ IRSA role-prod │ └─ IRSA role-staging │ └─ IRSA role-dev
└─ ExternalSecret └─ ExternalSecret └─ ExternalSecret
→ k8s Secret db-secret → k8s Secret db-secret → k8s Secret db-secret

Each cluster's IRSA role has an IAM policy attached that grants read access only to secrets under that environment's path. Preventing a prod cluster from reading dev paths is handled at the IAM policy level.
Practical Application
The four patterns below can each be used independently. Feel free to jump to whichever section fits your situation.
Basic Pattern: Connecting Multiple Clusters Within a Single Account
This is the most common setup when first adopting ESO. dev/staging/prod clusters within the same AWS account all point to a shared Secrets Manager. It's a great starting point for initial infrastructure setup or small teams who want to move fast.
Step 1: Create the IRSA Role
You need an IAM role to attach to the ESO ServiceAccount. The role has two policies.
Trust Policy — defines which ServiceAccount can assume this role.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::<ACCOUNT-ID>:oidc-provider/oidc.eks.ap-northeast-2.amazonaws.com/id/<OIDC-ID>"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.eks.ap-northeast-2.amazonaws.com/id/<OIDC-ID>:sub": "system:serviceaccount:external-secrets:external-secrets",
"oidc.eks.ap-northeast-2.amazonaws.com/id/<OIDC-ID>:aud": "sts.amazonaws.com"
}
}
}
]
}

Permission Policy — defines what ESO can actually do in Secrets Manager.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:ListSecretVersionIds"
],
"Resource": "arn:aws:secretsmanager:ap-northeast-2:<ACCOUNT-ID>:secret:prod/*"
}
]
}

Restricting the Resource path per environment (prod/*, staging/*, dev/*) naturally enforces secret isolation between clusters at the IAM level.
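Since the three environment policies differ only in the path prefix, they can be generated rather than hand-written. An illustrative sketch (the `permission_policy` helper and the dummy account ID are not part of any AWS tooling), which makes the isolation rule explicit in one place:

```python
import json

def permission_policy(account_id, env, region="ap-northeast-2"):
    """Render the read-only Secrets Manager policy scoped to one environment's path."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:ListSecretVersionIds",
            ],
            # Only secrets under <env>/* are readable by this environment's role.
            "Resource": f"arn:aws:secretsmanager:{region}:{account_id}:secret:{env}/*",
        }],
    }

# One policy per environment, each unable to read the others' paths.
for env in ("dev", "staging", "prod"):
    print(json.dumps(permission_policy("123456789012", env)["Statement"][0]["Resource"]))
```

In practice you would feed this kind of template through Terraform or CloudFormation rather than a script, but the shape of the per-environment restriction is the same.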
Step 2: Install ESO via Helm
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets \
external-secrets/external-secrets \
-n external-secrets \
--create-namespace \
--values eso-values.yaml

Passing the eks.amazonaws.com/role-arn annotation directly via Helm --set flags often fails due to differing escape rules across shells (bash, zsh, fish). Creating a separate values.yaml is far more reliable.
# eso-values.yaml
serviceAccount:
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT-ID>:role/eso-role-prod

Step 3: Define the ClusterSecretStore
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
name: aws-secrets-manager
spec:
provider:
aws:
service: SecretsManager
region: ap-northeast-2
auth:
jwt:
serviceAccountRef:
name: external-secrets
namespace: external-secrets

| Field | Description |
|---|---|
| service: SecretsManager | Designates AWS Secrets Manager as the provider |
| auth.jwt.serviceAccountRef | References the ServiceAccount configured with IRSA |
| region | The region where secrets are stored |
Step 4: Create Kubernetes Secrets with ExternalSecret
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: app-db-credentials
namespace: my-app
spec:
refreshInterval: 1h
secretStoreRef:
name: aws-secrets-manager
kind: ClusterSecretStore
target:
name: db-secret
creationPolicy: Owner
data:
- secretKey: DB_PASSWORD
remoteRef:
key: prod/my-app/database
property: password
- secretKey: DB_USERNAME
remoteRef:
key: prod/my-app/database
property: username

| Field | Description |
|---|---|
| refreshInterval | Frequency of Secrets Manager API calls (tradeoff between cost and freshness) |
| secretStoreRef | Name of the ClusterSecretStore to reference |
| target.creationPolicy: Owner | ESO becomes the Secret owner, so the Secret is cleaned up when the ExternalSecret is deleted |
| remoteRef.key | The secret name in Secrets Manager |
| remoteRef.property | A specific key within the JSON secret |
This will automatically create a Kubernetes Secret named db-secret in the my-app namespace, refreshed with the latest values from Secrets Manager on every refreshInterval.
Cost estimation example: With 50 ExternalSecret resources and refreshInterval: 30m, you'll generate 2,400 calls per day — roughly 72,000 API calls per month. At $0.05 per 10,000 Secrets Manager API calls, that's about $0.36/month in API costs. Add the per-secret storage cost ($0.40/secret/month) and scale to your own usage to get your actual number.
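The arithmetic in the callout generalizes into a small helper. A rough sketch, assuming one API call per ExternalSecret per refresh, a 30-day month, and the prices quoted above; check current pricing for your region before relying on the numbers:

```python
def monthly_cost(num_secrets, refresh_minutes,
                 price_per_10k_calls=0.05, storage_per_secret=0.40):
    """Rough monthly (api_cost, storage_cost) in USD, assuming one API call
    per ExternalSecret per refresh and a 30-day month."""
    calls_per_day = num_secrets * (24 * 60 // refresh_minutes)
    api_cost = calls_per_day * 30 / 10_000 * price_per_10k_calls
    return api_cost, num_secrets * storage_per_secret

api, storage = monthly_cost(50, 30)
print(f"API: ${api:.2f}/mo, storage: ${storage:.2f}/mo")  # API: $0.36/mo, storage: $20.00/mo
```

Note how the storage side dominates at this scale: API calls cost cents, while 50 secrets cost $20/month just to exist. That's the lever to watch when deciding what belongs in Secrets Manager versus Parameter Store.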
Cross-Account Pattern: Centralized Secret Management with a Hub-Spoke Architecture
This is the right pattern when AWS accounts are separated by team or environment. I struggled a bit with the concept of IAM role chaining when first implementing this, but once the structure clicks, it's simpler than it looks.
[Each cluster account] [Central secrets account]
IRSA role (<CLUSTER-ACCOUNT-ID>) → Central role (<CENTRAL-ACCOUNT-ID>)
└─ sts:AssumeRole allowed └─ secretsmanager:GetSecretValue, etc.

Central account role — Trust Policy
Register each cluster account's IRSA role as a Principal.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::<CLUSTER-A-ACCOUNT-ID>:role/eso-role-cluster-a",
"arn:aws:iam::<CLUSTER-B-ACCOUNT-ID>:role/eso-role-cluster-b"
]
},
"Action": "sts:AssumeRole"
}
]
}

Central account role — Permission Policy
A trust policy alone lets you assume the role but doesn't grant any capabilities. You also need to attach Secrets Manager access permissions.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:ListSecretVersionIds"
],
"Resource": "arn:aws:secretsmanager:ap-northeast-2:<CENTRAL-ACCOUNT-ID>:secret:*"
}
]
}

Adding the role field to ClusterSecretStore
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
name: aws-secrets-manager-central
spec:
provider:
aws:
service: SecretsManager
region: ap-northeast-2
role: arn:aws:iam::<CENTRAL-ACCOUNT-ID>:role/eso-secrets-reader
auth:
jwt:
serviceAccountRef:
name: external-secrets
namespace: external-secrets

Adding the single role field is all it takes for ESO to assume an IAM role in another account and read secrets from it.
GitOps Integration Pattern: Deploying ExternalSecrets with ArgoCD
This is the combination I see most often in production today. When you're running GitOps with ArgoCD and hit the dilemma of "I can't put secrets in Git, but managing them separately by hand is a pain," this pattern is the answer.
ExternalSecret YAML contains only references to secrets, not the actual values, so committing it to Git never exposes sensitive information.
Git Repository
└─ manifests/
├─ deployment.yaml
├─ service.yaml
└─ external-secret.yaml ← No sensitive data, safe to store in Git
ArgoCD deploys external-secret.yaml
└─ ESO pulls values from Secrets Manager
└─ Kubernetes Secret created automatically

# external-secret.yaml (file stored in Git)
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: app-secrets
namespace: production
annotations:
argocd.argoproj.io/sync-wave: "-1"
spec:
refreshInterval: 30m
secretStoreRef:
name: aws-secrets-manager
kind: ClusterSecretStore
target:
name: app-secrets
creationPolicy: Owner
dataFrom:
- extract:
key: prod/my-app/all-secrets
dataFrom.extract: Instead of individual data entries, this converts all key-value pairs from a JSON secret into a Kubernetes Secret at once. When you have more than ten keys, this is far more convenient than listing individual data entries one by one.
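What extract does can be mimicked in a few lines. A hedged sketch, not ESO's code: every top-level key of the JSON secret becomes one base64-encoded entry in the resulting Secret's data map.

```python
import base64
import json

# One JSON secret as stored in Secrets Manager under a single name.
remote_value = json.dumps({"DB_HOST": "db.internal", "DB_USER": "app", "DB_PASS": "s3cret"})

# What dataFrom.extract effectively does: every top-level key becomes an
# entry in the resulting Kubernetes Secret's (base64-encoded) data map.
k8s_data = {key: base64.b64encode(str(val).encode()).decode()
            for key, val in json.loads(remote_value).items()}

print(sorted(k8s_data))  # ['DB_HOST', 'DB_PASS', 'DB_USER']
```

The tradeoff versus individual data entries: you lose per-key renaming, so the key names in Secrets Manager must already match what your application expects.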
Secret Rotation Automation: Automating Pod Restarts with Reloader
When a secret is rotated in AWS Secrets Manager, ESO will update the Kubernetes Secret on the next refreshInterval. But here's an important nuance.
Whether a pod restart is needed depends on how the secret is injected. When secrets are mounted as volumes, kubelet automatically updates the files on disk, so no pod restart is required. However, when injected as environment variables (env.valueFrom.secretKeyRef), the pod reads the value only once at startup — so even if the Secret changes, the pod continues using the old value. A restart is only needed in this case.
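The difference between the two injection modes can be simulated outside Kubernetes. A toy demonstration: the "env var" is a value captured once at startup, while the "mounted file" is re-read after rotation (the temp file stands in for the kubelet-managed volume).

```python
import os
import tempfile

# Simulate the two injection modes. The "mounted file" stands in for a
# Secret volume that kubelet rewrites on rotation.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as f:
    f.write("old-password")
    mount_path = f.name

env_value = "old-password"  # what the pod captured via secretKeyRef at startup

# ... the secret rotates: ESO updates the Secret, kubelet rewrites the file ...
with open(mount_path, "w") as f:
    f.write("new-password")

with open(mount_path) as f:
    file_value = f.read()  # volume-mounted consumers see the fresh value

print(env_value, file_value)  # old-password new-password
os.unlink(mount_path)
```

One caveat on the volume path: your application must actually re-read the file (or watch it) to benefit; a process that reads the file once at startup has the same staleness problem as an env var.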
Using Reloader alongside ESO handles the environment variable injection case by detecting Secret changes and automatically triggering rolling restarts.
helm install reloader stakater/reloader -n reloader --create-namespace

apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
annotations:
secret.reloader.stakater.com/reload: "app-secrets"
spec:
# ...

When app-secrets changes, Reloader detects it and triggers a rolling restart of the associated Deployment. This pattern lets you rotate credentials on short cycles with zero downtime.
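The trigger mechanism behind this boils down to hashing the Secret's contents into a pod-template annotation: any change to the pod template starts a rolling update, so a new hash forces new pods. A simplified sketch (`secret_checksum` and the annotation key are illustrative, not Reloader's actual internals):

```python
import hashlib
import json

def secret_checksum(secret_data):
    """Stable hash of a Secret's contents. Reloader-style controllers write
    a value like this into the pod template's annotations; changing the
    template is what makes the Deployment controller roll the pods."""
    payload = json.dumps(secret_data, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:16]

old = secret_checksum({"DB_PASSWORD": "rotated-out"})
new = secret_checksum({"DB_PASSWORD": "rotated-in"})

# A rotation changes the checksum, so the patched pod template differs and a
# rolling restart begins; identical data leaves the template untouched.
print(new != old)  # True
```

This is also why the restart is zero-downtime: it rides the Deployment's ordinary rolling-update machinery rather than killing pods directly.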
Pros and Cons
Advantages
| Item | Detail |
|---|---|
| Eliminates Vault operational burden | Uses AWS-managed services without HA setup, unseal, or backups |
| Zero static credentials | IRSA/Pod Identity eliminates the need for Access Keys; all API calls are auditable via CloudTrail |
| Automatic synchronization | Secret rotations are automatically reflected in the cluster via refreshInterval |
| Multi-namespace centralization | A single ClusterSecretStore serves all namespaces from the same store |
| GitOps-friendly | ExternalSecret is a pure reference object with no sensitive values — safe to store in Git |
| Provider extensibility | Switching to GCP/Azure only requires changing the provider, not the interface |
Disadvantages and Caveats
| Item | Detail | Mitigation |
|---|---|---|
| AWS cost | $0.40/secret/month + $0.05 per 10,000 API calls | Increase refreshInterval appropriately, or use SSM Parameter Store (Standard Tier is free) for simple values |
| Sync delay | Maximum delay equals refreshInterval | For urgent updates, manually trigger with kubectl annotate externalsecret app-secrets force-sync=$(date +%s) -n my-app |
| Repeated per-cluster setup | ESO Helm deployment + IRSA configuration must be done for each cluster | Automate with a Terraform module or ArgoCD ApplicationSet |
| OIDC provider management | Each cluster has a different OIDC endpoint requiring separate entries in IAM trust policies | Migrate new clusters to EKS Pod Identity to reduce complexity |
| ClusterSecretStore permission scope | Misconfiguration can grant all namespaces access to sensitive secrets | Restrict access scope with namespaceSelector or an explicit namespaces list |
SSM Parameter Store: AWS Systems Manager's parameter store, free at the Standard Tier. Well-suited for storing simple string values, and can be used with ESO by switching to
service: ParameterStore. For JSON-structured secrets or automated rotation, Secrets Manager is the better fit.
The Most Common Mistakes in Practice
1. Specifying the wrong namespace for the ClusterSecretStore ServiceAccount
Early on, I wasted 30 minutes by setting the namespace in serviceAccountRef to the application namespace (my-app). The value here must be the namespace where ESO is installed (external-secrets). Setting it to the app namespace leaves the Secret uncreated, with the ExternalSecret stuck reporting a SecretSyncedError condition.
2. Omitting OIDC conditions from the IAM role trust policy
Without specifying the exact ServiceAccount in StringEquals — like system:serviceaccount:external-secrets:external-secrets — any ServiceAccount under that OIDC provider can assume the role. It's easy to let this slide with "it works for now," but it can lead to unintended privilege escalation. Always specify the condition explicitly.
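The risk is easy to see in a toy model of the STS check. `role_assumable_by` is illustrative only; it mimics how a missing StringEquals condition widens who can assume the role.

```python
def role_assumable_by(sub_claim, condition_sub):
    """Toy model of the StringEquals check STS applies to the token's sub
    claim. With no condition, any ServiceAccount behind the same OIDC
    provider can assume the role."""
    if condition_sub is None:
        return True  # the over-broad trust policy this section warns about
    return sub_claim == condition_sub

eso_sub = "system:serviceaccount:external-secrets:external-secrets"
rogue_sub = "system:serviceaccount:my-app:default"

print(role_assumable_by(rogue_sub, None))     # True: any pod's SA gets the role
print(role_assumable_by(rogue_sub, eso_sub))  # False: locked to ESO's SA
```

The real check happens inside AWS STS against the signed OIDC token, but the logic is the same: no condition means the entire cluster is the trust boundary, not one ServiceAccount.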
3. Setting refreshInterval too short
If you set refreshInterval: 30s while testing in a dev environment and forget to change it, dozens of ExternalSecret resources can generate hundreds of API calls per minute. It happened to me once — I only noticed when the AWS bill arrived. Start with the default of 1h and adjust as needed.
Closing Thoughts
After putting this setup into production, the first change I noticed was that secret-related Slack alerts quietly disappeared from the deployment pipeline. Messages like "I rotated the DB password but it's not showing up in the cluster" stopped coming in — secret rotation just works on its own.
The ESO + AWS Secrets Manager + IRSA combination is a practical choice for building a production-grade secret management pipeline in EKS without Vault. If you haven't tried it yet, here's how to get started:
Step 1: Install ESO via Helm
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets \
-n external-secrets --create-namespace

You can verify ESO itself works by directly injecting an accessKeyID/secretAccessKey as a Kubernetes Secret, without setting up IRSA first. This is useful when you want to quickly confirm ESO's behavior before committing to a full IRSA setup.
Step 2: Test Secret Integration
Create a JSON secret named dev/test/myapp in the AWS Console, then update remoteRef.key in the example ExternalSecret YAML above to that path and apply it.
kubectl get secret db-secret -o jsonpath='{.data.DB_PASSWORD}' | base64 -d

If this value matches what you stored in Secrets Manager, synchronization is working correctly.
Step 3: Configure IRSA to Eliminate Static Credentials
Enable the OIDC provider on your EKS cluster, create an IRSA role with eksctl create iamserviceaccount or Terraform's aws_iam_role, then annotate the ESO ServiceAccount with the role ARN. Once this step is complete, no AWS credentials will exist anywhere in the cluster.
References
- External Secrets Operator Official Docs — AWS Secrets Manager Provider
- ESO Security Best Practices Official Guide
- EKS Workshop — External Secrets Operator
- AWS Official Blog — EKS Secret Management with AWS Secrets Manager and ABAC
- AWS Containers Blog — ESO with EKS Fargate
- GitHub — external-secrets/external-secrets
- Securing GitOps with External Secrets Operator & AWS Secrets Manager | Codefresh — Based on ESO v0.9+
- Secrets Auto-Rotation in Kubernetes with AWS Secrets Manager and Reloader | Medium — Based on ESO v0.9+
- ESO on Amazon EKS — Terraform-first Guide | Medium — Based on ESO v0.9+