Argo CD Multi-Cluster Secret Management: Sealed Secrets and External Secrets Operator in Practice
If you've run Kubernetes on a single cluster and set up GitOps with Argo CD, you'll be able to follow right along. One of the first problems you hit when expanding to multi-cluster is secrets. When I first stretched a working Argo CD setup from a single dev cluster out to staging and production, I told myself "Base64-encoding it before committing to Git should be fine, right?" It wasn't, of course. Base64 is not encryption. And by the time I realized that, I had also come to understand firsthand that a stolen secret containing a Spoke cluster's access token gives an attacker full control of that entire cluster. The whole team spent two days rebuilding our secret management system from scratch—and ever since, this has been something we get right from the start.
This post is written so you can skip those two days. It walks through patterns for automating deployments with ApplicationSet in an Argo CD multi-cluster environment, and for safely managing secrets by choosing between Sealed Secrets and External Secrets Operator (ESO) based on your situation—with real YAML examples throughout. The focus is on "how do you use this in practice" rather than concept explanations.
Core Concepts
Hub-Spoke: How Argo CD Manages Multiple Clusters
The most common form of multi-cluster Argo CD is the Hub-Spoke model. You install Argo CD on a single Hub cluster, and from there you manage multiple Spoke (remote) clusters. When you register each cluster as a Secret resource in the Hub's argocd namespace, Argo CD can access that cluster and deploy applications to it.
```yaml
# Spoke cluster secret created in the Hub cluster's argocd namespace
apiVersion: v1
kind: Secret
metadata:
  name: prod-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
    env: production
    region: ap-northeast-2
type: Opaque
stringData:
  name: prod-cluster
  server: https://prod-cluster.example.com
  config: |
    {
      "bearerToken": "<spoke-cluster-sa-token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64-ca-cert>"
      }
    }
```

This secret contains the ServiceAccount token for the Spoke cluster. If it is stolen, an attacker can take full control of that cluster, so this secret itself must be managed securely. The safest pattern is to use ESO to inject this secret from Vault; we'll cover that again later.
What is Hub-Spoke? It's a structure that radiates outward from a center, like a bicycle wheel. The Hub Argo CD makes outbound connections to each Spoke cluster to perform deployments. In environments where a firewall blocks those connections into the Spokes, look into `argocd-agent`, which reverses the connection direction.
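In practice you rarely hand-craft that cluster Secret; the `argocd` CLI can generate the ServiceAccount, RBAC, and Secret for you. A sketch, assuming a kubeconfig context named `prod-cluster-context` (the context name and label values are placeholders):

```shell
# Log in to the Hub's Argo CD API server first
argocd login argocd.example.com

# Creates a ServiceAccount + token on the Spoke and writes the
# cluster Secret into the Hub's argocd namespace
argocd cluster add prod-cluster-context \
  --name prod-cluster \
  --label env=production \
  --label region=ap-northeast-2
```

The `--label` flags (available on recent Argo CD versions) matter here: they are what ApplicationSet Cluster Generator selectors match on.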
ApplicationSet: Automation That Shines as Clusters Multiply
Once you have more than three clusters, manually creating Application resources for each one quickly becomes unmanageable. ApplicationSet uses Generators to create Applications automatically. In particular, combining a Cluster Generator and Git Generator in a Matrix lets you declaratively manage deployments to hundreds of clusters using only cluster labels and a Git directory structure.
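Before the full Matrix example later in the post, here is the smallest useful shape: a Cluster Generator alone, fanning one template out to every matching cluster (the repository URL and app name are placeholders):

```yaml
# Minimal sketch: one Application per cluster labeled env: production
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  generators:
    - clusters:
        selector:
          matchLabels:
            env: production
  template:
    metadata:
      name: "guestbook-{{name}}"   # {{name}} = registered cluster name
    spec:
      project: default
      source:
        repoURL: https://github.com/org/guestbook
        targetRevision: HEAD
        path: manifests
      destination:
        server: "{{server}}"       # {{server}} = cluster API address
        namespace: guestbook
```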
Store Only 'References' in Git, Never Secret Values
The core principle of secret management is simple: never commit plaintext secrets to a Git repository. From here, the path splits in two.
| Approach | What is stored in Git | Who decrypts/syncs |
|---|---|---|
| Sealed Secrets | Encrypted `SealedSecret` resources | Sealed Secrets controller inside the cluster |
| External Secrets Operator | Secret references (`ExternalSecret`) | ESO controller, pulling from the external store (Vault, AWS SM, etc.) |
Sealed Secrets stores "the encrypted values themselves" in Git, while ESO stores only "where to fetch them from."
Which One Should You Choose?
Checking your situation first will point you in the right direction.
| Item | Sealed Secrets | External Secrets Operator |
|---|---|---|
| External dependencies | None (self-contained in cluster) | Requires external store |
| Setup complexity | Low | Medium to high |
| Air-gapped environments | Fully supported | Difficult (needs a store reachable inside the air gap) |
| Dynamic rotation | Not supported | Supported (`refreshInterval`) |
| Multi-provider | N/A | AWS/GCP/Azure/Vault all supported |
| GitOps integration | Natural (`SealedSecret` is a Git resource) | Only references in Git, cleaner |
| Multi-cluster scalability | BYO Key strategy required | Centralized management with one store |
Go with Sealed Secrets if you want to minimize external dependencies or you're in an air-gapped environment. Go with ESO if you need secret rotation and audit logs at enterprise scale. Some teams use both. The recent trend leans toward ESO, but for teams just getting started with GitOps, Sealed Secrets is the more pragmatic starting point.
Practical Application
Example 1: Multi-Cluster Automated Deployment with ApplicationSet Matrix Generator
This pattern deploys the same app to all clusters labeled env: production, applying per-cluster values.yaml files from a GitOps repository. Once you register a new cluster and attach the label, an Application is created automatically.
One thing to watch out for: if your app chart repository (my-app) and your GitOps repository containing values files (gitops-repo) are separate, using a single source means you can't reference files from another repository in helm.valueFiles. On Argo CD 2.6 and above, you must use sources (plural) to specify multiple sources.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app
  namespace: argocd
spec:
  generators:
    - matrix:
        generators:
          - clusters:
              selector:
                matchLabels:
                  env: production
          - git:
              repoURL: https://github.com/org/gitops-repo
              revision: HEAD
              files:
                - path: "clusters/*/values.yaml"
  template:
    metadata:
      name: "my-app-{{name}}"
    spec:
      project: default
      sources:
        # App chart repository
        - repoURL: https://github.com/org/my-app
          targetRevision: HEAD
          path: .  # chart at the repository root
          helm:
            valueFiles:
              - $values/clusters/{{name}}/values.yaml
        # GitOps repository containing values files (referenced via ref)
        - repoURL: https://github.com/org/gitops-repo
          targetRevision: HEAD
          ref: values
      destination:
        server: "{{server}}"
        namespace: my-app
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

| Field | Role |
|---|---|
| `clusters.selector.matchLabels` | Targets only clusters labeled `env: production` |
| `git.files` | Path pattern for per-cluster values files |
| `{{name}}`, `{{server}}` | Cluster name and address injected by the Cluster Generator |
| `sources[1].ref: values` | Makes the second source referenceable as the `$values` variable |
| `syncPolicy.automated` | Automatically syncs and prunes on detected changes |
Example 2: Per-Cluster Sealing with Sealed Secrets
Sealed Secrets is simple to set up, making it a great way for teams to get started quickly when first adopting GitOps. I started with it myself, and for small-scale environments I still use it regularly.
Extract the public key, then encrypt the secret:
```bash
# Extract the public key for the target cluster — this key differs per cluster
kubeseal --fetch-cert \
  --controller-name=sealed-secrets-controller \
  --controller-namespace=kube-system \
  --kubeconfig ~/.kube/prod-cluster > pub-key-prod.pem

# Create a secret and seal it immediately (dry-run, then pipe to kubeseal)
kubectl create secret generic db-password \
  --from-literal=password=mysecret \
  --dry-run=client -o yaml | \
  kubeseal --cert pub-key-prod.pem \
  --format yaml > db-password-sealed.yaml

# Commit the sealed file to Git
git add db-password-sealed.yaml && git commit -m "add sealed db-password for prod"
```

Specifying `--cert pub-key-prod.pem` lets kubeseal encrypt locally without connecting directly to the cluster. However, since this public key belongs to prod-cluster, secrets intended for other clusters must be sealed separately with that cluster's public key.
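One related flag worth knowing: by default kubeseal binds the ciphertext to the exact namespace and name, so a sealed value cannot be renamed or moved. The `--scope` flag relaxes this. A sketch reusing the `pub-key-prod.pem` from above (`db-password.yaml` is a hypothetical plain Secret manifest):

```shell
# strict (default): decrypts only for this exact namespace + name
kubeseal --cert pub-key-prod.pem --format yaml < db-password.yaml

# namespace-wide: the Secret may be renamed within its namespace
kubeseal --cert pub-key-prod.pem --scope namespace-wide --format yaml < db-password.yaml

# cluster-wide: usable under any namespace/name on clusters holding this key
kubeseal --cert pub-key-prod.pem --scope cluster-wide --format yaml < db-password.yaml
```

`cluster-wide` pairs naturally with the BYO Key strategy below, but it widens the blast radius if the sealed file leaks, so use it deliberately.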
Bring Your Own Key (BYO Key) strategy for multi-cluster:
Using a separate key per cluster creates the hassle of having to re-seal secrets every time you want to share them. You can solve this with the BYO Key approach: generate a common key up front and distribute it to multiple clusters.
```bash
# Generate a shared RSA key pair
openssl req -x509 -nodes -newkey rsa:4096 \
  -keyout sealed-secrets.key \
  -out sealed-secrets.crt \
  -subj "/CN=sealed-secret/O=sealed-secret" \
  -days 3650

# Register the same key as a secret on each cluster (apply first, then label separately)
kubectl create secret tls my-sealing-key \
  --cert=sealed-secrets.crt \
  --key=sealed-secrets.key \
  --namespace=kube-system

kubectl label secret my-sealing-key \
  --namespace=kube-system \
  sealedsecrets.bitnami.com/sealed-secrets-key=active
```

You'll often see attempts to pipe everything into one line, but combining `kubectl label --local` with `--dry-run=client` tends to fail when `metadata.name` is missing. Applying first and labeling separately is much safer.
Key backup is not optional—it's mandatory:
```bash
# Regularly back up keys that auto-rotate every 30 days
kubectl get secret -n kube-system \
  -l sealedsecrets.bitnami.com/sealed-secrets-key \
  -o yaml > sealed-secrets-keys-backup.yaml

# This file must be stored in Vault or AWS Secrets Manager — never commit it to Git
```

Why is key backup so important? If the Sealed Secrets controller is deleted or the key is lost, there is no way to decrypt the SealedSecrets stored in Git. You will permanently lose those secrets.
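The flip side of the backup is the restore path. On a rebuilt cluster, re-applying the backed-up key Secrets and restarting the controller is enough for the SealedSecrets in Git to decrypt again. A sketch, assuming the Deployment name matches the controller name used earlier:

```shell
# Re-apply the backed-up sealing keys on the fresh cluster
kubectl apply -f sealed-secrets-keys-backup.yaml

# Restart the controller so it loads the restored keys
kubectl rollout restart deployment sealed-secrets-controller -n kube-system
```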
Example 3: External Secrets Operator + HashiCorp Vault Multi-Cluster Pattern
If you have many clusters, or if you need secret rotation and audit logs in an enterprise environment, the ESO + Vault combination is far more powerful. Honestly, the initial setup takes some effort—but once it's in place, ongoing management becomes noticeably easier.
A realistic note on the bootstrapping order:
When you want to deploy ESO itself via Argo CD, you hit a chicken-and-egg problem: there's no ClusterSecretStore yet because ESO doesn't exist yet. The standard approach is to install ESO initially via Helm or direct kubectl apply, then hand off ongoing management to Argo CD.
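A minimal bootstrap sketch using the official ESO Helm chart (the release name and namespace are conventional choices, not requirements):

```shell
# Official ESO chart repository
helm repo add external-secrets https://charts.external-secrets.io
helm repo update

# Install ESO with its CRDs; Argo CD can adopt this release later
helm install external-secrets external-secrets/external-secrets \
  --namespace external-secrets \
  --create-namespace \
  --set installCRDs=true
```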
Define the Vault backend with ClusterSecretStore:

```yaml
# Vault backend store referenceable from anywhere in the cluster
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: https://vault.example.com
      path: secret
      version: v2
      auth:
        kubernetes:
          mountPath: kubernetes
          role: external-secrets-role
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets
```

Here, `external-secrets-role` is a Kubernetes Auth Role pre-defined in Vault. This role must be granted read access to the `myapp/db` path. If you're using AWS EKS, authentication is done via IRSA; on GKE it's Workload Identity. If you're new to Vault, it's worth knowing that AWS Secrets Manager or GCP Secret Manager can be used as substitutes: ESO supports various backends using the same ExternalSecret structure.
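For reference, the Vault side of that role might be configured roughly like this (a sketch with the `vault` CLI; the policy and role names mirror the manifest above, and a KV v2 mount at `secret/` is assumed):

```shell
# Enable the Kubernetes auth method (once per cluster integration)
vault auth enable kubernetes

# Read-only policy for the app's KV v2 path (note the /data/ infix for KV v2)
vault policy write external-secrets-policy - <<'EOF'
path "secret/data/myapp/*" {
  capabilities = ["read"]
}
EOF

# Bind the ESO ServiceAccount to the policy
vault write auth/kubernetes/role/external-secrets-role \
  bound_service_account_names=external-secrets \
  bound_service_account_namespaces=external-secrets \
  policies=external-secrets-policy \
  ttl=1h
```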
Reference secrets in individual namespaces with ExternalSecret:
```yaml
# Only this file is committed to Git — no values, only references
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: my-app
spec:
  refreshInterval: 1h  # Sync the latest values from Vault every hour
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: db-secret  # Name of the Kubernetes Secret to create
    creationPolicy: Owner
  data:
    - secretKey: password
      remoteRef:
        key: myapp/db  # Vault KV path
        property: password
    - secretKey: username
      remoteRef:
        key: myapp/db
        property: username
```

| Field | Role |
|---|---|
| `refreshInterval` | How often the Kubernetes Secret is automatically updated when Vault values change |
| `ClusterSecretStore` | Referenceable across the entire cluster with no namespace boundary |
| `creationPolicy: Owner` | Deletes the Kubernetes Secret when the ExternalSecret is deleted |
| `remoteRef.key` | Vault KV v2 path |
Preventing false-positive drift detection in Argo CD:
When ESO periodically refreshes secrets, Argo CD may incorrectly detect this as "drift" (state that differs from Git) and report an OutOfSync warning. This is a situation you'll encounter frequently in practice, and it can be prevented with ignoreDifferences. However, ignoring all of /data can also cause you to miss real drift from non-ESO secrets, so it's better to use managedFieldsManagers to scope this only to resources managed by ESO.
```yaml
# Add to the Application spec (or configure globally via
# resource customizations in argocd-cm)
spec:
  ignoreDifferences:
    - group: ""
      kind: Secret
      jsonPointers:
        - /data
      managedFieldsManagers:
        - external-secrets
```

Pros and Cons Analysis
Advantages
| Item | Sealed Secrets | External Secrets Operator |
|---|---|---|
| Installation complexity | Complete with one Helm command | Requires ESO + external store setup |
| External dependencies | None (self-contained in cluster) | External store required |
| Air-gapped environments | Fully supported | Difficult (needs a store reachable inside the air gap) |
| Dynamic rotation | Not supported | Automated with `refreshInterval` |
| Multi-provider | N/A | AWS/GCP/Azure/Vault all supported |
| GitOps integration | `SealedSecret` itself is a Git resource | Only references in Git, cleaner |
Disadvantages and Caveats
| Item | Description | Mitigation |
|---|---|---|
| Sealed Secrets — key loss | If the controller is deleted or key is lost, all SealedSecrets become undecryptable | 30-day key backup cycle, store in Vault/AWS SM |
| Sealed Secrets — cluster dependency | Different key per cluster makes sharing secrets complex | Use BYO Key strategy for a common key |
| ESO — external failure propagation | Vault/AWS SM outages cause secret injection failures for new pods | Caching strategy, Vault high-availability setup |
| ESO — no automatic pod restart | Existing pods don't pick up new values after a secret refresh | Integrate with Reloader |
| Common — Argo CD cluster secret | Spoke cluster access tokens are exposed in the Hub's argocd namespace | Use ESO + Vault to manage this secret itself |
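On the "no automatic pod restart" row: if you pair ESO with Stakater Reloader, the integration is a single annotation on the workload. A sketch, where the Deployment name is hypothetical and `db-secret` matches the ExternalSecret target from Example 3:

```yaml
# Reloader triggers a rolling restart whenever the listed Secret changes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app
  annotations:
    secret.reloader.stakater.com/reload: "db-secret"
```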
SecretStore vs ClusterSecretStore:
`SecretStore` is namespace-scoped; `ClusterSecretStore` is cluster-wide. In multi-tenant environments, placing a `SecretStore` in each team's namespace to block cross-team secret references is the safer approach.
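A namespace-scoped store for that multi-tenant setup might look like the sketch below. Team names and the Vault role are placeholders; note that, unlike ClusterSecretStore, the serviceAccountRef here cannot point outside the store's own namespace:

```yaml
# Namespace-scoped: only ExternalSecrets in team-a can reference this store
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: team-a-vault
  namespace: team-a
spec:
  provider:
    vault:
      server: https://vault.example.com
      path: secret
      version: v2
      auth:
        kubernetes:
          mountPath: kubernetes
          role: team-a-role          # Vault role scoped to team-a's paths
          serviceAccountRef:
            name: team-a-eso         # must live in the same namespace (team-a)
```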
What Happens When You Skip These Settings
If you don't back up Sealed Secrets keys:
Telling yourself "the controller will handle it" and moving on can leave you in a situation where all your SealedSecrets become useless when you have to rebuild the cluster. Set up a key backup pipeline at the same time you install the controller the first time—it'll save you pain later.
If you forget ignoreDifferences after adopting ESO:
Every time a secret gets automatically refreshed, Argo CD will report an OutOfSync status, and alerts will start piling up in your team's Slack channel. I've personally experienced hundreds of OutOfSync alerts stacking up because of this missed setting—and since then, applying this configuration at the same time as the ESO deployment has become routine. Put it in the same PR as the ESO deployment and you'll never forget.
If you leave the Hub cluster's argocd cluster secrets in plaintext:
These secrets contain Spoke cluster access tokens and are the most valuable attack target, yet they're often left unattended with the reasoning "it's inside the argocd namespace, so it should be fine." Applying the ESO + Vault pattern to manage these secrets as well gives you much more peace of mind.
Closing Thoughts
Secret management in a multi-cluster Argo CD environment is an architectural question of "where you manage secrets," not "how you hide them." If you have few clusters and want to minimize external dependencies, Sealed Secrets is the better fit. If you need centralized secret management with automatic rotation at enterprise scale, External Secrets Operator is the stronger choice.
Three steps you can start on right now:
1. Start small with Sealed Secrets. Install the controller with `helm install sealed-secrets-controller sealed-secrets/sealed-secrets -n kube-system`, fetch the public key with `kubeseal --fetch-cert`, and seal one of your existing secrets. You can have your first SealedSecret committed to Git within 30 minutes.
2. Connect the ApplicationSet Cluster Generator. Add a label like `env: staging` to the clusters you currently manage, adjust the ApplicationSet YAML from the example above, and apply it. You'll see Applications being created automatically the moment a cluster is registered. If your app repository and values repository are separate, apply the `sources` multi-source configuration at the same time.
3. Experiment with ESO PushSecret for secret propagation. This is an advanced topic that requires ESO installation, Vault integration, and ClusterSecretStore configuration to be complete first. Once you're ready, try creating a `PushSecret` resource on the Hub cluster to automatically propagate secrets from the management cluster to Spoke clusters. This is especially useful when you want to manage common multi-cluster secrets in one central place.
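To make step 3 concrete, a PushSecret that mirrors a Hub-side Secret into Vault might look like the sketch below. The names and remote path are illustrative, and PushSecret is still `v1alpha1`, so verify the schema against your ESO version:

```yaml
# Pushes an existing Kubernetes Secret from the Hub into Vault,
# where Spoke clusters' ExternalSecrets can then pull it
apiVersion: external-secrets.io/v1alpha1
kind: PushSecret
metadata:
  name: shared-api-key
  namespace: platform
spec:
  refreshInterval: 1h
  secretStoreRefs:
    - name: vault-backend
      kind: ClusterSecretStore
  selector:
    secret:
      name: shared-api-key           # existing Secret on the Hub
  data:
    - match:
        secretKey: api-key
        remoteRef:
          remoteKey: platform/shared-api-key  # Vault KV path to write
```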
References
- Argo CD Official Docs - Secret Management
- Argo CD Official Docs - Cluster Management
- Argo CD Official Docs - Cluster Generator (ApplicationSet)
- External Secrets Operator Official Docs - Overview
- External Secrets Operator - ClusterSecretStore
- A Comprehensive Overview of Argo CD Architectures 2025 | Codefresh
- ArgoCD ApplicationSet: Multi-Cluster Deployment Made Easy | Codefresh
- GitOps Secrets with Argo CD, HashiCorp Vault, and External Secret Operator | Codefresh
- Sealed Secrets | GitHub - bitnami-labs
- argocd-agent | GitHub - argoproj-labs
- Multi-cluster GitOps with Argo CD Agent | Red Hat Blog
- A Guide to Secrets Management with GitOps and Kubernetes | Red Hat Blog
- Sealed Secrets multi-cluster scenario | DEV Community
- Kubernetes Secrets Management in 2025 | Infisical Blog
- GitOps in 2025: From Old-School Updates to the Modern Way | CNCF