# Implementing Secrets Manager Multi-Tenant Isolation from a Single IAM Role with EKS Pod Identity + ABAC
If you have three teams in a cluster, do you really need three IAM roles? Ten teams, ten roles? I believed that for a while too. `team-a-role`, `team-b-role`… copying and pasting near-identical policies with only the name changed, wondering "is this really right?" The short answer: no, you don't. Even as teams multiply, onboarding a new one is just a matter of adding one tag to a secret — no new IAM roles needed.
This post is aimed at DevOps/cloud infrastructure engineers who are already running EKS and are familiar with basic IAM concepts (AssumeRole, policy conditions).
Combining EKS Pod Identity with ABAC lets you maintain a single IAM role while enforcing per-team, per-namespace secret isolation at the IAM layer. Rather than a Pod claiming "I'm team-a" by itself, EKS stamps namespace information as a tag directly into the credentials it delivers. This post covers exactly how session tags work as multi-tenant security boundaries, and walks through a real-world configuration connecting ESO (External Secrets Operator) with Secrets Manager.
This pattern became possible when Pod Identity went GA in late 2023. Through 2024–2025, major open-source tools like ESO and ASCP completed official support, and it's becoming the standard pattern for multi-tenant cluster operations. Now is the right time to seriously consider adopting it.
## Core Concepts

### From IRSA to Pod Identity — What Changed
If you've used IRSA (IAM Roles for Service Accounts), you know the hassle. You had to create an OIDC provider for each EKS cluster and hardcode the cluster's OIDC URL directly into the IAM role's trust policy. Multiple clusters? You've probably run into broken authentication caused by trust policy edit mistakes more than once.
If you haven't experienced IRSA, think of it this way: the old approach required a Pod to prove its identity by hardcoding "I'm from this cluster, this service account" into the IAM role configuration, whereas Pod Identity has the EKS cluster itself vouch that "this Pod came from this namespace."
Pod Identity changed this structure. When the eks-pod-identity-agent DaemonSet is installed on a cluster, the agent injects temporary credentials directly into Pods. No OIDC provider setup is needed, and association configuration is done with a single line in the AWS console or CLI. You no longer need to put OIDC URLs in Terraform trust policies either — you'll see this directly in the examples later.
> **Pod Identity Association**: A configuration that links an IAM role to an EKS namespace + service account combination. Created with the `aws eks create-pod-identity-association` command, it replaces the manual trust policy editing required by IRSA.
### Session Tags — The Key to This Pattern

When Pod Identity assumes an IAM role, AWS automatically attaches the following six session tags to the temporary credentials.

| Session Tag Key | Example Value |
|---|---|
| `eks-cluster-arn` | `arn:aws:eks:ap-northeast-2:123456789012:cluster/prod-cluster` |
| `eks-cluster-name` | `prod-cluster` |
| `kubernetes-namespace` | `team-a` |
| `kubernetes-service-account` | `app-sa` |
| `kubernetes-pod-name` | `app-pod-abc123` |
| `kubernetes-pod-uid` | `abc-123-def-456` |
The important thing is that the Pod itself cannot forge these tags. The EKS control plane sets them and AWS STS validates them. Manipulating environment variables in application code has no effect.
These tags can be referenced in IAM policy conditions as `aws:PrincipalTag/<tag-key>`. This is where ABAC comes in.
### ABAC Policy — Creating Tenant Boundaries with a Single Role
The following is the core IAM policy for this pattern.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret"],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "secretsmanager:ResourceTag/eks-cluster-name": "${aws:PrincipalTag/eks-cluster-name}",
          "secretsmanager:ResourceTag/kubernetes-namespace": "${aws:PrincipalTag/kubernetes-namespace}"
        }
      }
    }
  ]
}
```

The `${aws:PrincipalTag/kubernetes-namespace}` portion is substituted at runtime with the session tag value. When a Pod in the `team-a` namespace sends a request, this value evaluates to `team-a`, allowing access only to secrets tagged with `kubernetes-namespace=team-a`. A Pod from `team-b` — even using the same IAM role — can only access secrets tagged with `team-b`.
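To make the condition evaluation concrete, here is a minimal Python sketch. It is a toy model, not an AWS API — `abac_allows` is a hypothetical name — but it mirrors how the `StringEquals` block resolves each `${aws:PrincipalTag/...}` variable against the caller's session tags.

```python
# Toy model of the ABAC StringEquals evaluation (not an AWS API):
# access is allowed only when every required resource tag on the secret
# exactly matches the corresponding session tag on the caller.
def abac_allows(session_tags: dict, resource_tags: dict) -> bool:
    required = ("eks-cluster-name", "kubernetes-namespace")
    return all(
        key in session_tags and resource_tags.get(key) == session_tags[key]
        for key in required
    )

# Session tags as EKS would stamp them for a Pod in the team-a namespace
team_a_session = {"eks-cluster-name": "prod-cluster", "kubernetes-namespace": "team-a"}

secret_team_a = {"eks-cluster-name": "prod-cluster", "kubernetes-namespace": "team-a"}
secret_team_b = {"eks-cluster-name": "prod-cluster", "kubernetes-namespace": "team-b"}

print(abac_allows(team_a_session, secret_team_a))  # True  -> GetSecretValue succeeds
print(abac_allows(team_a_session, secret_team_b))  # False -> AccessDeniedException
```

Note that a caller with no session tags at all is also denied, which is exactly the failure mode you hit when `sts:TagSession` is missing from the trust policy.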
> **ABAC (Attribute-Based Access Control)**: A method of determining access by comparing attributes (tags) of resources and principals. The criterion is "do the tags match?" rather than "what is the role?"
### Why `sts:TagSession` Is Required in the IAM Role Trust Policy

It's easy to assume that `sts:AssumeRole` alone is sufficient in the trust policy, but to use session tags you must also include `sts:TagSession`. STS needs this permission to attach tags to the temporary credentials it issues. Without it, the Pod Identity association succeeds, but the tags are absent when the ABAC conditions are evaluated, resulting in `AccessDenied`. You can see the correct trust policy format in the Terraform example.
## Practical Application

### Example 1: Namespace-Level Secret Isolation with ESO ClusterSecretStore

If you're using External Secrets Operator, this pattern fits most naturally: multiple teams share a single ClusterSecretStore, while each team can only access its own secrets.

**Step 1: Create secrets in Secrets Manager with tags**
```shell
# team-a secret
aws secretsmanager create-secret \
  --name "prod/team-a/db-password" \
  --secret-string "super-secret-a" \
  --tags Key=eks-cluster-name,Value=prod-cluster \
         Key=kubernetes-namespace,Value=team-a

# team-b secret
aws secretsmanager create-secret \
  --name "prod/team-b/db-password" \
  --secret-string "super-secret-b" \
  --tags Key=eks-cluster-name,Value=prod-cluster \
         Key=kubernetes-namespace,Value=team-b
```

**Step 2: Configure the Pod Identity Association**
```shell
aws eks create-pod-identity-association \
  --cluster-name prod-cluster \
  --namespace external-secrets \
  --service-account external-secrets \
  --role-arn arn:aws:iam::123456789012:role/eso-abac-role
```

**Step 3: Configure `sessionTags` in the ClusterSecretStore**
There's one thing worth noting here. The ESO controller runs in the external-secrets namespace. This means the kubernetes-namespace value in the session tags that the Pod Identity agent injects into the ESO controller is external-secrets. But what we want to isolate is the team-a namespace where the ExternalSecret resource lives.
The sessionTags configuration in ClusterSecretStore bridges this gap. It instructs ESO to explicitly inject the namespace of each ExternalSecret resource into the session tags when processing it.
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets-store
spec:
  provider:
    aws:
      service: SecretsManager
      region: ap-northeast-2
      auth:
        pod:
          mountServiceAccountToken: true
      # ESO explicitly injects the ExternalSecret's namespace as a session tag.
      # {{ .namespace }} is Go template syntax, evaluated to the namespace
      # where the ExternalSecret is deployed.
      sessionTags:
        - key: kubernetes-namespace
          value: "{{ .namespace }}"
      transitiveTagKeys:
        - kubernetes-namespace
```

`value: "{{ .namespace }}"` is Go template syntax — when ESO processes each ExternalSecret request, it substitutes the namespace the resource belongs to (`team-a`, `team-b`, and so on). This ensures that the `kubernetes-namespace` value in the session tags evaluated by AWS exactly matches the namespace of the team where the ExternalSecret resides.
| Component | Role |
|---|---|
| `sessionTags` | ESO explicitly injects the ExternalSecret's namespace when calling the Secrets Manager API |
| `transitiveTagKeys` | Propagates tags to child sessions (required for nested AssumeRole calls) |
| `ClusterSecretStore` | A single store shared by all teams — the reason a single IAM role suffices |
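To illustrate what the `sessionTags` template achieves, here is a small Python sketch. It is a simplified stand-in for ESO's Go template rendering, not ESO's actual code, and `resolve_session_tags` is a hypothetical name.

```python
# Toy model of the sessionTags substitution: for each ExternalSecret, the
# "{{ .namespace }}" placeholder is replaced with that resource's own
# namespace before the Secrets Manager call is made.
def resolve_session_tags(session_tag_spec: list, external_secret_namespace: str) -> dict:
    return {
        tag["key"]: tag["value"].replace("{{ .namespace }}", external_secret_namespace)
        for tag in session_tag_spec
    }

# The sessionTags entry from the ClusterSecretStore spec above
spec = [{"key": "kubernetes-namespace", "value": "{{ .namespace }}"}]

print(resolve_session_tags(spec, "team-a"))  # {'kubernetes-namespace': 'team-a'}
print(resolve_session_tags(spec, "team-b"))  # {'kubernetes-namespace': 'team-b'}
```

The key point is that the substituted value comes from the ExternalSecret's namespace, not from the `external-secrets` namespace where the controller itself runs.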
**Step 4: Create an ExternalSecret in each team's namespace**

```yaml
# Deployed to the team-a namespace
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: team-a
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-store
    kind: ClusterSecretStore
  target:
    name: db-credentials
  data:
    - secretKey: password
      remoteRef:
        key: prod/team-a/db-password
```

What happens if team-a's ExternalSecret tries to reference `prod/team-b/db-password`? ESO's session tags include `kubernetes-namespace=team-a`, while the target secret is tagged `kubernetes-namespace=team-b`, so the ABAC condition doesn't match. AWS responds with:

```
An error occurred (AccessDeniedException) when calling the GetSecretValue operation:
User: arn:aws:sts::123456789012:assumed-role/eso-abac-role/session
is not authorized to perform: secretsmanager:GetSecretValue on resource:
arn:aws:secretsmanager:ap-northeast-2:123456789012:secret:prod/team-b/db-password
because no identity-based policy allows the secretsmanager:GetSecretValue action
```

This is a boundary enforced by AWS, not by application code.
Team isolation alone may be sufficient, but in practice another request often comes up. Situations like "a staging Pod read a production secret" mean you need to lock down cluster boundaries as well.
### Example 2: Adding Cluster Boundaries — Blocking Staging from Production Secrets
For cases where cluster isolation is needed on top of team isolation, add an eks-cluster-arn condition to the IAM policy.
```json
{
  "Effect": "Allow",
  "Action": ["secretsmanager:GetSecretValue"],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "secretsmanager:ResourceTag/allowed-cluster-arn": "${aws:PrincipalTag/eks-cluster-arn}",
      "secretsmanager:ResourceTag/kubernetes-namespace": "${aws:PrincipalTag/kubernetes-namespace}"
    }
  }
}
```

Tag production secrets with the cluster ARN to match this policy. To add tags to already-created secrets, use the `tag-resource` command.
```shell
aws secretsmanager tag-resource \
  --secret-id "prod/team-a/api-key" \
  --tags \
    Key=allowed-cluster-arn,Value=arn:aws:eks:ap-northeast-2:123456789012:cluster/prod-cluster \
    Key=kubernetes-namespace,Value=team-a
```

Pods from the staging cluster (`staging-cluster`) are blocked even when using the same IAM role, because their `eks-cluster-arn` session tag value differs and the condition fails.
> **Note**: If you only check the namespace tag and omit the `eks-cluster-arn` condition, Pods from other clusters that use the same namespace name can gain access. In multi-cluster environments, it is strongly recommended to always include `eks-cluster-arn`.
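The two-condition evaluation can be sketched with the same kind of toy model as before (`abac_allows` is a hypothetical name, not an AWS API): both the cluster ARN and the namespace must match, so a staging-cluster Pod is denied even with the same role and the same namespace name.

```python
# Toy model of the cluster + namespace ABAC conditions: a request is allowed
# only when the secret's allowed-cluster-arn tag matches the caller's
# eks-cluster-arn session tag AND the namespace tags match.
def abac_allows(session_tags: dict, resource_tags: dict) -> bool:
    checks = {
        "allowed-cluster-arn": session_tags.get("eks-cluster-arn"),
        "kubernetes-namespace": session_tags.get("kubernetes-namespace"),
    }
    return all(v is not None and resource_tags.get(k) == v for k, v in checks.items())

prod_arn = "arn:aws:eks:ap-northeast-2:123456789012:cluster/prod-cluster"
staging_arn = "arn:aws:eks:ap-northeast-2:123456789012:cluster/staging-cluster"
secret_tags = {"allowed-cluster-arn": prod_arn, "kubernetes-namespace": "team-a"}

prod_pod = {"eks-cluster-arn": prod_arn, "kubernetes-namespace": "team-a"}
staging_pod = {"eks-cluster-arn": staging_arn, "kubernetes-namespace": "team-a"}

print(abac_allows(prod_pod, secret_tags))     # True
print(abac_allows(staging_pod, secret_tags))  # False -> cluster boundary enforced
```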
### Example 3: Automating the Full Configuration with Terraform
Repeating this setup in the console every time is not realistic. Managing it as infrastructure code means that when teams grow, all you need is a single tag on the secret.
```hcl
# IAM role — trust policy dedicated to Pod Identity
resource "aws_iam_role" "eso_abac" {
  name = "eso-abac-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Principal = { Service = "pods.eks.amazonaws.com" }
        # sts:TagSession is required for session tags to be attached
        # to the temporary credentials
        Action = ["sts:AssumeRole", "sts:TagSession"]
      }
    ]
  })
}

# ABAC IAM policy
resource "aws_iam_role_policy" "eso_abac_policy" {
  name = "eso-abac-secrets-policy"
  role = aws_iam_role.eso_abac.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret"]
        Resource = "*"
        Condition = {
          StringEquals = {
            # To escape ${} in Terraform, use the $${} syntax
            "secretsmanager:ResourceTag/eks-cluster-name"     = "$${aws:PrincipalTag/eks-cluster-name}"
            "secretsmanager:ResourceTag/kubernetes-namespace" = "$${aws:PrincipalTag/kubernetes-namespace}"
          }
        }
      }
    ]
  })
}

# Pod Identity Association
resource "aws_eks_pod_identity_association" "eso" {
  cluster_name    = aws_eks_cluster.main.name
  namespace       = "external-secrets"
  service_account = "external-secrets"
  role_arn        = aws_iam_role.eso_abac.arn
}
```

Unlike IRSA, there is no OIDC URL in the trust policy. Specifying `pods.eks.amazonaws.com` as the Principal is all it takes. Because the trust policy is cluster-agnostic, the same role can be reused across multiple clusters — another advantage.
## Pros and Cons

### Advantages

| Item | Details |
|---|---|
| Fewer IAM roles | Operate with a single role instead of roles proliferating in proportion to team count |
| Unforgeable security boundary | Session tags are set by the EKS control plane and cannot be manipulated by application code |
| Simplified configuration | No OIDC provider creation or manual trust policy editing required |
| Dynamic permission adjustment | Adjust permissions just by changing the IAM policy or secret tags, without restarting Pods |
| Enhanced audit trail | `kubernetes-namespace` and `kubernetes-pod-name` are recorded in CloudTrail, making access tracing easier |
### Disadvantages and Caveats

| Item | Details | Mitigation |
|---|---|---|
| Secret tagging operational overhead | All secrets must carry correct tags; missing tags make secrets inaccessible | Codify secret creation and tagging via Terraform/CDK; AWS Config rules that flag untagged secrets are recommended |
| EKS-specific feature | Not usable with EKS Anywhere or self-managed clusters | IRSA remains appropriate for general-purpose environments |
| Pod Identity Agent dependency | The `eks-pod-identity-agent` add-on must be functioning correctly | Configure automatic add-on installation at cluster creation |
| Some third-party tools unsupported | Older Helm charts or operators may not support Pod Identity | Verify each tool's Pod Identity support version before adopting |
Of these, the first is the one that most frequently trips people up in practice. Things work fine during initial setup, then a few months later someone adds a secret without the tag and the application silently fails. Bundling secret creation and tagging together in a Terraform module is the most reliable defense.
## Most Common Mistakes in Practice
1. **Missing `sts:TagSession`**: If the trust policy only has `sts:AssumeRole` and not `sts:TagSession`, the Pod Identity association succeeds but session tags aren't attached, so the ABAC conditions evaluate to `Deny`. It's easy to spend a long time debugging "the connection worked, so why isn't access working?"

2. **Failing to tag new secrets**: Things work fine during initial setup, but it's common for team members to add secrets later without the tags. Put in guardrails — via AWS Config rules or SCPs — that prevent secret creation without the required tags.

3. **Over-matching with `StringLike`**: Using `StringLike` with a wildcard (`team-*`) instead of `StringEquals` can let `team-admin` gain access to `team-a`'s secrets. Use `StringEquals` wherever exact matching is required.
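The third pitfall is easy to demonstrate. IAM's `StringLike` operator uses glob-style wildcards, which Python's `fnmatch` approximates closely enough for illustration:

```python
# Why a StringLike wildcard over-matches: "team-*" matches every namespace
# that merely starts with "team-", not just the one you had in mind.
from fnmatch import fnmatch

pattern = "team-*"

print(fnmatch("team-a", pattern))      # True
print(fnmatch("team-admin", pattern))  # True  <- unintended over-match

# A StringEquals-style exact comparison has no such surprise
print("team-admin" == "team-a")        # False
```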
## Closing Thoughts
The combination of EKS Pod Identity and ABAC enforces multi-tenant secret isolation at the AWS layer through the simple principle of "one role, boundaries via tags." Even as teams grow, you don't add IAM roles — you just tag the secrets.
Three steps you can start with right now:
1. **Enable the Pod Identity Agent add-on on your EKS cluster**: AWS Console → EKS Cluster → Add-ons → Install `eks-pod-identity-agent`, or via CLI: `aws eks create-addon --cluster-name <cluster-name> --addon-name eks-pod-identity-agent`. For existing clusters, you can also set `addon_name = "eks-pod-identity-agent"` in a Terraform `aws_eks_addon` resource.

2. **Validate the ABAC policy at a small scope**: Create secrets for two test namespaces (`test-a`, `test-b`) with the appropriate tags, then apply the ABAC policy above to a single IAM role. Use the IAM Policy Simulator to confirm that a `test-a` service account session gets a `Deny` when requesting `GetSecretValue` on a secret ARN tagged with `test-b`.

3. **Integrate with ESO or ASCP**: Once validated, configure `sessionTags` and `transitiveTagKeys` in External Secrets Operator's `ClusterSecretStore`, or apply Pod Identity-based authentication to an ASCP `SecretProviderClass` so applications can receive secrets mounted as files.
## References
- Grant Pods access to AWS resources based on tags (ABAC) | Amazon EKS Official Docs
- How to use AWS Secrets Manager and ABAC for enhanced secrets management in Amazon EKS | AWS Security Blog
- Amazon EKS Pod Identity: a new way for applications on EKS to obtain IAM credentials | AWS Containers Blog
- Announcing ASCP integration with Pod Identity | AWS Security Blog
- Session policies for Amazon EKS Pod Identity | AWS Containers Blog
- Secure Cross-Cluster Communication in EKS with VPC Lattice and Pod Identity IAM Session Tags | AWS Containers Blog
- How to Use EKS Pod Identity to Isolate Tenant Data in S3 With a Shared IAM Role | HackerNoon
- Control access to secrets using ABAC | AWS Secrets Manager Official Docs
- Multi Tenancy | External Secrets Operator Official Docs
- Identity and Access Management | EKS Best Practices Guides