ArgoCD: Multi-Cluster Deployments with AWS EKS
I've been using ArgoCD daily for about a year now, and in that time I've been able to explore its strengths as well as its shortcomings. The thing I think ArgoCD handles best is deployment to multiple clusters. In this post I will guide you through that process and show how to configure it yourself.
ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes. It is designed to manage application deployments and automate the process of synchronizing application states between Git repositories and Kubernetes clusters. ArgoCD provides a user-friendly interface for visualizing and managing application deployments in real-time.

Key Features:
- GitOps-Centric: Operates directly from Git repositories, ensuring version control and auditability.
- Declarative Configuration: Uses declarative definitions for application deployment, simplifying the management process.
- Automated Sync: Continuously monitors and synchronizes application states to match the desired configurations in Git.
- Visual Dashboard: Offers a comprehensive dashboard for real-time monitoring and management of deployments.
ArgoCD does an excellent job managing application deployments to multiple EKS clusters, on different AWS accounts. It also has a lot of features built in that make managing these deployments a bit easier. The graphic below describes the architecture of this build.

Why Argo CD?
- Application definitions, configurations, and environments should be declarative and version controlled.
- Application deployment and lifecycle management should be automated, auditable, and easy to understand.
- Provides a Web UI to manage deployments and give a visual overview.
- Automated deployment of applications to specified target environments
- Support for multiple config management/templating tools (Kustomize, Helm, Jsonnet, plain-YAML)
- Ability to manage and deploy to multiple clusters
- SSO Integration (OIDC, OAuth2, LDAP, SAML 2.0, GitHub, GitLab, Microsoft, LinkedIn)
- Multi-tenancy and RBAC policies for authorization
- Rollback/Roll-anywhere to any application configuration committed in Git repository
- Health status analysis of application resources
- Automated configuration drift detection and visualization
- Automated or manual syncing of applications to its desired state
- Web UI which provides real-time view of application activity
- CLI for automation and CI integration
- Webhook integration (GitHub, BitBucket, GitLab)
- Access tokens for automation
- PreSync, Sync, PostSync hooks to support complex application rollouts (e.g. blue/green & canary upgrades)
- Audit trails for application events and API calls
- Prometheus metrics
- Parameter overrides for overriding helm parameters in Git
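To give a concrete taste of one feature above, a sync hook is just an ordinary Kubernetes resource carrying an ArgoCD annotation. A minimal sketch of a PreSync Job (the job name and command are hypothetical; the annotations are ArgoCD's documented hook annotations):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate                                # hypothetical job name
  annotations:
    argocd.argoproj.io/hook: PreSync              # run before the main sync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: alpine
        command: ["sh", "-c", "echo running migrations"]
```

ArgoCD runs this Job before syncing the rest of the application and, with the delete policy shown, cleans it up once it succeeds.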
Solution Overview
We will deploy and configure ArgoCD to a central management cluster which will oversee application deployment to multiple clusters that act as our development environments.
Install and Configuration
Begin the installation process. Get AWS credentials for the AWS account you would like to designate as the management account. Deploy ArgoCD to the management cluster.
Log in to the AWS account and update the kubeconfig for the management cluster (in this case, the Infrastructure cluster). Create the namespace and run the install manifest.
aws eks update-kubeconfig --region us-east-2 --name infra-us-east-2
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
First, we create some AWS IAM resources with Terraform in each account, starting with the management account. Here are the local variables we will need to predefine.
locals {
  mgmt_cluster            = "infra-us-east-2" # The name of your mgmt cluster
  mgmt_account_id         = "123456789-1"     # The account id of your mgmt cluster
  mgmt_eks_oidc_provider  = "oidc.eks.us-east-2.amazonaws.com/id/CXXXXXXXXXXXXXXXXXXX" # The oidc provider of your mgmt cluster
  dev_cluster             = "dev-us-east-2"   # The name of your dev EKS cluster
  dev_cluster_account_id  = "123456789-2"     # The account id of your dev cluster
  test_cluster            = "test-us-east-2"  # The name of your test EKS cluster
  test_cluster_account_id = "123456789-3"     # The account id of your test cluster
}
Now we create the code for the IAM role in the infra account.
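If you don't have the OIDC provider value handy, it can be looked up with the AWS CLI; note that the IAM trust policy wants it without the https:// scheme. A sketch (the cluster name and region match the locals above, and the issuer shown is a dummy value):

```shell
# Fetch the OIDC issuer for the cluster (requires AWS credentials):
#   aws eks describe-cluster --name infra-us-east-2 --region us-east-2 \
#     --query "cluster.identity.oidc.issuer" --output text
# The trust policy uses the issuer without its https:// prefix.
# Demonstrated here on a dummy value:
issuer="https://oidc.eks.us-east-2.amazonaws.com/id/CXXXXXXXXXXXXXXXXXXX"
echo "$issuer" | sed -e 's|^https://||'
# -> oidc.eks.us-east-2.amazonaws.com/id/CXXXXXXXXXXXXXXXXXXX
```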
# ArgoCD IAM - Management Cluster Role
resource "aws_iam_role" "mgmt_argocd_service_account" {
  name = "${local.mgmt_cluster}-argocd-server-sa"
  path = "/"
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${local.mgmt_account_id}:oidc-provider/${local.mgmt_eks_oidc_provider}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "ForAllValues:StringEquals": {
          "${local.mgmt_eks_oidc_provider}:sub": [
            "system:serviceaccount:argocd:argocd-server",
            "system:serviceaccount:argocd:argocd-application-controller"
          ]
        }
      }
    }
  ]
}
POLICY
}

resource "aws_iam_policy" "mgmt_argocd_service_account" {
  name        = "${local.mgmt_cluster}-argocd-server-sa"
  path        = "/"
  description = "ArgoCD Server SA Policy"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "sts:AssumeRole"
        ]
        Effect = "Allow"
        Resource = [
          "arn:aws:iam::${local.dev_cluster_account_id}:role/${local.dev_cluster}-argocd-server-sa",
          "arn:aws:iam::${local.test_cluster_account_id}:role/${local.test_cluster}-argocd-server-sa"
        ]
      },
    ]
  })
}

resource "aws_iam_role_policy_attachment" "mgmt_argocd_service_account" {
  role       = aws_iam_role.mgmt_argocd_service_account.name
  policy_arn = aws_iam_policy.mgmt_argocd_service_account.arn
}
Now, we do similar steps in the other accounts. First, the Dev account.
locals {
  mgmt_cluster           = "infra-us-east-2" # The name of your mgmt cluster
  mgmt_account_id        = "123456789-1"     # The account id of your mgmt cluster
  dev_cluster            = "dev-us-east-2"   # The name of your dev EKS cluster
  dev_cluster_account_id = "123456789-2"     # The account id of your dev cluster
}

# ArgoCD - Dev Cluster Cross Account Role
resource "aws_iam_role" "dev_cluster_argocd_service_account" {
  name = "${local.dev_cluster}-argocd-server-sa"
  path = "/"
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::${local.mgmt_account_id}:role/${local.mgmt_cluster}-argocd-server-sa"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "dev_cluster_argocd_service_account" {
  role       = aws_iam_role.dev_cluster_argocd_service_account.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}
Note that AdministratorAccess is used here for simplicity; in production you will likely want to scope this policy down. And the same for the Test account.
locals {
  mgmt_cluster            = "infra-us-east-2" # The name of your mgmt cluster
  mgmt_account_id         = "123456789-1"     # The account id of your mgmt cluster
  test_cluster            = "test-us-east-2"  # The name of your test EKS cluster
  test_cluster_account_id = "123456789-3"     # The account id of your test cluster
}

# ArgoCD - Test Cluster Cross Account Role
resource "aws_iam_role" "test_cluster_argocd_service_account" {
  name = "${local.test_cluster}-argocd-server-sa"
  path = "/"
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::${local.mgmt_account_id}:role/${local.mgmt_cluster}-argocd-server-sa"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "test_cluster_argocd_service_account" {
  role       = aws_iam_role.test_cluster_argocd_service_account.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}
With all of this in place, it's time to access the web UI. We need to grab the password from a secret in our management cluster for the first login attempt.
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath='{.data.password}' | base64 --decode
Kubectl port-forwarding is used here to connect to the API server without exposing the service.
kubectl port-forward svc/argocd-server -n argocd 8080:443
Leave this terminal up.

Now, access the UI at https://localhost:8080
Use the username admin and the password we pulled from the secret in the previous step.

You will be greeted with the applications tab. We have none set up yet. Go to settings and change the password for the admin account.

Once you've verified it's operational, there is some additional configuration needed for the cross-account IAM roles to access the EKS clusters in the Dev and Test environments.
Start by editing the aws-auth ConfigMap for the dev and test clusters. Login to each and follow these steps:
kubectl edit -n kube-system configmap/aws-auth
Under mapRoles you will be adding the following information:
- groups:
    - system:masters
  rolearn: arn:aws:iam::ACCOUNT_ID:role/CLUSTER_NAME-argocd-server-sa
  username: arn:aws:iam::ACCOUNT_ID:role/CLUSTER_NAME-argocd-server-sa
It may also look like this:
{"rolearn":"arn:aws:iam::1234XXXX0:role/test-us-east-2-argocd-server-sa","username":"arn:aws:iam::2391222XXXX7:role/test-us-east-2-argocd-server-sa","groups":["system:masters"]}
The next step involves authenticating back into the management cluster. We will be linking our ArgoCD service accounts with the IAM roles.
kubectl edit serviceaccount argocd-server -n argocd
Annotate the service account with the role information.
eks.amazonaws.com/role-arn: arn:aws:iam::MGMT_ACCOUNT_ID:role/MGMT_CLUSTER_NAME-argocd-server-sa
Do the same for the application controller service account.
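For reference, after the edit the argocd-server ServiceAccount should look roughly like this (the account ID and cluster name are the placeholders from earlier):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::MGMT_ACCOUNT_ID:role/MGMT_CLUSTER_NAME-argocd-server-sa
```

This annotation is what tells the EKS pod identity webhook to inject credentials for the IAM role into pods running as this service account.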
kubectl edit serviceaccount argocd-application-controller -n argocd
Next, we need to add a SecurityContext to the argocd-server k8s deployment.
kubectl edit deployment argocd-server -n argocd
Scroll down and you will see an empty securityContext field. Change it to the following:
securityContext:
  fsGroup: 999
Lastly, restart all of the deployments.
kubectl rollout restart deployment argocd-server -n argocd
kubectl rollout restart deployment argocd-repo-server -n argocd
kubectl rollout restart deployment argocd-redis -n argocd
kubectl rollout restart deployment argocd-notifications-controller -n argocd
kubectl rollout restart deployment argocd-dex-server -n argocd
kubectl rollout restart deployment argocd-applicationset-controller -n argocd
We haven't added any clusters at this point. First, we need to make sure the roles are properly configured.
Create a pod in the argocd namespace; we will use it to install the ArgoCD CLI alongside the AWS CLI and confirm everything is working correctly. Save the following YAML as argocd-cli-pod.yaml and deploy it to the management cluster.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app.kubernetes.io/name: argocd-cli
  name: argocd-cli
spec:
  serviceAccountName: argocd-server
  containers:
  - name: argocd-cli
    image: amazon/aws-cli
    command: [ "/bin/bash", "-c" ]
    args:
      - |
        set -e
        curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
        install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
        sleep 5000
Apply it.
kubectl apply -f argocd-cli-pod.yaml -n argocd
Let's shell into it:
kubectl exec --stdin --tty argocd-cli -n argocd -- /bin/bash
The pod now has both CLIs installed. Let's check our credentials with aws sts get-caller-identity to make sure the IAM-linked role is configured correctly.

Now, we assume the role in one of the workload accounts using its account number and cluster name.
aws sts assume-role --role-arn arn:aws:iam::123456789:role/test-us-east-2-argocd-server-sa --role-session-name Workload1
Export the provided keys and token, then check your aws credentials again.
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_SESSION_TOKEN=
aws sts get-caller-identity
It should now show the workload account number instead of the infra account number; the output will be different than before.
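Copying the three values by hand is error-prone; a small helper can turn the assume-role JSON into export statements you can eval. A sketch (the payload below is fake; in practice you would pipe the real aws sts assume-role output into print_exports):

```shell
#!/bin/sh
# Sketch: convert the JSON from `aws sts assume-role` into export statements.
# Parsing uses sed for portability; the credential values below are fake.
print_exports() {
  sed -n \
    -e 's/.*"AccessKeyId": "\([^"]*\)".*/export AWS_ACCESS_KEY_ID=\1/p' \
    -e 's/.*"SecretAccessKey": "\([^"]*\)".*/export AWS_SECRET_ACCESS_KEY=\1/p' \
    -e 's/.*"SessionToken": "\([^"]*\)".*/export AWS_SESSION_TOKEN=\1/p'
}

# Demo with a fake payload; for real use:
#   aws sts assume-role --role-arn <arn> --role-session-name Workload1 | print_exports
sample='{
  "Credentials": {
    "AccessKeyId": "AKIAFAKEKEY",
    "SecretAccessKey": "fakeSecret",
    "SessionToken": "fakeToken"
  }
}'
echo "$sample" | print_exports
```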
We now update the kubeconfig and add the cluster to our ArgoCD management cluster.
aws eks update-kubeconfig --name test-us-east-2 --region us-east-2
argocd login argocd-server --username admin --password 1234567890
argocd cluster add arn:aws:eks:us-east-2:123456789:cluster/test-us-east-2 --aws-role-arn arn:aws:iam::123456789:role/test-us-east-2-argocd-server-sa --aws-cluster-name test-us-east-2
Do the same for the Dev cluster, then unset the AWS credentials.
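Unsetting just means clearing the three variables we exported, so that subsequent AWS calls fall back to the pod's IRSA role:

```shell
# Clear the temporary STS credentials exported earlier; afterwards,
# `aws sts get-caller-identity` should show the management account again.
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
```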
We have now added an external cluster to our management cluster through aws-auth. We can now deploy applications to it by setting the destination server for the applications we are looking to deploy.
Next, we move on to setting up the actual deployments to the clusters. Let’s try creating a test application. I am going to use the guestbook example from the ArgoCD repository.
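As a sketch of what such a deployment looks like declaratively, here is a hypothetical Application manifest pointing the guestbook example at one of the registered clusters (the application name, project name, and server URL are illustrative; the repoURL is ArgoCD's public example-apps repository):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook-dev            # hypothetical name
  namespace: argocd
spec:
  project: my-project            # hypothetical; the project created below
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://XXXX.gr7.us-east-2.eks.amazonaws.com  # the registered cluster endpoint
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

The destination server is the API endpoint of the cluster we registered with argocd cluster add; everything else works the same as a single-cluster deployment.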
Deployments
In the ArgoCD UI go to Settings > Projects and create a new project.

This is where we set the base configuration for our applications. You can whitelist resources and perform various other configuration tasks here.
Add your source repos.
Create entries for all of the clusters you plan to deploy to, the ones we previously registered with ArgoCD.

Set the allowed Cluster Resources and allowed Namespaces permissions:

You can also run this command in the CLI:
argocd proj allow-cluster-resource <project-name> "*" "*"
Now we move on to connecting GitHub to our project.
You will need to authorize GitHub via a Personal Access Token (PAT). In GitHub, under settings, at the bottom you will see Developer settings.

Then create a classic token.

Then, in ArgoCD, connect to the repo via HTTPS, using your username and the token as the password.

Now go to Applications and select new app.

Select the project we created, and set the path to the directory in the repository you want ArgoCD to watch for this deployment; in this case, the dev environment.
Sync the application and you will see that all the pods are healthy and the sync process is successful.
