<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[nick rondeau]]></title><description><![CDATA[nick rondeau]]></description><link>https://nick.rond-eau.com/</link><image><url>https://nick.rond-eau.com/favicon.png</url><title>nick rondeau</title><link>https://nick.rond-eau.com/</link></image><generator>Ghost 5.78</generator><lastBuildDate>Fri, 24 Apr 2026 09:58:58 GMT</lastBuildDate><atom:link href="https://nick.rond-eau.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[ArgoCD: Multi-Cluster Deployments with AWS EKS]]></title><description><![CDATA[<p>I&apos;ve been using ArgoCD daily for about a year now.  I&apos;ve really been able to explore its strengths, as well as its shortcomings.  The thing I think ArgoCD handles best is deployment to multiple clusters.  I will guide you through</p>]]></description><link>https://nick.rond-eau.com/argo-cd-setup-and-application-deployment/</link><guid isPermaLink="false">66b3e9d1fccdc8000159a1e7</guid><dc:creator><![CDATA[Nick Rondeau]]></dc:creator><pubDate>Tue, 10 Mar 2026 20:17:43 GMT</pubDate><media:content url="https://nick.rond-eau.com/content/images/2024/08/469caa53a9ecd84305a56e5ec39ebe6e9df37b4c-1132x377.webp" medium="image"/><content:encoded><![CDATA[<img src="https://nick.rond-eau.com/content/images/2024/08/469caa53a9ecd84305a56e5ec39ebe6e9df37b4c-1132x377.webp" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS"><p>I&apos;ve been using ArgoCD daily for about a year now.  I&apos;ve really been able to explore its strengths, as well as its shortcomings.  The thing I think ArgoCD handles best is deployment to multiple clusters.  
I will guide you through this process and show you how to configure it yourself.</p><p>ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes. It is designed to manage application deployments and automate the process of synchronizing application states between Git repositories and Kubernetes clusters. ArgoCD provides a user-friendly interface for visualizing and managing application deployments in real-time.</p><figure class="kg-card kg-image-card"><img src="https://argo-cd.readthedocs.io/en/stable/assets/argocd-ui.gif" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="960" height="464"></figure><h3 id="key-features">Key Features:</h3><ul><li><strong>GitOps-Centric:</strong> Operates directly from Git repositories, ensuring version control and auditability.</li><li><strong>Declarative Configuration:</strong> Uses declarative definitions for application deployment, simplifying the management process.</li><li><strong>Automated Sync:</strong> Continuously monitors and synchronizes application states to match the desired configurations in Git.</li><li><strong>Visual Dashboard:</strong> Offers a comprehensive dashboard for real-time monitoring and management of deployments.</li></ul><p>ArgoCD does an excellent job managing application deployments to multiple EKS clusters in different AWS accounts.  It also has a lot of features built in that make managing these deployments a bit easier.  
The graphic below describes the architecture of this build.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nick.rond-eau.com/content/images/2024/08/image.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="646" height="609" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/08/image.png 600w, https://nick.rond-eau.com/content/images/2024/08/image.png 646w"><figcaption><span style="white-space: pre-wrap;">Infrastructure Overview</span></figcaption></figure><h2 id="why-argo-cd">Why Argo CD?</h2><ol><li>Application definitions, configurations, and environments should be declarative and version controlled.</li><li>Application deployment and lifecycle management should be automated, auditable, and easy to understand.</li><li>Provides a WebUI to manage deployments and provide a visual overview.</li><li>Automated deployment of applications to specified target environments</li><li>Support for multiple config management/templating tools (Kustomize, Helm, Jsonnet, plain-YAML)</li><li>Ability to manage and deploy to multiple clusters</li><li>SSO Integration (OIDC, OAuth2, LDAP, SAML 2.0, GitHub, GitLab, Microsoft, LinkedIn)</li><li>Multi-tenancy and RBAC policies for authorization</li><li>Rollback/Roll-anywhere to any application configuration committed in Git repository</li><li>Health status analysis of application resources</li><li>Automated configuration drift detection and visualization</li><li>Automated or manual syncing of applications to its desired state</li><li>Web UI which provides real-time view of application activity</li><li>CLI for automation and CI integration</li><li>Webhook integration (GitHub, BitBucket, GitLab)</li><li>Access tokens for automation</li><li>PreSync, Sync, PostSync hooks to support complex application rollouts (e.g. blue/green &amp; canary upgrades)</li><li>Audit trails for application events and API calls</li><li>Prometheus metrics</li><li>Parameter 
overrides for overriding helm parameters in Git</li></ol><h2 id="solution-overview">Solution Overview</h2><p>We will deploy and configure ArgoCD on a central management cluster, which will oversee application deployment to multiple clusters that act as our development environments.</p><h2 id="install-and-configuration">Install and Configuration</h2><p>To begin the installation process, get AWS credentials for the AWS account you would like to designate as the management account, then deploy ArgoCD to the management cluster.&#xA0; </p><p>Log in to the AWS account and update the kubeconfig.&#xA0; In this case, the Infrastructure Cluster.  Create the namespace and run the install manifest.</p><pre><code>aws eks update-kubeconfig --region us-east-2 --name infra-us-east-2
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml</code></pre><p>First, we create some AWS IAM resources with Terraform in each account, starting with the management account.  Here are the local variables we will need to predefine.</p><pre><code>locals {
  mgmt_cluster            = &quot;infra-us-east-2&quot; #The name of your mgmt cluster
  mgmt_account_id         = &quot;123456789-1&quot;     #The account id of your mgmt cluster
  mgmt_eks_oidc_provider  = &quot;oidc.eks.us-east-2.amazonaws.com/id/CXXXXXXXXXXXXXXXXXXX&quot; #The oidc provider of your mgmt cluster
  dev_cluster             = &quot;dev-us-east-2&quot;   #The name of your dev EKS cluster
  dev_cluster_account_id  = &quot;123456789-2&quot;     #The account id of your dev cluster
  test_cluster            = &quot;test-us-east-2&quot;  #The name of your test EKS cluster
  test_cluster_account_id = &quot;123456789-3&quot;    #The account id of your test cluster
}</code></pre><p>Now we create the code for the IAM role in the infra account.</p><pre><code># ArgoCD IAM - Management Cluster Role

resource &quot;aws_iam_role&quot; &quot;mgmt_argocd_service_account&quot; {
  name               = &quot;${local.mgmt_cluster}-argocd-server-sa&quot;
  path               = &quot;/&quot;
  assume_role_policy = &lt;&lt;POLICY
{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [
    {
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Principal&quot;: {
        &quot;Federated&quot;: &quot;arn:aws:iam::${local.mgmt_account_id}:oidc-provider/${local.mgmt_eks_oidc_provider}&quot;
      },
      &quot;Action&quot;: &quot;sts:AssumeRoleWithWebIdentity&quot;,
      &quot;Condition&quot;: {
        &quot;ForAllValues:StringEquals&quot;: {
          &quot;${local.mgmt_eks_oidc_provider}:sub&quot;: [
            &quot;system:serviceaccount:argocd:argocd-server&quot;,
            &quot;system:serviceaccount:argocd:argocd-application-controller&quot;
          ]
        }
      }
    }
  ]
}
POLICY
}

resource &quot;aws_iam_policy&quot; &quot;mgmt_argocd_service_account&quot; {
  name        = &quot;${local.mgmt_cluster}-argocd-server-sa&quot;
  path        = &quot;/&quot;
  description = &quot;ArgoCD Server SA Policy&quot;

  policy = jsonencode({
    Version = &quot;2012-10-17&quot;
    Statement = [
      {
        Action = [
          &quot;sts:AssumeRole&quot;
        ]
        Effect = &quot;Allow&quot;
        Resource = [
          &quot;arn:aws:iam::${local.dev_cluster_account_id}:role/${local.dev_cluster}-argocd-server-sa&quot;,
          &quot;arn:aws:iam::${local.test_cluster_account_id}:role/${local.test_cluster}-argocd-server-sa&quot;
        ]
      },
    ]
  })
}
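
# Optional (my addition): surface the role ARN, which is handy when annotating
# the ArgoCD service accounts later. The output name is arbitrary.
output &quot;argocd_mgmt_role_arn&quot; {
  value = aws_iam_role.mgmt_argocd_service_account.arn
}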

resource &quot;aws_iam_role_policy_attachment&quot; &quot;mgmt_argocd_service_account&quot; {
  role       = aws_iam_role.mgmt_argocd_service_account.name
  policy_arn = aws_iam_policy.mgmt_argocd_service_account.arn
}</code></pre><p>Now, we do similar steps in the other accounts, starting with the Dev account.</p><pre><code>locals {
  mgmt_cluster           = &quot;infra-us-east-2&quot; #The name of your mgmt cluster
  mgmt_account_id        = &quot;123456789-1&quot;     #The account id of your mgmt cluster
  dev_cluster            = &quot;dev-us-east-2&quot;   #The name of your dev EKS cluster
  dev_cluster_account_id = &quot;123456789-2&quot;     #The account id of your dev cluster
}</code></pre><pre><code># ArgoCD - Dev Cluster Cross Account Role

resource &quot;aws_iam_role&quot; &quot;dev_cluster_argocd_service_account&quot; {
  name               = &quot;${local.dev_cluster}-argocd-server-sa&quot;
  path               = &quot;/&quot;
  assume_role_policy = &lt;&lt;POLICY
{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [
    {
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Principal&quot;: {
        &quot;AWS&quot;: &quot;arn:aws:iam::${local.mgmt_account_id}:role/${local.mgmt_cluster}-argocd-server-sa&quot;
      },
      &quot;Action&quot;: &quot;sts:AssumeRole&quot;
    }
  ]
}
POLICY
}

# NOTE: AdministratorAccess keeps this example simple; scope the policy down for production use.
resource &quot;aws_iam_role_policy_attachment&quot; &quot;dev_cluster_argocd_service_account&quot; {
  role       = aws_iam_role.dev_cluster_argocd_service_account.name
  policy_arn = &quot;arn:aws:iam::aws:policy/AdministratorAccess&quot;
}</code></pre><p>And the same for the Test account.</p><pre><code>locals {
  mgmt_cluster            = &quot;infra-us-east-2&quot; #The name of your mgmt cluster
  mgmt_account_id         = &quot;123456789-1&quot;     #The account id of your mgmt cluster
  test_cluster            = &quot;test-us-east-2&quot;  #The name of your test EKS cluster
  test_cluster_account_id = &quot;123456789-3&quot;     #The account id of your test cluster
}</code></pre><pre><code># ArgoCD - Test Cluster Cross Account Role

resource &quot;aws_iam_role&quot; &quot;test_cluster_argocd_service_account&quot; {
  name               = &quot;${local.test_cluster}-argocd-server-sa&quot;
  path               = &quot;/&quot;
  assume_role_policy = &lt;&lt;POLICY
{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [
    {
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Principal&quot;: {
        &quot;AWS&quot;: &quot;arn:aws:iam::${local.mgmt_account_id}:role/${local.mgmt_cluster}-argocd-server-sa&quot;
      },
      &quot;Action&quot;: &quot;sts:AssumeRole&quot;
    }
  ]
}
POLICY
}

resource &quot;aws_iam_role_policy_attachment&quot; &quot;test_cluster_argocd_service_account&quot; {
  role       = aws_iam_role.test_cluster_argocd_service_account.name
  policy_arn = &quot;arn:aws:iam::aws:policy/AdministratorAccess&quot;
}</code></pre><p>With all of this in place, it&#x2019;s time to access the web UI.  We need to grab the password from a secret in our management cluster for the first login attempt.</p><pre><code>kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath=&apos;{.data.password}&apos; | base64 --decode</code></pre><p>Kubectl port-forwarding is used here to connect to the API server without exposing the service.</p><pre><code>kubectl port-forward svc/argocd-server -n argocd 8080:443</code></pre><p>Leave this terminal up.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/08/image-1.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="410" height="56"></figure><p>Now, access the UI at <a href="https://localhost:8080/?ref=nick.rond-eau.com">https://localhost:8080</a></p><p>Use the username admin and the password we pulled from the secret in the previous step.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/08/image-2.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="1355" height="609" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/08/image-2.png 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/08/image-2.png 1000w, https://nick.rond-eau.com/content/images/2024/08/image-2.png 1355w" sizes="(min-width: 720px) 720px"></figure><p>You will be greeted with the Applications tab.  We have none set up yet.  
Go to settings and change the password for the admin account.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/08/image-3.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="1547" height="491" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/08/image-3.png 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/08/image-3.png 1000w, https://nick.rond-eau.com/content/images/2024/08/image-3.png 1547w" sizes="(min-width: 720px) 720px"></figure><p>Once you&#x2019;ve verified it&#x2019;s operational, there is some additional configuration needed for the cross-account IAM roles to access the EKS clusters in the Dev and Test environments.</p><p>Start by editing the aws-auth ConfigMap for the dev and test clusters.  Log in to each and follow these steps:</p><pre><code>kubectl edit -n kube-system configmap/aws-auth</code></pre><p>Under mapRoles you will be adding the following information:</p><pre><code>    - groups:
      - system:masters
      rolearn: arn:aws:iam::ACCOUNT_ID:role/CLUSTER_NAME-argocd-server-sa
      username: arn:aws:iam::ACCOUNT_ID:role/CLUSTER_NAME-argocd-server-sa</code></pre><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">For example: if working in the Dev account, use the Dev account ID and cluster name. When making the edit on the Test cluster, use the Test account ID and cluster name.</div></div><p>It may also look like this:</p><pre><code>{&quot;rolearn&quot;:&quot;arn:aws:iam::1234XXXX0:role/test-us-east-2-argocd-server-sa&quot;,&quot;username&quot;:&quot;arn:aws:iam::2391222XXXX7:role/test-us-east-2-argocd-server-sa&quot;,&quot;groups&quot;:[&quot;system:masters&quot;]}</code></pre><p>The next step involves authenticating back into the management cluster.  We will be linking our ArgoCD service accounts with the IAM roles.</p><pre><code>kubectl edit serviceaccount argocd-server -n argocd</code></pre><p>Annotate the service account with the role information.</p><pre><code>eks.amazonaws.com/role-arn: arn:aws:iam::MGMT_ACCOUNT_ID:role/MGMT_CLUSTER_NAME-argocd-server-sa</code></pre><p>Do the same for the application controller service account.</p><pre><code>kubectl edit serviceaccount argocd-application-controller -n argocd</code></pre><p>Next, we need to add a SecurityContext to the argocd-server k8s deployment.</p><pre><code>kubectl edit deployment argocd-server -n argocd</code></pre><p>Scroll down and you will see an empty securityContext field. Change it to the following:</p><pre><code>securityContext:
  fsGroup: 999 # matches the user/group the ArgoCD images run as</code></pre><p>Lastly, restart all of the deployments.</p><pre><code>kubectl rollout restart deployment argocd-server -n argocd
kubectl rollout restart deployment argocd-repo-server -n argocd
kubectl rollout restart deployment argocd-redis -n argocd
kubectl rollout restart deployment argocd-notifications-controller -n argocd
kubectl rollout restart deployment argocd-dex-server -n argocd
kubectl rollout restart deployment argocd-applicationset-controller -n argocd</code></pre><p>We haven&#x2019;t added any clusters at this point. First, we need to make sure the roles are properly configured.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">There is no option to add an external cluster through the web UI.</div></div><p>Create a pod in the argocd namespace; we will use it to install the ArgoCD CLI and AWS CLI to ensure everything is working correctly. Deploy the following YAML to the management cluster, saved as argocd-cli-pod.yaml.</p><pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    app.kubernetes.io/name: argocd-cli
  name: argocd-cli
spec:
  serviceAccountName: argocd-server # reuse the SA that carries the IAM role annotation
  containers:
  - name: argocd-cli
    image: amazon/aws-cli
    command: [ &quot;/bin/bash&quot;, &quot;-c&quot; ]
    args:
      - |
        set -e
        curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
        install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
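        # optional check (my addition): confirm the CLI binary works before the pod idles
        argocd version --client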
        sleep 5000
</code></pre><p>Apply it.</p><p><code>kubectl apply -f argocd-cli-pod.yaml -n argocd</code></p><p>Let&apos;s shell into it:</p><pre><code>kubectl exec --stdin --tty argocd-cli -n argocd -- /bin/bash</code></pre><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/08/image-4.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="598" height="52"></figure><p>The pod now has both CLIs installed, so let&#x2019;s check our credentials to make sure the linked IAM role is configured correctly.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/08/image-5.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="705" height="119" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/08/image-5.png 600w, https://nick.rond-eau.com/content/images/2024/08/image-5.png 705w"></figure><p>Now, we assume the role in one of the workload accounts using its account number and cluster name.</p><pre><code>aws sts assume-role --role-arn arn:aws:iam::123456789:role/test-us-east-2-argocd-server-sa --role-session-name Workload1</code></pre><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nick.rond-eau.com/content/images/2024/08/image-6.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="666" height="164" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/08/image-6.png 600w, https://nick.rond-eau.com/content/images/2024/08/image-6.png 666w"><figcaption><span style="white-space: pre-wrap;">The output will look like this; copy the keys and token.</span></figcaption></figure><p>Export the provided keys and token, then check your AWS credentials again.</p><pre><code>export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_SESSION_TOKEN=</code></pre><pre><code>aws sts get-caller-identity</code></pre><p>It should now show the workload account number instead of the infra account number; the output will be different than before.<br></p><p>We now update the kubeconfig and add the cluster to our ArgoCD management cluster.</p><pre><code>aws eks update-kubeconfig --name test-us-east-2 --region us-east-2

# use the admin password you set earlier; add --insecure if ArgoCD is still
# serving its self-signed certificate
argocd login argocd-server --username admin --password 1234567890

argocd cluster add arn:aws:eks:us-east-2:123456789:cluster/test-us-east-2 --aws-role-arn arn:aws:iam::123456789:role/test-us-east-2-argocd-server-sa --aws-cluster-name test-us-east-2</code></pre><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/08/image-7.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="572" height="30"></figure><p>Do the same for the Dev cluster.  Unset the AWS credentials.</p><p>We have now added an external cluster through aws-auth to our management cluster. Now we can deploy applications by setting the destination server for the applications we are looking to deploy.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">You may need to restart the ArgoCD controller pod to pick up authentication changes.</div></div><p>Next, we move on to setting up the actual deployments to the clusters.  Let&#x2019;s try creating a test application.  I am going to use the guestbook example from the ArgoCD repository.</p><h1 id="deployments">Deployments</h1><p>In the ArgoCD UI, go to Settings &gt; Projects and create a new project.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/08/image-8.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="1432" height="705" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/08/image-8.png 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/08/image-8.png 1000w, https://nick.rond-eau.com/content/images/2024/08/image-8.png 1432w" sizes="(min-width: 720px) 720px"></figure><p>This is where we set the base configuration for our applications.  
You can whitelist resources and perform various other tasks.</p><p>Add your source repos.</p><p>Create entries for all of the clusters you plan to deploy to, that is, the ones we previously registered with ArgoCD.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/08/image-9.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="936" height="326" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/08/image-9.png 600w, https://nick.rond-eau.com/content/images/2024/08/image-9.png 936w" sizes="(min-width: 720px) 720px"></figure><p>Set the allowed Cluster Resources and allowed Namespaces permissions:</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/08/image-10.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="943" height="487" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/08/image-10.png 600w, https://nick.rond-eau.com/content/images/2024/08/image-10.png 943w" sizes="(min-width: 720px) 720px"></figure><p>You can also run this command in the CLI:</p><pre><code>argocd proj allow-cluster-resource &lt;project-name&gt; &quot;*&quot; &quot;*&quot;</code></pre><p>Now, we move on to connecting GitHub to our project.</p><p>You will need to authorize GitHub via a Personal Access Token (PAT).  
In GitHub, under Settings, you will see Developer settings at the bottom.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/08/image-11.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="461" height="776"></figure><p>Then create a classic token.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/08/image-12.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="294" height="176"></figure><p>Then, in ArgoCD, connect to the repo via HTTPS, using your username and the token as the password.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/08/image-13.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="1015" height="729" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/08/image-13.png 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/08/image-13.png 1000w, https://nick.rond-eau.com/content/images/2024/08/image-13.png 1015w" sizes="(min-width: 720px) 720px"></figure><p>Now go to Applications and select new app.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/08/image-14.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="910" height="123" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/08/image-14.png 600w, https://nick.rond-eau.com/content/images/2024/08/image-14.png 910w" sizes="(min-width: 720px) 720px"></figure><p>Select the project we created, and the path is the path on the repository you want ArgoCD to watch for this deployment. 
In this case, the dev environment.</p><p>Sync the application and you will see that all the pods are healthy and the sync process was successful.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/08/image-15.png" class="kg-image" alt="ArgoCD: Multi-Cluster Deployments with AWS EKS" loading="lazy" width="832" height="74" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/08/image-15.png 600w, https://nick.rond-eau.com/content/images/2024/08/image-15.png 832w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[Creating a Kubernetes Cluster at Home]]></title><description><![CDATA[<p>I&apos;ve had the itch recently for a really good homelab project.  After looking at the other server I have, it&apos;s running almost 50 containers.  Its main use is Plex.  It runs everything from this blog you are reading, to our family recipe book.  Why</p>]]></description><link>https://nick.rond-eau.com/creating-a-kubernetes-cluster-at-home/</link><guid isPermaLink="false">6611ec069e280700016efc5a</guid><dc:creator><![CDATA[Nick Rondeau]]></dc:creator><pubDate>Thu, 11 Apr 2024 02:11:03 GMT</pubDate><media:content url="https://nick.rond-eau.com/content/images/2024/04/ATy7-blogunderstanding.the.role.of.cluster.autoscaler.in.kubernetes.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://nick.rond-eau.com/content/images/2024/04/ATy7-blogunderstanding.the.role.of.cluster.autoscaler.in.kubernetes.jpg" alt="Creating a Kubernetes Cluster at Home"><p>I&apos;ve had the itch recently for a really good homelab project.  After looking at the other server I have, it&apos;s running almost 50 containers.  Its main use is Plex.  It runs everything from this blog you are reading, to our family recipe book.  Why am I bogging it down while it&apos;s trying to transcode media?  
I asked myself this and decided to make a plan for these additional services.</p><p>I&apos;ve always wanted to run my own Kubernetes cluster and a little challenge is nothing to be scared of.  It&apos;s always looked like it would be fun to make and maintain my own.  I&apos;ve already done it a bunch of times; installing and creating a new cluster is part of the CKA exam, after all.  I was initially looking at k3s, but decided to do the full k8s install instead.  More fun that way with a bit more control.  Also, it&apos;s fun to learn as much as possible during the process.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nick.rond-eau.com/content/images/2024/04/PXL_20240410_231901593.jpg" class="kg-image" alt="Creating a Kubernetes Cluster at Home" loading="lazy" width="2000" height="2656" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/04/PXL_20240410_231901593.jpg 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/04/PXL_20240410_231901593.jpg 1000w, https://nick.rond-eau.com/content/images/size/w1600/2024/04/PXL_20240410_231901593.jpg 1600w, https://nick.rond-eau.com/content/images/2024/04/PXL_20240410_231901593.jpg 2000w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">A small footprint, a lot more power, and much cheaper than raspberry pi 4s</span></figcaption></figure><p>I purchased four Lenovo ThinkCentre M900 workstations, so I will have a master and three worker nodes.  An affordable solution: each unit was under $75.</p><p>This post will run through the process of how I created my Kubernetes cluster, starting from fresh Debian bookworm installs to having a working cluster.  Let&apos;s begin.</p><p>Log in as root and install sudo:</p><pre><code>su -
apt update
apt install sudo</code></pre><p>and then add yourself to sudo:</p><pre><code>adduser myusername sudo</code></pre><p>Install ufw - the firewall we are going to use for this.</p><pre><code>sudo apt install ufw</code></pre><p>Next, add your default policies.</p><pre><code>sudo ufw default deny incoming
sudo ufw default allow outgoing
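# Kubernetes itself needs ports open too (my addition; port list from the
# upstream Kubernetes docs, control-plane node shown; run these before
# kubeadm init later):
# sudo ufw allow 6443/tcp         # Kubernetes API server
# sudo ufw allow 2379:2380/tcp    # etcd server client API
# sudo ufw allow 10250/tcp        # kubelet API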
sudo ufw allow ssh</code></pre><p>Enable ufw.</p><pre><code>sudo ufw enable</code></pre><p>Let&apos;s set a static IP.</p><pre><code>sudo apt install vim net-tools</code></pre><p>View the route and gateway:</p><pre><code>netstat -nr </code></pre><p>It will have an output like this:</p><pre><code>
ronnic@debian-lenovo1:~$ netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 eno1
192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 eno1
</code></pre><p>Let&apos;s edit our interfaces file; first, back it up.</p><pre><code>cp /etc/network/interfaces /etc/network/interfaces.bak</code></pre><p>If you ever need to restore it, reverse the command.</p><pre><code>cp /etc/network/interfaces.bak /etc/network/interfaces</code></pre><p>Make the following edits, using your own info:</p><pre><code>iface eno1 inet static
       address 192.168.1.10 #this is the static IP address I want to set
       netmask 255.255.255.0 #use the info from the last line above
       network 192.168.1.0  #use the info from the last line above
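       dns-nameservers 192.168.1.1 #optional (my addition); requires the resolvconf package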
       gateway 192.168.1.1 #your router&apos;s address/gateway</code></pre><p>Reboot and ssh back in using the new IP.</p><p>Services with known vulnerabilities are a huge attack vector. You can get automatic security updates with unattended-upgrades.<br><br>Install the package:</p><pre><code>sudo apt install unattended-upgrades</code></pre><p>Enable automatic upgrades:</p><pre><code>sudo dpkg-reconfigure unattended-upgrades</code></pre><p>The build-essential meta package includes all the relevant tools and necessary packages to enable developers to build and compile software from the source.</p><pre><code>sudo apt install build-essential -y</code></pre><p>Install <a href="https://docs.docker.com/engine/install/debian/?ref=nick.rond-eau.com" rel="noreferrer">Docker </a>and follow the<a href="https://docs.docker.com/engine/install/linux-postinstall/?ref=nick.rond-eau.com" rel="noreferrer"> post-install instructions</a> for Linux as well. This allows the running of docker commands without sudo as well as running at startup.</p><p>Now, let&apos;s set up kubectl, which is the command line tool for Kubernetes.  We&apos;ll also install kubelet, the node agent.  Lastly, kubeadm, for cluster administration.  This is done on what will be our master node.</p><p>Install packages needed to use the Kubernetes repository:</p><pre><code class="language-shell">sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl
</code></pre><p>Download the public signing key. The same signing key is used for all repositories so you can disregard the version in the URL:</p><pre><code class="language-shell">sudo mkdir -p -m 755 /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
</code></pre><p>Add the repository:</p><figure class="kg-card kg-code-card"><pre><code>echo &apos;deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /&apos; | sudo tee /etc/apt/sources.list.d/kubernetes.list</code></pre><figcaption><p><span style="white-space: pre-wrap;">This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list</span></p></figcaption></figure><p>Install the tools:</p><pre><code class="language-bash">sudo apt update
sudo apt install -y kubelet kubeadm kubectl</code></pre><p>Prevent automatic updates to these applications:</p><pre><code>sudo apt-mark hold kubelet kubeadm kubectl</code></pre><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/04/16124288-covers-d2iq-refcard-233-gettingstartedwithkubernet-1.jpg" class="kg-image" alt="Creating a Kubernetes Cluster at Home" loading="lazy" width="2000" height="600" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/04/16124288-covers-d2iq-refcard-233-gettingstartedwithkubernet-1.jpg 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/04/16124288-covers-d2iq-refcard-233-gettingstartedwithkubernet-1.jpg 1000w, https://nick.rond-eau.com/content/images/size/w1600/2024/04/16124288-covers-d2iq-refcard-233-gettingstartedwithkubernet-1.jpg 1600w, https://nick.rond-eau.com/content/images/2024/04/16124288-covers-d2iq-refcard-233-gettingstartedwithkubernet-1.jpg 2000w" sizes="(min-width: 720px) 720px"></figure><h3 id="verify-installation">Verify Installation</h3><p>To verify your installation, you can check the version of each tool:</p><pre><code>kubeadm version</code></pre><pre><code>kubectl version --client</code></pre><pre><code>sudo systemctl status kubelet</code></pre><p>You should see that the service is active even if you haven&#x2019;t yet used it to set up a cluster.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nick.rond-eau.com/content/images/2024/04/image-2.png" class="kg-image" alt="Creating a Kubernetes Cluster at Home" loading="lazy" width="1661" height="196" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/04/image-2.png 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/04/image-2.png 1000w, https://nick.rond-eau.com/content/images/size/w1600/2024/04/image-2.png 1600w, https://nick.rond-eau.com/content/images/2024/04/image-2.png 1661w" sizes="(min-width: 720px) 720px"><figcaption><span 
style="white-space: pre-wrap;">Everything working perfectly so far</span></figcaption></figure><p>Kubernetes requires swap to be turned off.  Either install Debian without a swap partition or, if one exists, comment out any swap lines in <code>/etc/fstab</code>.</p><p>Also, run:</p><pre><code>sudo swapoff -a</code></pre><p>Next, load the required kernel modules: ensure that the <code>br_netfilter</code> module is loaded. This module is necessary for Kubernetes networking to function correctly.</p><pre><code>sudo modprobe br_netfilter</code></pre><p>To ensure these settings persist across reboots, add the following lines to <code>/etc/sysctl.conf</code> or a Kubernetes-specific configuration file under <code>/etc/sysctl.d/</code>:</p><pre><code>net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1</code></pre><p>Then apply the sysctl settings with:</p><pre><code>sudo sysctl --system</code></pre><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/04/image-3.png" class="kg-image" alt="Creating a Kubernetes Cluster at Home" loading="lazy" width="490" height="210"></figure><p>Next, adjust the firewall settings. Since we set up ufw, we need to allow traffic on the ports used by Kubernetes. </p><p>To elaborate on the network settings, here is a brief explanation of the ports that need to be open on the master and worker nodes.</p><h3 id="for-master-nodes"><u>For Master Nodes:</u></h3><ul><li><strong>6443</strong>: The Kubernetes API server port. The API server is the primary control plane component for the cluster, and this port must be accessible from all nodes in the cluster, so if your worker nodes need to communicate with the master, it must be open.</li><li><strong>2379-2380</strong>: These ports are for etcd server communications, used by Kubernetes to store all cluster data. They need to be accessible by all master nodes.</li><li><strong>10250</strong>: The Kubelet API, which must be accessible from the Kubernetes control plane.</li><li><strong>10251</strong>: The kube-scheduler port, accessible by the control plane.</li><li><strong>10252</strong>: The kube-controller-manager port, also for the control plane.</li><li><strong>179</strong>: BGP, used by Calico for pod networking.</li></ul><h3 id="for-worker-nodes"><u>For Worker Nodes:</u></h3><ul><li><strong>10250</strong>: The Kubelet API, which should be accessible from the master node.</li><li><strong>30000-32767</strong>: The NodePort range, used if you expose services with the NodePort type. These need to be accessible from outside the cluster.</li></ul><p>For ufw, the commands to open these ports would look like:</p><pre><code>sudo ufw allow 2379:2380/tcp
sudo ufw allow 6443,10250,10251,10252,179/tcp
</code></pre><p>Finally, we get to initializing the master node. Initialize the cluster by running <code>kubeadm init</code>. Specify the pod network CIDR if you plan to use a networking solution that requires it (such as Calico):</p><pre><code>sudo kubeadm init --pod-network-cidr=192.168.0.0/16</code></pre><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/04/image-4.png" class="kg-image" alt="Creating a Kubernetes Cluster at Home" loading="lazy" width="987" height="315" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/04/image-4.png 600w, https://nick.rond-eau.com/content/images/2024/04/image-4.png 987w" sizes="(min-width: 720px) 720px"></figure><p>I was running into an error when running the init command:</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/04/image-5.png" class="kg-image" alt="Creating a Kubernetes Cluster at Home" loading="lazy" width="1631" height="167" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/04/image-5.png 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/04/image-5.png 1000w, https://nick.rond-eau.com/content/images/size/w1600/2024/04/image-5.png 1600w, https://nick.rond-eau.com/content/images/2024/04/image-5.png 1631w" sizes="(min-width: 720px) 720px"></figure><p>If you find you are also getting this error, all you need to do is remove the containerd config, regenerate it, then restart the service.</p><p>Remove the installed default config file: <code>sudo rm /etc/containerd/config.toml</code></p><pre><code>containerd config default | sudo tee /etc/containerd/config.toml</code></pre><pre><code>sudo systemctl restart containerd</code></pre><p>Now, we deploy a pod network: Choose a pod network add-on compatible with <code>kubeadm</code> and deploy it. For example, to deploy Calico.  
Download the deployment.yaml from here:</p><pre><code>https://docs.projectcalico.org/manifests/calico.yaml</code></pre><p>Uncomment the following lines in it, then apply it with <code>kubectl apply -f calico.yaml</code>:</p><pre><code>            - name: CALICO_IPV4POOL_CIDR
              value: &quot;192.168.0.0/16&quot;</code></pre><p>We need to set up calicoctl also, which I do as a kubectl plugin.</p><p>Download and make <code>calicoctl</code> executable:</p><pre><code class="language-bash">curl -L https://github.com/projectcalico/calico/releases/download/v3.27.3/calicoctl-linux-amd64 -o kubectl-calico
chmod +x kubectl-calico</code></pre><p>Find a suitable directory in your <code>PATH</code>:</p><pre><code>echo $PATH</code></pre><p>This command will output something like <code>/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin</code>. These are the directories (separated by colons) where your system looks for executable files.</p><p>Move <code>kubectl-calico</code> to a directory in your <code>PATH</code>:</p><p>For most users, <code>/usr/local/bin</code> is a common directory for user-installed software and is typically in your <code>PATH</code>. To move <code>kubectl-calico</code> there, use:</p><pre><code class="language-bash">sudo mv kubectl-calico /usr/local/bin</code></pre><p>Let&apos;s make sure it works, run:</p><pre><code>kubectl calico version</code></pre><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/04/image-8.png" class="kg-image" alt="Creating a Kubernetes Cluster at Home" loading="lazy" width="1153" height="247" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/04/image-8.png 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/04/image-8.png 1000w, https://nick.rond-eau.com/content/images/2024/04/image-8.png 1153w" sizes="(min-width: 720px) 720px"></figure><p>Now, let&apos;s set up the worker nodes.  Make sure to disable swap.</p><p>Run the following for Master and Worker:</p><pre><code>sudo systemctl status *swap 
sudo systemctl mask  &quot;dev-&lt;&gt;.swap&quot;</code></pre><p>Follow the same instructions for installing kubeadm, kubectl, and kubelet.  Also, allow the worker ports in ufw for each node.</p><p>If you need to create another token and join command, use the following on the master node:</p><pre><code>sudo kubeadm token create --print-join-command</code></pre><p>For each worker we load Debian on the new machine, configure the static ip, configure the firewall, install docker and containerd, then install the kube tools.  Then, run the join command.</p><p>As you are running these join commands, check back on your master node to see if the additional workers are ready.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nick.rond-eau.com/content/images/2024/04/image-7.png" class="kg-image" alt="Creating a Kubernetes Cluster at Home" loading="lazy" width="1352" height="122" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/04/image-7.png 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/04/image-7.png 1000w, https://nick.rond-eau.com/content/images/2024/04/image-7.png 1352w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Four nodes with READY status!</span></figcaption></figure><p>There you have it, once you join the other nodes, you are ready to use your new cluster!</p><p>Want to automate this with Ansible?  I have it covered.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/ronnic1/ansible-k8s-nodes/tree/main?ref=nick.rond-eau.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - ronnic1/ansible-k8s-nodes: Ansible set up of local Kubernetes Cluster</div><div class="kg-bookmark-description">Ansible set up of local Kubernetes Cluster. 
Contribute to ronnic1/ansible-k8s-nodes development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/assets/pinned-octocat-093da3e6fa40.svg" alt="Creating a Kubernetes Cluster at Home"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">ronnic1</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/e5f0be56d6ceedcec234c85f36cad17e4b1d7104502b2dee8e25869977856be5/ronnic1/ansible-k8s-nodes" alt="Creating a Kubernetes Cluster at Home"></div></a></figure><p>Thanks for reading.</p>]]></content:encoded></item><item><title><![CDATA[Drone CI Workflows for Encrypted Backups to AWS S3]]></title><description><![CDATA[<p>Recently I started self hosting <a href="https://github.com/dani-garcia/vaultwarden?ref=nick.rond-eau.com" rel="noreferrer">Vaultwarden</a> and wanted a nice automated way to run backups, but also encrypt them.  I was already using <a href="https://about.gitea.com/?ref=nick.rond-eau.com" rel="noreferrer">Gitea </a>, and after some research <a href="https://www.drone.io/?ref=nick.rond-eau.com" rel="noreferrer">Drone CI</a> seemed like an excellent fit to get me started.</p><p>I want to configure a pipeline to grab the docker config</p>]]></description><link>https://nick.rond-eau.com/hosting-vaultwarden-and-backing-up-to-aws/</link><guid isPermaLink="false">65ea9731626c8100019b08f5</guid><dc:creator><![CDATA[Nick Rondeau]]></dc:creator><pubDate>Fri, 08 Mar 2024 18:50:39 GMT</pubDate><media:content url="https://nick.rond-eau.com/content/images/2024/03/1-p_6HGLSxsyO6lNGZjr2B9w-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://nick.rond-eau.com/content/images/2024/03/1-p_6HGLSxsyO6lNGZjr2B9w-1.png" alt="Drone CI Workflows for Encrypted Backups to AWS S3"><p>Recently I started self hosting <a href="https://github.com/dani-garcia/vaultwarden?ref=nick.rond-eau.com" 
rel="noreferrer">Vaultwarden</a> and wanted a nice automated way to run backups, but also encrypt them.  I was already using <a href="https://about.gitea.com/?ref=nick.rond-eau.com" rel="noreferrer">Gitea</a>, and after some research <a href="https://www.drone.io/?ref=nick.rond-eau.com" rel="noreferrer">Drone CI</a> seemed like an excellent fit to get me started.</p><p>I want to configure a pipeline to grab the Docker config directories on my server, compress them, encrypt them, and then upload them to S3.  This would ensure I always have a cloud backup for all of my containers - especially important services like Vaultwarden.  I need to always have a backup of my personal vault.  What better place than S3?</p><p>First, I need to set up a little infrastructure.  I want my Terraform state file saved to S3 as well, with DynamoDB handling state locking.  This is a pretty simple setup.</p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/f80cfe9dc208b122e7c3da8fa958b580.js"></script>
<!--kg-card-end: html-->
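<p>The bootstrap flow above can be sketched as two passes of Terraform commands (a sketch; the backend block and resource names come from your own configuration):</p>

```shell
# First pass: state is still local; this creates the S3 bucket and DynamoDB table
terraform init
terraform apply

# After adding the backend "s3" block, migrate the local state into the bucket
terraform init -migrate-state
```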
<p>You will need to create the bucket part of the code first, then bring in the state and DynamoDB sections.  When you run <code>terraform init -migrate-state</code> it will move the state over to S3. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nick.rond-eau.com/content/images/2024/03/image-2.png" class="kg-image" alt="Drone CI Workflows for Encrypted Backups to AWS S3" loading="lazy" width="1566" height="417" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/03/image-2.png 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/03/image-2.png 1000w, https://nick.rond-eau.com/content/images/2024/03/image-2.png 1566w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">A successfully uploaded state to S3</span></figcaption></figure><p>Create an IAM role for Drone with S3 access, plus KMS access if you plan to use SSE.</p><p>Authorize Drone CI in the Gitea settings, so it can access the repo we will run the pipeline from.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/03/image-1.png" class="kg-image" alt="Drone CI Workflows for Encrypted Backups to AWS S3" loading="lazy" width="1453" height="967" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/03/image-1.png 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/03/image-1.png 1000w, https://nick.rond-eau.com/content/images/2024/03/image-1.png 1453w" sizes="(min-width: 720px) 720px"></figure><p>Here is the Docker Compose file I used, configured for Gitea.</p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/2cd14e7074bb87a0be9a17b207262caf.js"></script>
<!--kg-card-end: html-->
<p>In this compose file, we are creating the Drone container, as well as a runner, which is required to run our pipeline.</p><p>A quick refresher on working with the compose file:<br>To start your application (in the background or detached mode):</p><pre><code>docker compose up -d</code></pre><ul><li>To stop your application and remove containers, networks, etc.:</li></ul><pre><code>docker compose down</code></pre><ul><li>To build or rebuild services:</li></ul><pre><code>docker compose build</code></pre><p>All of the important pieces are in place.  Next, create a repository and place a <code>.drone.yml</code> file in the root of the repo.</p><p>Add a few secrets to the repo.</p><p>AWS access keys and the default region, as well as the encryption key, which can be any 16+ character string you like.  This will be the key to decrypt your archive.</p>
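<p>If you need to generate a suitable key, any sufficiently long random string works; here is one way to produce a 32-character hex string with coreutils (the variable name is just illustrative):</p>

```shell
# Generate 16 random bytes and render them as 32 hex characters
ENCRYPTION_KEY=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "${#ENCRYPTION_KEY}"   # prints 32
```

Store the result as the secret value in Drone rather than committing it anywhere in the repo.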
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/25d995fc7662a870417fbcfcdab627c9.js"></script>
<!--kg-card-end: html-->
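<p>Outside of CI, the same backup can be reproduced by hand. This is only an illustrative sketch, not the exact pipeline contents; the paths, bucket name, and variable name are placeholders:</p>

```shell
# Compress and encrypt the config directory into a password-protected .7z
# (-mhe=on also encrypts the archive headers/file names)
7z a -p"$ENCRYPTION_KEY" -mhe=on backup-$(date +%F).7z /path/to/docker/config

# Upload the encrypted archive to S3
aws s3 cp backup-$(date +%F).7z s3://your-backup-bucket/
```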
<p>The pipeline is pretty simple: it compresses and encrypts the config folder, creating a .7z file, then uploads it to your S3 bucket of choice.</p><p>It&apos;s time to run it! Log into Drone CI and you will see the list of your repos.  Choose the repo where you placed the <code>.drone.yml</code> file.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/03/image-3.png" class="kg-image" alt="Drone CI Workflows for Encrypted Backups to AWS S3" loading="lazy" width="1428" height="402" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/03/image-3.png 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/03/image-3.png 1000w, https://nick.rond-eau.com/content/images/2024/03/image-3.png 1428w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nick.rond-eau.com/content/images/2024/03/image-4.png" class="kg-image" alt="Drone CI Workflows for Encrypted Backups to AWS S3" loading="lazy" width="1099" height="545" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/03/image-4.png 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/03/image-4.png 1000w, https://nick.rond-eau.com/content/images/2024/03/image-4.png 1099w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Activate the repository</span></figcaption></figure><p>After the repository is activated, select it. 
Now create a build with the pipeline.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/03/image-5.png" class="kg-image" alt="Drone CI Workflows for Encrypted Backups to AWS S3" loading="lazy" width="1876" height="172" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/03/image-5.png 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/03/image-5.png 1000w, https://nick.rond-eau.com/content/images/size/w1600/2024/03/image-5.png 1600w, https://nick.rond-eau.com/content/images/2024/03/image-5.png 1876w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nick.rond-eau.com/content/images/2024/03/image-6.png" class="kg-image" alt="Drone CI Workflows for Encrypted Backups to AWS S3" loading="lazy" width="485" height="273"><figcaption><span style="white-space: pre-wrap;">Choose branch and create</span></figcaption></figure><p>You should now have a working pipeline.</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/03/image-7.png" class="kg-image" alt="Drone CI Workflows for Encrypted Backups to AWS S3" loading="lazy" width="1689" height="1003" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/03/image-7.png 600w, https://nick.rond-eau.com/content/images/size/w1000/2024/03/image-7.png 1000w, https://nick.rond-eau.com/content/images/size/w1600/2024/03/image-7.png 1600w, https://nick.rond-eau.com/content/images/2024/03/image-7.png 1689w" sizes="(min-width: 720px) 720px"></figure><p>Let&apos;s verify in S3.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nick.rond-eau.com/content/images/2024/03/image-8.png" class="kg-image" alt="Drone CI Workflows for Encrypted Backups to AWS S3" loading="lazy" width="1592" height="683" srcset="https://nick.rond-eau.com/content/images/size/w600/2024/03/image-8.png 600w, 
https://nick.rond-eau.com/content/images/size/w1000/2024/03/image-8.png 1000w, https://nick.rond-eau.com/content/images/2024/03/image-8.png 1592w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">All of our encrypted archives placed neatly in our bucket. </span></figcaption></figure><p>I hope this helps you with your own homelabs and servers. It&apos;s no use hosting services like Vaultwarden if your whole password vault could be corrupted or lost on a local machine.  This is a simple and effective way to securely back up these services.  Not to mention cheap: this costs less than 25 cents a month in S3 charges, and if you use KMS for SSE that will cost an extra $1. </p><p>Thank you for reading!</p>
It allows you to manage multiple containers deployed across multiple host machines.</p><p>One of the key benefits of Docker Swarm is the high level of availability it offers for applications. In a swarm, there are multiple worker nodes and at least one manager node, which is responsible for handling the worker nodes&#x2019; resources and ensuring that the cluster operates efficiently.</p><p><strong>Let&#x2019;s start setting up our cluster in Terraform.</strong></p><p>Verify your Terraform installation:</p><pre><code>$ terraform --version
Terraform v0.14.2</code></pre><p>With Terraform (version 0.14.2 as of this writing) we can provision cloud architecture by writing code, in this case in HCL, the&#xA0;<a href="https://github.com/hashicorp/hcl?ref=nick.rond-eau.com" rel="noopener ugc nofollow">HashiCorp</a>&#xA0;configuration language.</p><p>We are going to set up a Swarm cluster on AWS using Ansible and Terraform; refer to the diagram below. We will be setting up one master node and two worker nodes.</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/v2/resize:fit:412/1*YGUUjDGONugSkP9XdVC2-Q.png" class="kg-image" alt="Using Docker Swarm on AWS with Ansible &amp; Terraform" loading="lazy" width="412" height="510"></figure><p><strong>Global Variables</strong></p><p>This file contains environment-specific configuration: AMI, instance type, region, etc.</p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/6e72396627e6f777c411a11cd93ab3c1.js"></script>
<!--kg-card-end: html-->
<p><strong>Configure AWS as our Provider</strong></p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/d1819d6e6f923cf2be87eb09bbe817bc.js"></script>
<!--kg-card-end: html-->
<p><strong>Set up Security Groups for inbound/outbound traffic.</strong></p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/79d326432754c8c12554510d277fa261.js"></script>
<!--kg-card-end: html-->
<p><strong>Configure our EC2 Instances, Our Workers and Master</strong></p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/67a2d72fa45b8e635ff53b010dd70fe5.js"></script>
<!--kg-card-end: html-->
<p><strong>Next, our Bootstrap script to install the latest version of Docker</strong></p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/20d35a3cd2cad82fd80f81f73da26d93.js"></script>
<!--kg-card-end: html-->
<p><strong>Transform to Swarm Cluster with Ansible, setting up the playbook</strong></p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/c29820266aec1b9f49ccbd184814ecf4.js"></script>
<!--kg-card-end: html-->
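<p>The playbook runs against an Ansible inventory file. A sketch of what that inventory might look like (the group names and IPs here are illustrative; use the groups your playbook targets and the public IPs Terraform outputs):</p>

```ini
; /etc/ansible/hosts (or a local "hosts" file passed with -i)
[manager]
3.92.0.10

[workers]
3.92.0.11
3.92.0.12
```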
<p>Everything is now complete and ready to initialize Terraform, plan out the actions we are going to take, and then apply them to the cloud.</p><pre><code>$ terraform init
$ terraform plan
$ terraform apply</code></pre><p>Update the&#xA0;<strong>/etc/ansible/hosts</strong>&#xA0;file with the&#xA0;<strong>public IP</strong>&#xA0;of each&#xA0;<strong>EC2&#xA0;</strong>instance.</p><pre><code>$ ansible-playbook -i hosts playbook.yml</code></pre><p>I felt this was a good introduction to all of the tools involved.  Thanks for reading.</p><p>I have uploaded all the files to&#xA0;<a href="https://github.com/ronnic1/docker-swarm-ansible-terraform?ref=nick.rond-eau.com" rel="noopener ugc nofollow">GitHub</a>.</p>]]></content:encoded></item><item><title><![CDATA[CI/CD using Docker, Travis CI, ECS, and Python]]></title><description><![CDATA[<p>I am going to take you through the steps setting up an environment for automated building, testing, and deployment, using containers and services hosted in the cloud.</p><p><strong>What we need:</strong></p><ul><li><a href="https://github.com/?ref=nick.rond-eau.com" rel="noopener ugc nofollow">GitHub</a></li><li><a href="https://docs.docker.com/get-docker/?ref=nick.rond-eau.com" rel="noopener ugc nofollow">Docker</a>&#xA0;and&#xA0;<a href="https://docs.docker.com/compose/install/?ref=nick.rond-eau.com" rel="noopener ugc nofollow">Docker Compose</a>&#xA0;installed</li><li><a href="https://hub.docker.com/?ref=nick.rond-eau.com" rel="noopener ugc nofollow">Docker Hub</a></li><li><a href="https://travis-ci.com/?ref=nick.rond-eau.com" rel="noopener ugc nofollow">Travis CI</a></li><li><a href="https://aws.amazon.com/?ref=nick.rond-eau.com" rel="noopener ugc nofollow">Amazon Web Services (AWS)</a></li></ul><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/v2/resize:fit:700/0*rtvSd3sLINN4Fg9P.png" class="kg-image" alt loading="lazy" width="700" height="364"></figure><p>When a commit</p>]]></description><link>https://nick.rond-eau.com/deploying-a-ci-cd-environment-using-docker-travis-ci-ecs-and-python/</link><guid isPermaLink="false">65c6647ec695cb0001d98c1b</guid><dc:creator><![CDATA[Nick 
Rondeau]]></dc:creator><pubDate>Mon, 12 Feb 2024 18:00:02 GMT</pubDate><media:content url="https://nick.rond-eau.com/content/images/2024/02/1_A1EDxMgzlnxbuoTiZmbo9g.webp" medium="image"/><content:encoded><![CDATA[<img src="https://nick.rond-eau.com/content/images/2024/02/1_A1EDxMgzlnxbuoTiZmbo9g.webp" alt="CI/CD using Docker, Travis CI, ECS, and Python"><p>I am going to take you through the steps setting up an environment for automated building, testing, and deployment, using containers and services hosted in the cloud.</p><p><strong>What we need:</strong></p><ul><li><a href="https://github.com/?ref=nick.rond-eau.com" rel="noopener ugc nofollow">GitHub</a></li><li><a href="https://docs.docker.com/get-docker/?ref=nick.rond-eau.com" rel="noopener ugc nofollow">Docker</a>&#xA0;and&#xA0;<a href="https://docs.docker.com/compose/install/?ref=nick.rond-eau.com" rel="noopener ugc nofollow">Docker Compose</a>&#xA0;installed</li><li><a href="https://hub.docker.com/?ref=nick.rond-eau.com" rel="noopener ugc nofollow">Docker Hub</a></li><li><a href="https://travis-ci.com/?ref=nick.rond-eau.com" rel="noopener ugc nofollow">Travis CI</a></li><li><a href="https://aws.amazon.com/?ref=nick.rond-eau.com" rel="noopener ugc nofollow">Amazon Web Services (AWS)</a></li></ul><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/v2/resize:fit:700/0*rtvSd3sLINN4Fg9P.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="700" height="364"></figure><p>When a commit or a merge is done to a branch, this will trigger Travis and run a list of instructions to build and run our tests. 
If the build is done successfully and every test passes, Travis will push our Docker image to Docker Hub and trigger an update event on ECS, telling our cluster that it has a new image version to download.</p><p>Our Python application consists of two files:&#xA0;<strong>main.py</strong>&#xA0;contains our Flask code, and&#xA0;<strong>requirements.txt&#xA0;</strong>lays out its dependencies.</p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/4591a459c27e8b9208d27ae5f623b307.js"></script>
<!--kg-card-end: html-->

<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/4d342cbe5d40a157357153f4b0b42be0.js"></script>
<!--kg-card-end: html-->
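<p>Before containerizing anything, you can sanity-check the app locally. This is just a sketch; it assumes the Flask app in <strong>main.py</strong> serves on port 8000, matching the port mapping used later:</p>

```shell
# Install dependencies and run the app in the background
pip install -r requirements.txt
python main.py &
APP_PID=$!

# The app should answer on port 8000 with the text from main.py
curl -s http://localhost:8000

# Stop the background server
kill "$APP_PID"
```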
<p>Now let&#x2019;s build a Dockerfile and a docker-compose for our Python code and upload it to Docker Hub.</p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/13c1c20463312817968ba9d3b7d11136.js"></script>
<!--kg-card-end: html-->
<p>Now it&#x2019;s time to create the image and push it to Docker Hub.</p><pre><code>$ docker-compose build --pull
$ docker-compose push</code></pre><p>You should see something similar to this when complete:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://miro.medium.com/v2/resize:fit:451/1*iv0v4nIRIbiNjj8SOHW3_A.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="451" height="425"><figcaption><span style="white-space: pre-wrap;">a successful build</span></figcaption></figure><p>and it&#x2019;s now been pushed to Docker Hub&#x2026;.</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/v2/resize:fit:700/1*8pvxgN5B7iKQVQmg3MKodg.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="700" height="184"></figure><p>Let&#x2019;s check</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://miro.medium.com/v2/resize:fit:700/1*IbcVD3_GWqfWx4Snv4esew.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="700" height="367"><figcaption><span style="white-space: pre-wrap;">There it is!</span></figcaption></figure><p>Now it&#x2019;s time to log in to&#xA0;<a href="https://www.travis-ci.com/?ref=nick.rond-eau.com" rel="noopener ugc nofollow">Travis-CI</a>&#xA0;with your GitHub account.This will sync your repositories.</p><p>We will need to create&#xA0;<strong>.travis.yml&#xA0;</strong>and<strong>&#xA0;travis-deploy.sh&#xA0;</strong>scripts:</p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/b3f79a011335a2ad45d063d6a04a6325.js"></script>
<!--kg-card-end: html-->

<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/03b4ec77e78400f31689db6ff2daf4c2.js"></script>
<!--kg-card-end: html-->
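<p>At its core, the deploy script&apos;s job is to tell ECS to roll the service onto the newly pushed image. A minimal version of that step looks like this (a sketch; the cluster and service names are placeholders, and it assumes the task definition references the <code>:latest</code> tag):</p>

```shell
# Force ECS to start new tasks, which pull the :latest image from Docker Hub
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --force-new-deployment
```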
<p>Now we approach the ECS portion&#x2026;</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/v2/resize:fit:700/1*KRl12Xavn5rNvx6VT6tnxg.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="700" height="312"></figure><p>Choose a name for the container. For the image, use the following format:</p><p><code>Docker Hub username/Repository name:latest</code></p><p>Also, under port mappings make sure you specify&#xA0;<strong>8000</strong>.</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/v2/resize:fit:700/1*5AiRwxzqHG6894KBLUnAwA.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="700" height="437"></figure><p>Then under Load Balancer type, choose Application Load Balancer.</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/v2/resize:fit:700/1*Mab5NoA22rFBk9Uun0Ru6A.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="700" height="556"></figure><p>Choose a cluster name.</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/v2/resize:fit:700/1*KJLGucUHm6wDjB3DdRyM6g.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="700" height="513"></figure><p>Review and create.</p><p>Now on to Travis CI&#x2026;</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/v2/resize:fit:700/1*wGlwJxe9FaIO550-jPgeQA.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="700" height="215"></figure><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/v2/resize:fit:700/1*CK9COphcE3NmWmABs77ZYg.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="700" height="276"></figure><p>You will see the GitHub repo your Travis file is uploaded to; 
Travis will detect it and run the file.</p><p>We have to set up our environment variables in Travis. Under your project, go to settings.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://miro.medium.com/v2/resize:fit:700/1*rCaU6eWUjsh-uIPbAlFHCw.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="700" height="211"><figcaption><span style="white-space: pre-wrap;">this is what they should look like</span></figcaption></figure><p>Now, on AWS go to EC2&gt;Load Balancer&gt;Basic Configuration and copy the DNS name. Paste it in your browser with the port 8000.</p><p>You will see the text we set up earlier.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://miro.medium.com/v2/resize:fit:700/1*kwpVW04hhH0B8Q_Nv-J8yw.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="700" height="356"><figcaption><span style="white-space: pre-wrap;">text we entered into main.py</span></figcaption></figure><p>Let&#x2019;s go ahead and put everything to the test! Make a change to the text and commit. 
You can see events put in motion on Travis.</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/v2/resize:fit:571/1*ro2PvEaNZrMZ_3JPQ8jYvQ.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="571" height="164"></figure><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/v2/resize:fit:700/0*rtvSd3sLINN4Fg9P.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="700" height="364"></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://miro.medium.com/v2/resize:fit:700/1*P8o8evfvqn04Kha2daQlLQ.png" class="kg-image" alt="CI/CD using Docker, Travis CI, ECS, and Python" loading="lazy" width="700" height="360"><figcaption><span style="white-space: pre-wrap;">Refresh and see the update &#x2014; It worked!</span></figcaption></figure><p>After building and testing, Travis will deploy our image to Docker Hub and notify ECS that there&#x2019;s a new version. You have now created your own CI/CD environment! Thanks for reading and following along. Share what you&#x2019;re working on in the comments.</p><p>I have uploaded all my files to&#xA0;<a href="https://github.com/ronnic1/cicd_docker?ref=nick.rond-eau.com" rel="noopener ugc nofollow">GitHub</a>&#xA0;so you can follow along.</p>]]></content:encoded></item><item><title><![CDATA[VPC Peering Between Two AWS Accounts Using Terraform]]></title><description><![CDATA[<p>Continuing with Terraform, it&#x2019;s amazing how easily it cuts through tedious tasks, such as setting up more complicated infrastructure. It turns something you would assume would take hours into minutes, easily replicated if need be. 
The end result of this project will be completing a VPC peering connection</p>]]></description><link>https://nick.rond-eau.com/vpc-peering-between-two-aws-accounts/</link><guid isPermaLink="false">65c66312c695cb0001d98c06</guid><dc:creator><![CDATA[Nick Rondeau]]></dc:creator><pubDate>Fri, 09 Feb 2024 17:42:36 GMT</pubDate><media:content url="https://nick.rond-eau.com/content/images/2024/02/Running-fault-tolerant-Keycloak-with-Infinispan-in-Kubernetes-1-1024x521.png" medium="image"/><content:encoded><![CDATA[<img src="https://nick.rond-eau.com/content/images/2024/02/Running-fault-tolerant-Keycloak-with-Infinispan-in-Kubernetes-1-1024x521.png" alt="VPC Peering Between Two AWS Accounts Using Terraform"><p>Continuing with Terraform, it&#x2019;s amazing how easily it cuts through tedious tasks, such as setting up more complicated infrastructure. It turns something you would assume would take hours into minutes, easily replicated if need be. The end result of this project will be completing a VPC peering connection request across 2 AWS accounts.</p><p>It&#x2019;s assumed that the 2 VPCs you need peered have already been created. We need to create the peering request from the peering owner VPC, accept the peering connection request in the accepter account, and update the route tables in both the VPCs with entries for the peering connection from either side.</p><p>Some of the salient considerations to be kept in mind are:</p><ul><li>Access Credentials are needed for both AWS accounts</li><li>There may be multiple route tables in each VPC, and peering connection entries have to be updated in all of them</li></ul>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/cc22524b80851280a6869fa7edc187a4.js"></script>
<!--kg-card-end: html-->
<p>Since we have 2 accounts, I&#x2019;ve created 2 variables, namely&#xA0;<em>owner_profile</em>&#xA0;and&#xA0;<em>accepter_profile</em>, to pass these at runtime, or as a&#xA0;<a href="https://learn.hashicorp.com/terraform/getting-started/variables?ref=nick.rond-eau.com" rel="noopener ugc nofollow">.tfvars</a>&#xA0;file. I have the option to hard-code credentials, but it is not a good security practice. Additionally, since we serve multiple customers and we will be re-using the code, variables are the way to go. I&#x2019;ve also defined variables for both the VPCs in question.</p><p>When you do a cross-account peering connection request, you need the 12-digit account ID of the AWS account where the accepter VPC resides. If you&#x2019;ve noticed, I have not defined this as a variable. While it&#x2019;s not straightforward, it&#x2019;s possible to get the account ID using the&#xA0;<a href="https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html?ref=nick.rond-eau.com" rel="noopener ugc nofollow">Amazon Resource Names</a>&#xA0;(ARNs).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://miro.medium.com/v2/resize:fit:700/1*yVvhCnQplrozHoGFTg_z-g.png" class="kg-image" alt="VPC Peering Between Two AWS Accounts Using Terraform" loading="lazy" width="700" height="558"><figcaption><span style="white-space: pre-wrap;">VPC Peering Request Screen- AWS Console</span></figcaption></figure><p>ARNs typically have a format as follows:</p><pre><code>arn:aws:servicename:region:account-id:resource</code></pre><p>Since we now have the VPC id of the accepter VPC, we can use its ARN to extract the account-id. The code below does exactly that. I am using the&#xA0;<a href="https://www.terraform.io/docs/providers/aws/d/vpc.html?ref=nick.rond-eau.com" rel="noopener ugc nofollow">aws_vpc</a>&#xA0;data source, and using the ARN to extract the account-id. I have defined the accepter_account_id as a local value.</p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/858ef43c024cfe49c902278b3d4827e5.js"></script>
<!--kg-card-end: html-->
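<p>For readers who can&#x2019;t see the embedded gist, here is a minimal sketch of the idea (the provider alias, variable, and resource names are my own assumptions, not necessarily those in the gist). The <code>aws_vpc</code> data source exposes the VPC&#x2019;s ARN, and splitting it on <code>:</code> yields the account-id at index 4:</p><pre><code>data &quot;aws_vpc&quot; &quot;accepter_vpc&quot; {
    provider = aws.accepter
    id       = var.accepter_vpc_id
}

locals {
    # ARN format: arn:aws:ec2:region:account-id:vpc/vpc-id
    accepter_account_id = split(&quot;:&quot;, data.aws_vpc.accepter_vpc.arn)[4]
}</code></pre>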
<p>Now that I have the information needed to raise a VPC peering request, I am completing the peering request initiation and acceptance as below. If you notice, I am using the profile variables to tag the peering connections as well. Tagging is a good practice, especially if you have multiple peering connections, and it&#x2019;ll help during any maintenance or network troubleshooting.</p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/ed111616254eb76b53ae40a9e93ae920.js"></script>
<!--kg-card-end: html-->
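<p>A minimal sketch of that request/acceptance pair (all provider aliases, variable names, and resource names here are assumptions for illustration): the owner account raises the request, and the accepter account approves it with <code>aws_vpc_peering_connection_accepter</code>:</p><pre><code>resource &quot;aws_vpc_peering_connection&quot; &quot;peering&quot; {
    provider      = aws.owner
    vpc_id        = var.owner_vpc_id
    peer_vpc_id   = var.accepter_vpc_id
    peer_owner_id = local.accepter_account_id
    auto_accept   = false

    tags = {
        Side = &quot;Requester-${var.owner_profile}&quot;
    }
}

resource &quot;aws_vpc_peering_connection_accepter&quot; &quot;peering_accepter&quot; {
    provider                  = aws.accepter
    vpc_peering_connection_id = aws_vpc_peering_connection.peering.id
    auto_accept               = true

    tags = {
        Side = &quot;Accepter-${var.accepter_profile}&quot;
    }
}</code></pre>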
<p>Now that the peering connection is created, we have to update the route table entries on both sides to send traffic via the peering connection. Given that a single VPC can have multiple route tables, I wanted the code to work irrespective of the number of route tables each VPC has, so I have created a loop using count to cycle through all the route tables and create the route entry to the other VPC via the peering connection ID. Note that the route tables are being updated with the peered VPC CIDR block. It is also possible to route specific subnets via the peering connection, but I have not done that here. This may be relevant in cases where you need to access some central servers, such as a Directory Service or an anti-virus server, in a Shared Services VPC as part of a&#xA0;<a href="https://aws.amazon.com/answers/aws-landing-zone/?ref=nick.rond-eau.com" rel="noopener ugc nofollow">landing zone</a>. Some organisations may also host other servers in the Shared Services VPC which are not relevant for all the peered VPCs, and want to restrict access to specific subnets only.</p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/ronnic1/e6ee7a8e0215aeeb5f671d120195adc8.js"></script>
<!--kg-card-end: html-->
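<p>A sketch of that loop (resource and variable names are assumed, not necessarily those in the gist): the <code>aws_route_tables</code> data source lists every route table in a VPC, and <code>count</code> creates one <code>aws_route</code> per table, pointing the peered VPC&#x2019;s CIDR at the peering connection:</p><pre><code>data &quot;aws_route_tables&quot; &quot;owner_rts&quot; {
    provider = aws.owner
    vpc_id   = var.owner_vpc_id
}

resource &quot;aws_route&quot; &quot;owner_to_accepter&quot; {
    provider                  = aws.owner
    count                     = length(data.aws_route_tables.owner_rts.ids)
    route_table_id            = tolist(data.aws_route_tables.owner_rts.ids)[count.index]
    destination_cidr_block    = var.accepter_vpc_cidr
    vpc_peering_connection_id = aws_vpc_peering_connection.peering.id
}</code></pre><p>The same block is mirrored with the accepter provider and the owner VPC&#x2019;s CIDR so return traffic has a route as well.</p>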
<h3 id="conclusion">Conclusion</h3><p>I explained how to create VPC peering connections and update route tables. I also touched upon some Terraform concepts like data sources, local values, and loops. You can find the complete Terraform code for the above at my&#xA0;<a href="https://github.com/ronnic1/vpc-peering-project.git?ref=nick.rond-eau.com" rel="noopener ugc nofollow">GitHub repository</a>. Thanks for reading, until next time.</p>]]></content:encoded></item><item><title><![CDATA[The OSI Model - The Seven Layers Explained]]></title><description><![CDATA[<p>The Open Systems Interconnection (OSI) model is a conceptual framework used to understand and standardize the functions of a telecommunications or computing system without regard to its underlying internal structure and technology. Its design is structured into seven distinct layers, each with a specific role in the process of transferring</p>]]></description><link>https://nick.rond-eau.com/the-osi-model-the-seven-layers-explained/</link><guid isPermaLink="false">65c2913fc695cb0001d98ba3</guid><dc:creator><![CDATA[Nick Rondeau]]></dc:creator><pubDate>Tue, 06 Feb 2024 20:10:31 GMT</pubDate><media:content url="https://nick.rond-eau.com/content/images/2024/02/1000019418.png" medium="image"/><content:encoded><![CDATA[<img src="https://nick.rond-eau.com/content/images/2024/02/1000019418.png" alt="The OSI Model - The Seven Layers Explained"><p>The Open Systems Interconnection (OSI) model is a conceptual framework used to understand and standardize the functions of a telecommunications or computing system without regard to its underlying internal structure and technology. Its design is structured into seven distinct layers, each with a specific role in the process of transferring data from one system to another. 
These layers facilitate modular troubleshooting by allowing network administrators to isolate and address issues at a specific layer, enhancing the efficiency and reliability of network diagnostics.</p><p>In the context of data transmission, the OSI model employs a rule where each layer on the source side (the sender) engages in communication with its equivalent layer on the destination side (the receiver). This layer-to-layer interaction ensures that data packets are processed in a uniform manner across both ends of the communication line. For instance, the Data Link layer on one device, which is responsible for node-to-node data transfer and error correction at the link level, will directly communicate with the Data Link layer on another device. This direct correspondence between identical layers across the communication stream ensures data integrity and consistency, thereby facilitating seamless and dependable data exchange. This organized approach to data handling allows networks to operate smoothly, supporting a wide range of applications and services.</p><h3 id="layer-1-physical">Layer 1: Physical</h3><p>This layer is primarily concerned with the tangible elements of network communication, including the hardware and the physical media that facilitate the movement of data across devices. As the initial and most foundational layer of the OSI model, the Physical layer plays a pivotal role in the transmission of digital information.</p><p>Key aspects of the Physical layer include:</p><ul><li><strong>Signal Conversion:</strong> It is responsible for transforming digital data into various forms of signals &#x2014; electrical, optical, or radio &#x2014; depending on the medium of transmission. This conversion is crucial for enabling the physical movement of data between devices on a network.</li><li><strong>Connection Interfaces:</strong> The layer manages the physical connection points, such as network ports, which serve as gateways for data to enter and exit devices. 
These ports are vital for establishing and maintaining network connections.</li><li><strong>Transmission Rate and Distance:</strong> The Physical layer determines the speed at which data is transmitted and the maximum distance it can cover. These parameters vary based on the types of cables and technologies employed, affecting the efficiency and scope of data communication.</li><li><strong>Signal Integrity:</strong> Maintaining the quality of the data signals over the course of their journey is a critical function of this layer. It ensures that data is delivered accurately to its destination, free from errors and interference.</li></ul><p>In essence, the Physical layer is dedicated to the infrastructure and physical components that underpin data transmission. It lays the groundwork for the seamless exchange of digital information by addressing the practical and mechanical aspects of network communication.</p><h3 id="layer-2-data-link">Layer 2: Data Link</h3><p>Layer 2 of the OSI model termed the Data Link layer, plays a crucial role in the networking framework, particularly in the context of Ethernet networks. It is tasked with overseeing the direct transfer of data between adjacent network nodes and is instrumental in structuring communication at a more granular level compared to the Physical layer.</p><p><strong>Key Functionalities of the Data Link Layer Include:</strong></p><ul><li><strong>Data Packetization:</strong> This layer segments digital data into packets, organizing them into a standardized structure. Each packet encompasses the payload (the actual data intended for transmission) and additional metadata for routing, error detection, and control information. This structured approach to data handling facilitates efficient and reliable data transmission.</li><li><strong>Addressing with MAC Addresses:</strong> A pivotal component of the Data Link layer&apos;s operation is the use of Media Access Control (MAC) addresses. 
These unique identifiers, akin to a postal address for physical mail, enable the layer to direct data packets to the correct endpoint within a network. Every device connected to the network possesses a distinct MAC address, ensuring precise and directed communication.</li><li><strong>Network Traffic Management:</strong> Devices such as switches and bridges operate within this layer, acting as the network&apos;s traffic managers. They analyze the data packets&apos; MAC addresses and make decisions on how to forward them through the network to reach their destination efficiently. This involves directing traffic to avoid congestion and ensuring that packets are delivered through the most appropriate paths.</li><li><strong>Error Detection and Handling:</strong> Beyond simply transmitting data, the Data Link layer is also responsible for maintaining data integrity. It employs various mechanisms to detect and correct errors that may occur during transmission. This ensures that the data received at the destination is accurate and uncorrupted, maintaining the reliability of network communications.</li></ul><p>In summary, the Data Link layer serves as the linchpin for data packet organization, addressing, and error management within a network. It ensures that data packets are correctly formatted, addressed, and transmitted between devices on the same network, while actively managing and correcting transmission errors to uphold the integrity of the data.</p><h3 id="layer-3-network">Layer 3: Network</h3><p>The Network layer, akin to a sophisticated GPS system for data packets, is responsible for determining the most efficient route for data to traverse from its origin to its destination across interconnected networks. 
It ensures that data is routed correctly through complex network architectures.</p><p><strong>Key Features:</strong></p><ul><li><strong>Logical Addressing:</strong> Assigns IP addresses to data packets, serving as unique identifiers similar to street addresses, guiding data to its correct destination across the internet.</li><li><strong>Inter-networking:</strong> Facilitates communication between disparate networks, enabling data to move from local networks to the global internet.</li><li><strong>Subnetting:</strong> Improves network management and security by segmenting larger networks into smaller, manageable sub-networks.</li><li><strong>Routing:</strong> Determines the best path for data to follow, optimizing the journey of data packets based on network conditions and topology.</li></ul><h3 id="layer-4-transport">Layer 4: Transport</h3><p>The Transport layer is responsible for the reliable transmission of data segments between points on a network, ensuring that data is delivered in sequence and without errors. It&apos;s like the logistics service of the OSI model, breaking down data into manageable packets and ensuring they arrive safely.</p><p><strong>Key Features:</strong></p><ul><li><strong>End-to-End Communication:</strong> Manages data transmission sessions between devices, ensuring accurate data transfer.</li><li><strong>Segmentation:</strong> Divides larger data streams into smaller segments for easier handling and reassembles them at the destination.</li><li><strong>Flow Control:</strong> Regulates data transmission to prevent overwhelming the receiver, ensuring a smooth data flow.</li><li><strong>Error Handling:</strong> Detects and corrects errors that may occur during data transmission, ensuring data integrity.</li></ul><h3 id="layer-5-session">Layer 5: Session</h3><p>The Session layer establishes, manages, and terminates connections between applications. 
It&apos;s like a moderator for communications, ensuring that sessions are maintained and properly closed after data exchange.</p><p><strong>Key Features:</strong></p><ul><li><strong>Session Management:</strong> Controls the dialogues (sessions) between computers, establishing, maintaining, and ending connections as needed.</li><li><strong>Synchronization:</strong> Adds checkpoints in data streams to allow for the resumption of data transfer after a disruption.</li></ul><h3 id="layer-6-presentation">Layer 6: Presentation</h3><p>The Presentation layer acts as the translator for the network, converting data into a format that can be understood by both the sending and receiving applications. It&apos;s concerned with the syntax and semantics of the information transmitted.</p><p><strong>Key Features:</strong></p><ul><li><strong>Data Formatting:</strong> Translates data from a format used by the application layer into a common format at the sending station and then back into the application&apos;s format at the receiving station.</li><li><strong>Encryption and Compression:</strong> Provides data encryption for security and data compression for efficient transmission.</li></ul><h3 id="layer-7-application">Layer 7: Application</h3><p>The Application layer is where end-users interact with computers, serving as the window through which networking services are accessed. 
It supports application and end-user processes, facilitating communication between software applications and lower layers of the OSI model.</p><p><strong>Key Features:</strong></p><ul><li><strong>Resource Sharing:</strong> Facilitates access to network resources and services like file transfers, messaging, and email.</li><li><strong>Remote File Access:</strong> Enables users to access files and directories on remote computers.</li><li><strong>Directory Services:</strong> Provides a framework for naming and addressing that locates resources and devices on a network.</li><li><strong>Network Management:</strong> Supports applications that require network access and management capabilities.</li></ul><p>In essence, the OSI model delineates the roles and functions of different network layers, from the physical transmission of data to the application-level interactions that users engage with. Understanding each layer&apos;s responsibilities provides insight into the complexities of network communication and the protocols that keep us connected.</p>]]></content:encoded></item><item><title><![CDATA[Deploying an AWS ECS Cluster of EC2 Instances With Terraform]]></title><description><![CDATA[<p> This project will utilize two major cloud computing tools.</p><p>Terraform is an infrastructure orchestration tool (also known as <a href="https://en.wikipedia.org/wiki/Infrastructure_as_code?ref=nick.rond-eau.com" rel="noopener">&#x201C;infrastructure as code (IaC)&#x201D;</a>). 
Using Terraform, you declare every single piece of your infrastructure once, in static files, allowing you to deploy and destroy cloud infrastructure easily, make incremental changes</p>]]></description><link>https://nick.rond-eau.com/deploying-an-aws-ecs-cluster-of-ec2-instances-with-terraform/</link><guid isPermaLink="false">65c094e3c695cb0001d98b20</guid><dc:creator><![CDATA[Nick Rondeau]]></dc:creator><pubDate>Mon, 05 Feb 2024 08:17:10 GMT</pubDate><media:content url="https://nick.rond-eau.com/content/images/2024/02/RWD_BlogIllustration_ECSTerraform_16x9-2048x1075-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://nick.rond-eau.com/content/images/2024/02/RWD_BlogIllustration_ECSTerraform_16x9-2048x1075-1.jpg" alt="Deploying an AWS ECS Cluster of EC2 Instances With Terraform"><p>This project will utilize two major cloud computing tools.</p><p>Terraform is an infrastructure orchestration tool (also known as <a href="https://en.wikipedia.org/wiki/Infrastructure_as_code?ref=nick.rond-eau.com" rel="noopener">&#x201C;infrastructure as code (IaC)&#x201D;</a>). Using Terraform, you declare every single piece of your infrastructure once, in static files, allowing you to deploy and destroy cloud infrastructure easily, make incremental changes to the infrastructure, do rollbacks, version your infrastructure, etc.</p><p>Amazon created an innovative solution for deploying and managing a fleet of virtual machines&#x200A;&#x2014;&#x200A;AWS ECS. 
Under the hood, ECS utilizes AWS&#x2019;s well-known concept of EC2 virtual machines, as well as CloudWatch for monitoring them, auto scaling groups (for provisioning and deprovisioning machines depending on the current load of the cluster), and most importantly&#x200A;&#x2014;&#x200A;Docker as a containerization engine.</p><p>Here&#x2019;s what&#x2019;s to be done:</p><figure class="kg-card kg-image-card"><img src="https://cdn-images-1.medium.com/max/800/1*ZXjDx90IzuZUVEVZ8O2bog.png" class="kg-image" alt="Deploying an AWS ECS Cluster of EC2 Instances With Terraform" loading="lazy" width="700" height="319"></figure><p>Within a VPC there&#x2019;s an autoscaling group with EC2 instances. ECS manages starting tasks on those EC2 instances based on Docker images stored in the ECR container registry. Each EC2 instance is a host for a worker that writes something to RDS MySQL. EC2 and MySQL instances are in different security groups.</p><p>We need to provision some building blocks:</p><ul><li>a VPC with a public subnet as an isolated pool for our resources</li><li>Internet Gateway to contact the outside world</li><li>Security groups for RDS MySQL and for EC2s</li><li>Auto-scaling group for ECS cluster with launch configuration</li><li>RDS MySQL instance</li><li>ECR container registry</li><li>ECS cluster with task and service definition</li></ul><h3 id="the-terraform-part">The Terraform Part</h3><p>To start with Terraform we need to install it. Just go along with the steps in this document: <a href="https://www.terraform.io/downloads.html?ref=nick.rond-eau.com" rel="noopener nofollow noopener noopener">https://www.terraform.io/downloads.html</a></p><p>Verify the installation by typing:</p><pre><code>$ terraform --version
Terraform v0.13.4</code></pre><p>With Terraform (in this case version 0.13.4) we can provision cloud architecture by writing code, usually in a configuration language. In this case it&#x2019;s going to be HCL&#x200A;&#x2014;&#x200A;the <a href="https://github.com/hashicorp/hcl?ref=nick.rond-eau.com" rel="noopener nofollow noopener noopener">HashiCorp</a> configuration language.</p><h3 id="terraform-state">Terraform state</h3><p>Before writing the first line of our code, let&#x2019;s focus on understanding what the <strong>Terraform state</strong> is.</p><p>The <a href="https://www.terraform.io/docs/state/index.html?ref=nick.rond-eau.com" rel="noopener nofollow noopener noopener">state</a> is a kind of snapshot of the architecture. Terraform needs to know what was provisioned, what resources were created, track the changes, etc.</p><p>All that information is written either to a local file <code>terraform.tfstate</code> or to a remote location. Generally the code is shared between members of a team, therefore keeping a local state file is never a good idea. We want to keep the state in a remote destination. When working with AWS, this destination is S3.</p><p>This is the first thing that we need to code&#x200A;&#x2014;&#x200A;tell Terraform that the state location will be remote and kept in S3 (<code>terraform.tf</code>):</p><pre><code>terraform {
    backend &quot;s3&quot; {
        bucket = &quot;terraformeksproject&quot;
        key    = &quot;state.tfstate&quot;
    }
}</code></pre><p>Terraform will keep the state in an S3 bucket under a <code>state.tfstate</code> key. For that to happen we need to set up three environment variables:</p><pre><code>$ export AWS_SECRET_ACCESS_KEY=...
$ export AWS_ACCESS_KEY_ID=...
$ export AWS_DEFAULT_REGION=...</code></pre><p>These credentials can be found/created in the AWS IAM Management Console in the <em>&#x201C;My security credentials&#x201D;</em> section. Both access keys and the region <strong>must</strong> be stored in environment variables if we want to keep the remote state.</p><h3 id="vpc">VPC</h3><pre><code>provider &quot;aws&quot; {}

resource &quot;aws_vpc&quot; &quot;vpc&quot; {
    cidr_block = &quot;10.0.0.0/24&quot;
    enable_dns_support   = true
    enable_dns_hostnames = true
    tags       = {
        Name = &quot;Terraform VPC&quot;
    }
}</code></pre><p>Terraform needs to know which API it should interact with. Here we say it&#x2019;ll be AWS. A list of available providers can be found here: <a href="https://www.terraform.io/docs/providers/index.html?ref=nick.rond-eau.com" rel="noopener ugc nofollow">https://www.terraform.io/docs/providers/index.html</a></p><p>The <code>provider</code> section has no parameters because we&#x2019;ve already provided the credentials needed to communicate with the AWS API as environment variables in order to have remote Terraform state (there is the possibility to set it up with <code>provider</code> parameters, though).</p><p>The resource block type <code>aws_vpc</code> with name <code>vpc</code> creates a <a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html?ref=nick.rond-eau.com" rel="noopener ugc nofollow">Virtual Private Cloud</a> &#x2014; a logically isolated virtual network. When creating a VPC we must provide a range of IPv4 addresses. It&#x2019;s the primary CIDR block for the VPC and this is the only required parameter.</p><p>Parameters <code>enable_dns_support</code> and <code>enable_dns_hostnames</code> are required if we want to provision a database in our VPC that will be publicly accessible (and we do).</p><h3 id="internet-gateway">Internet gateway</h3><p>In order to allow communication between instances in our VPC and the internet we need to create an <a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html?ref=nick.rond-eau.com" rel="noopener ugc nofollow">Internet gateway</a>.</p><pre><code>resource &quot;aws_internet_gateway&quot; &quot;internet_gateway&quot; {
    vpc_id = aws_vpc.vpc.id
}</code></pre><p>The only required parameter is the id of the previously created VPC, which can be obtained by invoking <code>aws_vpc.vpc.id</code>. This is the Terraform way to get to resource details: <em>resource.resource_name.resource_parameter</em>.</p><h3 id="subnet">Subnet</h3><p>Within the VPC let&#x2019;s add a public subnet:</p><pre><code>resource &quot;aws_subnet&quot; &quot;pub_subnet&quot; {
    vpc_id                  = aws_vpc.vpc.id
    cidr_block              = &quot;10.0.0.0/24&quot;  # must fall within the VPC CIDR (10.0.0.0/24)
}</code></pre><p>To create a subnet we need to provide the VPC id and a CIDR block. Additionally we can specify an availability zone, but it&#x2019;s not required.</p><h3 id="route-table">Route Table</h3><p>A route table allows us to set up rules that determine where network traffic from our subnets is directed. Let&#x2019;s create a new, custom one, just to show how it can be used and associated with subnets.</p><pre><code>resource &quot;aws_route_table&quot; &quot;public&quot; {
    vpc_id = aws_vpc.vpc.id

    route {
        cidr_block = &quot;0.0.0.0/0&quot;
        gateway_id = aws_internet_gateway.internet_gateway.id
    }
}

resource &quot;aws_route_table_association&quot; &quot;route_table_association&quot; {
    subnet_id      = aws_subnet.pub_subnet.id
    route_table_id = aws_route_table.public.id
}</code></pre><p>We created a route table for our VPC that directs all the traffic (<em>0.0.0.0/0</em>) to the internet gateway, and associated it with our public subnet. Each subnet in a VPC has to be associated with a route table.</p><h3 id="security-groups">Security Groups</h3><p>Security groups work like firewalls for the instances (where an ACL works like a global firewall for the VPC). Because we allow all the traffic from the internet to and from the VPC, we should set some rules to secure the instances themselves.</p><p>We will have two kinds of instances in our VPC &#x2014; a cluster of EC2s and RDS MySQL &#x2014; therefore we need to create two security groups.</p><pre><code>resource &quot;aws_security_group&quot; &quot;ecs_sg&quot; {
    vpc_id      = aws_vpc.vpc.id

    ingress {
        from_port       = 22
        to_port         = 22
        protocol        = &quot;tcp&quot;
        cidr_blocks     = [&quot;0.0.0.0/0&quot;]
    }

    ingress {
        from_port       = 443
        to_port         = 443
        protocol        = &quot;tcp&quot;
        cidr_blocks     = [&quot;0.0.0.0/0&quot;]
    }

    egress {
        from_port       = 0
        to_port         = 65535
        protocol        = &quot;tcp&quot;
        cidr_blocks     = [&quot;0.0.0.0/0&quot;]
    }
}

resource &quot;aws_security_group&quot; &quot;rds_sg&quot; {
    vpc_id      = aws_vpc.vpc.id

    ingress {
        protocol        = &quot;tcp&quot;
        from_port       = 3306
        to_port         = 3306
        cidr_blocks     = [&quot;0.0.0.0/0&quot;]
        security_groups = [aws_security_group.ecs_sg.id]
    }

    egress {
        from_port       = 0
        to_port         = 65535
        protocol        = &quot;tcp&quot;
        cidr_blocks     = [&quot;0.0.0.0/0&quot;]
    }
}</code></pre><p>The first security group is for the EC2 instances that will live in the ECS cluster. Inbound traffic is narrowed to two ports: 22 for SSH and 443 for HTTPS, needed to download the Docker image from ECR.</p><p>The second security group is for RDS and opens just one port, the default port for MySQL &#x2014; 3306. Inbound traffic is also allowed from the ECS security group, which means that the application that will live on EC2 in the cluster will have permission to use MySQL.</p><p>Inbound traffic is allowed from anywhere on the Internet (CIDR block 0.0.0.0/0). In a real-life case there should be limitations, for example, to IP ranges for a specific VPN.</p><p>This ends setting up the networking part of our architecture. Now it&#x2019;s time for an autoscaling group for the EC2 instances in the ECS cluster.</p><h3 id="autoscaling-group">Autoscaling Group</h3><p>An autoscaling group is a collection of EC2 instances. The number of those instances is determined by scaling policies. We will create the autoscaling group using a <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-launch-template.html?ref=nick.rond-eau.com" rel="noopener ugc nofollow">launch configuration</a>.</p><p>Before we launch container instances and register them into a cluster, we have to create an IAM role for those instances to use when they are launched:</p><pre><code>data &quot;aws_iam_policy_document&quot; &quot;ecs_agent&quot; {
  statement {
    actions = [&quot;sts:AssumeRole&quot;]

    principals {
      type        = &quot;Service&quot;
      identifiers = [&quot;ec2.amazonaws.com&quot;]
    }
  }
}

resource &quot;aws_iam_role&quot; &quot;ecs_agent&quot; {
  name               = &quot;ecs-agent&quot;
  assume_role_policy = data.aws_iam_policy_document.ecs_agent.json
}


resource &quot;aws_iam_role_policy_attachment&quot; &quot;ecs_agent&quot; {
  role       = aws_iam_role.ecs_agent.name
  policy_arn = &quot;arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role&quot;
}

resource &quot;aws_iam_instance_profile&quot; &quot;ecs_agent&quot; {
  name = &quot;ecs-agent&quot;
  role = aws_iam_role.ecs_agent.name
}</code></pre><p>With the IAM role in place, we can create the autoscaling group from a launch configuration:</p><pre><code>resource &quot;aws_launch_configuration&quot; &quot;ecs_launch_config&quot; {
    image_id             = &quot;ami-094d4d00fd7462815&quot;
    iam_instance_profile = aws_iam_instance_profile.ecs_agent.name
    security_groups      = [aws_security_group.ecs_sg.id]
    user_data            = &quot;#!/bin/bash\necho ECS_CLUSTER=my-cluster &gt;&gt; /etc/ecs/ecs.config&quot;
    instance_type        = &quot;t2.micro&quot;
}
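
# The prose below notes that an autoscaling policy still has to be provided,
# but the article never shows one. This is only a minimal sketch (the name,
# adjustment type, and cooldown are assumptions) of a simple scaling policy
# that adds one instance to the group whenever it is triggered:
resource &quot;aws_autoscaling_policy&quot; &quot;ecs_scale_up&quot; {
    name                   = &quot;ecs-scale-up&quot;
    autoscaling_group_name = aws_autoscaling_group.failure_analysis_ecs_asg.name
    adjustment_type        = &quot;ChangeInCapacity&quot;
    scaling_adjustment     = 1
    cooldown               = 300
}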

resource &quot;aws_autoscaling_group&quot; &quot;failure_analysis_ecs_asg&quot; {
    name                      = &quot;asg&quot;
    vpc_zone_identifier       = [aws_subnet.pub_subnet.id]
    launch_configuration      = aws_launch_configuration.ecs_launch_config.name

    desired_capacity          = 2
    min_size                  = 1
    max_size                  = 10
    health_check_grace_period = 300
    health_check_type         = &quot;EC2&quot;
}</code></pre><p>If we want to use a specific, named ECS cluster, we have to put that information into <em>user_data</em>; otherwise our instances will be launched in the default cluster.</p><p>Basic scaling information is described by the <em>aws_autoscaling_group</em> parameters. An autoscaling policy still has to be attached to the group; the configuration above only sets the capacity bounds.</p><p>With the autoscaling group set up, we are ready to launch our instances and database.</p><h3 id="database-instance">Database Instance</h3><p>Having prepared the subnet and security group for RDS, we need to cover one more thing before launching the database instance. To provision a database we need to follow some rules:</p><ul><li>Our VPC has to have <em>DNS hostnames</em> and <em>DNS resolution</em> enabled (we did that while creating the VPC).</li><li>Our VPC has to have a <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html?ref=nick.rond-eau.com" rel="noopener ugc nofollow">DB subnet group</a> (that is about to happen).</li><li>Our VPC has to have a security group that allows access to the DB instance.</li></ul><p>Let&#x2019;s create the missing piece:</p><pre><code>resource &quot;aws_db_subnet_group&quot; &quot;db_subnet_group&quot; {
    subnet_ids  = [aws_subnet.pub_subnet.id]
}</code></pre><p>And the database instance itself:</p><pre><code>resource &quot;aws_db_instance&quot; &quot;mysql&quot; {
    identifier                = &quot;mysql&quot;
    allocated_storage         = 5
    backup_retention_period   = 2
    backup_window             = &quot;01:00-01:30&quot;
    maintenance_window        = &quot;sun:03:00-sun:03:30&quot;
    multi_az                  = true
    engine                    = &quot;mysql&quot;
    engine_version            = &quot;5.7&quot;
    instance_class            = &quot;db.t2.micro&quot;
    name                      = &quot;worker_db&quot;
    username                  = &quot;worker&quot;
    password                  = &quot;worker&quot;
    port                      = &quot;3306&quot;
    db_subnet_group_name      = aws_db_subnet_group.db_subnet_group.id
    vpc_security_group_ids    = [aws_security_group.rds_sg.id, aws_security_group.ecs_sg.id]
    skip_final_snapshot       = true
    final_snapshot_identifier = &quot;worker-final&quot;
    publicly_accessible       = true
}</code></pre><p>All the parameters are more or less self-explanatory. If you want the database to be publicly accessible, set the <code>publicly_accessible</code> parameter to <code>true</code>.</p><h3 id="elastic-container-service">Elastic Container Service</h3><p>ECS is a scalable container orchestration service that allows you to run and scale Dockerized applications on AWS.</p><p>To launch such an application we need to download an image from a repository. For that we will use ECR. We can push images there and use them when launching containers on the EC2 instances within our cluster:</p><pre><code>resource &quot;aws_ecr_repository&quot; &quot;worker&quot; {
    name  = &quot;worker&quot;
}</code></pre><p>And the ECS cluster itself:</p><pre><code>resource &quot;aws_ecs_cluster&quot; &quot;ecs_cluster&quot; {
    name  = &quot;my-cluster&quot;
}</code></pre><p>The cluster <code>name</code> is important here, as we used it previously while defining the launch configuration. This is where newly created EC2 instances will live.</p><p>To launch a Dockerized application we need to create a task &#x2014; a set of simple instructions understood by the ECS cluster. The task is a JSON definition that can be kept in a separate file:</p><pre><code>[
  {
    &quot;essential&quot;: true,
    &quot;memory&quot;: 512,
    &quot;name&quot;: &quot;worker&quot;,
    &quot;cpu&quot;: 2,
    &quot;image&quot;: &quot;${REPOSITORY_URL}:latest&quot;,
    &quot;environment&quot;: []
  }
]</code></pre><p>In the JSON file we define which image will be used: the <code>REPOSITORY_URL</code> template variable, provided by a <code>template_file</code> data resource, tagged with <code>latest</code>. 512 MB of RAM and 2 CPU units are enough to run the application on EC2.</p><p>Having this prepared, we can create the Terraform resource for the task definition:</p><pre><code>resource &quot;aws_ecs_task_definition&quot; &quot;task_definition&quot; {
  family                = &quot;worker&quot;
  container_definitions = data.template_file.task_definition_template.rendered
}</code></pre><p>The <code>family</code> parameter is required and represents the unique name of our task definition.</p><p>The last thing that will bind the cluster to the task is an ECS service. The service guarantees that a desired number of tasks is running at all times:</p><pre><code>resource &quot;aws_ecs_service&quot; &quot;worker&quot; {
  name            = &quot;worker&quot;
  cluster         = aws_ecs_cluster.ecs_cluster.id
  task_definition = aws_ecs_task_definition.task_definition.arn
  desired_count   = 2
}</code></pre><p>This ends the Terraform description of the architecture.</p><p>There&#x2019;s just one more thing left to code. We need to output the provisioned components in order to use them in the worker application.</p><p>We need to know the URLs for:</p><ul><li>the ECR repository</li><li>the MySQL host</li></ul><p>Terraform provides an output block for that. We can print any parameter of any provisioned component to the console.</p><pre><code>output &quot;mysql_endpoint&quot; {
    value = aws_db_instance.mysql.endpoint
}
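
# Note: the task definition earlier references
# data.template_file.task_definition_template, which is not shown in the
# article. This is a plausible sketch, assuming the JSON above is saved as
# task_definition.json next to the Terraform files; the REPOSITORY_URL
# variable fills the ${REPOSITORY_URL} placeholder in that file:
data &quot;template_file&quot; &quot;task_definition_template&quot; {
    template = file(&quot;${path.module}/task_definition.json&quot;)
    vars = {
        REPOSITORY_URL = aws_ecr_repository.worker.repository_url
    }
}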

output &quot;ecr_repository_worker_endpoint&quot; {
    value = aws_ecr_repository.worker.repository_url
}</code></pre><h3 id="applying-the-changes">Applying the changes</h3><p>First we need to initialize the working directory that contains the Terraform files by typing <code>terraform init</code>. This command installs the needed plugins and validates the code.</p><p>Follow up with <code>terraform plan</code>.</p><p>Receiving an error?</p><figure class="kg-card kg-image-card"><img src="https://nick.rond-eau.com/content/images/2024/02/image-1.png" class="kg-image" alt="Deploying an AWS ECS Cluster of EC2 Instances With Terraform" loading="lazy" width="583" height="99"></figure><p>You need to manually create the S3 bucket through the AWS console, making sure to edit <code>terraform.tf</code> with the correct bucket name.</p><p>If everything is fine, we can run <code>terraform apply</code> to finally provision the desired infrastructure.</p>]]></content:encoded></item></channel></rss>