[IaC] Continuous Delivery with Crossplane and ArgoCD : how to automate the creation of AWS EKS clusters

Nicolas Vogt
17 min read · Feb 18, 2022


Photo by Justin Luebke on Unsplash

Crossplane is an Infrastructure as Code description language and engine. Like Terraform and Pulumi, it comes with providers for the major public cloud platforms. The philosophy of Crossplane is to manage infrastructure the same way you manage your Kubernetes workloads, avoiding context switching for DevOps teams. In this guide, I will show you how to deploy an EKS cluster on AWS using ArgoCD.

Foreword

This guide is heavily inspired by the work of Vijayan Sarathy. You will find the link to his GitHub repository at the bottom of this page, as well as the official AWS documentation he provided us with. I decided to go a little further into the details because I struggled to reproduce the whole project and wanted to help you fill the gaps.

Here is the overall idea of what we will achieve: we will leverage ArgoCD to deploy an AWS EKS cluster using a custom Crossplane composition, then deploy applications to this cluster. This way, we use the very same tool for both Infrastructure as Code and Kubernetes application deployments.

Argo Workflows will not be discussed here; it will be covered in the next chapter.

Index

This guide intends to give you all the necessary steps to build this kind of infrastructure. At the end, you will be able to deploy any infrastructure stack with a YAML manifest. These steps include:

If you are already familiar with one of these steps, feel free to jump right to the chapter you would like to focus on.

Prerequisites

If you intend to follow this guide, you will need the following tools. The links below point to the installation procedure for each of them:

ArgoCD

Preparation

First of all, you need a Kubernetes cluster. Either use your own computer, where you can simply install kind or minikube, or provision a new cluster on AWS using eksctl and a manifest formatted like this:
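The original manifest is embedded in the article; here is a minimal sketch of what such an eksctl config can look like, assuming a single spot managed node group (the cluster name, region, and instance types are placeholders to adapt):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: management-cluster        # placeholder name
  region: eu-west-3               # pick your region
managedNodeGroups:
  - name: spot-ng
    instanceTypes: ["t3.medium", "t3a.medium"]
    spot: true                    # the single spot node mentioned below
    desiredCapacity: 1
    minSize: 1
    maxSize: 2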

Store this file in mycluster.yaml, or whatever you want to name it, and type the command:

~ » eksctl create cluster -f mycluster.yaml

The creation will take a moment (approximately 20 minutes), and you will end up with one spot node in your EKS cluster.

Installation

Once you have a cluster to work with, you can start with the installation of ArgoCD.

# create the argocd namespace
~ » kubectl create ns argocd
# apply the latest stable version manifest
~ » kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# wait for the container argocd-server-* to be ready
~ » kubectl wait --for condition=ready pod $(kubectl get pods -n argocd | awk '{if ($1 ~ "argocd-server-") print $1}') -n argocd

Now that ArgoCD is installed, we will log into the web interface using the argocd CLI. By default, the ArgoCD API server is not exposed; you will have to patch the service before starting a port forwarding. Proceed like this:

# patch the service to make the API reachable
~ » kubectl -n argocd patch svc argocd-server -p '{"spec": {"type": "LoadBalancer"}}'
# start a port-forwarding
~ » nohup kubectl port-forward svc/argocd-server -n argocd 8080:443 > /dev/null 2>&1 &

Okay, now it is time to log into the UI; you will need the ArgoCD CLI to type the following commands. I also describe how to change the initial password and delete it after the first login.

# Get initial password
~ » PASS=$(kubectl -n argocd get secret argocd-initial-admin-secret --template={{.data.password}} | base64 -d; echo)
# Generate a new random password
~ » echo $(< /dev/urandom tr -dc '_A-Z-a-z-0-9' | head -c${1:-32}; echo) > .secret
~ » NEWPASS=$(cat .secret)
# Log in to ARGOCD with the initial password
~ » argocd login localhost:8080 --insecure --username admin --password $PASS
# Change default password and delete the initial
~ » argocd account update-password --current-password $PASS --new-password $NEWPASS
# delete the initial password
~ » kubectl -n argocd delete secret argocd-initial-admin-secret

You can now open a browser on https://localhost:8080 and login with admin and the password stored in the .secret file.

Configuration

Now we have to configure ArgoCD, starting with declaring the known host keys in a ConfigMap; otherwise ArgoCD will not be able to connect to the repository.

Here are all the keys I have found for GitHub.
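The keys themselves are not reproduced here; structurally, the manifest is ArgoCD's argocd-ssh-known-hosts-cm ConfigMap, sketched below with the key material elided (paste the entries GitHub publishes):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-ssh-known-hosts-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  ssh_known_hosts: |
    github.com ssh-ed25519 AAAA...          # paste the real key material
    github.com ecdsa-sha2-nistp256 AAAA...
    github.com ssh-rsa AAAA...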

Simply apply this manifest file.

The next step is mandatory if you intend to use kubeseal. As we will use it later in this guide, I recommend you apply this manifest:
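The manifest is not inlined in this article; the gist of it, as documented by the sealed-secrets project, is a health-check customization in the argocd-cm ConfigMap so ArgoCD knows how to assess SealedSecret resources, roughly:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  resource.customizations.health.bitnami.com_SealedSecret: |
    hs = {}
    hs.status = "Healthy"
    hs.message = "SealedSecret is healthy"
    return hs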

The link to the documentation is provided in the manifest if you want to find out more about this.

Last step of this configuration process: you will have to declare all the Helm chart repositories you will use. In my case I specified Prometheus, Grafana, and Crossplane.
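My exact manifest is not reproduced here; declaratively, repositories go into the repositories key of the same argocd-cm ConfigMap, for example (the Helm URLs are the public upstream chart repositories; the git entry is a placeholder for your own repository):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  repositories: |
    - type: helm
      name: prometheus-community
      url: https://prometheus-community.github.io/helm-charts
    - type: helm
      name: grafana
      url: https://grafana.github.io/helm-charts
    - type: helm
      name: crossplane-stable
      url: https://charts.crossplane.io/stable
    - type: git
      url: https://github.com/myaccount/myrepo.git   # your own Git repository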

Don't forget to add your own Helm chart repository if you have one; otherwise you can follow this link to turn your GitHub Pages into a Helm chart repository.

You can also declare your own Git repository, as I did in my case (see the git entry in the sketch above).

My repository is public; if you want to use a private repository, you will have to add a private key secret as explained here.

OK. So far, we have deployed a new cluster with an up and running ArgoCD application; we are able to log into the web UI and got rid of the initial password. Now, let's have a look at Crossplane and how to include a deployment in ArgoCD.

Crossplane deployment

Composite Resource

Crossplane comes with providers for the most popular cloud platforms. Obviously, we will use the AWS provider in this example. When Vijayan wrote his article, the latest version of the AWS provider was 0.17; I have updated it to 0.23. There are slight changes, so be aware that you may have to change the API version or the name of the resource you are trying to create if you intend to use a more recent version. In my case, ElasticIp was renamed to Address.

The provider contains APIs for the most common AWS resources, but when you have to build more complex deployments, you have to define a composite resource. It is comparable to a Terraform module: basically, a customizable package of resources.

To build a composition, you need three things:

  • A configuration manifest
  • A composition manifest
  • A composite resource definition manifest
(Diagram from the Crossplane docs: https://crossplane.io/docs/v1.6/media/composition-how-it-works.svg)

A Claim is an instance of the Composite Resource.

So, let's start building a composite resource with the EKS cluster and all its related resources: VPC, subnets, NAT gateways, etc.

The first thing to do is to install the kubectl crossplane set of commands; you will find it here.

We will create a new directory with the following files:

The first file specifies your package's metadata and its dependencies. We will require version 0.23 or later of the AWS provider.
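A minimal sketch of such a package metadata file (conventionally named crossplane.yaml at the package root; the package name is a placeholder):

apiVersion: meta.pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: eks-cluster-composition   # placeholder package name
spec:
  crossplane:
    version: ">=v1.6.0"
  dependsOn:
    - provider: registry.upbound.io/crossplane/provider-aws
      version: ">=v0.23.0"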

The next file, composition.yaml, contains the Composition, i.e. all the resources that will be created. Two private and two public subnets will be created, with all the necessary elements to connect to the outside world. I encourage you to look into the details if you intend to use it in production, in order to make it fit your environment. The file is quite long, so it will not be printed here; just follow the link to get its content.

Note that the node instance type (t2.medium) is hardcoded, but should be set as a variable. If you want to change it, just edit the last resource at the end of the file. It should look like this:
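The exact resource is in the linked file; structurally, it is the NodeGroup entry of the composition, something along these lines:

- base:
    apiVersion: eks.aws.crossplane.io/v1alpha1
    kind: NodeGroup
    spec:
      forProvider:
        instanceTypes:
          - t2.medium   # hardcoded; a good candidate for a patch from the claim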

The last file is the CompositeResourceDefinition (XRD). It defines the variables and their boundaries. The allowed regions in my example are eu-west-2 and eu-west-3; please feel free to adapt them to your use case. The same goes for the allowed Kubernetes versions.
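Not the exact file from the repository, but a sketch of the shape such an XRD takes (the group, kind, and parameter names are placeholders; the enums carry the boundaries mentioned above):

apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xeksclusters.eks.example.org        # must be <plural>.<group>
spec:
  group: eks.example.org                    # placeholder API group
  names:
    kind: XEKSCluster
    plural: xeksclusters
  claimNames:
    kind: EKSCluster
    plural: eksclusters
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                parameters:
                  type: object
                  properties:
                    region:
                      type: string
                      enum: ["eu-west-2", "eu-west-3"]
                    k8s-version:
                      type: string
                      enum: ["1.20", "1.21"]
                  required: ["region"]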

Once those files are correctly defined, you can simply build your package with the following command:

~ » kubectl crossplane build configuration

This command will create a .xpkg file in the current directory. This package has to be pushed to a registry in order to make it available to your cluster, so you will need a container registry to push your package to. If you don't have one yet, you can create a public AWS ECR repository and push your package with the following commands:

# Set your variables
~ » REGISTRY=public.ecr.aws/XXXXX
~ » REGION=my_region
~ » PACKAGE_NAME=namespace/my_package_name
~ » PACKAGE_VERSION=my_package_version
# Login to your registry
~ » aws ecr-public get-login-password --region $REGION | docker login --username AWS --password-stdin $REGISTRY
# Push your package
~ » kubectl crossplane push configuration $REGISTRY/$PACKAGE_NAME:$PACKAGE_VERSION

OK, we have just made the template available to our cluster; let's now build a Helm chart containing a Claim.

Credentials

Crossplane will require AWS credentials to deploy its resources. You will have to create either a user or a role.

Using a role implies that you have set up an OIDC connection between your EKS cluster and IAM, which is not the case in this example. Although this is the proper way to do it, I have not tested this option yet. Anyway, the steps are pretty straightforward: once the connection is established (here is the link to the official AWS documentation), all you have to do is create a role and policy, and a ServiceAccount on your Kubernetes cluster.

If you create a user instead, things are a little trickier, because we do not want our credentials to be visible in our Git repository. We will have to encrypt the credentials and decrypt them on demand. Following the work of Vijayan, we will use kubeseal to encrypt our credentials.

The kubeseal deployment will be embedded in the Helm chart as a dependency, so we will not have to worry about the installation. What we do have to worry about is the certificate and key used to seal/unseal secrets; otherwise, kubeseal will create new ones at startup and will not be able to decrypt our credentials. Those are set in a sealed-secrets-key secret in the sealed-secrets namespace.

So we will create the namespace:

~ » kubectl create ns sealed-secrets

If you don't have a proper certificate, you can generate a self-signed one as explained here:

# export your variables
~ » export PRIVATEKEY="mytls.key"
~ » export PUBLICKEY="mytls.crt"
~ » export NAMESPACE="sealed-secrets"
~ » export SECRETNAME="mycustomkeys"
# generate a RSA key pair (certificates)
~ » openssl req -x509 -nodes -newkey rsa:4096 -keyout "$PRIVATEKEY" -out "$PUBLICKEY" -subj "/CN=sealed-secret/O=sealed-secret"
# Create a tls k8s secret, using the created RSA key pair
~ » kubectl -n "$NAMESPACE" create secret tls "$SECRETNAME" --cert="$PUBLICKEY" --key="$PRIVATEKEY"
~ » kubectl -n "$NAMESPACE" label secret "$SECRETNAME" sealedsecrets.bitnami.com/sealed-secrets-key=active
# optional - deleting the controller Pod (if kubeseal is already deployed)
~ » kubectl -n "$NAMESPACE" delete pod -l name=sealed-secrets-controller

Once you have set up your own certificate, you have to encrypt your AWS credentials, the ones Crossplane will use to create and update resources. You can use your own credentials, but it is a bad idea if you intend to use Crossplane in production. Following the least-privilege principle, you should create an account with the minimum permissions. In our case, you will need these permissions (you can omit the ones that give access to CloudFormation).

Put them in a mycreds.txt file formatted like this:

[default]
aws_access_key_id =
aws_secret_access_key =

Then encode this file in base64 with the following command:

~ » cat mycreds.txt | base64 | tr -d "\n"

Insert the output in a file templated like this:

aws-credentials-unsealed.yaml
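The template itself is linked rather than printed; here is a sketch of what it contains (the secret name, namespace, and key are my assumptions, matching a classic Crossplane AWS setup):

apiVersion: v1
kind: Secret
metadata:
  name: aws-credentials           # referenced later by the ProviderConfig
  namespace: crossplane-system
type: Opaque
data:
  creds: <paste the base64 output here>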

And finally, encrypt this file with the generated public certificate:

~ » kubeseal --cert=$PUBLICKEY --format=yaml < aws-credentials-unsealed.yaml > aws-credentials-sealed.yaml

Declaring a Claim (XR)

The claim declaration is pretty straightforward: you specify the values of the variables declared in your Composite Resource Definition. Here is the manifest I used:
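The original manifest is linked from the article; shape-wise, assuming the XRD sketched earlier, the claim looks something like this (the API group, parameter names, wave number, and role ARNs are placeholders):

apiVersion: eks.example.org/v1alpha1
kind: EKSCluster
metadata:
  name: crossplane-prod-cluster
  namespace: eks
  annotations:
    argocd.argoproj.io/sync-wave: "5"     # deployed in the last wave
spec:
  parameters:
    region: eu-west-3
    k8s-version: "1.21"
    cluster-role: arn:aws:iam::123456789012:role/eks-cluster-role   # pre-existing IAM role
    workload-role: arn:aws:iam::123456789012:role/eks-worker-role   # pre-existing IAM role
  writeConnectionSecretToRef:
    name: crossplane-prod-cluster-connection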

Note two important things here. The first is the sync-wave annotation: it refers to an ArgoCD parameter that we will cover in the next paragraph. The second is the link to the EKS cluster role and the worker role: we assume here that you already have those roles defined in your IAM configuration; if not, you must create them with the generic EKS permissions.

Create an application in ArgoCD

Let's recap what we have done so far. We have:

  • an up and running ArgoCD,
  • an EKS cluster composition pushed in our public registry,
  • set up a Kubeseal certificate and key,
  • created an AWS account with the proper permissions and encrypted its credentials with the kubeseal certificate,
  • declared a claim (XR resource) of the composition.

Now it is time we bundle everything into a Helm chart and declare it as an ArgoCD application.

We will create the Helm chart with the following structure:

_
|_ charts/
|
|_ templates/
| |_ 1-controller.yaml
| |_ 2-aws-credentials.yaml
| |_ 3-aws-provider.yaml
| |_ 4-aws-provider-config.yaml
| |_ 5-crossplane-eks-composition.yaml
| |_ 6-eks-cluster-xr.yaml
|
|_ Chart.yaml

We already have those two files from the previous steps:

  • 2-aws-credentials.yaml
  • 6-eks-cluster-xr.yaml

Here are the missing files and their purpose:

These files will be delivered in several sync-waves by ArgoCD. This annotation is set because we want to deploy the kubeseal controller before Crossplane; otherwise Crossplane will not be able to unseal the secret. The same goes for the composition; otherwise the XR resource will not find any API resource matching our deployment.

At the end of each sync-wave, ArgoCD will wait 10 seconds before starting the next one.

To proceed with the next step, you will need a Helm repository to push your package to. If you don't have one, you can turn your GitHub account into a Helm chart registry or use ECR for this purpose. To package your chart and push it to your registry, use the following commands:

# set the chart name
~ » CHART_NAME=mychart
# get the package version from Chart.yaml
~ » VERSION=$(grep appVersion charts/$CHART_NAME/Chart.yaml | awk '{print $2}' | sed -e 's/"//g');
# download dependencies if any
~ » helm dependency update charts/${CHART_NAME};
# package your chart
~ » helm package charts/${CHART_NAME} --version="$VERSION";
# push the package in your registry
~ » helm push ${CHART_NAME}-${VERSION}.tgz oci://myregistry

When this is done, all you have to do is create an ArgoCD application that points to your Helm chart. First, create a new project. In this manifest, you will have to declare all the repositories, destination namespaces, and resources your applications can deploy. One project can be associated with many applications. The manifest in our example is the one below.
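My exact project manifest lives in the repository; a sketch, with permissive placeholder values you should tighten for production:

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: infrastructure            # placeholder project name
  namespace: argocd
spec:
  description: Crossplane infrastructure stack
  sourceRepos:
    - '*'                         # restrict to your registry/repositories
  destinations:
    - server: https://kubernetes.default.svc
      namespace: '*'              # restrict to the namespaces you deploy to
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'                   # Crossplane creates cluster-scoped resources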

And finally, create a new application:
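Again a sketch rather than my exact manifest (the repository URL, chart name, and version are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: crossplane-eks
  namespace: argocd
spec:
  project: infrastructure
  source:
    repoURL: https://myaccount.github.io/myrepo   # your Helm repository
    chart: mychart
    targetRevision: 1.0.0
  destination:
    server: https://kubernetes.default.svc
    namespace: crossplane-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true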

Now that you are done with your infrastructure stack, your applications will synchronize and deploy one after the other: kubeseal, Crossplane, your provider and its configuration, your composition, and your XR resource.

You should now see one application in your ArgoCD console.

If you navigate through the deployment, you should see your composition and all the related objects (VPC, address, NAT gateways, …).

Note that there is a bug in this deployment: the application appears OutOfSync at the end of the deployment. This is because the role created when deploying Crossplane at the beginning is modified afterwards during one of the sync waves. I think Flux handles this kind of problem better than ArgoCD, thanks to Kustomization dependencies.

From the command line, you can check your Crossplane deployment using kubectl.

To see whether your packages are correctly installed:

~ » kubectl get pkg
NAME                                                 INSTALLED   HEALTHY   PACKAGE                                               AGE
provider.pkg.crossplane.io/crossplane-provider-aws   True        True      registry.upbound.io/crossplane/provider-aws:v0.23.0   10d

NAME                                                          INSTALLED   HEALTHY   PACKAGE                                                    AGE
configuration.pkg.crossplane.io/crossplane-eks-composition    True        True      public.ecr.aws/XXXXX/XXXX/eks-cluster-composition:3.0.32   10d

Both your Crossplane provider and your composition should appear HEALTHY. If this is not the case, use a describe command to understand what is going on.

Then use the following command to retrieve all your Crossplane resources:

~ » kubectl get crossplane

To list one type of resource, for example the VPCs, you can type the following:

~ » kubectl get vpc
NAME                            READY   SYNCED   ID                      CIDR           IPV6CIDR   AGE
crossplane-prod-cluster-x8h49   True    True     vpc-066edd2ad918d36db   10.20.0.0/16              10d

The resource must be READY and SYNCED: that means Crossplane is done with the creation, and you should see it in your AWS console. The EKS cluster creation can take some time, so don't worry if your resource does not appear SYNCED even after 5-10 minutes.

~ » kubectl get cluster
NAME                            READY   SYNCED   AGE
crossplane-prod-cluster-wx8tb   True    True     10d

Once the cluster is SYNCED, a secret should have been created in a namespace named eks:

~ » kubectl get secret -n eks
NAME                                 TYPE                                  DATA   AGE
crossplane-prod-cluster-connection   connection.crossplane.io/v1alpha1     3      10d
default-token-zqq4c                  kubernetes.io/service-account-token   3      10d

Its name starts with crossplane-prod-cluster, or whatever name you specified in the metadata.name field of your XR resource. It contains the endpoint, the certificate, and the content of a kubeconfig file you can use to connect to your cluster.

~ » kubectl get secret -n eks crossplane-prod-cluster-connection -o yaml | yq '.data.apiserver-endpoint' | base64 -d
https://XXXXXXXXXXXXXXXXX.XXX.eu-west-3.eks.amazonaws.com

You can output the kubeconfig configuration into a file with the following command:

~ » kubectl get secret -n eks crossplane-prod-cluster-connection -o yaml | yq '.data.value' | base64 -d > crossplane-prod-cluster-kubeconfig

Then connect to your cluster with kubectl:

~ » kubectl get nodes --kubeconfig ./crossplane-prod-cluster-kubeconfig

Modify the aws-auth ConfigMap in the kube-system namespace in order to give your user or role the system:masters permissions.

~ » kubectl edit configmap aws-auth -n kube-system --kubeconfig ./crossplane-prod-cluster-kubeconfig

Insert your user like this:
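A sketch of the relevant part of the aws-auth ConfigMap (the ARN and username are placeholders for your own):

apiVersion: v1
data:
  mapUsers: |
    - userarn: arn:aws:iam::123456789012:user/myuser   # your IAM user ARN
      username: myuser
      groups:
        - system:masters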

I thought about automating this step by setting the master user/role in the XR resource. I haven't figured out a solution yet, but I think I could easily add a job at the end of the sync-wave to manage this extra step, as well as the one we will see in the next section.

Your Workload

At this stage, we have an up and running ArgoCD application deploying an EKS cluster using Crossplane and a custom composition. We are able to connect to this cluster using our own user or role and run kubectl commands on it.

Now, we would like to deploy applications to this cluster with ArgoCD. To do so, we first have to register this new cluster in the ArgoCD configuration before creating a new project and a new application.

Register the cluster

The registration process is still manual. As I said in the previous section, I haven't found a way to automate this step yet, but as it only requires applying manifests, it should not be too difficult to automate.

Use the ArgoCD CLI to register your cluster like this:

~ » argocd cluster add my-cluster-alias

If you are not sure of the name of your cluster alias, you can list the available contexts with this command, or use the full cluster name:

~ » kubectl config get-contexts

The argocd cluster add command deploys an argocd-manager service account on your cluster. If you want more details about what lies behind this command, try this link.

Create a new Project

The new project is created the same way we proceeded during the configuration of our Crossplane application.

Here is the manifest we will apply:
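Again a sketch rather than the exact manifest; the notable difference from the first project is that the destination is now the registered cluster, identified by its API endpoint URL (as shown by argocd cluster list):

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: workload                  # placeholder project name
  namespace: argocd
spec:
  sourceRepos:
    - '*'                         # restrict to your repositories
  destinations:
    - server: https://XXXXXXXX.XXX.eu-west-3.eks.amazonaws.com   # the new cluster
      namespace: '*'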

Create a target application

This application will be deployed on our new cluster. You can adapt the following to your use case and create as many applications as you want; repeat this step until you have defined all of them.

In this example, we will build a Helm chart named monitoring that deploys Prometheus and Grafana. Let's create the following structure:

_
|_ charts/
|
|_ Chart.yaml
|_ grafana-values.yaml
|_ prometheus-values.yaml

The Chart.yaml will look like this:
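A sketch of such a chart definition, assuming the public upstream chart repositories (pin the dependency versions you have actually tested):

apiVersion: v2
name: monitoring
version: 1.0.0
appVersion: "1.0.0"
dependencies:
  - name: prometheus
    repository: https://prometheus-community.github.io/helm-charts
    version: "15.0.0"             # pin to a tested version
  - name: grafana
    repository: https://grafana.github.io/helm-charts
    version: "6.20.0"             # pin to a tested version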

The values files, grafana-values.yaml and prometheus-values.yaml, hold your overrides for each dependency; their content is provided in the linked files.

Package your chart and push it to your registry as we did before:

# set the chart name
~ » CHART_NAME=mychart
# get the package version from Chart.yaml
~ » VERSION=$(grep appVersion charts/$CHART_NAME/Chart.yaml | awk '{print $2}' | sed -e 's/"//g');
# download dependencies if any
~ » helm dependency update charts/${CHART_NAME};
# package your chart
~ » helm package charts/${CHART_NAME} --version="$VERSION";
# push the package in your registry
~ » helm push ${CHART_NAME}-${VERSION}.tgz oci://myregistry

As I said, repeat the process for any other applications you want to deploy.

Deploy your ArgoCD application

I have not mentioned it yet, but I opted for an app of apps pattern. This makes it easier to synchronize multiple applications in a given cluster. A friend of mine told me about ApplicationSets, which can help you deploy the same application to multiple clusters; it could be useful if you are dealing with multi-site infrastructures. Let's define a very basic folder structure for our application that mimics the Helm chart structure:

_
|_ charts/
|
|_ templates/
| |_ 1-monitor.yaml
|
|_ Chart.yaml

In the templates folder, we have only one file corresponding to our example. If you have other applications to deploy, add them in separate manifest files by adapting the 1-monitor.yaml content sketched below:
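A sketch of what 1-monitor.yaml can contain: an Application in the workload project, pointing at the monitoring chart and targeting the registered cluster (the registry URL and server endpoint are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: monitoring
  namespace: argocd
spec:
  project: workload
  source:
    repoURL: https://myaccount.github.io/myrepo   # your Helm repository
    chart: monitoring
    targetRevision: 1.0.0
  destination:
    server: https://XXXXXXXX.XXX.eu-west-3.eks.amazonaws.com   # the new cluster
    namespace: monitoring
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true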

And create the Chart.yaml like this:
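This one is a plain chart with no dependencies, for example:

apiVersion: v2
name: workload-apps
version: 1.0.0
appVersion: "1.0.0"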

ArgoCD allows you to synchronize an application either from a Helm chart or from a Git repository. In the Crossplane example, the application points to a Helm chart, so we have to apply a change to ArgoCD whenever a new version of the chart is released. A Git sync would have spared us this extra manual step: ArgoCD would listen to new releases on a branch and apply all the changes automatically. I did not do it for Crossplane because I wanted to trigger any infrastructure change myself, though I would have with a proper CI pipeline enforcing pre-checks on my manifests. For the workload, I have opted for Git synchronization; if you choose to keep it as a Helm chart, you just have to package it and push it to your registry.

So, now create your ArgoCD application that points to that chart. We will track the HEAD of the main branch of the repository:
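A sketch of that application (the Git URL and path are placeholders for your repository layout):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: workload-apps
  namespace: argocd
spec:
  project: workload
  source:
    repoURL: https://github.com/myaccount/myrepo.git   # your Git repository
    path: charts/workload-apps
    targetRevision: HEAD            # follow the main branch
  destination:
    server: https://kubernetes.default.svc   # the app of apps lives on the management cluster
    namespace: argocd
  syncPolicy:
    automated: {}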

When you apply it, you should see two new applications:

  • workload-apps : the app of apps
  • monitoring: the target application deployed to your target cluster
(Screenshots: the multiple applications inside workload-apps, and the monitoring application.)

Conclusion

Well, I hope this guide helped you reproduce the CD chain with Crossplane; I tried to describe all the steps I went through. Here are more resources to help you out:

Don’t hesitate to ask me if you have any questions regarding this guide, I will be happy to answer them.
