Tim Van Wassenhove


25 Feb 2020

Leverage Terraform, NGINX Ingress Controller, cert-manager and Let's Encrypt to quickly create a Kubernetes cluster on AWS.

In my previous post I demonstrated how easy it has become to deploy a web application with an HTTPS endpoint on Kubernetes and Azure. In this post I demonstrate the same on AWS.

In order to follow along you should clone the sample code from this repository:

git clone https://github.com/timvw/sample-terraform-aws-k8s-nginx-letsencrypt

First configure the AWS credentials (access key and secret key) that Terraform will use. The AWS provider picks these up from the standard environment variables:

export AWS_ACCESS_KEY_ID="XXXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXXXXXXXX"
export AWS_DEFAULT_REGION="eu-west-1"

With this configuration in place we can instruct Terraform to create the Kubernetes cluster:

terraform init
terraform apply -auto-approve
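
The Terraform configuration in the repository wires up the VPC, IAM roles, the EKS cluster and a managed node group. As a rough sketch of the two core resources (resource and attribute references here are illustrative, not necessarily the repo's exact contents):

# Hypothetical sketch of the core EKS resources; the IAM roles,
# VPC and subnets referenced here must be defined elsewhere.
resource "aws_eks_cluster" "demo" {
  name     = "demo"
  role_arn = aws_iam_role.cluster.arn

  vpc_config {
    subnet_ids = aws_subnet.demo[*].id
  }
}

resource "aws_eks_node_group" "demo" {
  cluster_name    = aws_eks_cluster.demo.name
  node_group_name = "demo"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = aws_subnet.demo[*].id

  scaling_config {
    desired_size = 2
    max_size     = 2
    min_size     = 2
  }
}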

After about 15 minutes your cluster will be ready. You can import the credentials into your ~/.kube/config as follows:

aws eks --region $AWS_DEFAULT_REGION update-kubeconfig --name demo

There are some differences with AKS:

  • On AKS a client certificate and key are added to your kubeconfig. On EKS an entry is added which invokes aws eks get-token to obtain credentials on the fly.

  • On EKS the Kubernetes master runs in a separate network, so you need to provision connectivity so that the node groups can reach this master. In my example this is achieved by installing an internet gateway, as sketched below.
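
A minimal sketch of that gateway plus the default route that gives the node subnets a path out (names are illustrative):

# Hypothetical sketch: an internet gateway and a default route
# so the worker nodes can reach the EKS control plane endpoint.
resource "aws_internet_gateway" "demo" {
  vpc_id = aws_vpc.demo.id
}

resource "aws_route" "internet_access" {
  route_table_id         = aws_vpc.demo.main_route_table_id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.demo.id
}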

Another remark: if creating a Fargate profile fails, verify that you are working in a region where EKS supports Fargate.

Now it is time to deploy the NGINX Ingress Controller. We also need to apply the AWS-specific additions:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.29.0/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.29.0/deploy/static/provider/aws/service-l4.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.29.0/deploy/static/provider/aws/patch-configmap-l4.yaml
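
You can check that the controller pod is running and that its LoadBalancer service has been created in the ingress-nginx namespace:

kubectl get pods,svc -n ingress-nginx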

Deploying the NGINX Ingress Controller results in the creation of a load balancer with a public address. Since a classic ELB exposes a DNS name rather than a fixed IP, here is how you can fetch that name:

aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[].DNSName'

In this example we want to access our applications as https://XXX.aws.icteam.be. We achieve this by adding a wildcard DNS record for *.aws.icteam.be pointing to the load balancer. On Azure this was an A record with the public IP address; on AWS the ELB only exposes a DNS name, so we use a CNAME record instead.
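
If the zone happens to be hosted in Route 53, such a record could also be created with Terraform. A sketch, where the hosted zone id and the DNS name are placeholders for the values from your account and the command above:

# Hypothetical: only applicable when the zone lives in Route 53.
resource "aws_route53_record" "wildcard" {
  zone_id = "ZXXXXXXXXXXXXX"                      # placeholder hosted zone id
  name    = "*.aws.icteam.be"
  type    = "CNAME"
  ttl     = 300
  records = ["XXXX.eu-west-1.elb.amazonaws.com"]  # DNS name fetched above
}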

For the HTTPS part we install cert-manager and use Let’s Encrypt to provide certificates:

kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.13.0/cert-manager.yaml
kubectl apply -f letsencrypt.yaml
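
The letsencrypt.yaml file defines an ACME issuer. A minimal version, assuming the http01 solver and a placeholder e-mail address (the repo's exact contents may differ), looks like this:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # Let's Encrypt production endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com  # placeholder
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - http01:
          ingress:
            class: nginx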

With all this infrastructure in place we can deploy a sample application:

kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
kubectl expose deployment hello-node --port=8080
kubectl apply -f hello-node-ingress.yaml 
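
hello-node-ingress.yaml ties everything together: it routes a hostname to the service and asks cert-manager for a certificate. A sketch, assuming the hostname hello-node.aws.icteam.be and the ClusterIssuer named letsencrypt from above:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-node
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts:
        - hello-node.aws.icteam.be
      secretName: hello-node-tls  # cert-manager stores the certificate here
  rules:
    - host: hello-node.aws.icteam.be
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-node
              servicePort: 8080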

Or we can deploy and expose the Kubernetes dashboard:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
kubectl apply -f dashboard-sa.yaml
kubectl apply -f dashboard-ingress.yaml
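
dashboard-sa.yaml creates the service account whose token we will use to log in. Given the eks-admin name used below, it presumably looks like this (binding the account to cluster-admin, as in the AWS documentation):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: eks-admin
    namespace: kube-system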

You can fetch the token as follows:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')