Tim Van Wassenhove

Passionate geek, interested in Technology. Proud father of two

15 Mar 2021

Notes on microk8s and cert-manager

The last couple of weeks I’ve been using MicroK8s for local development.

Installing the current version of cert-manager just worked by following the installation instructions:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml

Then I enabled the ingress addon:

microk8s.enable ingress

Configuring Let’s Encrypt required some deviations from the documentation: only ClusterIssuer resources with public as the ingress class seem to work:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: tim@timvw.be
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: public
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: tim@timvw.be
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: public 

When using nginx as ingress class I ran into various errors:

  • challenge propagation: wrong status code ‘404’, expected ‘200’
  • certificate never becoming ‘Ready’

Here are some helpful commands:

kubectl logs -f -n cert-manager -l app=cert-manager
kubectl get ingress

Then I noticed that acme-staging-v02.api.letsencrypt.org could not be resolved by the cert-manager pods (they were trying to resolve it via 127.0.0.1:53), so I also enabled the dns addon and restarted the pods (by deleting them):

microk8s.enable dns
kubectl delete pod -n cert-manager -l app=cert-manager

And then all was fine. For example, https://strava.apps.timvw.be just works :)

20 Jan 2021

Use cases for GitHub Actions

These days many systems are built with the Unix philosophy in mind, but the applications come in the form of containers. How many malicious container images would be out there? :)

Here are a couple of examples where I leveraged GitHub Actions to glue such applications together:

Publishing this blog with HUGO

Frustrated with Jekyll dependency hell, I decided to switch to HUGO. It’s fast, really fast. And it does whatever I need it to do.

Each time I push commits to this repository, the repository is checked out, hugo generates the static website and that website is pushed to gh-pages for delivery.

https://github.com/timvw/timvw.github.io/blob/master/.github/workflows/publish.yml
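
For reference, a minimal sketch of such a workflow (which actions are used, and the branch name, are assumptions; the actual workflow is in the link above):

name: publish
on:
  push:
    branches: [master]

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      # check out the repository (including any theme submodules)
      - uses: actions/checkout@v2
        with:
          submodules: true

      # install hugo
      - uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: 'latest'

      # generate the static website into ./public
      - run: hugo --minify

      # push the generated site to the gh-pages branch
      - uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./public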

PS: Whenever Azure Blob Storage (or its static websites feature) gets proper support (no DNS flattening hacks) for hosting websites from a storage account, I might consider moving the hosting over there…

Releasing on central

With the shutdown of travis-ci.org in sight it was time to migrate to an alternative system to release artifacts on central.

Each time a commit is pushed to this repository, a build (and test) is triggered. Whenever a tag is pushed, that tag is released.

https://github.com/timvw/frameless-ext/blob/master/.github/workflows/ci.yml
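
A sketch of what such a workflow can look like, assuming an sbt project that releases via the sbt-ci-release plugin (the plugin, the secret names and the Scala setup action are assumptions; the real workflow is linked above):

name: ci
on:
  push:
    branches: [master]
    tags: ["*"]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0   # sbt-ci-release derives the version from git tags

      - uses: olafurpg/setup-scala@v10
        with:
          java-version: adopt@1.11

      # build and test on every push and pull request
      - run: sbt +test

      # release to central only when a tag is pushed
      - run: sbt ci-release
        if: startsWith(github.ref, 'refs/tags/')
        env:
          PGP_PASSPHRASE: ${{ secrets.PGP_PASSPHRASE }}
          PGP_SECRET: ${{ secrets.PGP_SECRET }}
          SONATYPE_USERNAME: ${{ secrets.SONATYPE_USERNAME }}
          SONATYPE_PASSWORD: ${{ secrets.SONATYPE_PASSWORD }}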

Managing cloud infrastructure

When you have automated the provisioning of your cloud infrastructure with a tool such as Terraform, you may want to automate the roll-out as well. Careful, as this currently requires some annoying “in between” steps to avoid big-bang deployments: you probably do not want to replace an entire pool of nodes by simply wiping it and putting another pool in place, but rather do it in a phased approach.

So, each time a PR is created (or updated) we run terraform plan to validate the changes and add a comment to the PR displaying the output. When the PR is merged into master we run terraform apply (and add a comment to the commit with the output).

https://gist.github.com/timvw/7a245947a9b3b027d5a0fcd5ad3d9977

PS: In this example we have multiple Terraform modules, which are all planned/applied via a matrix and Terraform’s -chdir parameter.
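
Roughly, such a workflow looks like this (the module names are assumed and the step that posts the plan output as a PR comment is omitted; the full version is in the gist above):

name: terraform
on:
  pull_request:
  push:
    branches: [master]

jobs:
  terraform:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        module: [network, cluster]   # assumed module directories
    steps:
      - uses: actions/checkout@v2

      - uses: hashicorp/setup-terraform@v1

      - run: terraform -chdir=${{ matrix.module }} init

      # on pull requests we only plan (posting the output as a PR comment is omitted here)
      - run: terraform -chdir=${{ matrix.module }} plan -no-color
        if: github.event_name == 'pull_request'

      # on master we apply the changes
      - run: terraform -chdir=${{ matrix.module }} apply -auto-approve -no-color
        if: github.ref == 'refs/heads/master'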

05 Jan 2021

cluster-info on k8s cluster with limited permissions

When you only have limited access (e.g. to a specific namespace, ~ an OpenShift project) on a Kubernetes cluster, you may not be able to run kubectl cluster-info.

➜  ~ kubectl cluster-info

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Error from server (Forbidden): services is forbidden: User "timvw" cannot list resource "services" in API group "" in the namespace "kube-system"

As soon as you run the command in your namespace it will work:

➜  ~ kubectl cluster-info -n spark
Kubernetes master is running at https://api.customer.example:443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

09 Dec 2020

Leverage Terraform to create virtual machine scaleset with spot instances

In a previous post I demonstrated how easy it has become to deploy a web application over HTTPS on Kubernetes and Azure. Let’s expand this cluster with a node pool that is backed by spot instances:

resource "azurerm_kubernetes_cluster_node_pool" "spot" {
    name                  = "spot"
    kubernetes_cluster_id = azurerm_kubernetes_cluster.k8s.id
    vm_size         = "standard_D2s_v4"

    node_count      = 0
    enable_auto_scaling = true
    min_count = 0
    max_count = 5

    priority = "Spot"
    eviction_policy = "Delete"
    spot_max_price = 0.02

    tags       = var.tags
}

The nodes in this pool will be tainted with kubernetes.azure.com/scalesetpriority=spot:NoSchedule.

For a pod to land on a node in this pool you will have to specify a toleration. Here is how you would do this in Apache Spark:

First create a pod.yml file in which you specify the toleration:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
spec:
  tolerations:
  - key: "kubernetes.azure.com/scalesetpriority"
    operator: "Equal"
    value: "spot"
    effect: "NoSchedule"

And now you can submit the application:

./bin/spark-submit \
  --master k8s://$KUBERNETES_MASTER_API \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=1 \
  --conf spark.kubernetes.container.image=timvw/spark:3.0.1-hadoop2.7 \
  --conf spark.kubernetes.driver.podTemplateFile=pod.yml \
  --conf spark.kubernetes.executor.podTemplateFile=pod.yml \
  --conf spark.kubernetes.node.selector.agentpool=spot \
  local:///opt/spark/examples/jars/spark-examples_2.12-3.0.1.jar

25 Feb 2020

Leverage Terraform, NGINX Ingress Controller, cert-manager and Let's Encrypt to quickly create a Kubernetes cluster on AWS.

In my previous post I demonstrated how easy it has become to deploy a web application over HTTPS on Kubernetes and Azure. In this post I demonstrate the same, but on AWS.

In order to follow along you should clone the sample code from this repository:

git clone https://github.com/timvw/sample-terraform-aws-k8s-nginx-letsencrypt

First configure the AWS access_key and secret_key for Terraform:

export AWS_ACCESS_KEY="XXXXXXXXXXXXXXXXXX"
export AWS_SECRET_KEY="XXXXXXXXXXXXXXXXXX"
export AWS_DEFAULT_REGION="eu-west-1"

With all this configuration in place we can instruct Terraform to create the kubernetes cluster:

terraform init
terraform apply -auto-approve

After a couple of minutes (~15) your cluster will be ready. Importing the credentials into your ~/.kube/config can be done as follows:

aws eks --region $AWS_DEFAULT_REGION update-kubeconfig --name demo

There are some differences with AKS:

  • On AKS a client certificate and key are added to your kubeconfig. On EKS an entry is added which invokes aws eks get-token.

  • On EKS the Kubernetes master runs in a different network and you need to provision networking so that the node groups can connect to this master. In my example this is achieved by adding an internet gateway.

Another remark: in case creating a Fargate profile fails, verify that you are doing it in a supported region.

Now it is time to deploy the NGINX Ingress Controller. We also need to apply the AWS-specific additions:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.29.0/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.29.0/deploy/static/provider/aws/service-l4.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.29.0/deploy/static/provider/aws/patch-configmap-l4.yaml

Deploying the NGINX Ingress Controller results in the creation of a load balancer. Here is how you can fetch its address:

aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[].DNSName'

In this example we want to access our applications as https://XXX.aws.icteam.be. We achieve this by adding a wildcard DNS record for *.aws.icteam.be pointing to the load balancer address.

For the HTTPS part we install cert-manager and use Let’s Encrypt to provide certificates:

kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.13.0/cert-manager.yaml
kubectl apply -f letsencrypt.yaml
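
The letsencrypt.yaml is not shown here; with cert-manager v0.13 it boils down to a ClusterIssuer along these lines (a sketch with nginx as the ingress class; apiVersion cert-manager.io/v1alpha2 is what that release expects, and the file in the sample repository may differ slightly):

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: tim@timvw.be
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx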

With all this infrastructure in place we can deploy a sample application:

kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
kubectl expose deployment hello-node --port=8080
kubectl apply -f hello-node-ingress.yaml 

Or we can deploy and expose the kubernetes dashboard:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
kubectl apply -f dashboard-sa.yaml
kubectl apply -f dashboard-ingress.yaml

You can fetch the token as follows:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
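
The dashboard-sa.yaml referenced above essentially creates a service account with cluster-admin rights; a sketch (the eks-admin name matches the grep in the token command, the actual file lives in the sample repository):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system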

10 Feb 2020

Leverage Terraform, NGINX Ingress Controller, cert-manager and Let's Encrypt to quickly create a Kubernetes cluster which can serve webapps over HTTPS.

In this post I demonstrate how easy it has become to create a Kubernetes cluster which serves web applications over HTTPS.

In order to follow along you should clone the sample code from this repository:

git clone https://github.com/timvw/sample-terraform-azure-k8s-nginx-letsencrypt

First configure the Azure service principal for Terraform:

export ARM_CLIENT_ID="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
export ARM_CLIENT_SECRET="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
export ARM_SUBSCRIPTION_ID="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
export ARM_TENANT_ID="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"

The resources in this example depend on the following variables: client_id, client_secret, aks_service_principal_app_id and aks_service_principal_client_secret. One way to configure them is by exporting their values as TF_VAR_xxx environment variables:

export TF_VAR_client_id=$ARM_CLIENT_ID
export TF_VAR_client_secret=$ARM_CLIENT_SECRET
export TF_VAR_aks_service_principal_app_id=$ARM_CLIENT_ID
export TF_VAR_aks_service_principal_client_secret=$ARM_CLIENT_SECRET

With all this configuration in place we can instruct Terraform to create the kubernetes cluster:

terraform init
terraform apply -auto-approve

After a couple of minutes (~10) your cluster will be ready. Importing the credentials into your ~/.kube/config can be done as follows:

az aks get-credentials --resource-group k8s-test --name kaz

kubectx is an awesome tool that allows you to easily switch between contexts.

Now it is time to deploy the NGINX Ingress Controller. We also need to apply the Azure-specific additions:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.28.0/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.28.0/deploy/static/provider/cloud-generic.yaml

Deploying the NGINX Ingress Controller results in the creation of a load balancer and a public IP. Here is how you can fetch that address:

az network public-ip list | grep -Po '(?<="ipAddress": ")([^"]*)'

In this example we want to access our applications as https://XXX.apps.icteam.be. We achieve this by adding a wildcard A-record for *.apps.icteam.be pointing to the Azure public IP address.

For the HTTPS part we install cert-manager and use Let’s Encrypt to provide certificates:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.13.0/cert-manager.yaml
kubectl apply -f letsencrypt.yaml 

With all this infrastructure in place we can deploy a sample application:

kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
kubectl expose deployment hello-node --port=8080
kubectl apply -f ingress.yaml
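
The ingress.yaml ties the hostname, the ingress class and the certificate together. A sketch of what it can look like (the issuer name and TLS secret name are assumptions; the exact file is in the sample repository):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-node
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - hello-node.apps.icteam.be
    secretName: hello-node-tls
  rules:
  - host: hello-node.apps.icteam.be
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-node
          servicePort: 8080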

Now we can access our application:

curl -v https://hello-node.apps.icteam.be

Here are some useful commands to help with debugging:

# get some cluster info
kubectl cluster-info
kubectl proxy

# follow logs of the ingress controller
kubectl logs -n ingress-nginx deployment/nginx-ingress-controller -f

# restart the ingress controller
kubectl scale deployment -n ingress-nginx --replicas=0 nginx-ingress-controller
kubectl scale deployment -n ingress-nginx --replicas=1 nginx-ingress-controller

Remove the hello-node application (pods/deployment/service/ingress):

kubectl delete all -l app=hello-node

Finally, when you want to remove everything:

terraform destroy -auto-approve

31 Dec 2016

Looking back at 2016

Starting 2016 at Emergency Care was a game changer for me.

On a personal level I have successfully changed my diet and lost 45kg (BMI went from 50 to 35), completed Start2Run (0-5km/30min) and Continue2Run (5-10km/60min). Looking forward to losing more weight and getting a BMI < 30 again. Finishing a 1/2 marathon would be awesome too. Had plenty of quality family time as well!

On a professional level I have changed focus from Enterprise Apps/.NET to data-intensive work (Big Data, Augmented/Business Intelligence and a bit of Machine Learning), mainly using Scala. Had some serious fun putting a Kafka cluster in place as the central messaging interface in an industrial setting (24/7), continuously pushing > 1 million messages/sec. Became familiar with Docker and I am looking forward to seeing it replace Java application servers.

20 Nov 2016

Parsing lines from Spark RDD

A typical Apache Spark application using the RDD API starts as follows:

val lines = sc.textFile("/path/to/data")
val records = lines.map(parseLineToRecord)

case class Record(...)

def parseLineToRecord(line: String) : Record = {
  val parts = line.split("\t", -1)
  ...
  Record(..)
}

In case of bad records you very often want to discard the unparseable lines:

def parseLineToRecordOption(line: String): Option[Record] = {
  try {
    ...
    Some(Record(..))
  } catch {
    case _ => None
  }
}

val records = lines.map(parseLineToRecordOption).filter(x => x.isDefined).map(x => x.get)

And then you discover that there is an implicit conversion from Option[T] to Iterable[T]. The nice thing is that you can now use flatMap instead of filter + map:

val records = lines.flatMap(parseLineToRecordOption)

Strangely enough there is no such implicit conversion for Try[T], so we convert to an Option first:

import scala.util.Try

def tryParseLineToRecord(line: String): Try[Record] =
  Try {
    ...
    Record(..)
  }

val records = lines.map(tryParseLineToRecord).flatMap(x => x.toOption)

04 Oct 2016

Blog migrated to Jekyll

Like many other people, I have decided to leave Wordpress behind and move to Jekyll instead.

Most posts have been migrated pretty well but I do not feel like bothering with them anymore. In fact, some of them have become irrelevant, some of them are just wrong and others only attract spammers. In summary: they are not worth my time anymore. In case you really loved them, you can still send a pull-request :P

01 Oct 2016

Docker toolbox to the rescue

With the help of Docker Toolbox a lot of apps become easily available…

By default volumes can only be mapped onto folders under the user’s home directory.

Here is how to enable mapping of the entire C: drive:

#!/bin/sh
# script to expose c-drive to docker vm and docker containers
#
# stop the docker vm
docker-machine stop default
# share your windows c-drive with the docker (host) vm
/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe sharedfolder add default --name C_DRIVE --hostpath c:/
# start the docker (host) vm
docker-machine start default
# mount the c-drive in the docker (host) vm
docker-machine ssh default 'sudo chown docker /var/lib/boot2docker/profile && echo mount -t vboxsf C_DRIVE /c >> /var/lib/boot2docker/profile'

Examples

Amazon ECS (EC2 Container Service) cli tools

docker run -i -t -v "//c/Users/timvw/.aws:/root/.aws" timvw/docker-aws

Jekyll (Yes, you can make it work on Windows but why bother?)

docker run -i -t -v "//c/src/timvw.github.io:/opt/webiste" timvw/docker-jekyll
jekyll server --incremental --watch --force_polling