Kubernetes

Kubernetes is a container management platform – open source, and inspired by Google's internal container platform, Borg. We're going to use it to do some ChatOps: building and deploying containers to a Kubernetes cluster using a bot on Slack. This is a write-up of a talk I gave at a recent Infracoders meetup, which you can find here.

So, to get started we need a Kubernetes cluster! Luckily there are some helpful scripts to get one running on several cloud providers. I'm choosing AWS, although Google's own cloud probably has the best compatibility with Kubernetes. First we'll set some environment variables, then we'll curl the installer script (obviously you should really download and check the script before piping it to bash!):

export KUBERNETES_PROVIDER=aws
export AWS_DEFAULT_PROFILE=myprofile
export KUBE_AWS_ZONE=eu-west-1b
export NUM_NODES=3
export NUM_MINIONS=$NUM_NODES
export NODE_SIZE=t2.small
export MINION_SIZE=$NODE_SIZE
export AWS_S3_REGION=eu-west-1
curl -sS https://get.k8s.io | bash
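If you'd rather not pipe the script straight into bash, you can download it, read it, and then run it (the local filename here is arbitrary):

curl -sS https://get.k8s.io -o install-k8s.sh
less install-k8s.sh     # review before executing
bash install-k8s.sh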

You'll need an AWS account; the script will use your default profile if you've already configured the AWS command line tools. Otherwise, you can specify a profile as above.
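If you haven't configured a profile yet, a quick way to set one up and sanity-check the credentials (using the profile name from the variables above):

aws configure --profile myprofile
aws sts get-caller-identity --profile myprofile    # confirms the credentials work

Once the script has finished, check that it worked by using kubectl to list the Kubernetes nodes: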

$ ./kubernetes/cluster/kubectl.sh get node
NAME                                        LABELS                                                             STATUS    AGE
ip-172-20-0-59.eu-west-1.compute.internal   kubernetes.io/hostname=ip-172-20-0-59.eu-west-1.compute.internal   Ready     2m
ip-172-20-0-60.eu-west-1.compute.internal   kubernetes.io/hostname=ip-172-20-0-60.eu-west-1.compute.internal   Ready     2m
ip-172-20-0-61.eu-west-1.compute.internal   kubernetes.io/hostname=ip-172-20-0-61.eu-west-1.compute.internal   Ready     2m


Build Infrastructure

Now that we have a cluster, we need to set up some infrastructure for building containers, plus a registry in which to store our Docker images. We'll start by creating a Docker registry, using a replication controller and service that I've defined for you:

git clone https://bitbucket.org/automationlogic/k8s.git
cd k8s
./kubernetes/cluster/kubectl.sh create -f ./registry/registry-rc.yml 
./kubernetes/cluster/kubectl.sh create -f ./registry/registry-service.yml
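It's worth checking that the registry pod and service came up before moving on (local-reg is the service name we'll query again later):

./kubernetes/cluster/kubectl.sh get pods
./kubernetes/cluster/kubectl.sh get svc local-reg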

Immediately, we have a problem to solve. Whilst Kubernetes is good at exposing and connecting services between containers, we need our Docker registry to be exposed on the Kubernetes minions themselves, so that each node's Docker daemon can push and pull images from localhost. To make this work we can run a small proxy container on each host that exposes the registry to the host:

./kubernetes/cluster/kubectl.sh create -f ./registry/reg-proxy-rc.yml
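If you're curious, the idea inside reg-proxy-rc.yml looks roughly like this. This is a sketch only, not the repo's exact contents; the image, labels and environment variables here are assumptions:

apiVersion: v1
kind: ReplicationController
metadata:
  name: reg-proxy
spec:
  replicas: 3                  # one per node in our three-node cluster
  selector:
    app: reg-proxy
  template:
    metadata:
      labels:
        app: reg-proxy
    spec:
      containers:
        - name: reg-proxy
          image: gcr.io/google_containers/kube-registry-proxy:0.3
          env:
            - name: REGISTRY_HOST
              value: local-reg       # the registry service's DNS name (assumed)
            - name: REGISTRY_PORT
              value: "5000"
          ports:
            - containerPort: 80
              hostPort: 5000         # bind the proxy to port 5000 on the node itself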

This exposes the registry on port 5000 on all the hosts. Next, we'll create a Jenkins pod and service. We want the Jenkins configuration to persist, so we'll use a Kubernetes persistent volume (backed by EBS). First we have to create the EBS volume, either in the AWS console or on the command line:

aws ec2 create-volume --availability-zone eu-west-1b --size 5 --volume-type gp2
{
    "AvailabilityZone": "eu-west-1b", 
    "Encrypted": false, 
    "VolumeType": "gp2", 
    "VolumeId": "vol-b13faf0f", 
    "State": "creating", 
    "Iops": 15, 
    "SnapshotId": "", 
    "CreateTime": "2016-03-28T16:00:48.376Z", 
    "Size": 5
}
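If you're scripting this step, the AWS CLI can hand back just the volume ID (same parameters as the command above):

VOLUME_ID=$(aws ec2 create-volume --availability-zone eu-west-1b \
  --size 5 --volume-type gp2 --query 'VolumeId' --output text)
echo $VOLUME_ID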

Make a note of the volume ID that is returned, and be careful to match the availability zone to the Kubernetes nodes' availability zone; otherwise AWS won't be able to attach the volume to them. Now open the persistent volume definition (./jenkins/jenkins-pv.yml) in your editor and update the volume ID to the one we just created:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-jenkins
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: aws://eu-west-1b/vol-b13faf0f
    fsType: ext4

Now we can create the persistent volume, a claim to that volume, the pod to run the Jenkins container, and a service to expose it, all in one go:

./kubernetes/cluster/kubectl.sh create -f ./jenkins
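For reference, the claim that binds the Jenkins pod to the volume we just defined looks roughly like this. It's a sketch; the actual file in ./jenkins may use different names:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-claim       # assumed name, referenced by the pod's volumes section
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi          # matches the 5Gi capacity of pv-jenkins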

If you check the definition of the Jenkins pod:

spec:
  containers:
    - name: jenkins
      image: "localhost:5000/jenkins:latest"
      ports:
        - containerPort: 8080
      volumeMounts:
        - mountPath: "/var/jenkins_home"
          name: jenkins-root
        - mountPath: "/var/run/docker.sock"
          name: docker-sock
        - mountPath: "/bin/docker"
          name: docker-bin
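The volumes section of the pod (not shown above) maps those mount names to the claim and to hostPath volumes on the node. A minimal sketch, assuming the claim is called jenkins-claim as in the earlier example:

  volumes:
    - name: jenkins-root
      persistentVolumeClaim:
        claimName: jenkins-claim    # assumed claim name
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock  # the host's Docker socket
    - name: docker-bin
      hostPath:
        path: /bin/docker           # the host's Docker binary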

You'll notice that we're exposing the Docker socket and Docker binary from the host into the container. This isn't exactly recommended practice, because it essentially gives the Jenkins container root on the Kubernetes node it runs on; however, it's the easiest way to give the Jenkins pod the ability to build containers. It does create another problem: Kubernetes mounts these volumes as root, so they won't be accessible to the Jenkins user. To get round this we can build our own Jenkins container, which runs Jenkins as root, and push it to the registry we just created. Again, running as root isn't a very good idea, but it's an easy way to make things work; hopefully an update to Kubernetes will let us mount volumes as another user at some point in the future. For now, create the following Dockerfile:

FROM jenkins
USER root

then:

docker build -t jenkins .
docker tag jenkins <myregistry_host>:5000/jenkins
docker push <myregistry_host>:5000/jenkins

You can find the host for your registry like so:

./kubernetes/cluster/kubectl.sh get svc local-reg -o json | jq '.["status"]["loadBalancer"]["ingress"][0]["hostname"]'

You'll need jq for this; if you don't already have it installed, on macOS you can get it with:

brew install jq

Once you've pushed your container, the Kubernetes cluster should pull the previously missing image and start the Jenkins pod. The service should then become available once the health check on the ELB passes.
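While you wait, you can watch the pod come up (the pod name here is an assumption; check what the manifests in ./jenkins actually call it):

./kubernetes/cluster/kubectl.sh get pods --watch
./kubernetes/cluster/kubectl.sh describe pod jenkins   # useful if it stays Pending

Once the pod is running, you can get the Jenkins URL in the same way as for the registry: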

./kubernetes/cluster/kubectl.sh get svc jenkins -o json | jq '.["status"]["loadBalancer"]["ingress"][0]["hostname"]'

In the next part, we’ll deploy an example web application and a chat bot to our new cluster.
