Get root access on pods in an AKS cluster
Summary: Usually you wouldn't need root access to pods in a Kubernetes cluster, but sometimes it can be convenient for a quick test or as emergency access. I recently needed to be root in one of my pods, and that turned out to be surprisingly difficult these days, so I wrote a post on it. I'll document the various options you have to interact with pods and the containers in them, up to the point where I finally found the solution. I hope you enjoy the journey.
Date: 27 December 2024
Note that this was done on an AKS cluster running Kubernetes version 1.27.7.
Using kubectl
The first thing you might try is to use kubectl to exec into the pod. This is the most common way to interact with the containers running in your pods. You can use the following command to exec into a pod:
kubectl exec -it podname -- /bin/bash
This will give you a shell inside the container, but it will not be a root shell. You can try to use sudo, but it will most likely not work. You can also try to use the -u flag to specify a user, but this is unfortunately not supported in kubectl:
PS C:\Repos\GetShifting\work> kubectl exec -it kube-prometheus-stack-grafana-7d448f8457-wr9hh -c grafana -u root -- /bin/bash
error: unknown shorthand flag: 'u' in -u
See 'kubectl exec --help' for usage.
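You can also confirm from inside the container that you're not root, for example with a quick id call (using the same pod as above; this assumes the image contains the id binary, and the exact user depends on the image):
kubectl exec -it kube-prometheus-stack-grafana-7d448f8457-wr9hh -c grafana -- id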
As there are no other options in kubectl to get root access, we'll have to look at other options.
Using docker
Another option is to use docker to exec into the container. You can use the following command to exec into a container:
docker exec -it containerid /bin/bash
To be able to use this on an AKS cluster node, you'll first have to access the node. To gain access to the node, first check the node name on which the pod is running, then use the kubectl debug command to start a debug pod on that node, and finally run chroot to get a shell on the node itself:
kubectl get pods kube-prometheus-stack-grafana-7d448f8456-12345 -o wide
kubectl debug node/aks-agentpool-14577651-vmss000033 -it --image=mcr.microsoft.com/cbl-mariner/busybox:2.0
chroot /host
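If you only need the node name, a jsonpath query works as well (using the example pod name from above):
kubectl get pod kube-prometheus-stack-grafana-7d448f8456-12345 -o jsonpath='{.spec.nodeName}'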
Now you should be able to use docker to exec into the container, but unfortunately Docker support (the dockershim) was already removed from Kubernetes in version 1.24, and the docker CLI simply isn't there on the node:
root@aks-agentpool-14577651-vmss000033:/# docker exec -it -u root <dockerid> /bin/bash
sudo: docker: command not found
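This is expected: recent AKS nodes use containerd as the container runtime, which you can verify from your workstation by looking at the CONTAINER-RUNTIME column of:
kubectl get nodes -o wide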
As it is not recommended to install docker on the node, we'll have to look at other options.
Using runc
Note:
runc is a command line client for running applications packaged according to the Open Container Initiative (OCI) format and is a compliant implementation of the Open Container Initiative specification
crictl provides a CLI for CRI-compatible container runtimes. This allows the CRI runtime developers to debug their runtime without needing to set up Kubernetes components.
Another option is to use runc to exec into the container. However, we first need the full ID of the container running inside the pod:
root@aks-agentpool-14577651-vmss000033:/# crictl ps | grep grafana
6439882b3402a   f9095e2f0444d   6 days ago   Running   grafana                  0   75ba31cfe2863   kube-prometheus-stack-grafana-7d448f8457-wr9hh
658d77517b020   2e6ed1888609c   6 days ago   Running   grafana-sc-datasources   0   75ba31cfe2863   kube-prometheus-stack-grafana-7d448f8457-wr9hh
fc35630b1f362   2e6ed1888609c   6 days ago   Running   grafana-sc-dashboard     0   75ba31cfe2863   kube-prometheus-stack-grafana-7d448f8457-wr9hh
root@aks-agentpool-14577651-vmss000033:/# crictl ps --verbose --id 6439882b3402 | grep ID
ID: 6439882b3402a66761dde073c60721cca3ce89f9e2332a9ca4da3e8cbb268dec
PodID: 75ba31cfe28639a688f06693b7120faccef4bb1a724ed1e8cb01b1cb4bcdc881
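As a small shortcut, crictl can also filter on the container name directly instead of piping through grep (the output columns are the same as above):
crictl ps --name grafana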
runc has an exec command for this, but that failed too:
root@aks-agentpool-14577651-vmss000033:/# runc --root /run/containerd/runc/k8s.io/ exec --tty --user 0 6439882b3402a66761dde073c60721cca3ce89f9e2332a9ca4da3e8cbb268dec sh
FATA[0000] nsexec-1[636535]: failed to open /proc/29605/ns/ipc: Permission denied
FATA[0000] nsexec-0[636532]: failed to sync with stage-1: next state: Success
ERRO[0000] exec failed: unable to start container process: error executing setns process: exit status 1
I was getting a bit frustrated, and after a lot of searching and asking for help on Stack Overflow, I finally found the solution.
Using ctr
Note:
ctr is an unsupported debug and administrative client for interacting with the containerd daemon
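Before going for the exec, you can check that ctr sees the containers at all; the k8s.io namespace is where the kubelet-created containers live:
ctr -n k8s.io containers list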
The solution was to use ctr to exec into the container. An important detail that wasn't very clearly documented is that you have to specify both the user and the group; with only the user you'll get a permission denied error:
# Just the user, getting permission denied
root@aks-agentpool-14577651-vmss000033:/# ctr -n k8s.io task exec --user 0 --exec-id 0 --fifo-dir /tmp -t 6439882b3402a66761dde073c60721cca3ce89f9e2332a9ca4da3e8cbb268dec sh
ctr: failed to unmount /tmp/containerd-mount299059857: operation not permitted: failed to mount /tmp/containerd-mount299059857: permission denied
# Providing both user and group, getting root access:
root@aks-agentpool-14577651-vmss000033:/# ctr -n k8s.io task exec --user 0:0 --exec-id 0 --fifo-dir /tmp -t 6439882b3402a66761dde073c60721cca3ce89f9e2332a9ca4da3e8cbb268dec sh
/usr/share/grafana # whoami
root
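So the general pattern, with the values from this example replaced by placeholders, looks like this (the --exec-id just has to be a unique name for this exec session, and the full container ID is the one you got from crictl earlier):
ctr -n k8s.io task exec --user 0:0 --exec-id root-debug --fifo-dir /tmp -t <full-container-id> sh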
Don't forget to clean up
After using the kubectl debug command, don't forget to clean up the debug pod:
PS C:\Repos\GetShifting\work> kubectl get pods
NAME                                                     READY   STATUS      RESTARTS   AGE
node-debugger-aks-agentpool-14577650-vmss000003-qql2x    0/1     Completed   0          26m
PS C:\Repos\GetShifting\work> kubectl delete pod node-debugger-aks-agentpool-14577650-vmss000003-qql2x
pod "node-debugger-aks-agentpool-14577650-vmss000003-qql2x" deleted
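If you have been debugging on several nodes, a quick one-liner (from a bash shell) can remove all leftover debugger pods at once; it simply matches on the generated pod name prefix:
kubectl get pods -o name | grep node-debugger | xargs kubectl delete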