Granting an OIDC user access to a Kubernetes Cluster

How to provide Kubernetes API access to an OIDC-authenticated user.
Jan 21, 2024 · 467 words · 3 minute read

Introduction

I spin up clusters with kubeadm. Once a cluster is up and running, it can be tempting to just copy the /etc/kubernetes/admin.conf credentials down to your local ~/.kube/config file and crack on. However, as you can guess, that's not best practice, especially when working in teams.

For audit logs to record which user performed which actions, and for general access control purposes, it helps a lot if users authenticate to clusters with their own credentials rather than shared ones.

In this post, I will cover how to configure the cluster to authenticate users against an OIDC service. In this case, the OIDC service I will be using is Keycloak.

Kubernetes auth

To authenticate as an administrator to a Kubernetes cluster, you generate a CSR (certificate signing request), which gets signed by the cluster's certificate authority via the Kubernetes CSR API. The resulting client certificate allows you to authenticate to the cluster using the Kubernetes API (i.e. with kubectl and other client tools).

For more info, see the upstream docs.

Create a user certificate

In summary, to create a CSR…

openssl genrsa -out rossg.key 2048
openssl req -new -key rossg.key -out rossg.csr -subj "/CN=rossg/O=admin"
cat >rossg-csr.yaml <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: rossg
spec:
  request: $(cat rossg.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF
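Before submitting the CSR, it's worth a quick sanity check on its subject, since that is exactly what Kubernetes will see: the CN becomes the username and the O field becomes the group used in RBAC bindings.

```shell
# Inspect the CSR subject: CN maps to the Kubernetes username,
# O maps to the group referenced by RBAC bindings.
openssl req -in rossg.csr -noout -subject
```

If the subject doesn't show CN=rossg and O=admin, regenerate the CSR before continuing.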

Get user certificate signed by cluster

Copy that YAML file up to your home folder on the cluster's master node, and apply it as the admin user…

ssh your-cluster-master-node-1
...
sudo -s
...
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl apply -f rossg-csr.yaml
kubectl certificate approve rossg
...

Add the user to any roles/groups as applicable. The binding below grants the cluster-admin role to the admin group, which matches the O field in the certificate subject.

kubectl create clusterrolebinding your-cluster-admin \
  --clusterrole=cluster-admin \
  --group=admin

Prepare the signed certificate and cluster CA certificate for download…

kubectl get csr rossg -o jsonpath='{.status.certificate}' | base64 -d > rossg.crt
kubectl -n default get configmap kube-root-ca.crt -o jsonpath="{.data['ca\.crt']}" > k8s-api.crt
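As an optional check (not in the original steps), you can confirm the issued certificate carries the expected subject and validity window before downloading it:

```shell
# Show the subject and validity window of the signed client certificate
openssl x509 -in rossg.crt -noout -subject -dates
```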

Configure signed certificate into local client configuration

Next, retrieve the signed certificate and the cluster CA certificate to your laptop and use them to configure a profile in your local user's ~/.kube/config (or Windows equivalent).

scp your-cluster-master-node-1:rossg.crt .
scp your-cluster-master-node-1:k8s-api.crt .
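If you want to be sure the downloaded certificate really pairs with your local private key, comparing the public-key modulus of each is a quick sanity check (a common openssl idiom, assuming the RSA key generated earlier):

```shell
# The two digests must match if the certificate was issued for this key
openssl x509 -noout -modulus -in rossg.crt | openssl md5
openssl rsa -noout -modulus -in rossg.key | openssl md5
```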

Create a user in the local K8S client configuration file.

kubectl config set users.rossg.client-certificate-data \
  "$(cat rossg.crt | base64 -w0)"
kubectl config set users.rossg.client-key-data \
  "$(cat rossg.key | base64 -w0)"

Now create a context that ties the cluster and user together, and select it as the current context…

kubectl config set-context your-cluster \
  --cluster=your-cluster \
  --user=rossg
kubectl config use-context your-cluster

At this point we need to set the API server endpoint details for the newly created cluster entry, otherwise kubectl will assume 127.0.0.1.

kubectl config set-cluster your-cluster \
  --server=https://10.96.4.1:6443
kubectl config set clusters.your-cluster.certificate-authority-data \
  "$(cat k8s-api.crt | base64 -w0)"

Test it with…

kubectl cluster-info
kubectl get nodes
kubectl get pods --all-namespaces

Keep those creds safe!!!