Create context for remote cluster administration
This is a short guide on how to create a context on your local machine so you can administer a remote Kubernetes cluster with kubectl.
First we need to create our certificate request. We can do this on our local machine. The important part is to set the CN to the username we will use later with the Kubernetes cluster, and O to the group. So we create the user admin1 in the group admin with the following:
openssl genrsa -out admin1.key 2048
openssl req -new -key admin1.key -out admin1.csr -subj "/CN=admin1/O=admin"
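If you want to double-check the request before handing it over for signing, you can print its subject; it should show CN=admin1 and O=admin:
openssl req -in admin1.csr -noout -subject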
Now we create the user certificate on one of the master nodes, or on your local machine if you have the CA files from the master (both private key and certificate) available. For this purpose we will do it on a master node, and the certificate will be valid for 10 years:
sudo openssl x509 -req -in admin1.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out admin1.crt -days 3650
Signature ok
subject=/CN=admin1/O=admin
Getting CA Private Key
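To confirm the signed certificate looks right, you can inspect its subject and validity dates:
openssl x509 -in admin1.crt -noout -subject -dates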
While we are on the master, we will create a ClusterRoleBinding to give the admin1 user admin rights on the whole cluster. We will use the built-in cluster-admin ClusterRole for this:
kubectl create clusterrolebinding cluster-admin1-binding --clusterrole=cluster-admin --user=admin1
clusterrolebinding.rbac.authorization.k8s.io/cluster-admin1-binding created
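To verify the binding took effect, we can ask the API server what admin1 is allowed to do via impersonation (this assumes the user we are currently running kubectl as has impersonation rights, which cluster admins do):
kubectl auth can-i '*' '*' --as=admin1
yes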
Now we move to our local machine to finish setting up the context so we can administer the cluster from there. I usually copy the certificate files to ~/.kube so I have everything in one place, and run kubectl from the .kube directory. So let's start by creating the user:
kubectl config set-credentials admin1 --client-certificate=admin1.crt --client-key=admin1.key --embed-certs=true
User "admin1" set.
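You can check that the user entry landed in your kubeconfig with kubectl config view; since we used --embed-certs=true, the key material is stored inline and masked in the output:
kubectl config view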
Next we create the cluster entry: we choose a cluster name, pass the cluster CA certificate, and set the master address or load balancer address, depending on how your cluster is set up:
kubectl config set-cluster cluster1 --certificate-authority=ca.crt --embed-certs=true --server=https://k8scluster:6443
Cluster "cluster1" set.
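A quick way to confirm the cluster entry exists is to list all clusters known to your kubeconfig:
kubectl config get-clusters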
The last step is to set up the context itself. We choose a name and specify the cluster and user to use with that context:
kubectl config set-context my-cluster1 --cluster=cluster1 --user=admin1
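Optionally, you could also pin a default namespace to the context; rerunning set-context with only the flag you want to change updates the existing entry. We will not do that here, and kube-system is just an example:
kubectl config set-context my-cluster1 --namespace=kube-system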
I had previously set up one context, so I have something to compare against for this test. Now we can list all available contexts:
kubectl config get-contexts
CURRENT   NAME          CLUSTER        AUTHINFO   NAMESPACE
*         test1         test-cluster   testuser
          my-cluster1   cluster1       admin1
As we can see, we are currently working in the context named test1. Let's switch to the newly created context:
kubectl config use-context my-cluster1
Switched to context "my-cluster1".
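You can confirm which context is active at any time:
kubectl config current-context
my-cluster1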
Every kubectl command we run now will be executed against the specified cluster. To check, let's list all nodes:
kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   7d    v1.17.0
master2   Ready    master   7d    v1.17.0
master3   Ready    master   7d    v1.17.0
node1     Ready    <none>   7d    v1.17.0
node2     Ready    <none>   7d    v1.17.0
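If you only need to run a single command against another cluster, you can also pass the context per invocation instead of switching globally:
kubectl get nodes --context=test1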
And that's it. Every subsequent run of a set-context or set-cluster command will overwrite/update the existing entry, so feel free to play with it.
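If you ever want to remove these entries again, kubectl config has matching commands for that; with the names used above:
kubectl config delete-context my-cluster1
kubectl config delete-cluster cluster1
kubectl config unset users.admin1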