This is the second post in my series on Kubernetes. The previous post was the introduction, available here.
For this post, we'll take a closer look at the Master node and the processes that allow it to control what's happening in the Kubernetes cluster. This post will still only go over the conceptual ideas, to give you a better understanding of how it all works, and will not cover any practical tutorials or how-to guides on controlling your cluster.
So in brief, the Master node is usually a single node in your cluster that orchestrates the whole shebang via 3 main processes - the Kube-API-Server, the Kube-Controller-Manager and the Kube-Scheduler. Additionally, the cluster state (and its configured desired state) resides on and is managed by the Master node in a key-value data store, the ETCD.
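If your cluster was built with the kubeadm utility (we'll get into kubeadm more later), all four of these components run as pods in the kube-system namespace, so you can see them at a glance (the pod names are suffixed with your Master node's hostname):
kubectl get pods -n kube-system
# expect entries like etcd-master, kube-apiserver-master,
# kube-controller-manager-master and kube-scheduler-master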
Let's have a closer look.
ETCD
The ETCD data store is a fast, distributed key-value store that acts as the cluster's source of truth. It holds both the cluster's current state and its configured desired state. The kube-API-server process watches for change events in your cluster and keeps the state up to date by writing it to the ETCD store.
In Kubernetes, the ETCD stores information about the nodes, pods, config variables, secrets, roles, account details, and more. For instance, all the information returned to queries from the command-line kubectl utility is supplied by the ETCD store on the Master node.
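You can peek at this data directly with the etcdctl client. A minimal sketch, assuming etcdctl is installed on the Master node and you've passed along whatever client certificates your etcd setup requires (kubeadm clusters secure ETCD with TLS):
ETCDCTL_API=3 etcdctl get /registry --prefix --keys-only
# lists the keys Kubernetes has written, e.g. /registry/pods/... and /registry/secrets/...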
The ETCD store can be installed and configured on your cluster in two ways:
- Installing it manually from a binary and running it as a service on your master node.
- Deploying it as an individual pod when you create your cluster with the help of kubeadm (we'll get into this more later)
ETCD listens for client requests on the advertise-client-url, which is the server's IP address, on port 2379 (the default ETCD client port). The kube-API-server is then pointed at this address through its --etcd-servers setting.
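To make that concrete, here's a sketch of how the two ends are wired together on a single-Master setup (the addresses are illustrative):
# ETCD side: the address it listens on and advertises to clients
etcd --listen-client-urls=https://127.0.0.1:2379 --advertise-client-urls=https://127.0.0.1:2379
# kube-API-server side: where it finds ETCD
kube-apiserver --etcd-servers=https://127.0.0.1:2379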
In high-availability cluster configurations you might have multiple Master nodes with multiple ETCD services all talking to each other, but for now let's keep it simple and assume we only have one Master node in our cluster.
The Kube-API-Server
This is the primary process running on your Master node. Any kubectl command you enter is sent to and handled by the kube-API-server. The kube-API-server is a REST server that handles communication between all the cluster entities - it fetches and stores information from the ETCD store, and it handles authentication and requests between all the different cluster components.
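You can watch kubectl talking to the kube-API-server by turning up its verbosity - at level 8 it prints the REST calls it makes under the hood:
kubectl get pods -v=8
# the output includes the GET request sent to the kube-API-server
# and the HTTP response that came back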
Similar to the ETCD store, the kube-API-server can be deployed as an individual service on the Master node or as a separate pod, depending on how you set up your cluster. Its settings live in one of two possible places, depending on that choice.
For a kube-API-server running as a Master node service, the config would be under
/etc/systemd/system/kube-apiserver.service.
When running as a separate pod, it would be accessed under
/etc/kubernetes/manifests/kube-apiserver.yaml.
Alternatively, you can grep the kube-apiserver process running on your Master node to see the active config settings, like so
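ps -aux | grep kube-apiserver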
The Kube-Controller-Manager
The kube-controller-manager watches for status changes in the cluster through control loops, communicating via the kube-API-server, and works to bring the current state in line with the desired state.
Some controllers include:
- Node-Controller –> Checks the status of each node running in the cluster every 5 seconds. After 40 seconds of no response, a node's status is set to 'unreachable'. If a node stays unreachable for 5 minutes, the pods running on it are evicted and, where they belong to a replication set, brought up again on the other active nodes.
- Replication-Controller –> Watches the status of replication sets and ensures that the number of running pods conforms to the desired state (see the example just after this list).
- Endpoints-Controller –> Connects Services and pods by keeping the Endpoints objects that link them up to date.
- Service Account and Token Controllers –> Create default accounts and API access tokens for new namespaces.
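To see the Replication-Controller's reconciliation in action, you can delete a pod that belongs to a replication set and watch a replacement appear (the pod name here is hypothetical):
kubectl delete pod my-app-5d4f7c9b8-abc12   # a pod managed by a replication set
kubectl get pods                            # a new pod is already being spun up to restore the desired count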
The kube-controller-manager can also be installed as a binary and run as a service on the Master node, or launched as an individual pod with the kubeadm utility.
The config would be available in the pod case under
/etc/kubernetes/manifests/kube-controller-manager.yaml
And for a service running on Master at
/etc/systemd/system/kube-controller-manager.service
Or, again, you can view the active process settings by grepping the kube-controller-manager service like so
ps -aux | grep kube-controller-manager
The Kube-Scheduler
The kube-scheduler decides which Worker node each newly created pod (and the containers in it) gets deployed to.
It determines this by looking at the pod's requirements and proceeding in steps (there's an example after this list):
- first filtering out nodes that do not meet the pod's resource requirements, like CPU and RAM specs,
- then ranking the leftover nodes on a scale out of 10, according to how many resources each node would have left after running the new pod,
- and finally narrowing the choice down further by considering any other scheduling rules you might have defined (the how of which we'll go into in more detail later).
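To see the numbers the first two steps work with, you can inspect a node's capacity and current usage (the node name here is hypothetical):
kubectl describe node worker-node-1
# the Allocatable section shows the CPU and memory the scheduler can hand out,
# and Allocated resources shows what the pods already on the node have requested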
As with the previous Kubernetes Cluster Control Plane processes, you can install the kube-scheduler from a binary and run it as a service on your Master node, or you can install it via the kubeadm utility, which will run it in its own pod.
The kube-scheduler options would be available in the pod case under
/etc/kubernetes/manifests/kube-scheduler.yaml
And as a service on Master at
/etc/systemd/system/kube-scheduler.service
Or, once again, you can view the active process settings by grepping the kube-scheduler service
ps -aux | grep kube-scheduler
Conclusion
That's it for our closer look at the Master node and its Kubernetes Cluster Control Plane processes.
In the next post, we'll examine the Worker nodes and their processes in more detail.