Module 10 — Bootstrap the Control Plane
The control plane is the brain of Kubernetes. It consists of three components that run on both cp1 and cp2:
- kube-apiserver — the only component that talks to etcd. Every kubectl command, every kubelet heartbeat, every controller reconciliation goes through the API server.
- kube-controller-manager — runs reconciliation loops (deployments, replicasets, nodes, service accounts). It watches the desired state in etcd (via the API server) and makes the actual state match.
- kube-scheduler — watches for newly created pods with no assigned node and selects a worker node for each one.
In this module you install all three as systemd services on both control plane nodes, configure RBAC for kubelet API access, and verify the cluster is healthy.
┌───────────────────────────────┐
│ Load Balancer (lb) │
│ 192.168.56.20:6443 │
└──────────┬──────────┬──────────┘
│ │
┌───────────────┘ └───────────────┐
▼ ▼
┌───────────────────┐ ┌───────────────────┐
│ cp1 │ │ cp2 │
│ 192.168.56.21 │ │ 192.168.56.22 │
│ │ │ │
│ kube-apiserver │◄────── etcd ────────►│ kube-apiserver │
│ controller-mgr │ (peer replication)│ controller-mgr │
│ scheduler │ │ scheduler │
└───────────────────┘ └───────────────────┘
Both nodes run identical components. The controller-manager and scheduler use leader election — only one instance is active at a time; the other is on standby.
1. Download Kubernetes Control Plane Binaries
Run these steps on both cp1 and cp2. SSH into each node:
ssh cp1
1.1 Download binaries
K8S_VERSION=v1.31.0
sudo mkdir -p /usr/local/bin
for binary in kube-apiserver kube-controller-manager kube-scheduler kubectl; do
curl -sL "https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/${binary}" \
-o "/tmp/${binary}"
chmod +x "/tmp/${binary}"
sudo mv "/tmp/${binary}" /usr/local/bin/
done
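Optionally, verify the downloads are intact before installing them. dl.k8s.io publishes a `.sha256` file alongside each binary; a minimal sketch:

```shell
# Compare each installed binary against its published SHA-256 checksum.
K8S_VERSION=v1.31.0
for binary in kube-apiserver kube-controller-manager kube-scheduler kubectl; do
  expected=$(curl -sL "https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/${binary}.sha256")
  actual=$(sha256sum "/usr/local/bin/${binary}" 2>/dev/null | awk '{print $1}')
  if [ -n "$expected" ] && [ "$expected" = "$actual" ]; then
    echo "${binary}: OK"
  else
    echo "${binary}: checksum mismatch or file missing" >&2
  fi
done
```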
1.2 Verify
kube-apiserver --version
kube-controller-manager --version
kube-scheduler --version
kubectl version --client
Expected: All return Kubernetes v1.31.0.
Repeat on cp2 before continuing.
Checkpoint: All four binaries return v1.31.0 on both cp1 and cp2.
2. Prepare Directories and Certificates
Run on both cp1 and cp2.
2.1 Create directories
sudo mkdir -p /var/lib/kubernetes/
sudo mkdir -p /etc/kubernetes/config/
- /var/lib/kubernetes/ — stores certificates, kubeconfigs, and the encryption config
- /etc/kubernetes/config/ — stores component configuration files (the scheduler config)
2.2 Move certificates and kubeconfigs
The certificates (Module 07) and kubeconfigs (Module 08) were distributed to ~/ on each control plane node. Move them to the standard locations:
# Certificates
sudo cp ~/ca.pem ~/ca-key.pem \
~/kubernetes.pem ~/kubernetes-key.pem \
~/etcd.pem ~/etcd-key.pem \
~/service-account.pem ~/service-account-key.pem \
/var/lib/kubernetes/
# Kubeconfigs
sudo cp ~/kube-controller-manager.kubeconfig \
~/kube-scheduler.kubeconfig \
~/admin.kubeconfig \
/var/lib/kubernetes/
# Encryption config
sudo cp ~/encryption-config.yaml /var/lib/kubernetes/
2.3 Verify
ls /var/lib/kubernetes/
Expected: You should see all the .pem files, .kubeconfig files, and encryption-config.yaml.
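A scripted version of the same check — this sketch lists every file Section 2.2 copied and flags any that are missing:

```shell
# Confirm every certificate, kubeconfig, and the encryption config is in place.
for f in ca.pem ca-key.pem kubernetes.pem kubernetes-key.pem \
         etcd.pem etcd-key.pem service-account.pem service-account-key.pem \
         kube-controller-manager.kubeconfig kube-scheduler.kubeconfig \
         admin.kubeconfig encryption-config.yaml; do
  if [ -f "/var/lib/kubernetes/${f}" ]; then
    echo "OK      ${f}"
  else
    echo "MISSING ${f}"
  fi
done
```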
Checkpoint: /var/lib/kubernetes/ contains certificates, kubeconfigs, and the encryption config on both nodes.
3. Configure kube-apiserver
The API server has the most flags of any Kubernetes component. Each flag serves a specific purpose — do not skip any.
3.1 Set the node's internal IP
On cp1:
INTERNAL_IP=192.168.56.21
On cp2:
INTERNAL_IP=192.168.56.22
3.2 Create the systemd unit file
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=2 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/etcd.pem \\
--etcd-keyfile=/var/lib/kubernetes/etcd-key.pem \\
--etcd-servers=https://192.168.56.21:2379,https://192.168.56.22:2379 \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--runtime-config=api/all=true \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
What each flag does
Identity and networking:
| Flag | Purpose |
|---|---|
| --advertise-address | IP that this API server advertises to cluster members |
| --bind-address | Address to listen on (0.0.0.0 = all interfaces) |
| --apiserver-count | Expected number of API server instances (used by the endpoints reconciler for the kubernetes service) |
| --service-cluster-ip-range | CIDR for ClusterIP services. 10.32.0.1 becomes the kubernetes service IP |
| --service-node-port-range | Port range for NodePort services |
Authentication and authorization:
| Flag | Purpose |
|---|---|
| --authorization-mode=Node,RBAC | Node authorizer (restricts kubelet access to its own pods) + RBAC |
| --client-ca-file | CA used to verify client certificates |
| --enable-admission-plugins | Admission controllers that validate/mutate requests before they hit etcd |
| --service-account-key-file | Public key to verify service account tokens |
| --service-account-signing-key-file | Private key to sign service account tokens |
| --service-account-issuer | Issuer URL embedded in service account tokens |
etcd connection:
| Flag | Purpose |
|---|---|
| --etcd-servers | Both etcd endpoints — the API server connects to whichever is available |
| --etcd-cafile | CA to verify etcd's server certificate |
| --etcd-certfile | Client certificate for authenticating to etcd |
| --etcd-keyfile | Private key for the etcd client certificate |
TLS (serving):
| Flag | Purpose |
|---|---|
| --tls-cert-file | Server certificate presented to clients (kubectl, kubelet, etc.) |
| --tls-private-key-file | Private key for the server certificate |
Kubelet connection (API server → kubelet):
| Flag | Purpose |
|---|---|
| --kubelet-certificate-authority | CA to verify kubelet's serving certificate |
| --kubelet-client-certificate | Client cert when connecting to kubelet (for logs, exec, port-forward) |
| --kubelet-client-key | Private key for the kubelet client certificate |
Encryption and audit:
| Flag | Purpose |
|---|---|
| --encryption-provider-config | Config for encrypting Secrets at rest in etcd |
| --audit-log-path | Write API audit events to this file |
| --allow-privileged | Allow pods to run in privileged mode |
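Before starting the service, it is worth confirming that every file the unit references actually exists, since a single missing certificate makes the apiserver crash-loop. A sketch:

```shell
# Extract every /var/lib/kubernetes path referenced in the unit file
# and confirm each one exists on disk.
unit=/etc/systemd/system/kube-apiserver.service
grep -o '/var/lib/kubernetes/[^ \\]*' "$unit" 2>/dev/null | sort -u | \
while read -r path; do
  [ -f "$path" ] && echo "OK      $path" || echo "MISSING $path"
done
```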
Checkpoint: /etc/systemd/system/kube-apiserver.service exists on both cp1 and cp2.
4. Configure kube-controller-manager
The controller-manager runs all the built-in controllers (deployment, replicaset, node, service-account, etc.). It connects to the local API server via its kubeconfig.
4.1 Create the systemd unit file
Run on both cp1 and cp2:
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--bind-address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--allocate-node-cidrs=true \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
What each flag does
| Flag | Purpose |
|---|---|
| --cluster-cidr | Pod network CIDR (10.200.0.0/16); used to allocate per-node pod CIDRs |
| --allocate-node-cidrs | Automatically assign a /24 from the cluster CIDR to each node |
| --cluster-signing-cert-file | CA cert for signing kubelet certificate signing requests |
| --cluster-signing-key-file | CA key for signing kubelet CSRs |
| --kubeconfig | How to connect to the API server (points to 127.0.0.1:6443) |
| --leader-elect | Only one controller-manager is active across cp1/cp2 |
| --root-ca-file | CA cert injected into service account token secrets |
| --service-account-private-key-file | Key for signing service account tokens |
| --service-cluster-ip-range | Must match the API server's value |
| --use-service-account-credentials | Each controller runs with its own service account (better audit trail) |
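The table notes that the kubeconfig points at 127.0.0.1:6443. A quick grep confirms this (a sketch — the exact server string was written into the kubeconfig in Module 08):

```shell
# Verify the controller-manager talks to the API server on the same node.
kc=/var/lib/kubernetes/kube-controller-manager.kubeconfig
if grep -q 'server: https://127.0.0.1:6443' "$kc" 2>/dev/null; then
  echo "kubeconfig points at the local API server"
else
  echo "check the server: field in $kc"
fi
```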
Checkpoint: /etc/systemd/system/kube-controller-manager.service exists on both nodes.
5. Configure kube-scheduler
The scheduler watches for unscheduled pods and assigns them to nodes based on resource availability, affinity rules, and constraints.
5.1 Create the scheduler configuration file
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
The leaderElect: true setting means only one scheduler instance (across cp1 and cp2) actively makes scheduling decisions at a time. The other waits in standby.
5.2 Create the systemd unit file
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Checkpoint: /etc/kubernetes/config/kube-scheduler.yaml and /etc/systemd/system/kube-scheduler.service exist on both nodes.
6. Start the Control Plane Services
Run on both cp1 and cp2:
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
6.1 Verify each service
sudo systemctl status kube-apiserver
sudo systemctl status kube-controller-manager
sudo systemctl status kube-scheduler
Expected: All three show Active: active (running).
If any service fails, check logs:
sudo journalctl -u kube-apiserver --no-pager -l | tail -40
sudo journalctl -u kube-controller-manager --no-pager -l | tail -20
sudo journalctl -u kube-scheduler --no-pager -l | tail -20
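Each component also exposes an HTTPS health endpoint, which narrows down which piece is failing. The apiserver's serving cert is signed by the cluster CA; controller-manager (port 10257) and scheduler (port 10259) serve self-signed certs by default, hence `-k`:

```shell
# Probe the health endpoints of all three control plane components.
curl -s --cacert /var/lib/kubernetes/ca.pem https://127.0.0.1:6443/healthz \
  || echo "apiserver: no response"
echo
for port in 10257 10259; do
  curl -sk "https://127.0.0.1:${port}/healthz" || echo "port ${port}: no response"
  echo
done
```

Each probe should print `ok` on a healthy node.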
6.2 Test with kubectl
Use the admin kubeconfig on a control plane node:
kubectl cluster-info --kubeconfig /var/lib/kubernetes/admin.kubeconfig
Expected:
Kubernetes control plane is running at https://127.0.0.1:6443
Checkpoint: All three services are active (running) on both cp1 and cp2. kubectl cluster-info returns the control plane URL.
7. Configure RBAC for Kubelet API Access
The API server needs to call back to kubelets for operations like kubectl logs, kubectl exec, and kubectl port-forward. By default, the API server does not have permission to access kubelet endpoints. You need to create a ClusterRole and bind it.
Run this once from any control plane node (RBAC resources are cluster-wide):
7.1 Create the ClusterRole
kubectl apply --kubeconfig /var/lib/kubernetes/admin.kubeconfig -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups: [""]
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
7.2 Bind the ClusterRole to the "kubernetes" user
The API server authenticates to kubelets using the kubernetes.pem certificate, which has CN=kubernetes. This binding grants that identity access to kubelet endpoints:
kubectl apply --kubeconfig /var/lib/kubernetes/admin.kubeconfig -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
7.3 Verify
kubectl get clusterrole system:kube-apiserver-to-kubelet \
--kubeconfig /var/lib/kubernetes/admin.kubeconfig
kubectl get clusterrolebinding system:kube-apiserver \
--kubeconfig /var/lib/kubernetes/admin.kubeconfig
Expected: Both resources exist without errors.
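For a deeper check, you can ask the authorization layer directly by impersonating the "kubernetes" user the binding targets. This sketch assumes the admin identity is allowed to impersonate users (a system:masters admin is):

```shell
# Ask the API server whether the "kubernetes" user can reach kubelet endpoints.
kubectl auth can-i get nodes/proxy --as kubernetes \
  --kubeconfig /var/lib/kubernetes/admin.kubeconfig \
  || echo "could not query the API server"
```

Once the binding is in place, this should print `yes`.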
Tip: Without this RBAC binding, the cluster will appear healthy, but kubectl logs, kubectl exec, and kubectl port-forward will return Forbidden errors after worker nodes join.
Checkpoint: The ClusterRole and ClusterRoleBinding are created.
8. Verify the Control Plane
8.1 Component status
kubectl get componentstatuses --kubeconfig /var/lib/kubernetes/admin.kubeconfig
Expected:
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy ok
etcd-1 Healthy ok
Note: The componentstatuses API is deprecated in newer Kubernetes versions but still works in v1.31. It provides a quick health check for control plane components.
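A non-deprecated alternative is the API server's aggregated readiness endpoint, which reports each internal check individually:

```shell
# Query the apiserver's /readyz endpoint; verbose mode lists every check.
kubectl get --raw='/readyz?verbose' \
  --kubeconfig /var/lib/kubernetes/admin.kubeconfig | tail -5
```

On a healthy node the output ends with `readyz check passed`.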
8.2 API server responds from both nodes
From your Mac (using the admin kubeconfig from Module 08):
cd ~/k8s-cluster/certs
# Direct to cp1
kubectl --kubeconfig=admin.kubeconfig \
--server=https://192.168.56.21:6443 get namespaces
# Direct to cp2
kubectl --kubeconfig=admin.kubeconfig \
--server=https://192.168.56.22:6443 get namespaces
Both should return the same default namespaces (default, kube-system, kube-public, kube-node-lease).
8.3 Verify leader election
Check which node is the active controller-manager:
kubectl --kubeconfig /var/lib/kubernetes/admin.kubeconfig \
get lease kube-controller-manager -n kube-system -o jsonpath='{.spec.holderIdentity}'
echo
And the active scheduler:
kubectl --kubeconfig /var/lib/kubernetes/admin.kubeconfig \
get lease kube-scheduler -n kube-system -o jsonpath='{.spec.holderIdentity}'
echo
Each shows which node (cp1 or cp2) currently holds the leader lease.
Checkpoint: All component statuses are Healthy. Both API servers respond. Leader election shows one active leader each for controller-manager and scheduler.
9. Set Up kubectl on Your Mac
Configure kubectl on your Mac to connect through the load balancer. The admin kubeconfig was generated in Module 08 pointing to https://192.168.56.20:6443, but the load balancer is not configured yet (that is Module 13).
For now, copy the admin kubeconfig and point it directly at one of the control plane nodes:
cd ~/k8s-cluster/certs
mkdir -p ~/.kube
cp admin.kubeconfig ~/.kube/config
Note: This kubeconfig points to the load balancer IP (192.168.56.20:6443). Until HAProxy is configured in Module 13, you can temporarily override the server:
kubectl --server=https://192.168.56.21:6443 get namespaces
Or use the kubeconfig directly from a control plane node:
ssh cp1 "kubectl --kubeconfig /var/lib/kubernetes/admin.kubeconfig get namespaces"
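To avoid passing --server on every call, you can temporarily rewrite the cluster endpoint in your kubeconfig and revert it once the load balancer comes up in Module 13. A sketch — the cluster entry name was set in Module 08, so look it up first:

```shell
# Find the cluster entry name in the active kubeconfig.
cluster=$(kubectl config view -o jsonpath='{.clusters[0].name}' 2>/dev/null)
echo "cluster entry: ${cluster:-<none found>}"
# After confirming the name, point it at cp1:
# kubectl config set-cluster "$cluster" --server=https://192.168.56.21:6443
```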
10. Troubleshooting
kube-apiserver fails with "open /var/lib/kubernetes/ca.pem: no such file or directory"
Certificates were not copied to /var/lib/kubernetes/. Re-run Section 2.2. Verify with ls /var/lib/kubernetes/*.pem.
kube-apiserver fails with "connection refused" on etcd endpoints
etcd is not running on one or both nodes. Check:
sudo systemctl status etcd
If etcd is not running, go back to Module 09 and start it. The API server needs at least one healthy etcd endpoint.
kubectl returns "The connection to the server was refused"
The API server is not running or not listening on the expected port. Check:
sudo systemctl status kube-apiserver
ss -tlnp | grep 6443
kubectl returns "Unauthorized"
The admin kubeconfig is not using the correct certificate. Verify the kubeconfig points to the right cert files:
kubectl config view --kubeconfig /var/lib/kubernetes/admin.kubeconfig
The user should be admin and the cluster server should be https://127.0.0.1:6443 or the appropriate endpoint.
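One way to see what identity the kubeconfig actually presents is to decode the embedded client certificate and print its subject. A sketch, assuming the certificate is embedded in the kubeconfig rather than referenced by file path:

```shell
# Decode the kubeconfig's client certificate and show its subject —
# it should name the admin user created in Module 07.
kubectl config view --raw --kubeconfig /var/lib/kubernetes/admin.kubeconfig \
  -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d 2>/dev/null | openssl x509 -noout -subject 2>/dev/null \
  || echo "could not decode the embedded certificate"
```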
kube-scheduler keeps restarting — "leader election lost"
This is normal if you are checking logs on the standby node. Only one scheduler is the leader. Check the other node — it should be stable.
"error: failed to create listener" — bind address already in use
Another process is using port 6443, 10257, or 10259. Check with:
ss -tlnp | grep -E '6443|10257|10259'
Kill the conflicting process or reboot the node.
11. What You Have Now
| Capability | Verification Command |
|---|---|
| kube-apiserver running on cp1 and cp2 | ssh cp1 "sudo systemctl status kube-apiserver" |
| kube-controller-manager running | ssh cp1 "sudo systemctl status kube-controller-manager" |
| kube-scheduler running | ssh cp1 "sudo systemctl status kube-scheduler" |
| API server connected to etcd | kubectl get componentstatuses — etcd-0/1 Healthy |
| Node+RBAC authorization enabled | --authorization-mode=Node,RBAC in unit file |
| Secret encryption at rest configured | --encryption-provider-config in unit file |
| RBAC for kubelet API access | kubectl get clusterrole system:kube-apiserver-to-kubelet |
| Leader election for controller-manager | kubectl get lease kube-controller-manager -n kube-system |
| Leader election for scheduler | kubectl get lease kube-scheduler -n kube-system |
The control plane is running. The API server can accept requests, the controller-manager watches for work, and the scheduler is ready to place pods — but there are no worker nodes yet to schedule pods onto.
Next up: Module 11 — Bootstrap the Worker Nodes — install containerd, kubelet, and kube-proxy on worker1 and worker2 so they register with the cluster.