
Module 08 — Kubernetes Configuration Files

In Module 07 you created certificates for every Kubernetes component. But a certificate alone does not tell a component where to connect or which CA to trust. A kubeconfig file bundles all three: the API server address, the client certificate, and the CA certificate.

Every component that talks to the API server needs its own kubeconfig. In this module you generate six kubeconfig files and one encryption config, then distribute them to the correct nodes.

All file generation runs on your Mac from the ~/k8s-cluster/certs directory.


1. What Is a Kubeconfig File

A kubeconfig is a YAML file with three sections:

| Section | What it contains | Example |
| --- | --- | --- |
| clusters | API server URL + CA certificate | https://192.168.56.20:6443 + ca.pem |
| users | Client certificate + private key | worker1.pem + worker1-key.pem |
| contexts | Binds a cluster to a user | "Use these credentials for this cluster" |

When a component starts, it reads its kubeconfig, extracts the server URL and credentials, and opens a TLS connection to the API server.
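Stripped down to its skeleton, a kubeconfig with embedded certificates looks roughly like this (a sketch with placeholder values; field order may differ in real output):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes-the-hard-way
  cluster:
    server: https://192.168.56.20:6443
    certificate-authority-data: <base64-encoded CA certificate>
users:
- name: system:node:worker1
  user:
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded client key>
contexts:
- name: default
  context:
    cluster: kubernetes-the-hard-way
    user: system:node:worker1
current-context: default
```

The `current-context` field tells the component which cluster/user pair to use when it starts.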

Which API server endpoint?

| Component | Runs on | Connects to | Why |
| --- | --- | --- | --- |
| kubelet | Worker nodes | https://192.168.56.20:6443 (lb) | Load-balanced across both API servers |
| kube-proxy | Worker nodes | https://192.168.56.20:6443 (lb) | Load-balanced across both API servers |
| kube-controller-manager | Control plane | https://127.0.0.1:6443 (local) | Same node as API server — no LB needed |
| kube-scheduler | Control plane | https://127.0.0.1:6443 (local) | Same node as API server — no LB needed |
| admin (kubectl) | Your Mac | https://192.168.56.20:6443 (lb) | External access through LB |

Control plane components connect locally to avoid depending on the load balancer for their own health. Worker components connect through the load balancer for high availability.


2. Install kubectl

You need kubectl to generate kubeconfig files (it has built-in commands for this).

macOS (Homebrew):

brew install kubectl

Linux:

curl -LO "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

Verify:

kubectl version --client

Expected: Client Version: v1.31.x

Checkpoint: kubectl version --client returns version information.


3. Generate Kubelet Kubeconfigs

Each worker node gets its own kubeconfig because each kubelet authenticates with a different certificate (identified by system:node:<hostname>).
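That per-node identity lives in the certificate's subject. As a self-contained illustration (a throwaway key and cert in /tmp, not your real worker certificate), you can generate a kubelet-style subject and read back the identity the API server would see:

```shell
# Generate a throwaway self-signed cert with a kubelet-style subject
# (O=system:nodes group, CN=system:node:<hostname> user), then print
# the subject that client authentication would extract.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/O=system:nodes/CN=system:node:worker1" 2>/dev/null
openssl x509 -in /tmp/demo-cert.pem -noout -subject
# subject line contains O = system:nodes and CN = system:node:worker1
```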

cd ~/k8s-cluster/certs

worker1

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.56.20:6443 \
  --kubeconfig=worker1.kubeconfig

kubectl config set-credentials system:node:worker1 \
  --client-certificate=worker1.pem \
  --client-key=worker1-key.pem \
  --embed-certs=true \
  --kubeconfig=worker1.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:node:worker1 \
  --kubeconfig=worker1.kubeconfig

kubectl config use-context default --kubeconfig=worker1.kubeconfig

worker2

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.56.20:6443 \
  --kubeconfig=worker2.kubeconfig

kubectl config set-credentials system:node:worker2 \
  --client-certificate=worker2.pem \
  --client-key=worker2-key.pem \
  --embed-certs=true \
  --kubeconfig=worker2.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:node:worker2 \
  --kubeconfig=worker2.kubeconfig

kubectl config use-context default --kubeconfig=worker2.kubeconfig

What each command does

| Command | Purpose |
| --- | --- |
| set-cluster | Defines the cluster (API server URL + CA cert) |
| set-credentials | Defines the user (client cert + key) |
| set-context | Links the cluster to the user |
| use-context | Sets the active context |

The --embed-certs=true flag base64-encodes the certificate data directly into the kubeconfig file, making it self-contained — no need to reference external .pem files on the target node.
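The embedding itself is plain base64. A self-contained sketch (fake data and /tmp paths, not a real kubeconfig) of what gets written and how a component recovers the bytes when it loads the file:

```shell
# Simulate embedding: base64-encode "certificate" bytes into a YAML field,
# then decode them back out the way a component does at startup.
data=$(printf 'FAKE-PEM-BYTES' | base64)
printf 'certificate-authority-data: %s\n' "$data" > /tmp/demo.kubeconfig
awk '/certificate-authority-data/ {print $2}' /tmp/demo.kubeconfig | base64 -d
# prints: FAKE-PEM-BYTES
```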

Checkpoint: worker1.kubeconfig and worker2.kubeconfig exist.


4. Generate Kube-Proxy Kubeconfig

Kube-proxy runs on every worker node but, unlike the kubelet, authenticates with a single shared certificate rather than a per-node one. One kubeconfig is therefore shared across all workers.

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.56.20:6443 \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Checkpoint: kube-proxy.kubeconfig exists.


5. Generate Controller Manager Kubeconfig

The controller-manager runs on the control plane nodes and connects to the local API server (127.0.0.1), not the load balancer.

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

Checkpoint: kube-controller-manager.kubeconfig exists.


6. Generate Scheduler Kubeconfig

The scheduler also runs on the control plane nodes and connects to the local API server.

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

Checkpoint: kube-scheduler.kubeconfig exists.


7. Generate Admin Kubeconfig

The admin kubeconfig is for kubectl on your Mac. It connects through the load balancer.

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.56.20:6443 \
  --kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=admin \
  --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig

Checkpoint: admin.kubeconfig exists.


8. Generate the Data Encryption Config

Kubernetes can encrypt Secret resources at rest in etcd. This requires an encryption key that the API server uses to encrypt and decrypt data.

8.1 Generate an encryption key

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
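The aescbc provider expects a 32-byte (AES-256) key, and base64 inflates the encoded length, so it is worth confirming the decoded size. A quick sanity check (regenerating the key inside the snippet so it stands alone):

```shell
# Generate a key and verify it decodes to exactly 32 bytes (AES-256).
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
printf '%s' "$ENCRYPTION_KEY" | base64 -d | wc -c
# should print 32 (possibly padded with spaces on macOS)
```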

8.2 Create the encryption config

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

How this works:

  • aescbc — AES-CBC encryption with the generated key. New secrets are encrypted with this provider.
  • identity: {} — fallback that reads unencrypted data. This allows the API server to read secrets that were stored before encryption was enabled.

The provider order matters: the first provider (aescbc) is used for writing, and all providers are tried in order for reading.
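A hypothetical miniature of that ordering rule in shell (a toy model, not the real API server logic): writes always use the first provider in the list, and reads try each provider in order, which is exactly what keeps pre-encryption plaintext secrets readable.

```shell
# Toy model of provider ordering.
providers="aescbc identity"

write_secret() {
  first=${providers%% *}              # writes always use the first provider
  printf '%s:%s\n' "$first" "$1"
}

read_secret() {
  for p in $providers; do             # reads try providers in list order
    case $1 in
      "$p":*) printf '%s\n' "${1#"$p":}"; return 0;;
    esac
  done
  return 1
}

stored=$(write_secret "db-password")     # -> aescbc:db-password
read_secret "$stored"                    # -> db-password
read_secret "identity:legacy-plaintext"  # -> legacy-plaintext (pre-encryption data)
```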

Checkpoint: encryption-config.yaml exists and contains a base64-encoded key.


9. Verify All Files

ls -1 *.kubeconfig encryption-config.yaml

Expected output (7 files):

admin.kubeconfig
encryption-config.yaml
kube-controller-manager.kubeconfig
kube-proxy.kubeconfig
kube-scheduler.kubeconfig
worker1.kubeconfig
worker2.kubeconfig

Inspect a kubeconfig

kubectl config view --kubeconfig=worker1.kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.56.20:6443
  name: kubernetes-the-hard-way
contexts:
- context:
    cluster: kubernetes-the-hard-way
    user: system:node:worker1
  name: default
current-context: default
users:
- name: system:node:worker1
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

Key things to verify:

  • server points to the correct endpoint (LB or localhost)
  • user matches the certificate CN
  • Certificate data is embedded (DATA+OMITTED in the view means it is base64-encoded in the file)

Quick endpoint check for all kubeconfigs

for kc in *.kubeconfig; do
  server=$(kubectl config view --kubeconfig="$kc" -o jsonpath='{.clusters[0].cluster.server}')
  user=$(kubectl config view --kubeconfig="$kc" -o jsonpath='{.users[0].name}')
  echo "$kc -> server=$server user=$user"
done

Expected output:

admin.kubeconfig -> server=https://192.168.56.20:6443 user=admin
kube-controller-manager.kubeconfig -> server=https://127.0.0.1:6443 user=system:kube-controller-manager
kube-proxy.kubeconfig -> server=https://192.168.56.20:6443 user=system:kube-proxy
kube-scheduler.kubeconfig -> server=https://127.0.0.1:6443 user=system:kube-scheduler
worker1.kubeconfig -> server=https://192.168.56.20:6443 user=system:node:worker1
worker2.kubeconfig -> server=https://192.168.56.20:6443 user=system:node:worker2

Controller-manager and scheduler point to 127.0.0.1. Everything else goes through the load balancer.

Checkpoint: All 6 kubeconfigs have the correct server endpoint and user identity.


10. Distribute Files

Each node gets only the kubeconfigs it needs.

Worker nodes

scp worker1.kubeconfig kube-proxy.kubeconfig worker1:~/
scp worker2.kubeconfig kube-proxy.kubeconfig worker2:~/

Each worker gets its own kubelet kubeconfig plus the shared kube-proxy kubeconfig.

Control plane nodes

for node in cp1 cp2; do
  scp admin.kubeconfig \
      kube-controller-manager.kubeconfig \
      kube-scheduler.kubeconfig \
      encryption-config.yaml \
      "${node}:~/"
done

Control plane nodes get the admin kubeconfig (for local kubectl), the controller-manager and scheduler kubeconfigs, and the encryption config.

Verify distribution

echo "=== worker1 ==="
ssh worker1 "ls ~/*.kubeconfig"

echo "=== worker2 ==="
ssh worker2 "ls ~/*.kubeconfig"

echo "=== cp1 ==="
ssh cp1 "ls ~/*.kubeconfig ~/encryption-config.yaml"

echo "=== cp2 ==="
ssh cp2 "ls ~/*.kubeconfig ~/encryption-config.yaml"

Expected:

| Node | Files |
| --- | --- |
| worker1 | worker1.kubeconfig, kube-proxy.kubeconfig |
| worker2 | worker2.kubeconfig, kube-proxy.kubeconfig |
| cp1 | admin.kubeconfig, kube-controller-manager.kubeconfig, kube-scheduler.kubeconfig, encryption-config.yaml |
| cp2 | Same as cp1 |

Checkpoint: Workers have 2 kubeconfigs each. Control planes have 3 kubeconfigs + encryption config.


11. File Summary

Here is every file generated in this module and where it goes:

| File | Goes to | Used by | API server endpoint |
| --- | --- | --- | --- |
| worker1.kubeconfig | worker1 | kubelet | https://192.168.56.20:6443 (lb) |
| worker2.kubeconfig | worker2 | kubelet | https://192.168.56.20:6443 (lb) |
| kube-proxy.kubeconfig | worker1, worker2 | kube-proxy | https://192.168.56.20:6443 (lb) |
| kube-controller-manager.kubeconfig | cp1, cp2 | controller-manager | https://127.0.0.1:6443 (local) |
| kube-scheduler.kubeconfig | cp1, cp2 | scheduler | https://127.0.0.1:6443 (local) |
| admin.kubeconfig | cp1, cp2, your Mac | kubectl | https://192.168.56.20:6443 (lb) |
| encryption-config.yaml | cp1, cp2 | kube-apiserver | n/a (read from local disk) |

Combined with Module 07, the full set of files on each node is now:

| Node | From Module 07 | From Module 08 | Total |
| --- | --- | --- | --- |
| cp1 | 12 .pem files | 3 kubeconfigs + encryption config | 16 |
| cp2 | 12 .pem files | 3 kubeconfigs + encryption config | 16 |
| worker1 | 5 .pem files | 2 kubeconfigs | 7 |
| worker2 | 5 .pem files | 2 kubeconfigs | 7 |

12. Troubleshooting

kubectl: command not found

kubectl is not installed. Follow the install steps in Section 2. Verify with which kubectl.

Kubeconfig has wrong server URL

Re-run the four kubectl config commands from the relevant section with the correct --server value; set-cluster updates the existing file in place. Hand-editing the server line is possible but error-prone, since most of the file is base64-embedded certificate data.

error: unable to read certificate-authority ca.pem

You are not in the ~/k8s-cluster/certs directory. Either cd there or use absolute paths:

--certificate-authority=$HOME/k8s-cluster/certs/ca.pem

Kubeconfig shows DATA+OMITTED — is the cert actually there?

Yes. kubectl config view hides embedded certs by default. To see the raw data:

cat worker1.kubeconfig

The certificate-authority-data, client-certificate-data, and client-key-data fields contain base64-encoded certificate content.

permission denied during scp

Check your SSH config in ~/.ssh/config — the IdentityFile path must match the Vagrant private key location from Module 06. Test with ssh worker1 first.


13. What You Have Now

| Capability | Verification Command |
| --- | --- |
| 6 kubeconfig files generated | ls ~/k8s-cluster/certs/*.kubeconfig \| wc -l — returns 6 |
| Encryption config generated | cat ~/k8s-cluster/certs/encryption-config.yaml |
| Worker kubeconfigs point to LB | See endpoint check in Section 9 |
| CP kubeconfigs point to localhost | See endpoint check in Section 9 |
| Certs embedded in kubeconfigs | kubectl config view --kubeconfig=worker1.kubeconfig — shows DATA+OMITTED |
| Files distributed to workers | ssh worker1 "ls ~/*.kubeconfig" |
| Files distributed to control planes | ssh cp1 "ls ~/*.kubeconfig ~/encryption-config.yaml" |

The configuration layer is complete. Every component has both its TLS certificates (Module 07) and its connection configuration (this module). The nodes are ready for the actual Kubernetes components.


Next up: Module 09 — Bootstrap the etcd Cluster — install and configure the distributed key-value store that backs the Kubernetes API server.