Module 07 — Certificate Authority & TLS
Kubernetes components do not trust each other by default. Every connection — API server to etcd, kubelet to API server, scheduler to API server — must be authenticated with TLS certificates. Without them, anyone on the network could impersonate a kubelet or read etcd data.
In this module you create a Certificate Authority (CA), generate certificates for every Kubernetes component, and distribute them to the correct nodes. This is the most certificate-heavy part of the entire track, but every cert has a clear purpose.
All certificate generation runs on your Mac (or any machine with cfssl installed). You distribute the files to the VMs afterward.
1. How Kubernetes Uses TLS
Every Kubernetes component runs as either a server (accepts connections) or a client (initiates connections) — and often both. TLS provides two things:
- Encryption — traffic between components is encrypted on the wire
- Authentication — each component proves its identity with a certificate signed by the CA
Who talks to whom
| Client | Server | Certificate needed |
|---|---|---|
| kubectl (admin) | kube-apiserver | Admin client cert |
| kube-apiserver | etcd | API server client cert for etcd |
| kube-apiserver | kubelet | API server client cert |
| kubelet | kube-apiserver | Kubelet client cert (per node) |
| kube-proxy | kube-apiserver | Kube-proxy client cert |
| kube-scheduler | kube-apiserver | Scheduler client cert |
| kube-controller-manager | kube-apiserver | Controller-manager client cert |
| etcd peer | etcd peer | etcd peer cert |
How identity works
Kubernetes reads the client certificate's CN (Common Name) and O (Organization) fields to determine identity:
| Component | CN | O | Kubernetes identity |
|---|---|---|---|
| Admin | admin | system:masters | Cluster admin (full access) |
| Kubelet (worker1) | system:node:worker1 | system:nodes | Node identity for RBAC |
| Kube-proxy | system:kube-proxy | — | kube-proxy RBAC binding |
| Scheduler | system:kube-scheduler | — | Scheduler RBAC binding |
| Controller-manager | system:kube-controller-manager | — | Controller-manager RBAC binding |
This is why getting the CN and O values right matters — Kubernetes uses them for authorization decisions.
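To see what these fields look like inside a real certificate, here is a self-contained sketch (throwaway files in a temp directory; the names are illustrative only, not part of the cluster):

```shell
# Generate a throwaway certificate whose subject carries a user (CN)
# and a group (O), then print the fields Kubernetes would read.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/O=system:masters/CN=admin" \
  -keyout demo-key.pem -out demo.pem 2>/dev/null
openssl x509 -in demo.pem -noout -subject
# The subject line contains O = system:masters and CN = admin
# (exact spacing varies by OpenSSL version).
```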
2. Install cfssl
cfssl (Cloudflare's PKI and TLS toolkit) generates certificates from JSON configuration files. It is the standard tool for Kubernetes The Hard Way.
macOS (Homebrew):
brew install cfssl
Linux:
curl -sL https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssl_1.6.5_linux_amd64 -o cfssl
curl -sL https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssljson_1.6.5_linux_amd64 -o cfssljson
chmod +x cfssl cfssljson
sudo mv cfssl cfssljson /usr/local/bin/
Verify:
cfssl version
cfssljson --version
Expected: Version 1.6.x.
Checkpoint:
cfssl version returns version information.
3. Create a Working Directory
mkdir -p ~/k8s-cluster/certs
cd ~/k8s-cluster/certs
All certificates will be generated here and distributed to the VMs later.
4. Certificate Authority
The CA is the root of trust. Every certificate you generate in this module will be signed by this CA. Every Kubernetes component will be configured to trust this CA.
4.1 CA configuration
cat > ca-config.json <<'EOF'
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
- 8760h = 1 year. Production clusters use longer-lived CAs with automated rotation.
- The kubernetes profile allows certificates to be used for both server and client authentication.
4.2 CA certificate signing request
cat > ca-csr.json <<'EOF'
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
4.3 Generate the CA
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
This produces three files:
| File | Purpose |
|---|---|
| ca.pem | CA certificate (public) — distributed to all nodes |
| ca-key.pem | CA private key — kept secret, used only to sign other certs |
| ca.csr | Certificate signing request (not needed after this) |
Verify:
openssl x509 -in ca.pem -text -noout | head -15
Look for Issuer: ... CN = Kubernetes and CA:TRUE in the Basic Constraints.
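You can also confirm the CA's validity window (the exact dates depend on when you generated it and on cfssl's CA defaults):

```shell
# Print the notBefore/notAfter dates of the CA certificate
openssl x509 -in ca.pem -noout -dates
```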
Checkpoint:
ca.pem and ca-key.pem exist in ~/k8s-cluster/certs/.
5. Admin Client Certificate
The admin certificate is used by kubectl on your Mac to authenticate as a cluster administrator.
cat > admin-csr.json <<'EOF'
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:masters",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
The O: system:masters value places this identity in the system:masters group, which Kubernetes' built-in cluster-admin ClusterRoleBinding grants full cluster-admin privileges.
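Verify that the subject carries the expected identity:

```shell
# CN should be "admin" and O should be "system:masters"
openssl x509 -in admin.pem -noout -subject
```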
Checkpoint:
admin.pem and admin-key.pem exist.
6. Kubelet Client Certificates
Each worker node gets its own certificate. Kubernetes uses the CN to identify which node the kubelet represents. This is how the Node Authorizer knows which pods a kubelet is allowed to manage.
worker1
cat > worker1-csr.json <<'EOF'
{
"CN": "system:node:worker1",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=worker1,192.168.56.23 \
-profile=kubernetes \
worker1-csr.json | cfssljson -bare worker1
worker2
cat > worker2-csr.json <<'EOF'
{
"CN": "system:node:worker2",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=worker2,192.168.56.24 \
-profile=kubernetes \
worker2-csr.json | cfssljson -bare worker2
The -hostname flag adds Subject Alternative Names (SANs) — the node's hostname and IP. The kubelet's serving certificate needs these so the API server can verify the connection when it calls back to the kubelet (e.g., for kubectl logs or kubectl exec).
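You can confirm the SANs landed in each worker certificate:

```shell
# Print the SAN extension of both worker certs; each should list
# its own hostname and IP.
for w in worker1 worker2; do
  echo "=== ${w} ==="
  openssl x509 -in "${w}.pem" -text -noout | grep -A 1 "Subject Alternative Name"
done
```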
Checkpoint:
worker1.pem, worker1-key.pem, worker2.pem, worker2-key.pem exist.
7. Controller Manager Client Certificate
cat > kube-controller-manager-csr.json <<'EOF'
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-controller-manager",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Checkpoint:
kube-controller-manager.pem and kube-controller-manager-key.pem exist.
8. Kube-Proxy Client Certificate
cat > kube-proxy-csr.json <<'EOF'
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:node-proxier",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
Checkpoint:
kube-proxy.pem and kube-proxy-key.pem exist.
9. Scheduler Client Certificate
cat > kube-scheduler-csr.json <<'EOF'
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-scheduler",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
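Before moving on, a quick spot-check of the identities encoded in the three control plane client certs generated in Sections 7–9:

```shell
# Each CN must exactly match the name Kubernetes' built-in RBAC
# bindings expect (system:kube-controller-manager, etc.).
for c in kube-controller-manager kube-proxy kube-scheduler; do
  echo -n "${c}: "
  openssl x509 -in "${c}.pem" -noout -subject
done
```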
Checkpoint:
kube-scheduler.pem and kube-scheduler-key.pem exist.
10. Kubernetes API Server Certificate
The API server certificate is the most complex because the API server is reachable via many addresses. The certificate must include all of them as Subject Alternative Names (SANs), or TLS clients will reject the connection.
cat > kubernetes-csr.json <<'EOF'
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,192.168.56.20,192.168.56.21,192.168.56.22,127.0.0.1,${KUBERNETES_HOSTNAMES} \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
Why each SAN is needed
| SAN | Why |
|---|---|
| 10.32.0.1 | First IP of the service CIDR — the kubernetes ClusterIP service |
| 192.168.56.20 | Load balancer (lb) IP — external clients connect here |
| 192.168.56.21 | cp1 IP — API server runs here |
| 192.168.56.22 | cp2 IP — API server runs here |
| 127.0.0.1 | Localhost — components on the same node connect here |
| kubernetes.* | DNS names used by pods inside the cluster |
Verify the SANs:
openssl x509 -in kubernetes.pem -text -noout | grep -A 1 "Subject Alternative Name"
You should see all the IPs and hostnames listed.
Checkpoint:
kubernetes.pem and kubernetes-key.pem exist. The SAN list includes all 5 IPs and the kubernetes DNS names.
11. etcd Server Certificate
etcd needs a certificate for both client connections (API server → etcd) and peer connections (etcd node → etcd node). You can use a single certificate with SANs covering both control plane nodes.
cat > etcd-csr.json <<'EOF'
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "etcd",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=cp1,cp2,192.168.56.21,192.168.56.22,127.0.0.1 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare etcd
The SANs include both control plane hostnames and IPs because etcd peers connect by hostname, while the API server connects by IP.
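Verify that both the hostnames and the IPs made it into the SAN list:

```shell
openssl x509 -in etcd.pem -text -noout | grep -A 1 "Subject Alternative Name"
# Expect DNS:cp1, DNS:cp2 plus the 192.168.56.x and 127.0.0.1 IPs.
```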
Checkpoint:
etcd.pem and etcd-key.pem exist.
12. Service Account Key Pair
The controller-manager uses a key pair to sign service account tokens. The API server uses the same public key to verify them. This is not a TLS certificate — it is an RSA key pair used for JWT token signing.
cat > service-account-csr.json <<'EOF'
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
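Although this pair signs JWTs rather than terminating TLS, it is an ordinary RSA key pair; you can extract the public key the API server will use for verification:

```shell
# Print the RSA public key embedded in the service-account certificate
openssl x509 -in service-account.pem -pubkey -noout
```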
Checkpoint:
service-account.pem and service-account-key.pem exist.
13. Verify All Certificates
List everything you generated:
ls -1 *.pem
Expected output (20 .pem files):
admin-key.pem
admin.pem
ca-key.pem
ca.pem
etcd-key.pem
etcd.pem
kube-controller-manager-key.pem
kube-controller-manager.pem
kube-proxy-key.pem
kube-proxy.pem
kube-scheduler-key.pem
kube-scheduler.pem
kubernetes-key.pem
kubernetes.pem
service-account-key.pem
service-account.pem
worker1-key.pem
worker1.pem
worker2-key.pem
worker2.pem
That is 20 files total (10 key pairs). Quick sanity check — verify each cert is signed by the CA:
for cert in admin.pem worker1.pem worker2.pem kubernetes.pem etcd.pem \
kube-controller-manager.pem kube-proxy.pem kube-scheduler.pem service-account.pem; do
echo -n "$cert: "
openssl verify -CAfile ca.pem "$cert" 2>&1
done
Every line should end with : OK.
Checkpoint:
All 20 .pem files exist. All certificates verify against the CA.
14. Distribute Certificates
Each node only needs the certificates relevant to the components it runs. Sending all certs to all nodes would be a security risk.
Control plane nodes (cp1, cp2)
These run etcd, kube-apiserver, kube-controller-manager, and kube-scheduler:
for node in cp1 cp2; do
scp ca.pem ca-key.pem \
kubernetes.pem kubernetes-key.pem \
etcd.pem etcd-key.pem \
service-account.pem service-account-key.pem \
kube-controller-manager.pem kube-controller-manager-key.pem \
kube-scheduler.pem kube-scheduler-key.pem \
"${node}:~/"
done
Worker nodes (worker1, worker2)
Each worker gets only its own kubelet cert plus the CA cert and kube-proxy cert:
scp ca.pem worker1.pem worker1-key.pem kube-proxy.pem kube-proxy-key.pem worker1:~/
scp ca.pem worker2.pem worker2-key.pem kube-proxy.pem kube-proxy-key.pem worker2:~/
Load balancer (lb)
The lb VM only runs HAProxy — it does not need Kubernetes certificates. It will be configured in a later module.
Verify distribution
echo "=== cp1 ==="
ssh cp1 "ls ~/*.pem | wc -l"
echo "=== cp2 ==="
ssh cp2 "ls ~/*.pem | wc -l"
echo "=== worker1 ==="
ssh worker1 "ls ~/*.pem | wc -l"
echo "=== worker2 ==="
ssh worker2 "ls ~/*.pem | wc -l"
Expected counts:
| Node | Files | Why |
|---|---|---|
| cp1 | 12 | CA (2) + API server (2) + etcd (2) + service-account (2) + controller-manager (2) + scheduler (2) |
| cp2 | 12 | Same as cp1 |
| worker1 | 5 | CA (1) + worker1 cert (2) + kube-proxy (2) |
| worker2 | 5 | CA (1) + worker2 cert (2) + kube-proxy (2) |
Checkpoint:
ssh cp1 "ls ~/*.pem | wc -l" returns 12. ssh worker1 "ls ~/*.pem | wc -l" returns 5.
15. Certificate Summary
Here is every certificate, who uses it, and what it authenticates:
| Certificate | CN | Used by | Used for |
|---|---|---|---|
| ca.pem | Kubernetes | All nodes | Root of trust — verifies all other certs |
| admin.pem | admin | kubectl (your Mac) | Cluster admin authentication |
| worker1.pem | system:node:worker1 | kubelet on worker1 | Node identity for API server |
| worker2.pem | system:node:worker2 | kubelet on worker2 | Node identity for API server |
| kube-controller-manager.pem | system:kube-controller-manager | controller-manager | Client auth to API server |
| kube-proxy.pem | system:kube-proxy | kube-proxy | Client auth to API server |
| kube-scheduler.pem | system:kube-scheduler | scheduler | Client auth to API server |
| kubernetes.pem | kubernetes | kube-apiserver | Server cert (TLS termination) + client cert to kubelet |
| etcd.pem | etcd | etcd | Server + peer TLS |
| service-account.pem | service-accounts | controller-manager + API server | Sign/verify service account tokens |
16. Troubleshooting
cfssl: command not found
cfssl is not installed or not in your PATH. On macOS: brew install cfssl. On Linux: download the binaries as shown in Section 2.
failed to sign the certificate: ...
The CA key file (ca-key.pem) is missing or corrupt. Regenerate the CA (Section 4) and all certificates that depend on it.
x509: certificate is valid for X, not Y
A TLS client is connecting to an address not listed in the certificate's SANs. Check which address the client is using and add it to the -hostname flag when regenerating the certificate. Most common: forgetting 127.0.0.1 or the load balancer IP in the API server cert.
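A quick way to check whether a specific address is present in a certificate's SAN list (adjust the file name and address to your case):

```shell
# Grep the SAN extension for the address the failing client used
ADDR="192.168.56.20"
openssl x509 -in kubernetes.pem -text -noout \
  | grep -A 1 "Subject Alternative Name" \
  | grep -q "${ADDR}" && echo "SAN present" || echo "SAN missing - regenerate the cert"
```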
openssl verify returns an error
openssl verify -CAfile ca.pem kubernetes.pem
If this fails, the certificate was not signed by the CA. Regenerate it using -ca=ca.pem -ca-key=ca-key.pem.
Permission denied during scp
The SSH key for the node is not configured. Verify your ~/.ssh/config has the correct IdentityFile path for each node (set up in Module 06).
17. What You Have Now
| Capability | Verification Command |
|---|---|
| Certificate Authority | openssl x509 -in ~/k8s-cluster/certs/ca.pem -text -noout \| grep "CA:TRUE" |
| Admin client cert | openssl x509 -in ~/k8s-cluster/certs/admin.pem -noout -subject |
| Worker kubelet certs | openssl x509 -in ~/k8s-cluster/certs/worker1.pem -noout -subject |
| API server cert with SANs | openssl x509 -in ~/k8s-cluster/certs/kubernetes.pem -text -noout \| grep -A 1 "Subject Alternative" |
| etcd cert | openssl x509 -in ~/k8s-cluster/certs/etcd.pem -noout -subject |
| Service account key pair | ls ~/k8s-cluster/certs/service-account*.pem |
| Certs distributed to cp1/cp2 | ssh cp1 "ls ~/*.pem \| wc -l" — returns 12 |
| Certs distributed to workers | ssh worker1 "ls ~/*.pem \| wc -l" — returns 5 |
| All certs verify against CA | openssl verify -CAfile ca.pem kubernetes.pem — OK |
The PKI is complete. Every Kubernetes component has a certificate that identifies it, and all nodes have the certificates they need.
Next up: Module 08 — Kubernetes Configuration Files — generate kubeconfig files that bundle certificates with API server connection details for each component.