This blog post documents the steps I took to create a Kubernetes cluster according to Kelsey Hightower’s “Kubernetes The Hard Way”. The cluster setup is meant for learning purposes. This guide uses LXC containers to simulate Kubernetes nodes.
Environment
VirtualBox VM with Ubuntu 18.04.3 LTS
Kubernetes 1.15.3
DNS cluster addon (CoreDNS)
Envoy as load balancer for controller nodes
Envoy, controller nodes and worker nodes in LXC containers
Unlike the original Kubernetes The Hard Way, no IP addresses are hardcoded
Stacked HA cluster topology, every controller node runs an etcd instance
Kubernetes cluster overview. Every colored box represents an LXC container
Setup LXD/LXC
Install LXD and LXC:
sudo apt install -y lxd lxc
Initialize LXD:
# Add user to lxd group
sudo usermod -a -G lxd $( whoami)
newgrp lxd
# Initialize with defaults
sudo lxd init --auto
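lxd init --auto should create a default storage pool and the lxdbr0 bridge, both of which the profile in the next section relies on. If you want to double-check they exist before continuing (an optional sanity check, not part of the original guide):
lxc network list
lxc storage list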
Create LXC containers
Create LXC profile with security settings and resource limits:
{
lxc profile create kubernetes
cat > kubernetes.profile <<EOF
config:
  limits.cpu: "2"
  limits.memory: 2GB
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw"
  security.nesting: "true"
  security.privileged: "true"
description: Kubernetes LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: kubernetes
EOF
lxc profile edit kubernetes < kubernetes.profile
}
Launch containers:
{
lxc launch ubuntu:18.04 envoy --profile kubernetes
lxc launch ubuntu:18.04 controller-0 --profile kubernetes
lxc launch ubuntu:18.04 controller-1 --profile kubernetes
lxc launch ubuntu:18.04 controller-2 --profile kubernetes
lxc launch ubuntu:18.04 worker-0 --profile kubernetes
lxc launch ubuntu:18.04 worker-1 --profile kubernetes
lxc launch ubuntu:18.04 worker-2 --profile kubernetes
}
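Before moving on, it can be worth confirming that all seven containers are running and received IPv4 addresses on the lxdbr0 bridge (optional check, not part of the original guide):
lxc list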
The cfssl and cfssljson command line utilities will be used to provision a PKI infrastructure and generate TLS certificates. The kubectl command line utility is used to interact with the Kubernetes API Server.
wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssl \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssljson \
https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl
chmod +x cfssl cfssljson kubectl
sudo cp cfssl cfssljson kubectl /usr/local/bin/
Verification
Verify kubectl version 1.15.3 or higher is installed:
kubectl version --client
Output:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Setup Envoy as control plane load balancer
Envoy will be used as a load balancer fronting the controller instances. The installation instructions from https://www.getenvoy.io/install/ were adapted for usage with LXC.
{
lxc exec envoy -- apt-get update
# Install packages required for apt to communicate via HTTPS
lxc exec envoy -- apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
# Add the Tetrate GPG key.
lxc exec envoy -- bash -c "curl -sL 'https://getenvoy.io/gpg' | apt-key add -"
# Add the stable repository.
lxc exec envoy -- add-apt-repository "deb [arch=amd64] https://dl.bintray.com/tetrate/getenvoy-deb $( lsb_release -cs) stable"
# Install Envoy binary.
lxc exec envoy -- bash -c "apt-get update && apt-get install -y getenvoy-envoy"
# Verify Envoy is installed.
lxc exec envoy -- envoy --version
}
Create the envoy configuration:
{
CONTROLLER0_IP=$(lxc info controller-0 | grep eth0 | head -1 | awk '{print $3}')
CONTROLLER1_IP=$(lxc info controller-1 | grep eth0 | head -1 | awk '{print $3}')
CONTROLLER2_IP=$(lxc info controller-2 | grep eth0 | head -1 | awk '{print $3}')
cat > envoy.yaml <<EOF
static_resources:
  listeners:
  - name: k8s-controllers-listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 6443 }
    filter_chains:
    - filters:
      - name: envoy.tcp_proxy
        config:
          stat_prefix: ingress_k8s_control
          cluster: k8s-controllers
  clusters:
  - name: k8s-controllers
    connect_timeout: 0.5s
    type: STRICT_DNS
    lb_policy: round_robin
    http2_protocol_options: {}
    load_assignment:
      cluster_name: k8s-controllers
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: ${CONTROLLER0_IP}, port_value: 6443 }
        - endpoint:
            address:
              socket_address: { address: ${CONTROLLER1_IP}, port_value: 6443 }
        - endpoint:
            address:
              socket_address: { address: ${CONTROLLER2_IP}, port_value: 6443 }
EOF
}
Place the configuration in the envoy container:
{
lxc exec envoy -- mkdir -p /etc/envoy/
lxc file push envoy.yaml envoy/etc/envoy/
}
Create envoy.service systemd unit file:
cat > envoy.service <<EOF
[Unit]
Description=envoy
Documentation=https://www.envoyproxy.io/docs
[Service]
Type=simple
ExecStart=/usr/bin/envoy -c /etc/envoy/envoy.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Place envoy.service systemd service file:
lxc file push envoy.service envoy/etc/systemd/system/
Start the envoy service:
{
lxc exec envoy -- systemctl daemon-reload
lxc exec envoy -- systemctl enable envoy
lxc exec envoy -- systemctl start envoy
}
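To confirm the proxy came up and is listening on port 6443, a quick check inside the container is sufficient (optional; this assumes systemd and ss from iproute2 are available in the Ubuntu image, which they are by default):
lxc exec envoy -- systemctl is-active envoy
lxc exec envoy -- ss -tlnp | grep 6443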
Provisioning a CA and generating TLS certificates
Julia Evans’ blog post “How Kubernetes certificate authorities work” is a great resource for getting a basic understanding of PKI in Kubernetes.
Create certificate authority
In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
Generate the CA configuration file, certificate, and private key:
{
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "DE",
      "L": "Karlsruhe",
      "O": "HomeLab",
      "OU": "Kubernetes The Hard Way",
      "ST": "Baden-Württemberg"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
}
Results:
ca-key.pem
ca.pem
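To sanity-check the generated CA certificate, openssl (preinstalled on Ubuntu) can print its subject and validity period (optional check, not part of the original guide):
openssl x509 -in ca.pem -noout -subject -dates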
Admin client certificate
Generate the admin client certificate and private key:
{
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "DE",
      "L": "Karlsruhe",
      "O": "system:masters",
      "OU": "Kubernetes The Hard Way",
      "ST": "Baden-Württemberg"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin
}
Results:
admin-key.pem
admin.pem
Kubelet client certificates
Kubernetes uses a special-purpose authorization mode called Node Authorizer, that specifically authorizes API requests made by Kubelets. In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:<nodeName>. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
Generate a certificate and private key for each Kubernetes worker node:
{
for instance in worker-0 worker-1 worker-2; do
  cat > ${instance}-csr.json <<EOF
{
  "CN": "system:node:${instance}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "DE",
      "L": "Karlsruhe",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Baden-Württemberg"
    }
  ]
}
EOF
  EXTERNAL_IP=$(lxc info ${instance} | grep eth0 | head -1 | awk '{print $3}')
  cfssl gencert \
    -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -hostname=${instance},${EXTERNAL_IP} \
    -profile=kubernetes \
    ${instance}-csr.json | cfssljson -bare ${instance}
done
}
Results:
worker-0-key.pem
worker-0.pem
worker-1-key.pem
worker-1.pem
worker-2-key.pem
worker-2.pem
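If you want to confirm that the node name and IP address ended up in a certificate's subject alternative names, openssl can print them (shown here for worker-0; optional check, not part of the original guide):
openssl x509 -in worker-0.pem -noout -text | grep -A 1 "Subject Alternative Name"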
Controller manager client certificate
Generate the kube-controller-manager client certificate and private key:
{
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "DE",
      "L": "Karlsruhe",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Baden-Württemberg"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
}
Results:
kube-controller-manager-key.pem
kube-controller-manager.pem
Kube proxy client certificate
Generate the kube-proxy client certificate and private key:
{
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "DE",
      "L": "Karlsruhe",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Baden-Württemberg"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy
}
Results:
kube-proxy-key.pem
kube-proxy.pem
Scheduler client certificate
Generate the kube-scheduler client certificate and private key:
{
cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "DE",
      "L": "Karlsruhe",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes The Hard Way",
      "ST": "Baden-Württemberg"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler
}
Results:
kube-scheduler-key.pem
kube-scheduler.pem
Kubernetes API server certificate
The Envoy address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
The Kubernetes API server is automatically assigned the kubernetes internal DNS name, which will be linked to the first IP address (10.32.0.1) from the address range (10.32.0.0/24) reserved for internal cluster services during the control plane bootstrapping step.
Generate the Kubernetes API Server certificate and private key:
{
KUBERNETES_PUBLIC_ADDRESS=$(lxc info envoy | grep eth0 | head -1 | awk '{print $3}')
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
CONTROLLER_IPS=
for instance in controller-0 controller-1 controller-2; do
  IP=$(lxc info ${instance} | grep eth0 | head -1 | awk '{print $3}')
  CONTROLLER_IPS=${CONTROLLER_IPS},${IP}
done
# Remove leading comma
CONTROLLER_IPS=${CONTROLLER_IPS:1}
ALL_HOSTNAMES=${CONTROLLER_IPS},${KUBERNETES_HOSTNAMES},${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,10.32.0.1
echo "Hostnames for Kubernetes API server certificate: ${ALL_HOSTNAMES}"
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "DE",
      "L": "Karlsruhe",
      "O": "HomeLab",
      "OU": "Kubernetes The Hard Way",
      "ST": "Baden-Württemberg"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${ALL_HOSTNAMES} \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
}
Results:
kubernetes-key.pem
kubernetes.pem
Service account key pair
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the managing service accounts documentation.
Generate the service-account certificate and private key:
{
cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "DE",
      "L": "Karlsruhe",
      "O": "HomeLab",
      "OU": "Kubernetes The Hard Way",
      "ST": "Baden-Württemberg"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account
}
Results:
service-account-key.pem
service-account.pem
Generating Kubernetes configuration files for authentication
Kubernetes configuration files, also known as kubeconfigs, enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers. In this section you will generate kubeconfig files for the controller manager, kubelet, kube-proxy, and scheduler clients and the admin user.
Kubernetes public IP address
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the envoy load balancer fronting the Kubernetes API Servers will be used.
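The commands below assume the KUBERNETES_PUBLIC_ADDRESS variable is still set from the API server certificate step. If you are working in a fresh shell, it can be re-derived from the envoy container the same way as before:
KUBERNETES_PUBLIC_ADDRESS=$(lxc info envoy | grep eth0 | head -1 | awk '{print $3}')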
Kubelet Kubernetes configuration files
When generating kubeconfig files for Kubelets the client certificate matching the Kubelet’s node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes Node Authorizer.
The following commands must be run in the same directory used to generate the SSL certificates during the “Generating TLS Certificates” step.
Generate a kubeconfig file for each worker node:
for instance in worker-0 worker-1 worker-2; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig
  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig
  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig
  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
Results:
worker-0.kubeconfig
worker-1.kubeconfig
worker-2.kubeconfig
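To inspect one of the generated kubeconfigs (the embedded certificates are shown as redacted data), kubectl config view can be used (optional check, not part of the original guide):
kubectl config view --kubeconfig=worker-0.kubeconfig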
Kube-proxy Kubernetes configuration files
Generate a kubeconfig file for the kube-proxy service:
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
Results:
kube-proxy.kubeconfig
Kube-controller-manager Kubernetes configuration files
Generate a kubeconfig file for the kube-controller-manager service:
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
Results:
kube-controller-manager.kubeconfig
Kube-scheduler Kubernetes configuration files
Generate a kubeconfig file for the kube-scheduler service:
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
Results:
kube-scheduler.kubeconfig
Admin Kubernetes configuration file
Generate a kubeconfig file for the admin user:
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=admin \
  --kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
}
Results:
admin.kubeconfig
Upload the admin kubeconfig to the controller nodes. We will use the kubeconfig to execute kubectl commands on the controller nodes for testing purposes.
for instance in controller-0 controller-1 controller-2; do
  lxc file push admin.kubeconfig ${instance}/root/
done
Generating the data encryption config and key for etcd
Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to encrypt cluster data at rest.
In this step you will generate an encryption key and an encryption config suitable for encrypting Kubernetes Secrets.
{
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
}
Bootstrapping the etcd cluster
Kubernetes components are stateless and store cluster state in etcd. In this step you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.
Download and install the etcd binaries
{
wget -q --show-progress --https-only --timestamping \
  "https://github.com/etcd-io/etcd/releases/download/v3.4.0/etcd-v3.4.0-linux-amd64.tar.gz"
tar --warning=no-unknown-keyword -xvf etcd-v3.4.0-linux-amd64.tar.gz
for instance in controller-0 controller-1 controller-2; do
  lxc file push etcd-v3.4.0-linux-amd64/etcd* ${instance}/usr/local/bin/
done
}
Create the etcd configuration directory and copy the certificates to each controller:
for instance in controller-0 controller-1 controller-2; do
  lxc exec ${instance} -- mkdir -p /etc/etcd /var/lib/etcd
  lxc file push ca.pem kubernetes-key.pem kubernetes.pem ${instance}/etc/etcd/
done
Create etcd.service systemd unit files:
{
INITIAL_CLUSTER=
for instance in controller-0 controller-1 controller-2; do
  IP=$(lxc info ${instance} | grep eth0 | head -1 | awk '{print $3}')
  INITIAL_CLUSTER=${INITIAL_CLUSTER},${instance}=https://${IP}:2380
done
# Remove leading comma
INITIAL_CLUSTER=${INITIAL_CLUSTER:1}
# INITIAL_CLUSTER will be something like
# controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380
for instance in controller-0 controller-1 controller-2; do
  INTERNAL_IP=$(lxc info ${instance} | grep eth0 | head -1 | awk '{print $3}')
  ETCD_NAME=$(lxc exec ${instance} -- hostname -s)
  cat > etcd.${instance}.service <<EOF
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster ${INITIAL_CLUSTER} \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
done
}
Distribute etcd.service files:
for instance in controller-0 controller-1 controller-2; do
  lxc file push etcd.${instance}.service ${instance}/etc/systemd/system/etcd.service
done
Start the etcd server:
for instance in controller-0 controller-1 controller-2; do
  lxc exec ${instance} -- systemctl daemon-reload
  lxc exec ${instance} -- systemctl enable etcd
  lxc exec ${instance} -- systemctl start --no-block etcd
done
Verification:
lxc exec --env ETCDCTL_API=3 controller-0 -- etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
Output:
cc75261c9110202, started, controller-2, https://10.218.142.189:2380, https://10.218.142.189:2379, false
9d07a436d6d2cc3a, started, controller-1, https://10.218.142.35:2380, https://10.218.142.35:2379, false
d82a89a7e2ee7360, started, controller-0, https://10.218.142.174:2380, https://10.218.142.174:2379, false
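In addition to the member list, etcdctl can report per-endpoint health using the same TLS flags (optional check, not part of the original guide):
lxc exec --env ETCDCTL_API=3 controller-0 -- etcdctl endpoint health \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem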
Bootstrapping the Kubernetes control plane
In this lab you will bootstrap the Kubernetes control plane across three compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.
Provision the Kubernetes control plane
Create the Kubernetes configuration directory:
for instance in controller-0 controller-1 controller-2; do
  lxc exec ${instance} -- mkdir -p /etc/kubernetes/config
done
Download the Kubernetes controller binaries:
{
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl"
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
}
Install the Kubernetes binaries:
for instance in controller-0 controller-1 controller-2; do
  lxc file push kube-apiserver kube-controller-manager kube-scheduler kubectl ${instance}/usr/local/bin/
done
Move the certificates, service account key pair, and encryption config into place:
for instance in controller-0 controller-1 controller-2; do
  lxc exec ${instance} -- mkdir -p /var/lib/kubernetes/
  lxc file push ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml ${instance}/var/lib/kubernetes/
done
Create kube-apiserver systemd unit files:
{
ETCD_SERVERS=
for instance in controller-0 controller-1 controller-2; do
  IP=$(lxc info ${instance} | grep eth0 | head -1 | awk '{print $3}')
  ETCD_SERVERS=${ETCD_SERVERS},https://${IP}:2379
done
# Remove leading comma
ETCD_SERVERS=${ETCD_SERVERS:1}
for instance in controller-0 controller-1 controller-2; do
  INTERNAL_IP=$(lxc info ${instance} | grep eth0 | head -1 | awk '{print $3}')
  cat > kube-apiserver.${instance}.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=${ETCD_SERVERS} \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=api/all \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
done
}
Move Kubernetes API server systemd unit file into place:
for instance in controller-0 controller-1 controller-2; do
  lxc file push kube-apiserver.${instance}.service ${instance}/etc/systemd/system/kube-apiserver.service
done
Move kube-controller-manager kubeconfig into place:
for instance in controller-0 controller-1 controller-2; do
  lxc file push kube-controller-manager.kubeconfig ${instance}/var/lib/kubernetes/
done
Create the kube-controller-manager systemd unit file:
cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Move kube-controller-manager systemd unit files into place:
for instance in controller-0 controller-1 controller-2; do
  lxc file push kube-controller-manager.service ${instance}/etc/systemd/system/
done
Move the kube-scheduler kubeconfig into place:
for instance in controller-0 controller-1 controller-2; do
  lxc file push kube-scheduler.kubeconfig ${instance}/var/lib/kubernetes/
done
Create the kube-scheduler.yaml configuration file:
cat > kube-scheduler.yaml <<EOF
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
Move the kube-scheduler.yaml configuration file into place:
for instance in controller-0 controller-1 controller-2; do
  lxc file push kube-scheduler.yaml ${instance}/etc/kubernetes/config/
done
Create the kube-scheduler systemd unit file:
cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Move kube-scheduler systemd unit files into place:
for instance in controller-0 controller-1 controller-2; do
  lxc file push kube-scheduler.service ${instance}/etc/systemd/system/
done
Start the Kubernetes controller services
for instance in controller-0 controller-1 controller-2; do
  lxc exec ${instance} -- systemctl daemon-reload
  lxc exec ${instance} -- systemctl enable kube-apiserver kube-controller-manager kube-scheduler
  lxc exec ${instance} -- systemctl start kube-apiserver kube-controller-manager kube-scheduler
done
Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
Verification
lxc exec controller-0 -- kubectl get componentstatuses --kubeconfig admin.kubeconfig
Output:
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
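It is also worth verifying that the API servers are reachable through the Envoy load balancer. This mirrors the load balancer check from the upstream guide; it assumes ca.pem is still in the working directory and reuses the envoy container IP:
{
KUBERNETES_PUBLIC_ADDRESS=$(lxc info envoy | grep eth0 | head -1 | awk '{print $3}')
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
}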
RBAC for Kubelet Authorization
In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.
The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes.
Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
cat <<EOF | lxc exec controller-0 -- kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
The Kubernetes API Server authenticates to the Kubelet as the kubernetes user using the client certificate as defined by the --kubelet-client-certificate flag.
Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user:
cat <<EOF | lxc exec controller-0 -- kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
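To confirm that the role and binding were created, they can be listed via the admin kubeconfig on the controller (optional check, not part of the original guide):
lxc exec controller-0 -- kubectl get clusterrole system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
lxc exec controller-0 -- kubectl get clusterrolebinding system:kube-apiserver --kubeconfig admin.kubeconfig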
Bootstrapping the Kubernetes worker nodes
In this step you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: runc, container networking plugins, containerd, kubelet, and kube-proxy.
Install OS dependencies
for instance in worker-0 worker-1 worker-2; do
  lxc exec ${instance} -- apt-get update
  lxc exec ${instance} -- apt-get -y install socat conntrack ipset
done
Download and install worker binaries
{
wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64 \
https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz \
https://github.com/containerd/containerd/releases/download/v1.2.9/containerd-1.2.9.linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubelet
mkdir containerd cni
tar -xvf crictl-v1.15.0-linux-amd64.tar.gz
sudo tar -xvf containerd-1.2.9.linux-amd64.tar.gz -C containerd
tar -xvf cni-plugins-linux-amd64-v0.8.2.tgz -C cni
sudo cp runc.amd64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
}
Create the installation directories and move the binaries into place:
for instance in worker-0 worker-1 worker-2; do
  lxc exec ${instance} -- mkdir -p \
    /etc/cni/net.d \
    /opt/cni/bin \
    /var/lib/kubelet \
    /var/lib/kube-proxy \
    /var/lib/kubernetes \
    /var/run/kubernetes
  lxc file push cni/* ${instance}/opt/cni/bin/
  lxc file push containerd/bin/* ${instance}/bin/
  lxc file push crictl kubectl kube-proxy kubelet runc ${instance}/usr/local/bin/
done
Create the bridge network configuration files and move them into place:
{
# Associative array mapping worker container name to pod cidr
declare -A pod_cidrs
pod_cidrs=(["worker-0"]="10.200.0.0/24" ["worker-1"]="10.200.1.0/24" ["worker-2"]="10.200.2.0/24")
for instance in worker-0 worker-1 worker-2; do
  cat > 10-bridge.${instance}.conf <<EOF
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "type": "bridge",
  "bridge": "cnio0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [{"subnet": "${pod_cidrs[${instance}]}"}]
    ],
    "routes": [{"dst": "0.0.0.0/0"}]
  }
}
EOF
done
for instance in worker-0 worker-1 worker-2; do
  lxc file push 10-bridge.${instance}.conf ${instance}/etc/cni/net.d/10-bridge.conf
done
}
Create the loopback network configuration file and move it into place:
{
cat > 99-loopback.conf <<EOF
{
  "cniVersion": "0.3.1",
  "name": "lo",
  "type": "loopback"
}
EOF
for instance in worker-0 worker-1 worker-2; do
  lxc file push 99-loopback.conf ${instance}/etc/cni/net.d/
done
}
Create the containerd configuration file and move it into place:
{
cat > containerd.config.toml <<EOF
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
EOF
for instance in worker-0 worker-1 worker-2; do
  lxc config device add "${instance}" "kmsg" unix-char source="/dev/kmsg" path="/dev/kmsg"
  lxc exec ${instance} -- mkdir -p /etc/containerd
  lxc file push containerd.config.toml ${instance}/etc/containerd/config.toml
done
}
Create the containerd.service systemd unit file and move it into place:
{
cat > containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
for instance in worker-0 worker-1 worker-2; do
  lxc exec ${instance} -- mkdir -p /etc/containerd
  lxc file push containerd.service ${instance}/etc/systemd/system/
done
}
Move the kubelet kubeconfig, the CA and the certificates into place:
for instance in worker-0 worker-1 worker-2; do
  lxc file push ${instance}-key.pem ${instance}.pem ${instance}/var/lib/kubelet/
  lxc file push ${instance}.kubeconfig ${instance}/var/lib/kubelet/kubeconfig
  lxc file push ca.pem ${instance}/var/lib/kubernetes/
done
Create the kubelet-config.yaml configuration files and move them into place:
{
# Associative array mapping worker container name to pod cidr
declare -A pod_cidrs
pod_cidrs=(["worker-0"]="10.200.0.0/24" ["worker-1"]="10.200.1.0/24" ["worker-2"]="10.200.2.0/24")
for instance in worker-0 worker-1 worker-2; do
  cat > kubelet-config.${instance}.yaml <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${pod_cidrs[${instance}]}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${instance}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${instance}-key.pem"
EOF
done
for instance in worker-0 worker-1 worker-2; do
  lxc file push kubelet-config.${instance}.yaml ${instance}/var/lib/kubelet/kubelet-config.yaml
done
}
The resolvConf configuration is used to avoid loops when using CoreDNS for service discovery on systems running systemd-resolved.
Create the kubelet.service systemd unit file and move it into place:
{
cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--register-node=true \\
--v=2 \\
--fail-swap-on=false
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
for instance in worker-0 worker-1 worker-2; do
  lxc file push kubelet.service ${instance}/etc/systemd/system/
done
}
Note that --fail-swap-on is set to false, since the kubelet refuses to start when swap is enabled and swap is typically still visible inside the LXC containers.
Move the kube-proxy kubeconfig into place:
for instance in worker-0 worker-1 worker-2; do
  lxc file push kube-proxy.kubeconfig ${instance}/var/lib/kube-proxy/kubeconfig
done
Create the kube-proxy-config.yaml configuration file and move it into place:
{
cat > kube-proxy-config.yaml <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
for instance in worker-0 worker-1 worker-2; do
  lxc file push kube-proxy-config.yaml ${instance}/var/lib/kube-proxy/
done
}
Create the kube-proxy.service systemd unit file and move it into place:
{
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
for instance in worker-0 worker-1 worker-2; do
  lxc file push kube-proxy.service ${instance}/etc/systemd/system/
done
}
Start the worker services
for instance in worker-0 worker-1 worker-2; do
  lxc exec ${instance} -- systemctl daemon-reload
  lxc exec ${instance} -- systemctl enable containerd kubelet kube-proxy
  lxc exec ${instance} -- systemctl start containerd kubelet kube-proxy
done
Verification
List the registered Kubernetes nodes:
lxc exec controller-0 -- kubectl get nodes --kubeconfig admin.kubeconfig -owide
Output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
worker-0 NotReady <none> 7s v1.15.3 10.218.142.60 <none> Ubuntu 18.04.3 LTS 5.3.0-28-generic containerd://1.2.9
worker-1 NotReady <none> 7s v1.15.3 10.218.142.24 <none> Ubuntu 18.04.3 LTS 5.3.0-28-generic containerd://1.2.9
worker-2 NotReady <none> 7s v1.15.3 10.218.142.247 <none> Ubuntu 18.04.3 LTS 5.3.0-28-generic containerd://1.2.9
Configuring kubectl for remote access
In this section you will generate a kubeconfig file for the kubectl command line utility based on the admin user credentials.
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the envoy load balancer fronting the Kubernetes API Servers will be used.
Generate a kubeconfig file suitable for authenticating as the admin user:
{
KUBERNETES_PUBLIC_ADDRESS=$(lxc info envoy | grep eth0 | head -1 | awk '{print $3}')
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
  --cluster=kubernetes-the-hard-way \
  --user=admin
kubectl config use-context kubernetes-the-hard-way
}
Verification
Check the health of the remote Kubernetes cluster:
kubectl get componentstatuses
Output:
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
controller-manager   Healthy   ok
List the nodes in the remote Kubernetes cluster:
kubectl get nodes -owide
Output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
worker-0 Ready <none> 76s v1.15.3 10.218.142.60 <none> Ubuntu 18.04.3 LTS 5.3.0-28-generic containerd://1.2.9
worker-1 Ready <none> 76s v1.15.3 10.218.142.24 <none> Ubuntu 18.04.3 LTS 5.3.0-28-generic containerd://1.2.9
worker-2 Ready <none> 76s v1.15.3 10.218.142.247 <none> Ubuntu 18.04.3 LTS 5.3.0-28-generic containerd://1.2.9
Provisioning pod network routes
Pods scheduled to a node receive an IP address from the node’s Pod CIDR range. At this point pods can not communicate with other pods running on different nodes due to missing network routes.
In this lab you will create a route for each worker node that maps the node’s Pod CIDR range to the node’s internal IP address.
Add routes:
{
WORKER0_IP=$(lxc info worker-0 | grep eth0 | head -1 | awk '{print $3}')
WORKER1_IP=$(lxc info worker-1 | grep eth0 | head -1 | awk '{print $3}')
WORKER2_IP=$(lxc info worker-2 | grep eth0 | head -1 | awk '{print $3}')
sudo ip route add 10.200.0.0/24 via ${WORKER0_IP}
sudo ip route add 10.200.1.0/24 via ${WORKER1_IP}
sudo ip route add 10.200.2.0/24 via ${WORKER2_IP}
}
Check the routes:
ip route
Output:
default via 10.0.2.2 dev enp0s3 proto dhcp metric 100
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100
10.0.3.0/24 dev lxcbr0 proto kernel scope link src 10.0.3.1 linkdown
10.200.0.0/24 via 10.218.142.60 dev lxdbr0
10.200.1.0/24 via 10.218.142.24 dev lxdbr0
10.200.2.0/24 via 10.218.142.247 dev lxdbr0
10.218.142.0/24 dev lxdbr0 proto kernel scope link src 10.218.142.1
169.254.0.0/16 dev enp0s3 scope link metric 1000
Verification
Verify pods can reach each other across nodes. Start two busybox pods in separate terminals:
kubectl run busybox0 -it --rm --restart=Never --image busybox -- sh
kubectl run busybox1 -it --rm --restart=Never --image busybox -- sh
Ping the IP of one of the pods from the other pod:
# In busybox0
hostname -i
# In busybox1
ping <IP>
Deploying the DNS cluster add-on
Deploy the coredns cluster add-on:
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
Output:
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
List the pods created by the kube-dns deployment:
kubectl get pods -l k8s-app=kube-dns -n kube-system
Output:
NAME READY STATUS RESTARTS AGE
coredns-699f8ddd77-94qv9 1/1 Running 0 20s
coredns-699f8ddd77-gtcgb 1/1 Running 0 20s
Verification
Create a busybox deployment:
kubectl run --generator=run-pod/v1 busybox --image=busybox:1.28 --command -- sleep 3600
Retrieve the full name of the busybox pod and execute a DNS lookup for the kubernetes service inside the busybox pod:
{
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
kubectl exec -ti $POD_NAME -- nslookup kubernetes
}
Output:
Server: 10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
Smoke test
In this section you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.
Etcd data encryption
Create a generic secret:
kubectl create secret generic kubernetes-the-hard-way --from-literal="mykey=mydata"
Print a hexdump of the kubernetes-the-hard-way secret stored in etcd:
lxc exec --env ETCDCTL_API=3 controller-0 -- etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C
Output:
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a 4c 8f cb d3 c6 fc 6a |:v1:key1:L.....j|
00000050 63 73 65 4f 51 68 3d 2f d6 fa f2 39 9c 37 41 74 |cseOQh=/...9.7At|
00000060 b1 f6 b2 9a b2 a4 7e 74 e7 01 d2 a8 58 20 14 11 |......~t....X ..|
00000070 73 b5 e8 7f d9 ba c5 d2 d0 b8 e3 5d de a1 78 d7 |s..........]..x.|
00000080 2f 49 14 2c 68 93 04 73 f7 4a 54 c1 76 fc 30 a8 |/I.,h..s.JT.v.0.|
00000090 bc b0 1b aa 08 65 99 15 b1 db 7d fc fb fc 9d 3e |.....e....}....>|
000000a0 1f 8f d9 d7 da c8 11 3c 89 09 75 af 04 4b d3 85 |.......<..u..K..|
000000b0 ce b1 ea 76 be c6 7d d4 ae 0b cf 4c fb 23 4f 63 |...v..}....L.#Oc|
000000c0 cb 2a de d3 07 c3 12 47 3b 5f da f5 44 e6 25 66 |.*.....G;_..D.%f|
000000d0 b4 dd 47 03 d5 8a a8 a3 46 7c 25 4c e9 71 a3 32 |..G.....F|%L.q.2|
000000e0 f1 39 27 0c 13 b6 00 84 17 0a |.9'.......|
000000ea
The etcd key should be prefixed with k8s:enc:aescbc:v1:key1, which indicates the aescbc provider was used to encrypt the data with the key1 encryption key.
Deployments
In this section you will verify the ability to create and manage Deployments.
Create a deployment for the nginx web server:
kubectl create deployment nginx --image=nginx
List the pod created by the nginx deployment:
kubectl get pods -l app=nginx
Output:
NAME READY STATUS RESTARTS AGE
nginx-554b9c67f9-pwd5l 1/1 Running 0 16s
Port Forwarding
In this section you will verify the ability to access applications remotely using port forwarding.
Retrieve the full name of the nginx pod:
POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
Forward port 8080 on your local machine to port 80 of the nginx pod:
kubectl port-forward $POD_NAME 8080:80
Output:
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
In a new terminal make an HTTP request using the forwarding address:
{
sudo apt install -y curl
curl --head http://127.0.0.1:8080
}
Output:
HTTP/1.1 200 OK
Server: nginx/1.17.3
Date: Sat, 14 Sep 2019 21:10:11 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Aug 2019 08:50:00 GMT
Connection: keep-alive
ETag: "5d5279b8-264"
Accept-Ranges: bytes
Switch back to the previous terminal and stop the port forwarding to the nginx pod with CTRL + C.
Logs
In this section you will verify the ability to retrieve container logs.
Print the nginx pod logs:
kubectl logs $POD_NAME
Output:
127.0.0.1 - - [03/Feb/2020:21:01:00 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.58.0" "-"
Exec
In this section you will verify the ability to execute commands in a container.
Print the nginx version by executing the nginx -v command in the nginx container:
kubectl exec -ti $POD_NAME -- nginx -v
Output:
nginx version: nginx/1.17.8
Services
In this section you will verify the ability to expose applications using a service.
Expose the nginx deployment using a NodePort service:
kubectl expose deployment nginx --port 80 --type NodePort
Retrieve the node port assigned to the nginx service:
NODE_PORT=$(kubectl get svc nginx --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
Retrieve the external IP address of a worker instance:
EXTERNAL_IP=$(lxc info worker-0 | grep eth0 | head -1 | awk '{print $3}')
Make an HTTP request using the external IP address and the nginx node port:
curl -I http://${EXTERNAL_IP}:${NODE_PORT}
Output:
HTTP/1.1 200 OK
Server: nginx/1.17.8
Date: Mon, 03 Feb 2020 21:03:21 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 21 Jan 2020 13:36:08 GMT
Connection: keep-alive
ETag: "5e26fe48-264"
Accept-Ranges: bytes