Akemi

K8s Binary Installation

2024/10/13

Choosing a k8s installation method

The project offers three installation approaches:

minikube: quickly spins up a single-node k8s locally, intended mainly for development and testing
kubeadm: the recommended path; kubeadm init and kubeadm join deploy Kubernetes quickly, with the control-plane components automatically running as pods
Binary: download the official binary packages and deploy every component by hand, which gives the clearest picture of how the k8s components fit together

Environment

CentOS Linux release 7.9.2009
4 vCPU / 4 GB RAM / 50 GB thin-provisioned
etcd 3.3.10
k8s 1.13
docker 18
flannel 0.10

Control node 192.168.10.144 k8s-master
Worker node  192.168.10.145 k8s-node01
Worker node  192.168.10.146 k8s-node02

Three machines because etcd needs at least three members to keep a quorum and avoid split-brain.

Notes on k8s permissions and self-signed SSL certificates

The apiserver talks to the kubelets over HTTPS, and etcd and flannel communicate over HTTPS as well, so we need to issue our own self-signed certificates.

etcd and flannel
etcd: ca.pem server.pem server-key.pem
flannel: ca.pem server.pem server-key.pem

apiserver and kubelet
kube-apiserver: ca.pem server.pem server-key.pem
kubelet: ca.pem ca-key.pem

Beyond the certificates, the apiserver must authorize the kubelet bootstrap user through a ClusterRoleBinding, an additional KubeletConfiguration must be defined, and the kubelet must be started with these parameters.

Environment initialization

# hostname and network (adjust the interface name to your environment)
yum -y install bash-completion
hostnamectl set-hostname k8s-master && bash
nmcli con modify ens18 ipv4.addresses 192.168.10.144/24 ipv4.method manual ipv4.gateway 192.168.10.2 ipv4.dns 8.8.8.8
nmcli con up ens18

cat > /etc/hosts << EOF
192.168.10.144 k8s-master
192.168.10.145 k8s-node01
192.168.10.146 k8s-node02
EOF

# passwordless SSH trust between the nodes
ssh-keygen
ssh-copy-id k8s-master
ssh-copy-id k8s-node01
ssh-copy-id k8s-node02

# security: disable firewalld and SELinux
systemctl disable firewalld --now
sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

# disable swap (the sed comments out the last line of fstab, assumed to be the swap entry)
swapoff -a
sed -i '$ s/^/#/' /etc/fstab

# load the br_netfilter module and enable forwarding
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf

# switch to the Aliyun mirrors
cp -a /etc/yum.repos.d /etc/yum.repos.d.backup
rm -f /etc/yum.repos.d/*
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

yum -y install yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# time synchronization
yum -y install chrony
sed -i 's/^server/#server/g' /etc/chrony.conf
sed -i '1s/^/server cn.pool.ntp.org iburst\n/' /etc/chrony.conf
systemctl enable chronyd --now
systemctl restart chronyd

# install the iptables service, then disable it and flush any rules
yum -y install iptables-services
systemctl disable iptables --now
iptables -F

# base packages
yum -y install device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet

# install docker (docker-ce, from the repo added above; the plain "docker" package ships an older engine)
yum -y install docker-ce
systemctl enable docker --now

# docker registry mirrors
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [
    "https://hub-mirror.c.163.com",
    "https://docker.m.daocloud.io",
    "https://ghcr.io",
    "https://mirror.baidubce.com",
    "https://docker.nju.edu.cn"
  ],
  "insecure-registries": ["192.168.10.130","harbor"]
}
EOF
systemctl daemon-reload
systemctl restart docker
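
# optional sanity check (not in the original walkthrough): docker info should list the mirrors configured above
docker info | grep -A 6 'Registry Mirrors'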

Deploying etcd

Generating self-signed etcd certificates

# prepare the certificate tooling
mkdir -p /etc/etcd/etcd-cert
mkdir -p /data/work

# download cfssl_linux-amd64, cfssljson_linux-amd64 and cfssl-certinfo_linux-amd64
curl -L https://github.com/cloudflare/cfssl/releases/download/1.2.0/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://github.com/cloudflare/cfssl/releases/download/1.2.0/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://github.com/cloudflare/cfssl/releases/download/1.2.0/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
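
# optional sanity check: the tool should print its version banner
cfssl version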

cd /etc/etcd/etcd-cert

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.10.144",
    "192.168.10.145",
    "192.168.10.146"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

This produces ca.pem, server.pem and server-key.pem;
with server.pem and server-key.pem in place, etcd can serve TLS.
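
As an optional check, cfssl-certinfo (downloaded earlier) can decode the server certificate; the three node IPs from server-csr.json should show up under "sans":

cfssl-certinfo -cert server.pem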

Deploying the etcd cluster

etcd version: etcd-v3.3.10

Preparation (all three nodes)

# create the etcd directories (all three nodes)
mkdir -p /opt/etcd/{cfg,bin,ssl}
mkdir -p /var/lib/etcd/default.etcd

# on k8s-master: unpack the etcd release
tar -xf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
mv etcd etcdctl /opt/etcd/bin

# move the pem files into place
mv /etc/etcd/etcd-cert/* /opt/etcd/ssl/

scp -r /opt/etcd k8s-node01:/opt/etcd
scp -r /opt/etcd k8s-node02:/opt/etcd

Writing the configuration

etcd.sh

#!/bin/bash
# example: ./etcd.sh etcd01 192.168.10.144 etcd01=https://192.168.10.144:2380,etcd02=https://192.168.10.145:2380,etcd03=https://192.168.10.146:2380
# (the initial-cluster list must include the member itself)

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd

cat > ${WORK_DIR}/cfg/etcd<<EOF
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# systemd service unit
cat >/usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

Initializing and verifying the etcd cluster

Run etcd.sh on all three nodes at roughly the same time; the first member blocks waiting for its peers, so its start will appear to hang until the others come up.
k8s-master
./etcd.sh etcd01 192.168.10.144 etcd01=https://192.168.10.144:2380,etcd02=https://192.168.10.145:2380,etcd03=https://192.168.10.146:2380

k8s-node01
./etcd.sh etcd02 192.168.10.145 etcd01=https://192.168.10.144:2380,etcd02=https://192.168.10.145:2380,etcd03=https://192.168.10.146:2380

k8s-node02
./etcd.sh etcd03 192.168.10.146 etcd01=https://192.168.10.144:2380,etcd02=https://192.168.10.145:2380,etcd03=https://192.168.10.146:2380

# check cluster health
cd /opt/etcd/ssl
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.10.144:2379,https://192.168.10.145:2379,https://192.168.10.146:2379" \
cluster-health

member a63794f0c8c8469b is healthy: got healthy result from https://192.168.10.145:2379
member a68a666f91b03162 is healthy: got healthy result from https://192.168.10.144:2379
member bb97bd301b35200d is healthy: got healthy result from https://192.168.10.146:2379
cluster is healthy
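
# Optional smoke test: write a key on one endpoint and read it back on another.
# etcdctl in etcd 3.3 defaults to the v2 API, matching the flags above; the key name /smoketest is arbitrary.
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.10.144:2379" \
set /smoketest ok

/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.10.145:2379" \
get /smoketest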

Deploying the flannel network

Preparation

1. Write the network config into etcd so flannel can allocate subnets from it
cd /opt/etcd/ssl
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.10.144:2379,https://192.168.10.145:2379,https://192.168.10.146:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

2. Download the flannel release
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar -xf flannel-v0.10.0-linux-amd64.tar.gz
mkdir /opt/kubernetes/{bin,cfg,ssl} -p
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin

3. Prepare the flanneld configuration file
cat >/opt/kubernetes/cfg/flanneld<<EOF
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.10.144:2379,https://192.168.10.145:2379,https://192.168.10.146:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
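
# Optional: read the key back to confirm flannel will find its network config
/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.10.144:2379" \
get /coreos.com/network/config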

Deploying flannel

# manage flanneld and docker with systemd (docker must pick up the subnet options flannel writes out)
cat >/usr/lib/systemd/system/flanneld.service<<'EOF'
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

cat >/usr/lib/systemd/system/docker.service<<'EOF'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
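
# After the restart, flannel and docker should share a subnet on each node; a quick hedged check:
cat /run/flannel/subnet.env      # the subnet flannel leased, exported by mk-docker-opts.sh
ip addr show flannel.1           # flannel's vxlan interface
ip addr show docker0             # docker0 should sit inside the flannel subnet
# cross-node test: ping another node's docker0 address from this node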

Deploying the master components

The apiserver component

Generating certificates

mkdir /opt/kubernetes/{bin,cfg,ssl} -p
cd /opt/kubernetes/ssl

# CA certificate
cat >ca-config.json<<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat >ca-csr.json<<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# apiserver certificate: hosts must include every IP/name used to reach the apiserver
# (10.0.0.1 is the first address in the service CIDR)
cat>server-csr.json<<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.10.144",
    "192.168.10.164",
    "192.168.10.165",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

cat >kube-proxy-csr.json<<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Deploying the components

# download kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin

# generate the apiserver bootstrap token
cd /opt/kubernetes/cfg/
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
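
# token.csv columns: token, user name, user uid, group;
# the kubelet-bootstrap user must match the ClusterRoleBinding created later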

# apiserver config file
cat >/opt/kubernetes/cfg/kube-apiserver <<EOF
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.10.144:2379,https://192.168.10.145:2379,https://192.168.10.146:2379 \
--bind-address=192.168.10.144 \
--secure-port=6443 \
--advertise-address=192.168.10.144 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

# apiserver systemd unit
cat >/usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

ss -tunlp | grep apiserver
tcp LISTEN 0 128 192.168.10.144:6443 *:* users:(("kube-apiserver",pid=15875,fd=5))
tcp LISTEN 0 128 127.0.0.1:8080 *:* users:(("kube-apiserver",pid=15875,fd=3))
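
# With the insecure local port 8080 listening (see above), a minimal liveness check is:
curl http://127.0.0.1:8080/healthz
# expected response: ok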

The kube-scheduler component

# configuration parameters
cat >/opt/kubernetes/cfg/kube-scheduler<<EOF
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
EOF

cat >/usr/lib/systemd/system/kube-scheduler.service<<'EOF'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

# check component status (kubectl was copied to /opt/kubernetes/bin earlier;
# put it on PATH first, e.g. cp /opt/kubernetes/bin/kubectl /usr/local/bin/)
kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Healthy     ok
etcd-1               Healthy     {"health":"true"}
etcd-0               Healthy     {"health":"true"}
etcd-2               Healthy     {"health":"true"}

controller-manager reports Unhealthy only because it has not been deployed yet; the next section takes care of it.

The kube-controller-manager component

# prepare the config file
cat >/opt/kubernetes/cfg/kube-controller-manager <<EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
EOF

# systemd unit
cat >/usr/lib/systemd/system/kube-controller-manager.service <<'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

Deploying the node components

Permission setup (done on the master)

1. Bind the kubelet-bootstrap user to the system:node-bootstrapper role
(the token created earlier is tied to the kubelet-bootstrap user, and this binding grants it its permissions)
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

2. Create the kubeconfig files
used to grant the kubelet and kube-proxy on the nodes their credentials
cd /opt/kubernetes/ssl
BOOTSTRAP_TOKEN=d706b11bdca1a7ef457ed7913d67625c   # must match the token in /opt/kubernetes/cfg/token.csv
KUBE_APISERVER="https://192.168.10.144:6443"

# cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

# client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

# context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

ls *.kubeconfig
bootstrap.kubeconfig kube-proxy.kubeconfig

# distribute the kubeconfig files to the worker nodes
scp *.kubeconfig k8s-node01:/opt/kubernetes/cfg
scp *.kubeconfig k8s-node02:/opt/kubernetes/cfg

scp /opt/kubernetes/bin/kubelet /opt/kubernetes/bin/kube-proxy k8s-node01:/opt/kubernetes/bin
scp /opt/kubernetes/bin/kubelet /opt/kubernetes/bin/kube-proxy k8s-node02:/opt/kubernetes/bin
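
Optionally, confirm the generated kubeconfig embeds the intended server address and CA before moving on:

kubectl config view --kubeconfig=bootstrap.kubeconfig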

Deploying kubelet

Taking the first node, 192.168.10.145, as the example

# prepare the config file (kubelet startup parameters)
cat >/opt/kubernetes/cfg/kubelet<<EOF
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.10.145 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

cat >/opt/kubernetes/cfg/kubelet.config<<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

Managing kubelet with systemd

cat >/usr/lib/systemd/system/kubelet.service <<'EOF'
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
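
# kubelet should now be active; on first start it uses bootstrap.kubeconfig to
# submit a certificate signing request to the apiserver (approved in the next steps)
systemctl status kubelet --no-pager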

Deploying kube-proxy

cat >/opt/kubernetes/cfg/kube-proxy <<EOF
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.10.145 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

cat >/usr/lib/systemd/system/kube-proxy.service <<'EOF'
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

At this point, running kubectl get csr on the master
shows the certificate signing requests from the nodes waiting to join.

Approve a request with kubectl certificate approve <csr-name> and the node is added to the cluster.
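
A sketch of the approval flow on the master (the CSR name is illustrative; copy the real one from the kubectl get csr output):

kubectl get csr
kubectl certificate approve node-csr-<name>

# once the kubelet receives its signed certificate, the node registers itself
kubectl get nodes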
