Akemi

Deploying k8s v1.28 on Ubuntu

2025/09/30

This walkthrough assumes internet access; in an offline environment you need to prepare the matching .deb packages for each component and install them manually (a sketch follows below).
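For reference, a rough sketch of fetching the debs on a networked host (the package list and 1.28.1-00 version match the install steps further down; apt-get download does not resolve dependencies for you):

# On a networked Ubuntu host with the same apt sources configured
mkdir -p ~/k8s-debs && cd ~/k8s-debs
# apt-get download fetches only the named packages; dependencies such as
# cri-tools, kubernetes-cni and conntrack must be listed explicitly too
apt-get download kubelet=1.28.1-00 kubeadm=1.28.1-00 kubectl=1.28.1-00 \
  docker-ce docker-ce-cli containerd.io
# Copy the directory to the offline host, then:
sudo dpkg -i ./*.deb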

ubuntu 20.04 LTS
docker 28.x
containerd 1.7
k8s v1.28
calico 3.28.4

Environment initialization and cluster deployment

# Environment initialization: disable swap
sudo swapoff -a
# comment out the swap entry in /etc/fstab so it stays off after a reboot
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab

sudo modprobe br_netfilter
sudo tee /etc/sysctl.d/k8s.conf <<-'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl -p /etc/sysctl.d/k8s.conf
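Quick check that the settings are live (all three should print 1):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward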

# Kernel modules to load at boot
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF
sudo apt -y install ipvsadm
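modules-load.d only takes effect at boot, so load the modules now as well:

for m in overlay br_netfilter ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do sudo modprobe $m; done
lsmod | grep -e ip_vs -e br_netfilter   # verify they are loaded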

sudo apt -y install chrony
sudo systemctl restart chrony
sudo systemctl enable chrony
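Verify that time sync is working:

chronyc tracking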

# Install docker and containerd.io
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add the Docker stable repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
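Sanity-check that both runtimes are installed and running:

sudo systemctl enable --now docker
docker --version
containerd --version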

# Configure registry mirrors for docker and containerd (not covered here),
# see https://akemi.zj.cn/2024/12/08/Image-Repository/

# Install the NVIDIA driver (not covered here),
# see https://akemi.zj.cn/2025/06/30/Nvidia-driver/

# Use the NVIDIA driver (not covered here),
# see https://akemi.zj.cn/2025/06/30/Helm-nvidia-pulgin/

# Install the command-line tools (kubelet, kubeadm, kubectl)
sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt update
sudo apt-get install -y kubelet=1.28.1-00 kubeadm=1.28.1-00 kubectl=1.28.1-00
sudo apt-mark hold kubelet kubeadm kubectl
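Confirm the pinned versions were installed:

kubeadm version -o short
kubectl version --client
kubelet --version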

# Configure containerd
containerd config default | sudo tee /etc/containerd/config.toml
# Edit the config and change three settings:
sudo vim /etc/containerd/config.toml
SystemdCgroup = true   # use the systemd cgroup driver, matching kubelet
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"   # kubeadm v1.28 defaults to pause:3.9; matching it avoids a kubeadm warning
config_path = "/etc/containerd/certs.d"   # per-registry mirror configs
sudo systemctl restart containerd.service
sudo systemctl enable containerd.service

# Create crictl.yaml so crictl uses containerd when creating pods and containers
sudo tee /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

sudo crictl config runtime-endpoint unix:///run/containerd/containerd.sock
sudo systemctl restart containerd
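Sanity-check that crictl reaches containerd and the cgroup driver change took effect, then optionally pre-pull the control-plane images so kubeadm init goes faster (repository and version match kubeadm.yaml below):

sudo crictl info | grep -i systemdcgroup   # should print "SystemdCgroup": true
sudo kubeadm config images pull \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.0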

# Generate a default kubeadm config file
kubeadm config print init-defaults > kubeadm.yaml

Fields to change:
advertiseAddress: 172.25.224.96   # this node's IP
name: master                      # this node's hostname
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: 1.28.0
networking:
  podSubnet: 10.10.0.0/16         # must match the Calico IP pool below
Append two extra documents:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
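Pieced together, the changed parts of kubeadm.yaml look roughly like this (kubeadm v1.28 uses the v1beta3 API; every field not shown keeps its default):

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.25.224.96
nodeRegistration:
  name: master
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: 1.28.0
networking:
  podSubnet: 10.10.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd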

# Deploy
sudo kubeadm init --config=kubeadm.yaml
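Once init finishes, set up kubectl access as kubeadm's closing output instructs:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes   # NotReady is expected until Calico is installed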

Installing Calico

According to the official docs, the newest Calico release that supports k8s 1.28 is 3.28.4:

https://docs.tigera.io/calico/3.28/getting-started/kubernetes/requirements

# Install the Tigera operator
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.4/manifests/tigera-operator.yaml
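Wait for the operator to become available before applying the custom resources:

kubectl -n tigera-operator wait --for=condition=Available deploy/tigera-operator --timeout=120s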

# Download the custom-resources manifest
wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.4/manifests/custom-resources.yaml

# Edit the manifest to use CrossSubnet mode
...
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 10.10.0.0/16              # same subnet as podSubnet in kubeadm.yaml
      encapsulation: IPIPCrossSubnet  # CrossSubnet is the most efficient mode
      natOutgoing: Enabled
      nodeSelector: all()

kubectl apply -f custom-resources.yaml
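Then watch the rollout; the operator also reports component health through the tigerastatus resource:

watch kubectl get pods -n calico-system
kubectl get tigerastatus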

For the differences between the Calico modes, see:
https://akemi.zj.cn/2025/09/10/Kubespray-install/#/ansible%E5%8F%98%E9%87%8F-calico%E7%9B%B8%E5%85%B3

kubectl get pods -A
NAMESPACE          NAME                                      READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-6f9d56cf8f-66k5b         1/1     Running   0          20m
calico-apiserver   calico-apiserver-6f9d56cf8f-7v7nq         1/1     Running   0          20m
calico-system      calico-kube-controllers-87b94b69f-ktz29   1/1     Running   0          20m
calico-system      calico-node-pp87j                         1/1     Running   0          20m
calico-system      calico-typha-596f45fd95-m5bb8             1/1     Running   0          20m
calico-system      csi-node-driver-bm68h                     2/2     Running   0          20m
kube-system        coredns-6554b8b87f-kn2mw                  1/1     Running   0          73m
kube-system        coredns-6554b8b87f-s8lxt                  1/1     Running   0          73m
kube-system        etcd-master                               1/1     Running   0          73m
kube-system        kube-apiserver-master                     1/1     Running   0          73m
kube-system        kube-controller-manager-master            1/1     Running   0          73m
kube-system        kube-proxy-t7g4n                          1/1     Running   0          73m
kube-system        kube-scheduler-master                     1/1     Running   0          73m
tigera-operator    tigera-operator-55f4b64fd7-6bbjz          1/1     Running   0          39m
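On a single-node cluster like this one, also remove the control-plane taint so ordinary pods can be scheduled:

kubectl taint nodes master node-role.kubernetes.io/control-plane:NoSchedule-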
