Akemi

Deploying Redis 5 with Helm and Using local-path-provisioner

2024/12/04

Searching for and Downloading the Chart

helm search repo redis
Most of the results are Redis 7 or Redis 4 charts.

Listing every chart version with
helm search repo redis -l
I picked an older chart version that ships Redis 6, and will modify it as the base.

helm pull bitnami/redis --version 16.13.2
mv redis-16.13.2.tgz ~
cd ~
tar -xf redis-16.13.2.tgz
mv redis helm-redis5
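
Before pulling, helm show chart can print a chart version's metadata to confirm which app version it ships; a quick check (the grep filter is just for brevity):

helm show chart bitnami/redis --version 16.13.2 | grep -iE '^(version|appVersion)'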

Inspecting the Chart and Its YAML Structure

cd helm-redis5/
In Chart.yaml, change appVersion to 5.0.5.

In values.yaml:
The Redis values file is divided into several sections: global, master, replica, and sentinel,
covering global settings, the master node, the replication nodes, and the sentinel nodes.
Different deployment modes require changes in different sections.

This time I am deploying only a standalone Redis, so I change just the following:
The default image tag is 6.2.7; change it to 5.0.5.
The default service type is ClusterIP on port 6379 (left unchanged).
The default replica.replicaCount is 3; change it to 0, since I don't need replication.
No tolerations are configured by default, and my k8s is a single (master) node, so I add
a toleration under master.tolerations:
tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"

By default the chart requests an 8Gi volume, which makes this a good chance to learn the
local-path provisioner. In the global section set:
storageClass: "local-path"
From now on, Helm deployments can simply reference this storageClass directly; the
provisioner setup follows below.
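
Equivalently, the same changes can live in a separate override file passed at install time instead of editing values.yaml in place. A minimal sketch, assuming the bitnami/redis 16.x values layout (the file name values-redis5.yaml is my own):

# values-redis5.yaml -- overrides for a standalone Redis 5 on a single-node cluster
global:
  storageClass: "local-path"
image:
  tag: 5.0.5
replica:
  replicaCount: 0          # standalone: no replication
master:
  tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Exists"
      effect: "NoSchedule"

It would then be installed with helm install redis5 ./ -f values-redis5.yaml.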

Creating the local-path-provisioner

I added a toleration for the master taint here, so the provisioner can be scheduled on my single master node.
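
To confirm which taint the control-plane node actually carries (newer kubeadm clusters taint it with node-role.kubernetes.io/control-plane instead of node-role.kubernetes.io/master), a quick check:

kubectl describe nodes | grep -i taint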

I'll just use /data as the storage directory. (The heredoc delimiter below is quoted so the shell doesn't expand the $VOL_DIR references inside the embedded setup/teardown scripts.)
cd ~
sudo tee local-path-provisioner-deploy.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: local-path-provisioner-role
  namespace: local-path-storage
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims", "configmaps", "pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: local-path-provisioner-bind
  namespace: local-path-storage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      # Tolerate the master taint so the provisioner can run on a single-node cluster
      tolerations:
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: rancher/local-path-provisioner:v0.0.29
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: CONFIG_MOUNT_PATH
              value: /etc/config/
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer   # provision only once a pod actually uses the PVC
reclaimPolicy: Delete                     # the backing directory is removed with the PVC

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap": [
        {
          "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": ["/data"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
    #!/bin/sh
    set -eu
    rm -rf "$VOL_DIR"
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      priorityClassName: system-node-critical
      tolerations:
        - key: node.kubernetes.io/disk-pressure
          operator: Exists
          effect: NoSchedule
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"
      containers:
        - name: helper-pod
          image: busybox
          imagePullPolicy: IfNotPresent
EOF
kubectl apply -f local-path-provisioner-deploy.yaml
kubectl get pods -n local-path-storage
NAME                                      READY   STATUS    RESTARTS   AGE
local-path-provisioner-849b996c68-8vwk4   1/1     Running   0          6s

The StorageClass was part of the manifest just applied, so it exists as well:
kubectl get sc
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  33s

The provisioner is running normally, and there is now a StorageClass named local-path.
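
Before pointing Redis at it, the provisioner can be exercised with a throwaway PVC (the claim name below is my own). Since the StorageClass uses WaitForFirstConsumer, the claim sits in Pending until some pod actually mounts it; that is expected, not an error:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-smoke-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc local-path-smoke-test   # Pending until a consumer pod appears
kubectl delete pvc local-path-smoke-test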

Deploying Redis

cd ~/helm-redis5/
helm install redis5 ./
kubectl get pods -w
NAME              READY   STATUS    RESTARTS       AGE
mysql8-0          1/1     Running   1 (148m ago)   161m
redis5-master-0   0/1     Running   0              9s
redis5-master-0   1/1     Running   0              20s

kubectl get pvc
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-mysql8-0                Bound    local-pv                                   8Gi        RWO                           7h23m
redis-data-redis5-master-0   Bound    pvc-87c6f460-c592-4417-8785-f2e174c3f711   8Gi        RWO            local-path     30m

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                                STORAGECLASS   REASON   AGE
local-pv                                   8Gi        RWO            Retain           Terminating   default/data-mysql8-0                                        6h54m
pvc-87c6f460-c592-4417-8785-f2e174c3f711   8Gi        RWO            Delete           Bound         default/redis-data-redis5-master-0   local-path              7m22s

kubectl get svc | grep redis
redis5-headless   ClusterIP   None             <none>   6379/TCP   3m8s
redis5-master     ClusterIP   10.103.71.155    <none>   6379/TCP   3m8s
redis5-replicas   ClusterIP   10.107.136.246   <none>   6379/TCP   3m8s

The pod is running and the PV/PVC pair bound successfully, so the local-path provisioner works end to end.
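
As a side check, the provisioned volume should appear as a directory under /data on the node; in recent local-path-provisioner releases the directory name follows the pattern pvc-<uid>_<namespace>_<pvc-name> (my assumption for v0.0.29):

ls /data
# expect something like: pvc-87c6f460-c592-4417-8785-f2e174c3f711_default_redis-data-redis5-master-0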

Connecting to Redis and Testing

Fetch the initial password that the chart generated into its Secret:
export REDIS_PASSWORD=$(kubectl get secret --namespace default redis5 -o jsonpath="{.data.redis-password}" | base64 -d)

Create a client pod for connecting:
kubectl run --namespace default redis-client --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image docker.io/bitnami/redis:5.0.5 --command -- sleep infinity
Add a master-taint toleration to the pod:
kubectl patch pod redis-client -n default --type='json' -p='[{"op": "add", "path": "/spec/tolerations", "value": [{"key": "node-role.kubernetes.io/master", "operator": "Exists", "effect": "NoSchedule"}]}]'
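
Alternatively, the toleration could be supplied at creation time through kubectl run's --overrides flag, which merges an inline JSON fragment into the generated pod, avoiding the separate patch; a sketch combining the same flags as above (untested on my cluster):

kubectl run redis-client --namespace default --restart='Never' \
  --env REDIS_PASSWORD=$REDIS_PASSWORD \
  --image docker.io/bitnami/redis:5.0.5 \
  --overrides='{"spec":{"tolerations":[{"key":"node-role.kubernetes.io/master","operator":"Exists","effect":"NoSchedule"}]}}' \
  --command -- sleep infinity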

kubectl get pods -w
NAME              READY   STATUS              RESTARTS   AGE
redis5-master-0   1/1     Running             0          12m
redis-client      0/1     Pending             0          0s
redis-client      0/1     Pending             0          0s
redis-client      0/1     Pending             0          9s
redis-client      0/1     Pending             0          9s
redis-client      0/1     ContainerCreating   0          9s
redis-client      0/1     ContainerCreating   0          10s
redis-client      1/1     Running             0          10s

The redis-client pod is now running normally.

Exec into the client:
kubectl exec --tty -i redis-client --namespace default -- bash

Then connect to Redis:
REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h redis5-master
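
Once connected, a couple of basic commands make a quick smoke test (the key and value are arbitrary examples):
redis5-master:6379> set hello world
OK
redis5-master:6379> get hello
"world"
Running INFO server should also report redis_version:5.0.5, confirming the downgraded image is the one actually serving.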

# To access Redis from outside the cluster:
kubectl port-forward --namespace default svc/redis5-master 6379:6379 &
REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h 127.0.0.1 -p 6379

This forwards the Service's port 6379 to port 6379 on the local machine. Note that port-forward keeps running in the foreground, so it is backgrounded with a single &; chaining with && would never reach the second command.

# Location of the Redis configuration file
After exec-ing into redis-client:
/opt/bitnami/redis/etc/redis-default.conf
