Deploying a ZooKeeper Cluster on Kubernetes

ZooKeeper is a stateful service. It needs to be backed by external storage so that its data lives on a persistent data volume; otherwise, when the container dies, the data is gone with it.

Preparation

Prepare the YAML files for ZooKeeper.

1. Create the zk-data.yaml and zookeeper.yaml manifests

# cat zk-data.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk1
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk2
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk3
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
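
The three PVs are identical apart from their names; with hostPath storage, each is expected to land on a different node, all using /var/lib/zookeeper on the local disk. PersistentVolumes are cluster-scoped, so the manifest can be applied without a namespace flag, as in this minimal sketch:

# kubectl create -f zk-data.yaml
# kubectl get pv k8s-pv-zk1 k8s-pv-zk2 k8s-pv-zk3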

# cat zookeeper.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "xxlaila/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 3Gi
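
For reference, the start-zookeeper flags above translate into a zoo.cfg along these lines — a sketch assuming the wrapper script behaves like the one in the official Kubernetes ZooKeeper tutorial image, which writes the generated file to /opt/zookeeper/conf/zoo.cfg:

clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInterval=12
server.1=zk-0.zk-hs.kube-dev.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.kube-dev.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.kube-dev.svc.cluster.local:2888:3888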

2. Deploy

# kubectl create -f zookeeper.yaml -n kube-dev
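
Note that the volumeClaimTemplates request the managed-nfs-storage class, so the PVCs end up bound by the NFS dynamic provisioner rather than by the hostPath PVs from zk-data.yaml (which carry the "anything" class); the kubectl get pv output in step 4 below confirms this. To watch the three pods start (they are created in parallel thanks to podManagementPolicy: Parallel):

# kubectl get pods -l app=zk -n kube-dev -w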

3. Check the deployment

# kubectl get pod -o wide -n kube-dev
NAME   READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
zk-0   1/1     Running   0          6m13s   10.254.62.4   172.21.17.15    <none>           <none>
zk-1   1/1     Running   0          6m12s   10.254.21.4   172.21.16.96    <none>           <none>
zk-2   1/1     Running   0          6m12s   10.254.96.4   172.21.16.193   <none>           <none>
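
Each replica should also have been assigned a unique myid under the data directory configured above. A quick check (assuming the --data_dir path from the manifest) should print 1, 2, and 3 for zk-0, zk-1, and zk-2, matching the server.N entries shown in step 7:

# for i in 0 1 2; do echo "zk-$i:"; kubectl exec zk-$i -n kube-dev -- cat /var/lib/zookeeper/data/myid; done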

4. Check the persistent volumes

# kubectl get pv -o wide -n kube-dev
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS          REASON   AGE
pvc-d1cb6a1c-407d-11e9-9436-fa163e14c5bd   3Gi        RWO            Delete           Bound    kube-dev/datadir-zk-0   managed-nfs-storage            6m18s
pvc-d20a95ec-407d-11e9-9436-fa163e14c5bd   3Gi        RWO            Delete           Bound    kube-dev/datadir-zk-1   managed-nfs-storage            6m18s
pvc-d23577af-407d-11e9-9436-fa163e14c5bd   3Gi        RWO            Delete           Bound    kube-dev/datadir-zk-2   managed-nfs-storage            6m23s

5. Check the PVCs

# kubectl get pvc -o wide -n kube-dev
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
datadir-zk-0   Bound    pvc-d1cb6a1c-407d-11e9-9436-fa163e14c5bd   3Gi        RWO            managed-nfs-storage   6m38s
datadir-zk-1   Bound    pvc-d20a95ec-407d-11e9-9436-fa163e14c5bd   3Gi        RWO            managed-nfs-storage   6m37s
datadir-zk-2   Bound    pvc-d23577af-407d-11e9-9436-fa163e14c5bd   3Gi        RWO            managed-nfs-storage   6m37s

6. Verify that the cluster is working

# for i in 0 1 2; do kubectl exec zk-$i -n kube-dev -- zkServer.sh status; done
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: follower
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: leader
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: follower
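
As a further sanity check, write a znode on one member and read it back from another — a sketch assuming zkCli.sh is on the PATH in this image, as it is in the stock ZooKeeper 3.4.10 distribution:

# kubectl exec zk-0 -n kube-dev -- zkCli.sh create /test hello
# kubectl exec zk-1 -n kube-dev -- zkCli.sh get /test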

7. Cluster access addresses

server.1=zk-0.zk-hs.kube-dev.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.kube-dev.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.kube-dev.svc.cluster.local:2888:3888
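
These server.N entries are the quorum and leader-election endpoints (ports 2888/3888), resolved through the headless service zk-hs. Applications should instead connect as clients on port 2181, either through the zk-cs service or with the full ensemble connection string:

zk-cs.kube-dev.svc.cluster.local:2181

zk-0.zk-hs.kube-dev.svc.cluster.local:2181,zk-1.zk-hs.kube-dev.svc.cluster.local:2181,zk-2.zk-hs.kube-dev.svc.cluster.local:2181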