Installing Kafka with Helm

This article records how to deploy Kafka with Helm. Installing Kubernetes and Helm themselves is out of scope; please refer to their documentation.

Deploying Kafka with Helm

This deployment uses NFS as the persistent storage backend, so NFS is set up first.

Configuring NFS

(1) Install NFS
yum install -y nfs-utils
yum -y install rpcbind
(2) Configure the NFS server: create the data directory
mkdir -p /nfs/k8s/
chmod 755 /nfs/k8s
(3) Add the directory to the NFS exports file
vim /etc/exports
/nfs/k8s/ *(async,insecure,no_root_squash,no_subtree_check,rw)
(4) Start the NFS service
/bin/systemctl start nfs.service
Check the export list:
showmount -e
Export list for k8s-test2-master:
/nfs/k8s *
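The export options matter once Kubernetes workloads start writing to this share. The same line as above, annotated (comments only, settings unchanged):

```
/nfs/k8s/ *(async,insecure,no_root_squash,no_subtree_check,rw)
# *              - any client may mount; in production, restrict this to the node subnet
# rw             - pods need read-write access
# no_root_squash - containers frequently run as root; without this option their
#                  writes would be squashed to an unprivileged user and could fail
# async          - faster writes, at the cost of durability if the server crashes
```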

Configuring the PV

PVs are provisioned dynamically through a StorageClass: once the StorageClass object exists, setting storageClass in the Kafka chart's values file to its name is enough for PVs and PVCs to be created on NFS automatically.
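For context, dynamic provisioning works because the chart's StatefulSet declares volumeClaimTemplates; each PVC generated from the template requests the class named in values.yaml. A sketch of roughly what the rendered template contains (field values here are illustrative, not copied from the chart):

```yaml
volumeClaimTemplates:
  - metadata:
      name: datadir          # yields PVCs named datadir-<pod>, as seen later in this article
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: kafka-nfs-storage
      resources:
        requests:
          storage: 1Gi
```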

(1) Create the provisioner
The provisioner is NFS's automatic provisioning program: given an already-configured NFS server, it creates persistent volumes (PVs) on demand.
cat nfs-client.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner-new
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: default-admin
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.193.1
            - name: NFS_PATH
              value: /nfs/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.193.1
            path: /nfs/k8s
Adjust NFS_SERVER, NFS_PATH, and the nfs volume entries for your environment.
Create it:
kubectl create -f nfs-client.yaml
deployment.extensions "nfs-client-provisioner-new" created
(2) Create the ServiceAccount
This grants the provisioner the permissions it needs to create and delete volumes on NFS.
cat nfs-client-sa.yaml
# Create a ServiceAccount and a ClusterRoleBinding to the existing cluster-admin
# ClusterRole - a quick way to grant the necessary cluster permissions
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: gmo
  name: default-admin
  namespace: default

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: default-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: default-admin
    namespace: default
The ServiceAccount name must match serviceAccountName in nfs-client.yaml.
Create it:
kubectl create -f nfs-client-sa.yaml
(3) Create the StorageClass object
vim nfs-client-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kafka-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's PROVISIONER_NAME env
The provisioner field must match PROVISIONER_NAME in nfs-client.yaml.
Create it:
kubectl create -f nfs-client-class.yaml
storageclass.storage.k8s.io "kafka-nfs-storage" created
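Before installing the chart, it is worth confirming that dynamic provisioning actually works; a throwaway PVC is enough (the name test-claim is an illustrative choice):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: kafka-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

After kubectl create -f test-claim.yaml, kubectl get pvc test-claim should show the claim Bound and a matching directory should appear under /nfs/k8s; clean up with kubectl delete pvc test-claim.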

Searching for the Kafka chart

helm search kafka
NAME CHART VERSION APP VERSION DESCRIPTION
incubator/kafka 0.11.1 4.1.2 Apache Kafka is publish-subscribe messaging ret...
incubator/schema-registry 1.0.2 4.1.1 Schema Registry provides a serving layer for yo...
stable/schema-registry-ui 0.1.0 v0.9.4 This is a web tool for the confluentinc/schema-...

Fetching the Kafka chart

helm fetch incubator/kafka
Unpack it:
tar -xvzf kafka-0.11.1.tgz

Linting the chart

helm lint kafka
helm lint kafka
2018/12/11 10:05:40 warning: destination for resources is a table. Ignoring non-table value <nil>
==> Linting kafka
Lint OK

1 chart(s) linted, no failures

Modifying the basic configuration (mainly the persistent volume settings)

(1) Change the ZooKeeper image (charts/zookeeper/templates/statefulset.yaml)
Under - name: zookeeper, set image: to "registry.cn-hangzhou.aliyuncs.com/appstore/k8szk:v2"
(2) Add persistent storage for Kafka with kafka-nfs-storage (values.yaml)
persistence:
  enabled: true
  size: "1Gi"
  mountPath: "/opt/kafka/data"
  storageClass: "kafka-nfs-storage"

(3) Add persistent storage for ZooKeeper (values.yaml)
zookeeper:
  enabled: true
  resources: ~
  env:
    ZK_HEAP_SIZE: "1G"
  persistence:
    enabled: true
    size: "1Gi"
    storageClass: "kafka-nfs-storage"
(4) Set ZooKeeper to persistent
persistence:
  enabled: true
  ## The amount of PV storage allocated to each Zookeeper pod in the statefulset
  size: "1Gi"

Modifying the Kafka configuration

vim values.yaml
Add settings under configurationOverrides:, for example:
configurationOverrides:
  "offsets.topic.replication.factor": 3
  "log.cleaner.enable": true
  "log.cleanup.policy": delete
  "delete.topic.enable": true
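For reference, the chart (which runs the confluentinc/cp-kafka image) passes each override to the broker as a KAFKA_-prefixed environment variable, which the image then writes into kafka.properties; that is why the keys later appear verbatim in the file. The naming convention can be sketched as follows (an illustration of the rule, not code taken from the chart):

```shell
# Map a configurationOverrides key to the env var the cp-kafka image expects:
# replace dots with underscores, uppercase, then prefix with KAFKA_.
key="offsets.topic.replication.factor"
env_name="KAFKA_$(echo "$key" | tr '.' '_' | tr '[:lower:]' '[:upper:]')"
echo "$env_name"   # KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
```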

Installing Kafka

helm install ./kafka --name kafka-dengxs --namespace kafka
(1) Check the PVs
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-b1bae86f-fde5-11e8-b22c-6c92bf830b08 1Gi RWX Delete Bound kafka/data-kafka-dengxs-zookeeper-0 kafka-nfs-storage 5m
pvc-b1ca2e70-fde5-11e8-b22c-6c92bf830b08 1Gi RWX Delete Bound kafka/datadir-kafka-dengxs-0 kafka-nfs-storage 5m
pvc-c0dfd068-fde5-11e8-b22c-6c92bf830b08 1Gi RWX Delete Bound kafka/data-kafka-dengxs-zookeeper-1 kafka-nfs-storage 4m
pvc-d3809366-fde5-11e8-b22c-6c92bf830b08 1Gi RWX Delete Bound kafka/data-kafka-dengxs-zookeeper-2 kafka-nfs-storage 4m
pvc-d51a60d9-fdbf-11e8-b22c-6c92bf830b08 300M RWX Delete Bound default/helm-grafana-data default 4h
pvc-e6203d5d-fde5-11e8-b22c-6c92bf830b08 1Gi RWX Delete Bound kafka/datadir-kafka-dengxs-1 kafka-nfs-storage 4m
pvc-fa137df4-fde5-11e8-b22c-6c92bf830b08 1Gi RWX Delete Bound kafka/datadir-kafka-dengxs-2 kafka-nfs-storage 3m
(2) Check the PVCs
kubectl get pvc -n kafka
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-kafka-dengxs-zookeeper-0 Bound pvc-b1bae86f-fde5-11e8-b22c-6c92bf830b08 1Gi RWX kafka-nfs-storage 6m
data-kafka-dengxs-zookeeper-1 Bound pvc-c0dfd068-fde5-11e8-b22c-6c92bf830b08 1Gi RWX kafka-nfs-storage 5m
data-kafka-dengxs-zookeeper-2 Bound pvc-d3809366-fde5-11e8-b22c-6c92bf830b08 1Gi RWX kafka-nfs-storage 5m
datadir-kafka-dengxs-0 Bound pvc-b1ca2e70-fde5-11e8-b22c-6c92bf830b08 1Gi RWX kafka-nfs-storage 6m
datadir-kafka-dengxs-1 Bound pvc-e6203d5d-fde5-11e8-b22c-6c92bf830b08 1Gi RWX kafka-nfs-storage 4m
datadir-kafka-dengxs-2 Bound pvc-fa137df4-fde5-11e8-b22c-6c92bf830b08 1Gi RWX kafka-nfs-storage 4m
(3) Check the persisted Kafka data on the NFS share
ll /nfs/k8s

Checking the pods

kubectl get pods --namespace kafka -o wide
NAME READY STATUS RESTARTS AGE IP NODE
kafka-dengxs-0 1/1 Running 0 2h 10.233.105.219 k8s-test2-node1
kafka-dengxs-1 1/1 Running 0 2h 10.233.105.242 k8s-test2-node1
kafka-dengxs-2 1/1 Running 0 2h 10.233.105.204 k8s-test2-node1
kafka-dengxs-zookeeper-0 1/1 Running 0 2h 10.233.105.209 k8s-test2-node1
kafka-dengxs-zookeeper-1 1/1 Running 0 2h 10.233.105.228 k8s-test2-node1
kafka-dengxs-zookeeper-2 1/1 Running 0 2h 10.233.105.244 k8s-test2-node1

Verifying the configuration changes

# kubectl -n kafka exec -ti kafka-dengxs-0 -- bash
cat /etc/kafka/kafka.properties
dengxs.zookeeper.service.host=10.233.12.95
heap.opts=-Xmx1G -Xms1G
log.cleaner.enable=true
dengxs.zookeeper.port.2181.tcp.addr=10.233.12.95
dengxs.port.9092.tcp=tcp://10.233.44.224:9092
advertised.listeners=PLAINTEXT://10.233.105.215:9092
zookeeper.connect=kafka-dengxs-zookeeper:2181
log.cleanup.policy=delete
dengxs.zookeeper.port.2181.tcp=tcp://10.233.12.95:2181
dengxs.port=tcp://10.233.44.224:9092
jmx.port=5555
dengxs.zookeeper.port.2181.tcp.port=2181
dengxs.port.9092.tcp.addr=10.233.44.224
dengxs.service.port=9092
dengxs.zookeeper.service.port=2181
dengxs.service.host=10.233.44.224
dengxs.port.9092.tcp.port=9092
delete.topic.enable=true
dengxs.service.port.broker=9092
dengxs.port.9092.tcp.proto=tcp
broker.id=0
offsets.topic.replication.factor=3
dengxs.zookeeper.port.2181.tcp.proto=tcp
dengxs.zookeeper.service.port.client=2181
log.dirs=/opt/kafka/data/logs
listeners=PLAINTEXT://0.0.0.0:9092
dengxs.zookeeper.port=tcp://10.233.12.95:2181
The overridden settings have been applied successfully.

Verifying the service

(1) Create a topic
kubectl -n kafka exec kafka-dengxs-0 -- /usr/bin/kafka-topics --zookeeper kafka-dengxs-zookeeper:2181 --create --replication-factor 3 --partitions 3 --topic dengxs
Created topic "dengxs".
(2) List topics
kubectl -n kafka exec kafka-dengxs-0 -- /usr/bin/kafka-topics --zookeeper kafka-dengxs-zookeeper:2181 --list
__confluent.support.metrics
dengxs
(3) Describe the topic
kubectl -n kafka exec kafka-dengxs-0 -- /usr/bin/kafka-topics --zookeeper kafka-dengxs-zookeeper:2181 --describe --topic dengxs
(4) Consume messages
kubectl -n kafka exec -ti kafka-dengxs-1 -- /usr/bin/kafka-console-consumer --bootstrap-server kafka-dengxs:9092 --topic dengxs --from-beginning
(5) Produce messages
kubectl -n kafka exec -ti kafka-dengxs-2 -- /usr/bin/kafka-console-producer --broker-list kafka-dengxs-headless:9092 --topic dengxs