Software requirements (all packages are uploaded to the /usr/local/src directory):
jdk-8u101-linux-x64.tar.gz
kafka.2.11-0.8.22.tar.gz
zookeeper-3.4.9.tar.gz
kafka-manager-1.3.0.7.zip
* kafka-manager is obtained as a compiled package from a Scala (sbt) build, so it needs to be built in advance; see the separate build reference.
Hardware requirements, four hosts:
192.168.100.100: kafka-manager, scala
192.168.100.101: kafka, zookeeper
192.168.100.102: kafka, zookeeper
192.168.100.103: kafka, zookeeper
Setting up the environment:
Install and configure the packages on 192.168.100.101, 192.168.100.102 and 192.168.100.103 as follows:
[root@ zookeeper-kafka-scala]# cd /usr/local/src
[root@ zookeeper-kafka-scala]# ll
total 310060
-rw-r--r--  1 root root 181352138 Oct 10 18:31 jdk-8u101-linux-x64.tar.gz
-rw-r--r--  1 root root  16038031 Oct 11 14:25 kafka.2.11-0.8.22.tar.gz
-rw-r--r--  1 root root  68699247 Oct 10 18:00 kafka-manager-1.3.0.7.zip
-rw-r--r--  1 root root  28678231 Oct 10 16:10 scala-2.11.8.tgz
-rw-r--r--  1 root root  22724574 Oct 10 16:57 zookeeper-3.4.9.tar.gz
[root@ zookeeper-kafka-scala]# tar -xf jdk-8u101-linux-x64.tar.gz
[root@ zookeeper-kafka-scala]# tar -xf kafka.2.11-0.8.22.tar.gz
[root@ zookeeper-kafka-scala]# tar -xf zookeeper-3.4.9.tar.gz
[root@ zookeeper-kafka-scala]# ll
total 310072
drwxr-xr-x  8 uucp  143       4096 Jun 22 18:13 jdk1.8.0_101
-rw-r--r--  1 root root 181352138 Oct 10 18:31 jdk-8u101-linux-x64.tar.gz
drwxr-xr-x  6 root root      4096 Feb 24  2016 kafka
-rw-r--r--  1 root root  16038031 Oct 11 14:25 kafka.2.11-0.8.22.tar.gz
-rw-r--r--  1 root root  68699247 Oct 10 18:00 kafka-manager-1.3.0.7.zip
-rw-r--r--  1 root root  28678231 Oct 10 16:10 scala-2.11.8.tgz
drwxr-xr-x 10 1001  1001      4096 Aug 23 15:42 zookeeper-3.4.9
-rw-r--r--  1 root root  22724574 Oct 10 16:57 zookeeper-3.4.9.tar.gz
[root@ zookeeper-kafka-scala]# mv kafka /usr/local/kafka
[root@ zookeeper-kafka-scala]# mv zookeeper-3.4.9 /usr/local/zookeeper
[root@ zookeeper-kafka-scala]# mv jdk1.8.0_101 /usr/local/java
[root@ zookeeper-kafka-scala]# echo 'export PATH=$PATH:/usr/local/java/bin/:/usr/local/kafka/bin/:/usr/local/scala/bin:/usr/local/zookeeper/bin/' >> /etc/profile
[root@ zookeeper-kafka-scala]# source /etc/profile
[root@ zookeeper-kafka-scala]# java -version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
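The PATH line above is enough for the commands used later, but the Kafka and ZooKeeper start scripts also honour JAVA_HOME when it is set, so an optional addition is a sketch like the following (it reuses the /usr/local/java location from the mv command above):

[root@ zookeeper-kafka-scala]# echo 'export JAVA_HOME=/usr/local/java' >> /etc/profile
[root@ zookeeper-kafka-scala]# source /etc/profile
[root@ zookeeper-kafka-scala]# echo $JAVA_HOME
/usr/local/java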
Now configure the ZooKeeper cluster; all three cluster machines are configured as follows:
## Edit /usr/local/zookeeper/conf/zoo.cfg so that it contains:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/usr/local/zookeeper/log
clientPort=2181
maxClientCnxns=300
server.1=192.168.100.101:2888:3888
server.2=192.168.100.102:2888:3888
server.3=192.168.100.103:2888:3888

[root@ conf]# mkdir -p /data/zookeeper
[root@ conf]# mkdir -p /usr/local/zookeeper/log
# 192.168.100.101 has ZooKeeper id 1, so on that host run:
[root@ conf]# echo 1 > /data/zookeeper/myid
# 192.168.100.102 has ZooKeeper id 2, so on that host run:
[root@ conf]# echo 2 > /data/zookeeper/myid
# 192.168.100.103 has ZooKeeper id 3, so on that host run:
[root@ conf]# echo 3 > /data/zookeeper/myid
Open the ZooKeeper ports 2181, 2888 and 3888 in the firewall on all three machines:
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 2181 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 2888 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 3888 -j ACCEPT
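Assuming these rules are added to /etc/sysconfig/iptables (the file used together with the RH-Firewall-1-INPUT chain on RHEL/CentOS), reload the firewall on each of the three machines afterwards, for example:

[root@ conf]# service iptables restart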
Start ZooKeeper; once all nodes are up, one of them is automatically elected leader:
[root@ bin]# cd /usr/local/zookeeper/bin
[root@ bin]# ./zkServer.sh start
[root@ bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
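To check all three nodes from one machine, a quick sketch using ZooKeeper's stat four-letter-word command (it assumes nc/netcat is installed); one node should report leader and the other two follower:

[root@ bin]# for ip in 192.168.100.101 192.168.100.102 192.168.100.103; do echo -n "$ip "; echo stat | nc $ip 2181 | grep Mode; done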
Now configure the Kafka cluster; all three cluster machines are configured as follows:
Edit /usr/local/kafka/config/server.properties:

## Broker id; each broker is identified by this value
## 192.168.100.101 is 1
## 192.168.100.102 is 2
## 192.168.100.103 is 3
broker.id=1
## Port the broker listens on; producers and consumers connect to this port
port=9990
## Network address of the broker; use the IP of the host Kafka runs on
host.name=192.168.100.101
#host.name=192.168.100.102
#host.name=192.168.100.103
num.network.threads=4
## Default number of partitions per topic
num.partitions=3
## ZooKeeper ensemble; Kafka stores its metadata in ZooKeeper
zookeeper.connect=192.168.100.101:2181,192.168.100.102:2181,192.168.100.103:2181
## Set this to the number of CPU cores
num.io.threads=4
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
## Directory for the log (data) files
log.dirs=/data/kafka
num.recovery.threads.per.data.dir=1
log.segment.bytes=1073741824
## Maximum log retention time in hours, plus the retention policy
log.retention.hours=24
log.retention.check.interval.ms=300000
log.cleaner.enable=true
log.cleanup.policy=delete
zookeeper.connection.timeout.ms=6000
controller.message.queue.size=10
## Default number of replicas per partition; too large a value can increase replication latency
default.replication.factor=2
## Allow topics to be deleted
delete.topic.enable=true
auto.create.topics.enable=false
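The file above is the one for 192.168.100.101. A minimal sketch for preparing the other two brokers, assuming the same file is copied over and only the per-host values change:

## on 192.168.100.102
sed -i 's/^broker.id=1/broker.id=2/; s/^host.name=192.168.100.101/host.name=192.168.100.102/' /usr/local/kafka/config/server.properties
## on 192.168.100.103
sed -i 's/^broker.id=1/broker.id=3/; s/^host.name=192.168.100.101/host.name=192.168.100.103/' /usr/local/kafka/config/server.properties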
Edit /usr/local/kafka/bin/kafka-server-start.sh:
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then #export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G" fi# 修改上面的内容变成,KAFKA_HEAP_OPTS这个根据自己的主机进行配置,测试机器是4核8G,所以采用下面的配置:# 这个配置成本机的IP -Djava.rmi.server.hostname=192.168.100.101# 开启jvm 的rmi配置可以让kafka-manager提供更详细的kafka操作统计数据if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then export KAFKA_HEAP_OPTS="-Xmx3G -Xms3G -Xmn1G" export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote.port=8999 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=192.168.100.101"fi
Start Kafka:
[root@ bin]# /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
## Check that the Kafka process is running
[root@ bin]# ps -ef | grep java
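As an extra sanity check, a sketch (assuming net-tools is installed and using the zkCli.sh shipped with ZooKeeper 3.4.9) to confirm that broker port 9990 and JMX port 8999 are listening and that all three brokers have registered in ZooKeeper:

[root@ bin]# netstat -tlnp | grep -E ':9990|:8999'
[root@ bin]# /usr/local/zookeeper/bin/zkCli.sh -server 192.168.100.101:2181 ls /brokers/ids
## the last command should list the three broker ids: [1, 2, 3]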
Open ports 8999 and 9990 in the iptables firewall:
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 8999 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 9990 -j ACCEPT
Test the Kafka cluster:
[root@ bin]# cd /usr/local/kafka/bin/
[root@ bin]# ./kafka-topics.sh --zookeeper 192.168.100.101:2181,192.168.100.102:2181,192.168.100.103:2181 --create --topic test --partitions 3 --replication-factor 2
[root@ bin]# ./kafka-topics.sh --zookeeper 192.168.100.101:2181,192.168.100.102:2181,192.168.100.103:2181 --list
test
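An end-to-end check with the console producer and consumer is also useful (a sketch; note that in this 0.8.x setup the producer talks to the brokers on port 9990, while the consumer still goes through ZooKeeper):

[root@ bin]# echo "hello kafka" | ./kafka-console-producer.sh --broker-list 192.168.100.101:9990,192.168.100.102:9990,192.168.100.103:9990 --topic test
[root@ bin]# ./kafka-console-consumer.sh --zookeeper 192.168.100.101:2181,192.168.100.102:2181,192.168.100.103:2181 --topic test --from-beginning
## the message "hello kafka" should appear; press Ctrl+C to stop the consumer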
Install kafka-manager on 192.168.100.100:
[root@ src]# cd /usr/local/src
[root@ src]# unzip kafka-manager-1.3.0.7.zip
[root@ src]# mv kafka-manager-1.3.0.7 /usr/local/kafka-manager
Edit the kafka-manager configuration file /usr/local/kafka-manager/conf/application.conf:
## Point zkhosts at the ZooKeeper ensemble
kafka-manager.zkhosts="192.168.100.101:2181,192.168.100.102:2181,192.168.100.103:2181"
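Depending on the kafka-manager build, the shipped application.conf may also contain the override kafka-manager.zkhosts=${?ZK_HOSTS}; if it does, the same value can be supplied through an environment variable instead of editing the file:

[root@ bin]# export ZK_HOSTS="192.168.100.101:2181,192.168.100.102:2181,192.168.100.103:2181"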
Start kafka-manager and open port 9000 in the firewall:
[root@ bin]# cd /usr/local/kafka-manager/bin
[root@ bin]# ./kafka-manager
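Running ./kafka-manager in the foreground like this is fine for a first test; to keep it running after logging out, one common approach is the sketch below (kafka-manager is a Play application, so the standard -Dconfig.file and -Dhttp.port JVM options apply; the log file path is just an example):

[root@ bin]# nohup ./kafka-manager -Dconfig.file=/usr/local/kafka-manager/conf/application.conf -Dhttp.port=9000 > /usr/local/kafka-manager/kafka-manager.out 2>&1 &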
Open the kafka-manager address (port 9000 on 192.168.100.100) in a browser; if the page below appears, the installation is complete.
Add a cluster: fill in the form, leave the other settings at their defaults and click Save; the newly added cluster then appears in the list.
Since everything is done through the web UI and is fairly straightforward, you can explore the other features yourself; they are not the focus here, so no further detail is given.
Kafka cluster stress testing:
[root@ bin]# cd /usr/local/kafka/bin/
## Consumer stress test: a single thread consuming 50,000,000 messages
[root@ bin]# ./kafka-consumer-perf-test.sh --zookeeper 192.168.100.101:2181,192.168.100.102:2181,192.168.100.103:2181 --messages 50000000 --topic test --threads 1
## Producer stress test: 8 threads, 5,000,000 messages, 100 bytes each, sent in batches of 10,000
[root@ bin]# ./kafka-producer-perf-test.sh --messages 5000000 --message-size 100 --batch-size 10000 --topics test --threads 8 --broker-list 192.168.100.101:9990,192.168.100.102:9990,192.168.100.103:9990
Since this environment is already in production use, no screenshots are included.