
Simulating a Kafka Cluster on Windows and Tuning Its Parameters

  Setting up the Kafka environment
  The Kafka package already bundles ZooKeeper, but I still prefer to download ZooKeeper separately.
  Download Kafka:
  https://archive.apache.org/dist/kafka/2.6.3/kafka_2.12-2.6.3.tgz
  Download ZooKeeper:
  https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.6.3/apache-zookeeper-3.6.3-bin.tar.gz
  Configuring ZooKeeper
  Extract the downloaded ZooKeeper archive.
  Make three copies and rename the folders (a single copy with three config files would also work, but I prefer three copies).
  Change 1: create a zkData folder
  Create a zkData folder in each copy's directory.
  Change 2: create a myid file
  In each copy's zkData folder, create a myid file whose content is 1, 2 and 3 respectively.
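  Changes 1 and 2 can be scripted. A minimal sketch, assuming the three renamed copies are called zk1, zk2 and zk3 (placeholder names; POSIX shell shown for brevity, the same layout can be produced with mkdir and echo in cmd):

```shell
# Create the per-instance data directory and myid file in each of the three copies.
for i in 1 2 3; do
  mkdir -p "zk$i/zkData"                   # Change 1: the zkData folder
  printf '%s\n' "$i" > "zk$i/zkData/myid"  # Change 2: myid holds 1, 2, 3
done
```

  The value written to myid must match the server.N entry for that instance in zoo.cfg.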
  Change 3: edit the configuration file
  Rename zoo_sample.cfg in the conf directory to zoo.cfg, then edit it:
  First, point dataDir at the zkData folder; note that the path must use double backslashes.
  Second, change the port numbers; since several instances run on one machine, the ports must not conflict.
  Third, append the following lines:
  server.1=localhost:2881:3881
  server.2=localhost:2882:3882
  server.3=localhost:2883:3883
  Parameter explanation
  server.A=B:C:D
  A is a number indicating which server this is.
  In cluster mode, each server keeps a myid file in its dataDir directory containing just the value of A. On startup, ZooKeeper reads this file and compares its value against the entries in zoo.cfg to work out which server it is.
  B is the server's address.
  C is the port this server's Follower uses to exchange data with the cluster's Leader.
  D is the port the servers use to talk to each other during leader election, should the current Leader fail.
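  Only dataDir and clientPort need to differ between the three copies of zoo.cfg. A sketch of just those lines, assuming the copied folders follow the naming pattern apache-zookeeper-3.6.3-bin-2 and -3 (folder names and paths are assumptions):

```properties
# instance 2
dataDir=D:\\soft-repository\\zookeeper\\apache-zookeeper-3.6.3-bin-2\\zkData
clientPort=2182

# instance 3
dataDir=D:\\soft-repository\\zookeeper\\apache-zookeeper-3.6.3-bin-3\\zkData
clientPort=2183
```

  The server.1/server.2/server.3 lines are identical in all three files.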
  The edited file (the first instance is shown):

    # The number of milliseconds of each tick
    tickTime=2000
    # The number of ticks that the initial
    # synchronization phase can take
    initLimit=10
    # The number of ticks that can pass between
    # sending a request and getting an acknowledgement
    syncLimit=5
    # the directory where the snapshot is stored.
    # do not use /tmp for storage, /tmp here is just
    # example sakes.
    dataDir=D:\\soft-repository\\zookeeper\\apache-zookeeper-3.6.3-bin-1\\zkData
    # the port at which the clients will connect
    clientPort=2181
    # the maximum number of client connections.
    # increase this if you need to handle more clients
    #maxClientCnxns=60
    #
    # Be sure to read the maintenance section of the
    # administrator guide before turning on autopurge.
    #
    # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
    #
    # The number of snapshots to retain in dataDir
    #autopurge.snapRetainCount=3
    # Purge task interval in hours
    # Set to "0" to disable auto purge feature
    #autopurge.purgeInterval=1
    ## Metrics Providers
    #
    # https://prometheus.io Metrics Exporter
    #metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
    #metricsProvider.httpPort=7000
    #metricsProvider.exportJvmInfo=true
    server.1=localhost:2881:3881
    server.2=localhost:2882:3882
    server.3=localhost:2883:3883

  Starting the ZooKeeper cluster
  In cmd, enter each ZooKeeper folder and run bin\zkServer.cmd.
  Configuring Kafka
  Copy the extracted Kafka directory three times into the root of the D: drive, then edit config\server.properties in each copy:
  broker.id=0
  log.dirs=/tmp/kafka-logs-1
  zookeeper.connect=localhost:2181,localhost:2182,localhost:2183/kafka
  port=9091
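  Put side by side, these are the only lines that differ between the three copies. The values for copies 2 and 3 below are extrapolated from the pattern above, so treat them as an assumption:

```properties
# copy 1
broker.id=0
log.dirs=/tmp/kafka-logs-1
port=9091

# copy 2
broker.id=1
log.dirs=/tmp/kafka-logs-2
port=9092

# copy 3
broker.id=2
log.dirs=/tmp/kafka-logs-3
port=9093
```

  All three copies share the same zookeeper.connect line, so they register under the same /kafka chroot and form one cluster.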
  The three copies use ports 9091, 9092 and 9093 respectively. The edited server.properties (the standard Apache license header and explanatory comments are trimmed; only the property lines are shown):

    broker.id=1
    #listeners=PLAINTEXT://:9092
    #advertised.listeners=PLAINTEXT://your.host.name:9092
    #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
    num.network.threads=3
    num.io.threads=8
    socket.send.buffer.bytes=102400
    socket.receive.buffer.bytes=102400
    socket.request.max.bytes=104857600
    log.dirs=/tmp/kafka-logs-1
    num.partitions=1
    num.recovery.threads.per.data.dir=1
    offsets.topic.replication.factor=1
    transaction.state.log.replication.factor=1
    transaction.state.log.min.isr=1
    #log.flush.interval.messages=10000
    #log.flush.interval.ms=1000
    log.retention.hours=168
    #log.retention.bytes=1073741824
    log.segment.bytes=1073741824
    log.retention.check.interval.ms=300000
    zookeeper.connect=localhost:2181,localhost:2182,localhost:2183/kafka
    zookeeper.connection.timeout.ms=18000
    group.initial.rebalance.delay.ms=0
    port=9091

  Starting Kafka
  bin\windows\kafka-server-start.bat config\server.properties
  Topic operations
  List topics:
  bin\windows\kafka-topics.bat --bootstrap-server localhost:9091 --list
  Create a topic:
  bin\windows\kafka-topics.bat --bootstrap-server localhost:9091 --create --partitions 1 --replication-factor 1 --topic hello-test
  Describe a topic:
  bin\windows\kafka-topics.bat --bootstrap-server localhost:9091 --describe --topic hello-test
  Changing the partition count
  Note: the partition count can only be increased, never decreased.
  bin\windows\kafka-topics.bat --bootstrap-server localhost:9091 --alter --topic hello-test --partitions 3
  Deleting a topic:
  bin\windows\kafka-topics.bat --bootstrap-server localhost:9091 --delete --topic hello
  Cluster load testing
  Load-test Kafka with its own bundled scripts.
  Producer benchmark script: kafka-producer-perf-test.sh
  Consumer benchmark script: kafka-consumer-perf-test.sh
  Kafka producer load test
  Create a hello-test topic with 3 partitions and 3 replicas:
  bin\windows\kafka-topics.bat --bootstrap-server localhost:9091 --create --replication-factor 3 --partitions 3 --topic hello-test
  Run the producer benchmark:
  bin\windows\kafka-producer-perf-test.bat --topic hello-test --record-size 1024 --num-records 1000000 --throughput -1 --producer-props bootstrap.servers=localhost:9091,localhost:9092,localhost:9093 batch.size=16384 linger.ms=0
  --record-size is the size of one record in bytes; this test uses 1k.
  --num-records is the total number of records to send; this test uses 1,000,000.
  --throughput is the target records per second; -1 disables throttling so the producer runs flat out, which reveals its maximum throughput.
  --producer-props sets producer-side parameters; here batch.size is configured to 16k.
  Tuning batch.size
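  A quick sanity check on the volume these parameters imply (plain shell arithmetic, nothing Kafka-specific):

```shell
record_size=1024      # --record-size: 1 KiB per record
num_records=1000000   # --num-records: one million records
# Total payload pushed through the producer during one test run:
echo "$(( record_size * num_records / 1024 / 1024 )) MiB"   # prints "976 MiB"
```

  So each benchmark run moves roughly 1 GB of data, enough to make batching and compression settings visible in the results.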
  The default batch.size is 16k. Set batch.size to 32k:
  bin\windows\kafka-producer-perf-test.bat --topic hello-test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=localhost:9091,localhost:9092,localhost:9093 batch.size=32768 linger.ms=0
  Now set batch.size to 4k:
  bin\windows\kafka-producer-perf-test.bat --topic hello-test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=localhost:9091,localhost:9092,localhost:9093 batch.size=4096 linger.ms=0
  Tuning linger.ms
  The default linger.ms is 0 ms. Set linger.ms to 50 ms:
  bin\windows\kafka-producer-perf-test.bat --topic hello-test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=localhost:9091,localhost:9092,localhost:9093 batch.size=4096 linger.ms=50
  Compression type
  The default compression.type is none. Set compression.type to snappy:
  bin\windows\kafka-producer-perf-test.bat --topic hello-test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=localhost:9091,localhost:9092,localhost:9093 batch.size=4096 linger.ms=50 compression.type=snappy
  Set compression.type to zstd:
  bin\windows\kafka-producer-perf-test.bat --topic hello-test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=localhost:9091,localhost:9092,localhost:9093 batch.size=4096 linger.ms=50 compression.type=zstd
  Set compression.type to gzip:
  bin\windows\kafka-producer-perf-test.bat --topic hello-test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=localhost:9091,localhost:9092,localhost:9093 batch.size=4096 linger.ms=50 compression.type=gzip
  Set compression.type to lz4:
  bin\windows\kafka-producer-perf-test.bat --topic hello-test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=localhost:9091,localhost:9092,localhost:9093 batch.size=4096 linger.ms=50 compression.type=lz4
  Tuning the producer buffer (buffer.memory)
  The producer-side buffer defaults to 32 MB. Set buffer.memory to 64 MB:
  bin\windows\kafka-producer-perf-test.bat --topic hello-test --record-size 1024 --num-records 1000000 --throughput -1 --producer-props bootstrap.servers=localhost:9091,localhost:9092,localhost:9093 batch.size=4096 linger.ms=50 buffer.memory=67108864
  Consumer performance test
  Throughput with the default settings:
  bin\windows\kafka-consumer-perf-test.bat --bootstrap-server localhost:9091,localhost:9092,localhost:9093 --topic hello-test --messages 10000000 --consumer.config config/consumer.properties --timeout 10000
  --bootstrap-server specifies the Kafka cluster addresses.
  --topic specifies the topic name.
  --messages is the total number of messages to consume.
  Tuning max.poll.records
  Poll 2000 records at a time:
  In config/consumer.properties, set max.poll.records=2000
  Poll 20000 records at a time:
  In config/consumer.properties, set max.poll.records=20000
  Poll 200000 records at a time:
  In config/consumer.properties, set max.poll.records=200000
  At this point the messages-per-second rate no longer improves.
  Poll 100000 records at a time:
  In config/consumer.properties, set max.poll.records=100000
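  For these runs, the consumer.properties passed via --consumer.config might look like the following sketch; bootstrap.servers and group.id are assumptions, only the max.poll.records line comes from the steps above:

```properties
bootstrap.servers=localhost:9091,localhost:9092,localhost:9093
group.id=test-consumer-group
max.poll.records=100000
```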
  Tuning fetch.max.bytes
  In /opt/module/kafka/config/consumer.properties, set the fetch batch size to 100 MB: fetch.max.bytes=104857600
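  The large byte values used throughout this article are plain binary-unit sizes, which is easy to verify:

```shell
echo $(( 16 * 1024 ))          # batch.size 16k         -> 16384
echo $(( 32 * 1024 ))          # batch.size 32k         -> 32768
echo $(( 64 * 1024 * 1024 ))   # buffer.memory 64 MB    -> 67108864
echo $(( 100 * 1024 * 1024 ))  # fetch.max.bytes 100 MB -> 104857600
```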
