Kafka notes: commands, errors, and configuration options

Concepts

Message consumption

Messages in Kafka are not deleted on read; they can be consumed repeatedly.

If a message has been read by consumers in group A, consumers in another group B can still read it.

If user1 in a group has read a message, another member of the same group can still see it when reading from the beginning.

Consumer group

http://www.cnblogs.com/huxi2b/p/6223228.html

In my view, three properties are all you need to remember to understand consumer groups:

A consumer group holds one or more consumer instances; a consumer instance can be a process or a thread.

group.id is a string that uniquely identifies a consumer group.

Each partition of a topic the group subscribes to is assigned to exactly one consumer in that group (the same partition can of course also be assigned to consumers in other groups).
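The third property can be made concrete with a toy sketch (plain Python; the consumer names and the round-robin strategy here are illustrative, not Kafka's actual API): within one group each partition lands on exactly one consumer, while a second group gets its own independent assignment of the same partitions.

```python
# Toy model of per-group partition assignment (round-robin style).
# Each partition is assigned to exactly one consumer within a group;
# a different group assigns the same partitions independently.

def assign_round_robin(partitions, consumers):
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        # partition i goes to exactly one consumer of this group
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

partitions = [0, 1, 2, 3]            # 4 partitions of one topic
group_a = assign_round_robin(partitions, ["a-c1", "a-c2"])
group_b = assign_round_robin(partitions, ["b-c1"])   # independent group

print(group_a)   # {'a-c1': [0, 2], 'a-c2': [1, 3]}
print(group_b)   # {'b-c1': [0, 1, 2, 3]}
```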

http://www.cnblogs.com/huxi2b/p/6061110.html


That post focuses on the internal design of the new consumer group, in particular the interaction between the consumer group and the coordinator.

 

After each group rebalance the generation number is incremented by 1, marking a new version of the group. For example: at Generation 1 the group has 3 members; member 2 then leaves and the coordinator triggers a rebalance, moving the group to Generation 2; member 4 joins, triggering another rebalance, and the group enters Generation 3.

 

The group and the coordinator use these requests together to carry out a rebalance. Kafka currently provides 5 request types for consumer group coordination:

1. Heartbeat: the consumer periodically sends heartbeats to the coordinator to show it is still alive.

2. LeaveGroup: the consumer proactively tells the coordinator it is leaving the consumer group.

3. SyncGroup: the group leader distributes the partition assignment to all group members.

4. JoinGroup: a member requests to join the group.

5. DescribeGroup: returns all information about the group, including members, protocol name, assignment, and subscriptions; typically used by administrators.

The coordinator mainly uses the first four request types during a rebalance.
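The flow above can be mimicked with a small sketch (plain Python; the class and member names are made up, not Kafka's API): every JoinGroup or LeaveGroup triggers a rebalance, and each rebalance bumps the generation, matching the Generation 1 → 2 → 3 example earlier.

```python
# Toy model of group membership and the generation counter: every
# membership change triggers a rebalance, and each rebalance bumps
# the generation by 1.

class GroupCoordinator:
    def __init__(self):
        self.generation = 1
        self.members = set()

    def join_group(self, member):      # JoinGroup request
        self.members.add(member)
        self._rebalance()

    def leave_group(self, member):     # LeaveGroup request (or missed heartbeats)
        self.members.discard(member)
        self._rebalance()

    def _rebalance(self):
        self.generation += 1           # group enters a new "version"

g = GroupCoordinator()
g.members = {"m1", "m2", "m3"}        # Generation 1: three members
g.leave_group("m2")                   # -> Generation 2
g.join_group("m4")                    # -> Generation 3
print(g.generation)                   # 3
```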

 

Group member crash (member failure)

Leaving the group triggers a rebalance actively; crashing triggers one passively (the coordinator notices the missing heartbeats).

Does every membership change really bump the generation by 1?

Yes: every rebalance increments the generation, and a member carrying a stale generation has its requests rejected and must rejoin the group.

 

Kafka's data in ZooKeeper

A Kafka instance launched by Slider keeps its ZooKeeper data under the directory /kafka/client;

In the zk CLI:

./zkCli.sh -server dcp11

/kafka/client

[zk: dcp11(CONNECTED) 10] ls /kafka/client/kafkajiu0522

[consumers, cluster, config, controller, isr_change_notification, admin, brokers, controller_epoch]

 

ls /kafka/client/kafkajiu0522/brokers/topics       ---- topics in this Kafka cluster;

ls /kafka/client/kafkajiu0522/admin/delete_topics  ---- topics marked for deletion in this Kafka cluster;

 

 

 

Contents of kafka_server_jaas.conf:

KafkaServer {

    com.sun.security.auth.module.Krb5LoginModule required

    useKeyTab=true

    storeKey=true

    useTicketCache=false

    serviceName=kafka

    keyTab="/etc/security/keytabs/kafkadcp18.keytab"

    principal="kafka/[email protected]";

};

 

KafkaClient {

    com.sun.security.auth.module.Krb5LoginModule required

useTicketCache=true;

};

 

Client {

    com.sun.security.auth.module.Krb5LoginModule required

    useKeyTab=true

    storeKey=true

    useTicketCache=false

    serviceName=zookeeper

    keyTab="/etc/security/keytabs/kafkadcp18.keytab"

    principal="kafka/[email protected]";

};

Commands

Starting and stopping Kafka; creating, listing, and deleting topics

Create a topic:

./bin/kafka-topics --zookeeper DCP185:2181,DCP186:2181,DCP187:2181/kafka --topic test2 --replication-factor 1 --partitions 2 --create

     bin/kafka-create-topic.sh   --replica 2 --partition 8 --topic test  --zookeeper 192.168.197.170:2181,192.168.197.171:2181

This creates a topic named test whose data is spread over 8 partitions, with 2 replicas of the data in total.
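As a sanity check on those numbers (a trivial sketch, not Kafka code): --partition 8 with --replica 2 means the cluster stores 8 × 2 partition replicas of the topic in total.

```python
# Each of the 8 partitions is stored on 2 brokers (replication factor 2),
# so the cluster holds 16 partition replicas of this topic in total.

def total_partition_replicas(partitions, replication_factor):
    return partitions * replication_factor

print(total_partition_replicas(8, 2))  # 16
```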

Delete a topic:

/data/data1/confluent-3.0.0/bin/kafka-topics --zookeeper DCP185:2181,DCP186:2181,DCP187:2181/kafka  --delete  --topic  topicdf02175

 

List topics:

/data/data1/confluent-3.0.0/bin/kafka-topics --zookeeper DCP185:2181,DCP186:2181,DCP187:2181/kafka --list

 ./kafka-topics --zookeeper  DCP187:2181/kafkakerberos   --list

 

Start:

nohup sh kafka-server-start ../etc/kafka/server.properties &

    (first log in with the Kerberos principal: kinit -kt /root/kafka.keytab [email protected])

 

Stop: ./kafka-server-stop ../etc/kafka/server.properties &

Note that creating a topic on a Slider-managed Kafka differs slightly from a standalone Kafka server (the ZooKeeper chroot is longer):

./kafka-topics --zookeeper DCP185:2181,DCP186:2181,DCP187:2181/kafka/client/kafka04061 --topic topic04102 --replication-factor 1 --partitions 1 --create

 

Produce and consume:

With Kerberos enabled, pass the matching producer/consumer config files:

./kafka-console-producer --broker-list DCP187:9092 --topic test2 --producer.config ../etc/kafka/producer.properties

Consuming via a bootstrap server:

./kafka-console-consumer.sh  --from-beginning --topic topic05221 --new-consumer --consumer.config ../config/consumer.properties --bootstrap-server  dcp11:9092

 

Consuming via ZooKeeper:

./kafka-console-consumer --zookeeper DCP185:2181,DCP186:2181,DCP187:2181/kafka --from-beginning --topic test2 --new-consumer --consumer.config ./etc/kafka/consumer.properties --bootstrap-server DCP187:9092

 

Whichever consumption method is used, the data files recorded on disk are the same:

Without Kerberos:

Consume via ZooKeeper: ./kafka-console-consumer.sh --zookeeper DCP185:2181,DCP186:2181,DCP187:2181/kafka/client/kafka04112 --from-beginning --topic topic1

Consume via bootstrap server: ./kafka-console-consumer.sh --bootstrap-server DCP186:39940 --topic topic1 --from-beginning; the --bootstrap-server must point at a broker of this Kafka cluster.

The data files stored were topic1-1 and __consumer_offsets-0.

Once data has been consumed, each partition has its own directory on disk.
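The directory names seen above follow Kafka's `<topic>-<partition>` convention; a small sketch (plain Python, illustrative only) reproduces them, including the internal __consumer_offsets topic that stores committed offsets (50 partitions by default):

```python
# Kafka names each partition's on-disk directory <topic>-<partition>
# under log.dirs; committed offsets live in the internal
# __consumer_offsets topic, hence directories like __consumer_offsets-0.

def partition_dirs(topic, num_partitions):
    return [f"{topic}-{p}" for p in range(num_partitions)]

print(partition_dirs("topic1", 2))                   # ['topic1-0', 'topic1-1']
print(partition_dirs("__consumer_offsets", 50)[0])   # __consumer_offsets-0
```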

ACLs: adding, listing, and removing permissions

Grant read/write permissions to a regular user:

Grant the client user producer (write) permission:

./kafka-acls --authorizer-properties zookeeper.connect=DCP185:2181,DCP186:2181,DCP187:2181/kafka --add --allow-principal User:client --producer --topic test1

Grant the client user consumer (read) permission:

./kafka-acls --authorizer-properties zookeeper.connect=ai185:2181,ai186:2181,ai187:2181/kafka1017 --add --allow-principal User:client --consumer --topic test --group test-consumer-group 

Remove all permissions:

./kafka-acls --authorizer-properties zookeeper.connect=dcp18:2181,dcp16:2181,dcp19:2181/kafkakerberos --remove   --producer --topic topicout05054

List ACLs:

./kafka-acls --authorizer-properties zookeeper.connect=DCP185:2181,DCP186:2181,DCP187:2181/kafka --list --topic test1

Kafka does not differentiate by hostname here;

 

Common problems and fixes

Startup errors

Error 1

2017-02-17 17:25:29,224] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)

kafka.common.KafkaException: Failed to acquire lock on file .lock in /var/log/kafka-logs. A Kafka instance in another process or thread is using this directory.

    at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:100)

    at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:97)

    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)

    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)

    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)

    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)

    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)

    at scala.collection.AbstractTraversable.map(Traversable.scala:104)

    at kafka.log.LogManager.lockLogDirs(LogManager.scala:97)

    at kafka.log.LogManager.<init>(LogManager.scala:59)

    at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:609)

    at kafka.server.KafkaServer.startup(KafkaServer.scala:183)

    at io.confluent.support.metrics.SupportedServerStartable.startup(SupportedServerStartable.java:100)

    at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:49)

 

Fix: "Failed to acquire lock on file .lock in /var/log/kafka-logs" means another process is already using that log directory; find it with ps -ef | grep kafka and kill the process using the directory.

Error 2: no permission on the index files

 

Change the files' owner and group to the correct user and group.

In the directory /var/log/kafka-logs/, directories such as __consumer_offsets-29 hold the committed offsets;

Error 3 (produce/consume): broken JAAS configuration

kafka_client_jaas.conf is misconfigured.

On the DCP16 environment the file is:

/opt/dataload/filesource_wangjuan/conf/kafka_client_jaas.conf

 

KafkaClient {

    com.sun.security.auth.module.Krb5LoginModule required

    useKeyTab=true

    storeKey=true

    keyTab="/home/client/keytabs/client.keytab"

    serviceName="kafka"

    principal="client/[email protected]";

};

 

Producer errors

Case 1: the producer fails to send messages to a topic:

[2017-03-09 09:16:00,982] [ERROR] [startJob_Worker-10] [DCPKafkaProducer.java line:62] producer hit an exception sending to topicdf02211

org.apache.kafka.common.KafkaException: Failed to construct kafka producer

        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:335)

Cause: kafka_client_jaas.conf was misconfigured; the keyTab path was wrong.

Case 2: produce/consume fails with Failed to construct kafka producer

Key error message: Failed to construct kafka producer

Fix: in the KafkaClient section, serviceName must be kafka; it had been set to zookeeper. After correcting it and restarting, everything worked.

The corrected configuration:

KafkaServer {

    com.sun.security.auth.module.Krb5LoginModule required

    useKeyTab=true

    storeKey=true

    useTicketCache=false

    serviceName=kafka

    keyTab="/etc/security/keytabs/kafka.service.keytab"

    principal="kafka/[email protected]";

};

KafkaClient {

    com.sun.security.auth.module.Krb5LoginModule required

    useKeyTab=true

    storeKey=true

    serviceName=kafka

    keyTab="/etc/security/keytabs/kafka.service.keytab"

    principal="kafka/[email protected]";

};

Client {

    com.sun.security.auth.module.Krb5LoginModule required

    useKeyTab=true

    storeKey=true

    useTicketCache=false

    serviceName=zookeeper

    keyTab="/etc/security/keytabs/kafka.service.keytab"

    principal="kafka/[email protected]";

};

 

 

Symptom:

 

[kafka@DCP16 bin]$ ./kafka-console-producer   --broker-list DCP16:9092 --topic topicin050511  --producer.config ../etc/kafka/producer.properties

org.apache.kafka.common.KafkaException: Failed to construct kafka producer

    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:335)

    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:188)

    at kafka.producer.NewShinyProducer.<init>(BaseProducer.scala:40)

    at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:45)

    at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)

Caused by: org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: Conflicting serviceName values found in JAAS and Kafka configs value in JAAS file zookeeper, value in Kafka config kafka

    at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:86)

    at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:70)

    at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:83)

    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:277)

    ... 4 more

Caused by: java.lang.IllegalArgumentException: Conflicting serviceName values found in JAAS and Kafka configs value in JAAS file zookeeper, value in Kafka config kafka

    at org.apache.kafka.common.security.kerberos.KerberosLogin.getServiceName(KerberosLogin.java:305)

    at org.apache.kafka.common.security.kerberos.KerberosLogin.configure(KerberosLogin.java:103)

    at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:45)

    at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:68)

    at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:78)

    ... 7 more

[kafka@DCP16 bin]$ ./kafka-console-producer   --broker-list DCP16:9092 --topic topicin050511  --producer.config ../etc/kafka/producer.properties

 

Consuming then fails with: ERROR Unknown error when running consumer: (kafka.tools.ConsoleConsumer$)

 

[root@DCP16 bin]# ./kafka-console-consumer --zookeeper dcp18:2181,dcp16:2181,dcp19:2181/kafkakerberos --from-beginning --topic topicout050511 --new-consumer --consumer.config ../etc/kafka/consumer.properties --bootstrap-server DCP16:9092

[2017-05-07 22:24:37,479] ERROR Unknown error when running consumer:  (kafka.tools.ConsoleConsumer$)

org.apache.kafka.common.KafkaException: Failed to construct kafka consumer

    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:702)

    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:587)

    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:569)

    at kafka.consumer.NewShinyConsumer.<init>(BaseConsumer.scala:53)

    at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:64)

    at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:51)

    at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)

Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner  authentication information from the user

    at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:86)

    at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:70)

    at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:83)

    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:623)

    ... 6 more

Caused by: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner  authentication information from the user

    at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:899)

    at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:719)

    at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:584)

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

    at java.lang.reflect.Method.invoke(Method.java:606)

    at javax.security.auth.login.LoginContext.invoke(LoginContext.java:762)

    at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)

    at javax.security.auth.login.LoginContext$4.run(LoginContext.java:690)

    at javax.security.auth.login.LoginContext$4.run(LoginContext.java:688)

    at java.security.AccessController.doPrivileged(Native Method)

    at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:687)

    at javax.security.auth.login.LoginContext.login(LoginContext.java:595)

    at org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:69)

    at org.apache.kafka.common.security.kerberos.KerberosLogin.login(KerberosLogin.java:110)

    at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:46)

    at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:68)

    at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:78)

 

A related problem then appears:

Producing messages fails with:

[2017-05-07 23:17:16,240] ERROR Error when sending message to topic topicin050511 with key: null, value: 0 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

 

Changing KafkaClient to the following configuration fixed it:

KafkaClient {

    com.sun.security.auth.module.Krb5LoginModule required

   useTicketCache=true;

};

 

 

 

Consumer errors

Error 1: replication factor: 1 larger than available brokers: 0

The topic command fails with: Error while executing topic command : replication factor: 1 larger than available brokers: 0

Fix: restart the broker daemon from /confluent-3.0.0/bin:

./kafka-server-stop  -daemon   ../etc/kafka/server.properties

./kafka-server-start  -daemon   ../etc/kafka/server.properties

   

Then restart ZooKeeper; connect with: sh zkCli.sh -server ai186

/usr/hdp/2.4.2.0-258/zookeeper/bin/zkCli.sh   ---- location of the zkCli.sh script

If the error persists, check the following setting in the config file:

zookeeper.connect=dcp18:2181/kafkakerberos   ---- the suffix is the ZooKeeper chroot of this Kafka cluster;

 

 

Error 2: TOPIC_AUTHORIZATION_FAILED

./bin/kafka-console-consumer --zookeeper DCP185:2181,DCP186:2181,DCP187:2181/kafka --from-beginning --topic wangjuan_topic1 --new-consumer --consumer.config ./etc/kafka/consumer.properties --bootstrap-server DCP187:9092

[2017-03-02 13:44:38,398] WARN The configuration zookeeper.connection.timeout.ms = 6000 was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)

[2017-03-02 13:44:38,575] WARN Error while fetching metadata with correlation id 1 : {wangjuan_topic1=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)

[2017-03-02 13:44:38,677] WARN Error while fetching metadata with correlation id 2 : {wangjuan_topic1=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)

[2017-03-02 13:44:38,780] WARN Error while fetching metadata with correlation id 3 : {wangjuan_topic1=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)

 

 

Fix: in the setting below, the U in User must be uppercase:

super.users=User:kafka

Alternatively, the advertised listener IP in server.properties may be wrong, for example an IP hard-coded in the application code;

 

Error 3: unable to consume; possible fixes:

If nothing can be consumed, check the error messages in Kafka's startup log: the log files may belong to the wrong group (it should be hadoop);

Or check whether the ZooKeeper chroot suffix configured for this Kafka has changed; if it has, topics must be recreated before they work.

 

Error 4: the consuming Tomcat application reports:

[2017-04-01 06:37:21,823] [INFO] [Thread-5] [AbstractCoordinator.java line:542] Marking the coordinator DCP187:9092 (id: 2147483647 rack: null) dead for group test-consumer-group

[2017-04-01 06:37:21,825] [WARN] [Thread-5] [ConsumerCoordinator.java line:476] Auto offset commit failed for group test-consumer-group: Commit offsets failed with retriable exception. You should retry committing offsets.

Fix: increase the heartbeat timeout used by the Tomcat consumer code.

The class changed (screenshot of the old settings omitted):

./webapps/web/WEB-INF/classes/com/ai/bdx/dcp/hadoop/service/impl/DCPKafkaConsumer.class;

After restarting, the log shows:

[2017-04-01 10:14:56,167] [INFO] [Thread-5] [AbstractCoordinator.java line:542] Marking the coordinator DCP187:9092 (id: 2147483647 rack: null) dead for group test-consumer-group

[2017-04-01 10:14:56,286] [INFO] [Thread-5] [AbstractCoordinator.java line:505] Discovered coordinator DCP187:9092 (id: 2147483647 rack: null) for group test-consumer-group.

 

Topic-creation errors

Creating a topic fails with:

[2017-04-10 10:32:23,776] WARN SASL configuration failed: javax.security.auth.login.LoginException: Checksum failed Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)

Exception in thread "main" org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure

    at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:946)

    at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:923)

    at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1230)

    at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:156)

    at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:130)

    at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:75)

    at kafka.utils.ZkUtils$.apply(ZkUtils.scala:57)

    at kafka.admin.TopicCommand$.main(TopicCommand.scala:54)

    at kafka.admin.TopicCommand.main(TopicCommand.scala)

Diagnosis: the JAAS file is wrong.

Fix: super.users in server.properties must match the principal of the keytab in the JAAS file;

server.properties: super.users=User:client

kafka_server_jaas.conf changed to:

 

KafkaServer {

    com.sun.security.auth.module.Krb5LoginModule required

    useKeyTab=true

    storeKey=true

    useTicketCache=false

    serviceName=kafka

    keyTab="/data/data1/confluent-3.0.0/kafka.keytab"

    principal="[email protected]";

};

 

KafkaClient {

    com.sun.security.auth.module.Krb5LoginModule required

    useKeyTab=true

    storeKey=true

    keyTab="/home/client/client.keytab"

    principal="client/[email protected]";

};

 

Client {

    com.sun.security.auth.module.Krb5LoginModule required

    useKeyTab=true

    storeKey=true

    useTicketCache=false

    serviceName=zookeeper

    keyTab="/home/client/client.keytab"

    principal="client/[email protected]";

};

 

Slider Kafka configuration files: appConfig.json and resources.json options

Purpose of the two files:

appConfig.json: overrides configuration values defined in metainfo.json; used when you need to supply runtime-specific values;

resources.json: specifies the YARN resources required by each component type of the application;

Meaning of each option: the 3 options below could not be found in any documentation; all the others have been confirmed.

In appConfig.json:

   "site.global.app_user": "${USER_NAME}",                    

----? the user the application runs as? client in our environment -- not found on the official site or via search

   "site.broker.instance.name": "${USER}/${CLUSTER_NAME}",    

--- broker instance name?? not documented; no concrete value configured on host 72

   "site.server.port": "${KAFKA_BROKER.ALLOCATED_PORT}{PER_CONTAINER}",  

--- ${KAFKA_BROKER.ALLOCATED_PORT}{PER_CONTAINER} --- not found in the official docs or via search;

 

The options in appConfig.json and resources.json mean the following:

appConfig.json options:

appConfig.json

 

{

     "components": {

         "broker": {},

         "slider-appmaster": {

             "slider.hdfs.keytab.dir":

"/user/client/.slider/keytabs/client",    ---- keytab directory; fixed; create it if it does not exist;

             "slider.am.login.keytab.name": "client.keytab",             

        --- the keytab file; it must include the principals of all the kafka servers and slider servers;

             "slider.keytab.principal.name": "client/[email protected]"      

         --- the principal on this host that the keytab corresponds to

         }

     },

     "global": {

         "site.server.log.segment.bytes": "1073741824",                  

     ---- maximum size of one log segment file; when exceeded, a new segment is rolled

         "system_configs": "broker",                                

---- Slider variable: list of configuration types sent to the container, e.g. core-site, hdfs-site, hbase-site;

         "site.global.app_user": "${USER_NAME}",                    

---- the user the application runs as? unclear where it is configured

         "site.global.kafka_version": "kafka_3.0.0",                

--- the Kafka version: confluent-3.0.0

         "site.broker.instance.name": "${USER}/${CLUSTER_NAME}",    

--- instance name?? not on the official site

         "site.server.port":

"${KAFKA_BROKER.ALLOCATED_PORT}{PER_CONTAINER}",          ---? not found

         "site.global.pid_file": "${AGENT_WORK_ROOT}/app/run/koya.pid",  

      --- process id of the container: ps -ef | grep containername

         "site.server.num.network.threads": "3",                         

   --- number of threads handling network requests

         "site.server.log.retention.check.interval.ms": "300000",        

    -- log retention check interval, default 300000 ms (5 minutes)

         "site.broker.xms_val": "819m",                                  

  

--?? not found at: http://slider.incubator.apache.org/docs/slider_specs/application_instance_configuration.html#variable-naming-convention

         "site.server.delete.topic.enable": "true",                      

    --- switch enabling topic deletion; if set to false, deletes fail;

         "java_home": "/usr/jdk64/jdk1.7.0_67",                          

     --- Slider variable: the Java home directory; must match the environment

         "site.server.num.recovery.threads.per.data.dir": "1",           

     --- number of threads per data directory used for log recovery

         "site.server.log.dirs":

"/data/mfsdata/kafkashared/kafka022701/${@//site/global/app_container_tag}",

  --- the data directory: the name kafka022701 is arbitrary, but each service's directory must be unique;

         "site.server.num.partitions": "1",               

--- default number of partitions per topic (default 1); increase it when more are needed;

         "site.server.num.io.threads": "8",               

---- number of threads doing disk I/O

         "site.broker.zookeeper": "dcp187:2181,dcp186:2181,dcp185:2181", 

    --- the ZooKeeper ensemble the brokers use

         "site.server.log.retention.hours": "168",                 

---- minimum retention time of each log; beyond this it may be deleted

         "site.server.socket.request.max.bytes": "104857600",      

--- maximum request size a socket will accept

         "site.global.app_install_dir": "${AGENT_WORK_ROOT}/app/install",

    --- the application install root; AGENT_WORK_ROOT is container_work_dirs, the container's working directory;

         "site.global.app_root":

"${AGENT_WORK_ROOT}/app/install/kafka_2.11-0.10.1.1",    

-- the application root; AGENT_WORK_ROOT is container_work_dirs, the container's working directory;

         "site.server.socket.send.buffer.bytes": "102400",          

--- send buffer used by the socket server

         "site.server.socket.receive.buffer.bytes": "102400",       

--- receive buffer used by the socket server

         "application.def":

".slider/package/KOYA/slider-kafka-app-package-0.91.0-incubating.zip"   

  --- Slider variable: location of the application definition package, e.g. /slider/hbase_v096.zip

                      "create.default.zookeeper.node"              

----- optional: whether the application needs a default zk node; our config lacks it, and the expected value (yes/no? or true/false?) is unclear; to be verified;

 

     },

     "metadata": {},

"schema": "http://example.org/specification/v2.0.0"

}

 

 

resources.json options:

{

     "components": {

         "slider-appmaster": {},

         "KAFKA_BROKER": {

             "yarn.memory": "1024",          

-- memory required by a component instance, in MB; it must be larger than any JVM heap allocated to the component, because a JVM uses more memory than the heap alone;

             "yarn.role.priority": "1",            

-- unique priority of this component; it provides a unique index for each distinct component type;

             "yarn.container.failure.threshold": "10",   

--- number of times a component may fail within one failure window; 0 means no failures are allowed;

             "yarn.vcores": "1",              -- number of virtual cores (vcores) required;

             "yarn.component.instances": "3",          --- number of instances of this component;

             "yarn.container.failure.window.hours": "1"   

-- the container failure window (default 6); if the window size is changed, the value must be configured explicitly; failure counts reset when a window ends;

         }

     },

     "global": {},

     "metadata": {},

"schema": "http://example.org/specification/v2.0.0"

}
