Kafka + Spark Streaming + HBase (Part 2)

I. Overview

Package this project into a jar, telecomeAnalysis-1.0.0.jar, and submit it to YARN for testing.

1. Put telecomeAnalysis-1.0.0.jar under /home/test/sparkpro/ on master.

2. In Phoenix, confirm that the table is empty.

3. Confirm that YARN's list of running applications is empty:

[root@master ~]# yarn application -appStates running -list
20/06/24 15:08:45 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.230.21:8032
Total number of applications (application-types: [] and states: [RUNNING]):0
                Application-Id	    Application-Name	    Application-Type	      User	     Queue	             State           Final-State	       Progress	                       Tracking-URL
[root@master ~]# 

II. Submit the Spark program to YARN

1. Submit the kafka2sparkStreaming2Hbase program in the background (a rough sketch of what this class might contain follows the command):

[root@master bin]# nohup ./spark-submit --master yarn --class com.cn.sparkStreaming.kafka2sparkStreaming2Hbase /home/test/sparkpro/telecomeAnalysis-1.0.0.jar & 
[1] 4938
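
For orientation, here is a minimal sketch of what a class like com.cn.sparkStreaming.kafka2sparkStreaming2Hbase might contain: a direct Kafka stream whose JSON records are parsed and written to the Phoenix/HBase table. The topic name, group id, batch interval, and the actual write logic are assumptions, not taken from the project.

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object Kafka2SparkStreaming2HbaseSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka2sparkStreaming2Hbase")
    val ssc  = new StreamingContext(conf, Seconds(5))            // assumed batch interval

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "192.168.230.21:6667,192.168.230.22:6667,192.168.230.23:6667",
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "telecomeAnalysis",                 // assumed group id
      "auto.offset.reset"  -> "latest"
    )

    // Direct stream from Kafka; the topic name is an assumption.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("location_topic"), kafkaParams))

    // Each record value is a JSON string such as
    // {"user":"zhangSan","count_time":"...","walk_place":"..."}.
    // Parse it and upsert into the Phoenix table "location_sure" (write logic omitted here).
    stream.map(_.value()).foreachRDD { rdd =>
      rdd.foreachPartition { records =>
        records.foreach(json => println(json))   // placeholder for the Phoenix/HBase upsert
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}

The jar built from a class like this is then submitted with spark-submit exactly as in the command above.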

2. Run tail -f nohup.out to follow the background logs.

3. Check the YARN application list:

[root@master ~]# yarn application -appStates running -list
20/06/24 17:20:30 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.230.21:8032
Total number of applications (application-types: [] and states: [RUNNING]):1
                Application-Id	    Application-Name	    Application-Type	      User	     Queue	             State           Final-State	       Progress	                       Tracking-URL
application_1592989983675_0001	kafka2sparkStreaming2Hbase	               SPARK	      root	   default	           RUNNING	         UNDEFINED	            10%	         http://192.168.230.21:4040

III. Generate mock data from IDEA

1. Run the data-generation program in IDEA.

2. Check the table data in Phoenix.

3. Why not package and run the data generator on the same Linux host as the Spark job?

The virtual machine has limited memory, and the Spark job is already memory-hungry. Running the generator on the same host causes out-of-memory errors in the other components, and the simulation cannot proceed.

IV. Run the data-generation program on another server

1. First clear the table data (a sketch of the corresponding upsert follows the output):

0: jdbc:phoenix:master,slaves1,slaves2:2181> delete from "location_sure";
5 rows affected (0.012 seconds)
0: jdbc:phoenix:master,slaves1,slaves2:2181> select * from "location_sure";
+--------------+-------------+-------------+
| rowkey_name  | count_time  | walk_place  |
+--------------+-------------+-------------+
+--------------+-------------+-------------+
No rows selected (0.049 seconds)
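
For reference, the columns shown above (rowkey_name, count_time, walk_place) are the ones the streaming job fills in. A minimal sketch of a single upsert over the Phoenix JDBC driver, reusing the connection string from the sqlline prompt above, might look like this; the quoted column names and the explicit commit are assumptions.

import java.sql.DriverManager

object PhoenixUpsertSketch {
  def main(args: Array[String]): Unit = {
    // Load the Phoenix JDBC driver and connect to the same ZooKeeper quorum as above.
    Class.forName("org.apache.phoenix.jdbc.PhoenixDriver")
    val conn = DriverManager.getConnection("jdbc:phoenix:master,slaves1,slaves2:2181")
    val ps = conn.prepareStatement(
      """upsert into "location_sure" ("rowkey_name", "count_time", "walk_place") values (?, ?, ?)""")
    ps.setString(1, "zhangSan")
    ps.setString(2, "2020-06-24 17:29:30")
    ps.setString(3, "操場西北門")
    ps.executeUpdate()
    conn.commit()   // Phoenix JDBC connections do not auto-commit by default
    ps.close()
    conn.close()
  }
}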

2. Copy the program jar to a directory on slaves1.

3. Run it (a sketch of a producer like this follows the output below):

[root@slaves1 bin]# ./spark-submit --class com.cn.util.KafkaEventProducer /home/test/telecomeAnalysis-1.0.0.jar 
20/06/24 17:29:29 INFO producer.ProducerConfig: ProducerConfig values: 
	acks = 1
	batch.size = 16384
	bootstrap.servers = [192.168.230.21:6667, 192.168.230.22:6667, 192.168.230.23:6667]
	buffer.memory = 33554432
	client.id = 
	compression.type = none
	connections.max.idle.ms = 540000
	enable.idempotence = false
	interceptor.classes = null
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 0
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer

20/06/24 17:29:30 WARN producer.ProducerConfig: The configuration 'metadata.broker.list' was supplied but isn't a known config.
20/06/24 17:29:30 INFO utils.AppInfoParser: Kafka version : 1.0.2
20/06/24 17:29:30 INFO utils.AppInfoParser: Kafka commitId : 2a121f7b1d402825
{"user":"zhangSan","count_time":"2020-06-24 17:29:30","walk_place":"操場西北門"}
{"user":"liSi","count_time":"2020-06-24 17:29:35","walk_place":"操場東南北門"}
{"user":"wangWu","count_time":"2020-06-24 17:29:40","walk_place":"操場南門"}
{"user":"xiaoQiang","count_time":"2020-06-24 17:29:45","walk_place":"操場東門"}
^C[root@slaves1 bin]# 
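
The class run above, com.cn.util.KafkaEventProducer, is the data generator. A minimal sketch of such a producer, consistent with the ProducerConfig values and the JSON records in the output, might look like the following; the topic name and the sample users/places are assumptions.

import java.text.SimpleDateFormat
import java.util.{Date, Properties}
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object KafkaEventProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers",
      "192.168.230.21:6667,192.168.230.22:6667,192.168.230.23:6667")
    props.put("key.serializer",   "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    val users  = Seq("zhangSan", "liSi", "wangWu", "xiaoQiang")
    val places = Seq("操場西北門", "操場東南門", "操場南門", "操場東門")
    val fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")

    // Emit one JSON record every 5 seconds in the format consumed by the streaming job:
    // {"user":...,"count_time":...,"walk_place":...}. Stop with Ctrl-C, as in the run above.
    while (true) {
      val i = scala.util.Random.nextInt(users.size)
      val msg =
        s"""{"user":"${users(i)}","count_time":"${fmt.format(new Date())}","walk_place":"${places(i)}"}"""
      producer.send(new ProducerRecord[String, String]("location_topic", msg)) // assumed topic
      println(msg)
      Thread.sleep(5000)
    }
  }
}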

4. Verify the results by querying the "location_sure" table in Phoenix again; the records produced above should now appear there.
