Files
application.properties:
# enable external logging configuration
# logging.config=./logback.xml
# business date
mock.date=2020-04-01
# mock data send mode
mock.type=http
# target URL when the mode is http
mock.url=http://localhost:8080/applog
# number of app-start events
mock.startup.count=10000
# maximum device id
mock.max.mid=50
# maximum member (user) id
mock.max.uid=500
# maximum SKU id
mock.max.sku-id=10
# average page dwell time, ms
mock.page.during-time-ms=20000
# error probability, percent
mock.error.rate=3
# delay between log records, ms
mock.log.sleep=10
# item-detail page sources: user query, item promotion, smart recommendation, sales promotion
mock.detail.source-type-rate=40:25:15:20
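The last property encodes four percentage weights as a colon-separated string, one per detail-page source in the order listed in the comment. A minimal sketch of how such a string might be parsed (assumed mechanics; the mock jar's actual implementation is not shown in this document):

```java
// Parse a colon-separated weight string like mock.detail.source-type-rate=40:25:15:20.
// Parsing scheme is an assumption for illustration.
public class SourceTypeRate {
    public static int[] parse(String rate) {
        String[] parts = rate.split(":");
        int[] weights = new int[parts.length];
        for (int i = 0; i < parts.length; i++) {
            weights[i] = Integer.parseInt(parts[i].trim());
        }
        return weights;
    }

    public static void main(String[] args) {
        // query : promotion : recommend : activity
        int[] w = parse("40:25:15:20");
        int sum = 0;
        for (int x : w) sum += x;
        System.out.println("weights sum to " + sum); // 100, so they read as percentages
    }
}
```

Because the four weights sum to 100, they can be used directly as percentages when the generator picks a source for each detail-page visit.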
logback.xml (writes the log files):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="LOG_HOME" value="/applog/gmall2020" />

    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%msg%n</pattern>
        </encoder>
    </appender>

    <appender name="rollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/app.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/app.%d{yyyy-MM-dd}.log</fileNamePattern>
        </rollingPolicy>
        <encoder>
            <pattern>%msg%n</pattern>
        </encoder>
    </appender>

    <appender name="errorRollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/error.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>200mb</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>10</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%date{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <appender name="async-rollingFile" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="rollingFile" />
        <discardingThreshold>0</discardingThreshold>
        <queueSize>512</queueSize>
    </appender>

    <appender name="dao-rollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>./logs/dao.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>500mb</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>10</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%date{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="async-daoRollingFile" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="dao-rollingFile" />
        <includeCallerData>true</includeCallerData>
    </appender>

    <!-- Log one specific package to its own appenders -->
    <logger name="com.atgugu.gmall2020.mock.log.Mocker"
            level="INFO" additivity="true">
        <appender-ref ref="rollingFile" />
        <appender-ref ref="console" />
    </logger>

    <root level="error" additivity="true">
        <appender-ref ref="console" />
        <!-- <appender-ref ref="async-rollingFile" /> -->
    </root>
</configuration>
path.json:
[
{"path":["home","good_list","good_detail","cart","trade","payment"],"rate":20 },
{"path":["home","search","good_list","good_detail","login","good_detail","cart","trade","payment"],"rate":50 },
{"path":["home","mine","orders_unpaid","trade","payment"],"rate":10 },
{"path":["home","mine","orders_unpaid","good_detail","good_spec","comments","trade","payment"],"rate":10 },
{"path":["home","mine","orders_unpaid","good_detail","good_spec","comments","home"],"rate":10 }
]
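Each entry's rate is a percentage weight, and the five entries above sum to 100, so a session generator can roll a number in [0, 100) and walk the cumulative weights to choose which page path the simulated user follows. A minimal sketch of that selection, with the rates hard-coded to mirror the file above (assumed mechanics, not the jar's actual code):

```java
import java.util.Random;

// Pick one of the path.json entries by its "rate" percentage weight.
public class PathPicker {
    // Mirrors the rate fields above: 20, 50, 10, 10, 10 (sum = 100).
    static final int[] RATES = {20, 50, 10, 10, 10};

    public static int pickPath(int roll) {
        int cumulative = 0;
        for (int i = 0; i < RATES.length; i++) {
            cumulative += RATES[i];
            if (roll < cumulative) return i; // roll falls inside entry i's slice
        }
        throw new IllegalArgumentException("roll must be in [0, 100): " + roll);
    }

    public static void main(String[] args) {
        int idx = pickPath(new Random().nextInt(100));
        System.out.println("chose path entry " + idx);
    }
}
```

With these weights, the second entry (the full search-to-payment path) is chosen about half the time.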
Copy the files into the spark-log folder.
On Linux, create a spark-log folder under /home/atguigu/ and upload the files into it.
Edit application.properties.
First open cmd and run ipconfig /all to check your local IP address; on Windows the host address usually ends in .1, so you only need to note the network segment.
Modify the file.
Create the log directory and grant permissions.
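The directory must match the LOG_HOME set in logback.xml above (/applog/gmall2020). One way to create it, assuming this tutorial's atguigu user (paths and user are from the configs above; adjust to your environment):

```shell
# Create the directory logback's LOG_HOME points at, then hand it to the app user.
sudo mkdir -p /applog/gmall2020
sudo chown -R atguigu:atguigu /applog
```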
Create the project.
Fill in the GAV coordinates (groupId / artifactId / version).
Utilities.
Web project.
Kafka integration for the logs.
Project name.
Wait for the dependencies to download.
Write the LoggerController class.
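In the tutorial, LoggerController is a Spring Boot @RestController that receives the mock jar's POSTs at the /applog URL from mock.url above, writes each line through logback, and later forwards it to Kafka. As a dependency-free sketch of just the HTTP-receiving step, using the JDK's built-in HttpServer instead of Spring (class name and port are illustrative, not the project's actual code):

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Stand-in for LoggerController: accept a POSTed log line on /applog,
// print it (where the real app calls log.info(...)), and acknowledge.
public class MiniLogger {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/applog", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            String line = new String(body, StandardCharsets.UTF_8);
            System.out.println(line); // the real controller logs this via logback
            byte[] ok = "success".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, ok.length);
            exchange.getResponseBody().write(ok);
            exchange.close();
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(8080); // matches the port in mock.url=http://localhost:8080/applog
        System.out.println("listening on :8080/applog");
    }
}
```

The Spring version replaces all of this boilerplate with an annotated method parameter and a return value, but the request flow is the same.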
Run it.
Print the logs.
Copy logback.xml into resources and change the log save path.
Install the plugin.
Install the Lombok plugin and restart IDEA once the installation finishes.
Write the code.
Check the results.
Send the data to Kafka.
- Add the dependency:
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.56</version>
</dependency>
- Modify the Kafka address in application.properties
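One plausible set of entries, assuming the logger service uses spring-kafka (the key names below are standard Spring Boot properties, but which ones the tutorial project actually sets is an assumption):

```properties
server.port=8080
# Kafka brokers of this tutorial's cluster
spring.kafka.bootstrap-servers=hadoop102:9092,hadoop103:9092,hadoop104:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
```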
- Write the code
- Start Kafka
- Create the Kafka topic (skip this if it already exists):
kafka-topics.sh --create --topic GMALL_START --zookeeper hadoop102:2181,hadoop103:2181,hadoop104:2181 --partitions 12 --replication-factor 1
- Check consumption:
/opt/module/kafka_2.11-0.11.0.2/bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092,hadoop103:9092,hadoop104:9092 --topic GMALL_START