How does an application integrate with Apollo? There are several integration paths, depending on the development model:
> the project is pure Java
> the project is Spring Boot
> the project is Spring Cloud — since our data production systems are all microservice-based, this is the path we describe in detail
The walkthrough below uses the check (verification) system as the example.
1 Add the configuration files listed below
Configuration file overview
- application.properties: declares all the property parameters the system uses
- application-local.yml: Apollo settings for the local environment
- application-dev.yml: Apollo settings for the offline (dev) environment
- application-pre.yml: Apollo settings for the pre-release environment
- application-prod.yml: Apollo settings for the production environment
- bootstrap.yml: reads the system's environment variable so that Apollo loads the configuration for the matching environment

Note: the bootstrap file name must be spelled exactly as shown; otherwise Spring will not load it.
Configuration file contents
bootstrap.yml

```yaml
spring:
  profiles:
    active: '@spring.profiles.active@'
```
application-dev.yml and application-local.yml

```yaml
apollo:
  bootstrap:
    enabled: true
    namespaces: application,0003.eureka_ns,0003.feign.ns,0003.ribbon.ns,0003.datasource.ns,0003.redis.ns,0003.kafka.ns,0003.hubber.ns,0003.sso.ns,0003.rbac.ns,0003.mongodb.ns,0003.fireman.ns
  meta: http://10.15.255.61:8080
app:
  id: dcp-service-eccs
```
application-pre.yml

```yaml
apollo:
  bootstrap:
    enabled: true
    namespaces: application,0003.eureka_ns,0003.feign.ns,0003.ribbon.ns,0003.datasource.ns,0003.redis.ns,0003.kafka.ns,0003.hubber.ns,0003.sso.ns,0003.rbac.ns,0003.mongodb.ns,0003.fireman.ns
  meta: http://config.analyst.ai:8081
app:
  id: dcp-service-eccs
```
application-prod.yml

```yaml
apollo:
  bootstrap:
    enabled: true
    namespaces: application,0003.eureka_ns,0003.feign.ns,0003.ribbon.ns,0003.datasource.ns,0003.redis.ns,0003.kafka.ns,0003.hubber.ns,0003.sso.ns,0003.rbac.ns,0003.mongodb.ns,0003.fireman.ns
  meta: http://config.analyst.ai:8082
app:
  id: dcp-service-eccs
```
The only things that change from one system to another are:
1 namespaces: replace them with your own system's namespaces when integrating
2 app.id: configure the id that corresponds to your own system
Add the annotation that enables Apollo at application startup.
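Assuming the standard Apollo Spring integration, the annotation in question is @EnableApolloConfig on the application class. The class name below is illustrative, not the check system's actual source:

```java
import com.ctrip.framework.apollo.spring.annotation.EnableApolloConfig;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
@EnableApolloConfig  // register Apollo-managed property sources for this application
public class EccsApplication {
    public static void main(String[] args) {
        SpringApplication.run(EccsApplication.class, args);
    }
}
```

Note that with apollo.bootstrap.enabled: true (as configured above), Apollo already injects its configuration during the Spring bootstrap phase; the annotation matters mainly when bootstrap injection is not used.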
Configure the various parameters
A system typically involves MySQL, MongoDB, Kafka, and so on; all of these can live in a single configuration file. Using the check system as the example:
application.properties
```properties
server.port=${server.port}
spring.application.name=${spring.application.name}
spring.cloud.loadbalancer.retry.enabled=${spring.cloud.loadbalancer.retry.enabled}
spring.mvc.throwExceptionIfNoHandlerFound=${spring.mvc.throwExceptionIfNoHandlerFound}
spring.resources.addMappings=${spring.resources.addMappings}
spring.cache.type=${spring.cache.type}
spring.profiles.active=${spring.profiles.active}
management.security.enabled=${management.security.enabled}
# enable graceful shutdown of the microservice
endpoints.shutdown.enabled=${endpoints.shutdown.enabled}
# disable password verification for the shutdown endpoint
endpoints.shutdown.sensitive=${endpoints.shutdown.sensitive}
# hystrix
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=${hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds}
# job
job.dictDbId.default=${job.dictDbId.default}
# eccsExecutor
eccsExecutor.name=${eccsExecutor.name}
# eureka
eureka.environment=${eureka.environment}
eureka.instance.instanceId=${eureka.instance.instanceId}
eureka.instance.hostname=${eureka.instance.hostname}
eureka.instance.preferIpAddress=${eureka.instance.preferIpAddress}
eureka.instance.leaseRenewalIntervalInSeconds=${eureka.instance.leaseRenewalIntervalInSeconds}
eureka.instance.leaseExpirationDurationInSeconds=${eureka.instance.leaseExpirationDurationInSeconds}
eureka.client.registryFetchIntervalSeconds=${eureka.client.registryFetchIntervalSeconds}
eureka.client.healthcheck.enabled=${eureka.client.healthcheck.enabled}
eureka.client.registerWithEureka=${eureka.client.registerWithEureka}
eureka.client.fetchRegistry=${eureka.client.fetchRegistry}
eureka.client.serviceUrl.defaultZone=${eureka.client.serviceUrl.defaultZone}
# feign
feign.hystrix.enabled=${feign.hystrix.enabled}
feign.okhttp.enabled=${feign.okhttp.enabled}
feign.compression.request.enabled=${feign.compression.request.enabled}
feign.compression.request.mimeTypes=${feign.compression.request.mimeTypes}
feign.compression.request.minRequestSize=${feign.compression.request.minRequestSize}
feign.compression.response.enabled=${feign.compression.response.enabled}
feign.client.dispatcher.name=${feign.client.dispatcher.name}
feign.client.dispatcher.url=${feign.client.dispatcher.url}
# ribbon
ribbon.ConnectTimeout=${ribbon.ConnectTimeout}
ribbon.ReadTimeout=${ribbon.ReadTimeout}
ribbon.OkToRetryOnAllOperations=${ribbon.OkToRetryOnAllOperations}
ribbon.MaxAutoRetries=${ribbon.MaxAutoRetries}
ribbon.MaxAutoRetriesNextServer=${ribbon.MaxAutoRetriesNextServer}
ribbon.ServerListRefreshInterval=${ribbon.ServerListRefreshInterval}
# datasources
spring.datasource.datacenter.driver-class-name=${spring.datasource.datacenter.driver-class-name}
spring.datasource.datacenter.username=${spring.datasource.datacenter.username}
spring.datasource.datacenter.password=${spring.datasource.datacenter.password}
spring.datasource.datacenter.jdbc-url=${spring.datasource.datacenter.jdbc-url}
spring.datasource.datacenter.timeoutSeconds=${spring.datasource.datacenter.timeoutSeconds}
spring.datasource.eccs.driver-class-name=${spring.datasource.eccs.driver-class-name}
spring.datasource.eccs.username=${spring.datasource.eccs.username}
spring.datasource.eccs.password=${spring.datasource.eccs.password}
spring.datasource.eccs.jdbc-url=${spring.datasource.eccs.jdbc-url}
spring.datasource.eccs.timeoutSeconds=${spring.datasource.eccs.timeoutSeconds}
# redis
spring.redis.pool.max-active=${spring.redis.pool.max-active}
spring.redis.pool.max-wait=${spring.redis.pool.max-wait}
spring.redis.pool.max-idle=${spring.redis.pool.max-idle}
spring.redis.default.host=${spring.redis.default.host}
spring.redis.default.port=${spring.redis.default.port}
spring.redis.default.password=${spring.redis.default.password}
spring.redis.default.database=${spring.redis.default.database}
spring.redis.default.timeout=${spring.redis.default.timeout}
# kafka
spring.kafka.bootstrap-servers=${spring.kafka.bootstrap-servers}
spring.kafka.producer.acks=${spring.kafka.producer.acks}
spring.kafka.producer.batch-size=${spring.kafka.producer.batch-size}
spring.kafka.producer.buffer-memory=${spring.kafka.producer.buffer-memory}
spring.kafka.producer.retries=${spring.kafka.producer.retries}
spring.kafka.producer.compression-type=${spring.kafka.producer.compression-type}
spring.kafka.consumer.group-id=${spring.kafka.consumer.group-id}
spring.kafka.consumer.enable-auto-commit=${spring.kafka.consumer.enable-auto-commit}
spring.kafka.consumer.auto-offset-reset=${spring.kafka.consumer.auto-offset-reset}
kafkaTopic.eccsCheckPre=${kafkaTopic.eccsCheckPre}
kafkaTopic.eccsCheckPost=${kafkaTopic.eccsCheckPost}
# hubber
hubber.job.accessToken=${hubber.job.accessToken}
hubber.job.admin.addresses=${hubber.job.admin.addresses}
hubber.job.executor.appname=${hubber.job.executor.appname}
hubber.job.executor.ip=${hubber.job.executor.ip}
hubber.job.executor.port=${hubber.job.executor.port}
hubber.job.executor.logpath=${hubber.job.executor.logpath}
hubber.job.executor.logretentiondays=${hubber.job.executor.logretentiondays}
# sso
sso.url=${sso.url}
sso.api.verifyToken=${sso.api.verifyToken}
sso.api.getUserInfo=${sso.api.getUserInfo}
# rbac
rbac.serviceHost=${rbac.serviceHost}
rbac.accessIdListByUserId=${rbac.accessIdListByUserId}
rbac.judgeAuthority=${rbac.judgeAuthority}
rbac.client_key=${rbac.client_key}
rbac.app_secret=${rbac.app_secret}
rbac.modules=${rbac.modules}
# mongodb
mongodb.oneLevelMarket.host=${mongodb.oneLevelMarket.host}
mongodb.oneLevelMarket.port=${mongodb.oneLevelMarket.port}
mongodb.oneLevelMarket.database=${mongodb.oneLevelMarket.database}
mongodb.oneLevelMarket.username=${mongodb.oneLevelMarket.username}
mongodb.oneLevelMarket.password=${mongodb.oneLevelMarket.password}
mongodb.oneLevelMarket.uri=${mongodb.oneLevelMarket.uri}
# fireman
fireman.componentId=${fireman.componentId}
fireman.receives=${fireman.receives}
log.send.env=${log.send.env}
# logging
logging.level.com.abcft.dcapi.eccs.dao=debug
```
Note: every value is resolved through a placeholder (EL) expression. It is recommended to reuse the property's own name as the key, which keeps the file readable.
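The key=${key} pattern above works because Spring resolves each ${...} placeholder against the Apollo property sources. A dependency-free sketch of that resolution step (the `resolve` helper below is hypothetical, not a Spring API):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderDemo {
    // Minimal stand-in for Spring's ${...} resolution: replace each
    // ${key} with the value from the given source, leaving unknown
    // placeholders untouched.
    static String resolve(String template, Map<String, String> source) {
        Matcher m = Pattern.compile("\\$\\{([^}]+)}").matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = source.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> apollo = Map.of("server.port", "8080");
        // known key is substituted
        System.out.println(resolve("server.port=${server.port}", apollo));
        // unknown key is left as-is
        System.out.println(resolve("app.id=${app.id}", apollo));
    }
}
```

Spring's real resolver also handles nested placeholders and defaults (${key:fallback}); this sketch only shows the basic substitution.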
Reading Apollo property values in application code
Using Spring (field names are illustrative):

```java
// 1. Placeholder injection
@Value("${check.topic_pre}")
private String checkTopicPre;

// 2. Inject the Config object directly
@ApolloConfig
private Config config;
```
Using the API directly — this does not depend on the Spring framework and is the simplest way:

```java
Config config = ConfigService.getAppConfig();
String someKey = "someKeyFromDefaultNamespace";
String someDefaultValue = "someDefaultValueForTheKey";
String value = config.getProperty(someKey, someDefaultValue);
```
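getProperty never returns null when a default is supplied. That lookup-with-fallback contract can be sketched without the Apollo dependency (MiniConfig is a hypothetical stand-in for com.ctrip.framework.apollo.Config):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for Apollo's Config: return the cached value
// for a key, falling back to the caller-supplied default when absent.
class MiniConfig {
    private final Map<String, String> store = new HashMap<>();

    void put(String key, String value) { store.put(key, value); }

    String getProperty(String key, String defaultValue) {
        return store.getOrDefault(key, defaultValue);
    }
}

public class MiniConfigDemo {
    public static void main(String[] args) {
        MiniConfig config = new MiniConfig();
        config.put("someKeyFromDefaultNamespace", "someValue");
        // present key -> stored value
        System.out.println(config.getProperty("someKeyFromDefaultNamespace", "someDefaultValueForTheKey"));
        // absent key -> the default, never null
        System.out.println(config.getProperty("missingKey", "someDefaultValueForTheKey"));
    }
}
```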
Listening for parameter changes in real time
1 Via the pure API

```java
Config config = ConfigService.getAppConfig();
config.addChangeListener(new ConfigChangeListener() {
    @Override
    public void onChange(ConfigChangeEvent changeEvent) {
        System.out.println("Changes for namespace " + changeEvent.getNamespace());
        for (String key : changeEvent.changedKeys()) {
            ConfigChange change = changeEvent.getChange(key);
            System.out.println(String.format(
                "Found change - key: %s, oldValue: %s, newValue: %s, changeType: %s",
                change.getPropertyName(), change.getOldValue(),
                change.getNewValue(), change.getChangeType()));
        }
    }
});
```
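What addChangeListener delivers is essentially a diff between two configuration snapshots. A dependency-free sketch of that diff (the `diff` helper and the ADDED/MODIFIED/DELETED labels mirror, but are not, Apollo's PropertyChangeType):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ConfigDiffDemo {
    // Classify every key that differs between the old and new snapshot,
    // in the spirit of the change types carried by ConfigChangeEvent.
    static Map<String, String> diff(Map<String, String> oldProps, Map<String, String> newProps) {
        Map<String, String> changes = new HashMap<>();
        Set<String> keys = new HashSet<>(oldProps.keySet());
        keys.addAll(newProps.keySet());
        for (String key : keys) {
            String oldValue = oldProps.get(key);
            String newValue = newProps.get(key);
            if (oldValue == null) {
                changes.put(key, "ADDED");
            } else if (newValue == null) {
                changes.put(key, "DELETED");
            } else if (!oldValue.equals(newValue)) {
                changes.put(key, "MODIFIED");
            }
        }
        return changes;
    }

    public static void main(String[] args) {
        Map<String, String> oldProps = new HashMap<>();
        oldProps.put("ribbon.ReadTimeout", "3000");
        oldProps.put("feign.okhttp.enabled", "true");
        Map<String, String> newProps = new HashMap<>();
        newProps.put("ribbon.ReadTimeout", "5000");
        newProps.put("feign.okhttp.enabled", "true");
        newProps.put("spring.redis.default.port", "6379");
        Map<String, String> changes = diff(oldProps, newProps);
        System.out.println(changes.get("ribbon.ReadTimeout"));
        System.out.println(changes.get("spring.redis.default.port"));
        System.out.println(changes.containsKey("feign.okhttp.enabled"));
    }
}
```

Unchanged keys never appear in the event, which is why the listener only iterates changedKeys().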
2 Via Spring with @ApolloConfigChangeListener; refer to the check system for the full version. A minimal sketch (the class name EccsConfigRefresher is illustrative) — note that the annotated method receives a ConfigChangeEvent:

```java
import com.ctrip.framework.apollo.model.ConfigChangeEvent;
import com.ctrip.framework.apollo.spring.annotation.ApolloConfigChangeListener;
import org.springframework.stereotype.Service;

@Service
public class EccsConfigRefresher {

    @ApolloConfigChangeListener
    private void refresher(ConfigChangeEvent changeEvent) {
        for (String changedKey : changeEvent.changedKeys()) {
            // react to each changed key, e.g. refresh a cache or rebuild a client
        }
    }
}
```
For typical systems, the material above is sufficient. If it is not, consult the project's documentation on GitHub, which is very thorough: https://github.com/ctripcorp/apollo