Flume NG configuration samples

1. Configure the source as exec

agent1.sources = tail
agent1.channels = MemoryChannel-2
agent1.sinks = HDFS

agent1.sources.tail.type = exec
agent1.sources.tail.command = tail -f /var/log/apache2/access.log.1
agent1.sources.tail.channels = MemoryChannel-2

agent1.sinks.HDFS.channel = MemoryChannel-2
agent1.sinks.HDFS.type = hdfs
agent1.sinks.HDFS.hdfs.path = hdfs://localhost:9000/flume
agent1.sinks.HDFS.hdfs.fileType = DataStream

agent1.channels.MemoryChannel-2.type = memory

The command I am issuing is:
bin/flume-ng agent -n agent1 -c conf/ -f conf/agent1.conf


2. Configure the source as seq
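
A minimal sketch (the agent and component names here are placeholders): the seq source emits an incrementing sequence number as each event body, which makes it handy for smoke-testing a channel and sink without any external input.

agent2.sources = seqSrc
agent2.channels = MemoryChannel-3
agent2.sinks = LoggerSink

agent2.sources.seqSrc.type = seq
agent2.sources.seqSrc.channels = MemoryChannel-3

agent2.sinks.LoggerSink.type = logger
agent2.sinks.LoggerSink.channel = MemoryChannel-3

agent2.channels.MemoryChannel-3.type = memory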


3. exec source, with a failover sink group

tail1.sources = src1
tail1.channels = ch1
tail1.sinks = sink1 sink2
tail1.sinkgroups = sg1

tail1.sources.src1.type = exec
tail1.sources.src1.command = tail -F /tmp/access_log
tail1.sources.src1.channels = ch1

tail1.channels.ch1.type = memory
tail1.channels.ch1.capacity = 500

tail1.sinks.sink1.type = avro
tail1.sinks.sink1.hostname = localhost
tail1.sinks.sink1.port = 6000
tail1.sinks.sink1.batch-size = 1
tail1.sinks.sink1.channel = ch1

tail1.sinks.sink2.type = avro
tail1.sinks.sink2.hostname = localhost
tail1.sinks.sink2.port = 6001
tail1.sinks.sink2.batch-size = 1
tail1.sinks.sink2.channel = ch1

tail1.sinkgroups.sg1.sinks = sink1 sink2
tail1.sinkgroups.sg1.processor.type = failover
# a larger priority value means higher priority, so sink2 is the
# primary sink and sink1 is the fallback
tail1.sinkgroups.sg1.processor.priority.sink1 = 1
tail1.sinkgroups.sg1.processor.priority.sink2 = 2

######################################################

collector1.sources = src1
collector1.channels = ch1
collector1.sinks = sink1

collector1.sources.src1.type = avro
collector1.sources.src1.bind = localhost
collector1.sources.src1.port = 6000
collector1.sources.src1.channels = ch1

collector1.channels.ch1.type = memory
collector1.channels.ch1.capacity = 500

collector1.sinks.sink1.type = hdfs
collector1.sinks.sink1.hdfs.path = collector1
collector1.sinks.sink1.hdfs.filePrefix = access_log
collector1.sinks.sink1.channel = ch1

######################################################

collector2.sources = src1
collector2.channels = ch1
collector2.sinks = sink1

collector2.sources.src1.type = avro
collector2.sources.src1.bind = localhost
collector2.sources.src1.port = 6001
collector2.sources.src1.channels = ch1

collector2.channels.ch1.type = memory
collector2.channels.ch1.capacity = 500

collector2.sinks.sink1.type = hdfs
collector2.sinks.sink1.hdfs.path = collector2
collector2.sinks.sink1.hdfs.filePrefix = access_log
collector2.sinks.sink1.channel = ch1
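
To bring the failover topology up, each agent is started separately with its own name, following the same command format as section 1. A sketch, assuming the three configs are saved under the file names below (the collectors should be listening before tail1 starts delivering):

bin/flume-ng agent -n collector1 -c conf/ -f conf/collector1.conf
bin/flume-ng agent -n collector2 -c conf/ -f conf/collector2.conf
bin/flume-ng agent -n tail1 -c conf/ -f conf/tail1.conf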

4. Sample configuration and client

What I am seeing is that a new file is created in Hadoop for every event I send. I expected the file handle to keep writing to the existing file until it rolled over as specified in the config. Am I doing something wrong?
 
 12/06/15 17:28:52 INFO hdfs.BucketWriter: Creating hdfs://dsdb1:54310/flume/'dslg1'/FlumeData.1339806027956.tmp
12/06/15 17:28:52 INFO hdfs.BucketWriter: Renaming hdfs://dsdb1:54310/flume/'dslg1'/FlumeData.1339806027956.tmp to hdfs://dsdb1:54310/flume/'dslg1'/FlumeData.1339806027956
12/06/15 17:28:52 INFO hdfs.BucketWriter: Creating hdfs://dsdb1:54310/flume/'dslg1'/FlumeData.1339806027957.tmp
12/06/15 17:28:52 INFO hdfs.BucketWriter: Renaming hdfs://dsdb1:54310/flume/'dslg1'/FlumeData.1339806027957.tmp to hdfs://dsdb1:54310/flume/'dslg1'/FlumeData.1339806027957
12/06/15 17:28:52 INFO hdfs.BucketWriter: Creating hdfs://dsdb1:54310/flume/'dslg1'/FlumeData.1339806027958.tmp


foo.sources = avroSrc
foo.channels = memoryChannel
foo.sinks = hdfsSink
# For each one of the sources, the type is defined
foo.sources.avroSrc.type = avro
# The channel can be defined as follows.
foo.sources.avroSrc.channels = memoryChannel
foo.sources.avroSrc.bind = 0.0.0.0
foo.sources.avroSrc.port = 41414
# Each sink's type must be defined
foo.sinks.hdfsSink.type = hdfs
foo.sinks.hdfsSink.hdfs.path = hdfs://dsdb1:54310/flume/'%{host}'
# NOTE: the three property names below are wrong (they should be
# hdfs.filePrefix, hdfs.rollInterval and hdfs.fileType), so Flume silently
# ignores them and falls back to the defaults -- see the fix further down
foo.sinks.hdfsSink.file.Prefix = web
foo.sinks.hdfsSink.file.rollInterval  = 600
foo.sinks.hdfsSink.file.Type  = SequenceFile
#Specify the channel the sink should use
foo.sinks.hdfsSink.channel = memoryChannel
 
Client code (a sketch: the class skeleton, imports, connect() wiring, and hostName value are assumptions, matching the avro source on port 41414 above):
 
import java.nio.charset.Charset;
import java.util.HashMap;
import java.util.Map;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;
import org.junit.Test;

public class AvroClient {

  // Assumed wiring: the agent's avro source from the config above
  private static final String AGENT_HOST = "localhost";
  private static final int AGENT_PORT = 41414;

  private RpcClient rpcClient;
  private final String hostName = "dslg1"; // used as the %{host} header (assumed value)

  public AvroClient() {
    connect();
  }

  // (Re)build the RPC client; called on startup and after a delivery failure
  private void connect() {
    if (rpcClient != null) {
      rpcClient.close();
    }
    rpcClient = RpcClientFactory.getDefaultInstance(AGENT_HOST, AGENT_PORT);
  }

  public void sendDataToFlume(String data) {
    // Create the Flume event and tag it with the host header used by %{host}
    Event event = EventBuilder.withBody(data, Charset.forName("UTF-8"));
    Map<String, String> headers = new HashMap<String, String>();
    headers.put("host", hostName);
    event.setHeaders(headers);
    try {
      rpcClient.append(event);
    } catch (EventDeliveryException e) {
      connect(); // reconnect; this event is dropped
    }
  }

  @Test
  public void testAvroClient() throws InterruptedException {
    AvroClient aClient = new AvroClient();
    int i = 0;
    int j = 500;
    while (i++ < j) {
      aClient.sendDataToFlume("Hello");
      if (i == j / 2) {
        // Thread.sleep(30000);
      }
    }
  }
}
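
To compile and run this client, the Flume client SDK (the org.apache.flume:flume-ng-sdk artifact, which provides RpcClient, RpcClientFactory and EventBuilder) plus JUnit need to be on the classpath.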




After I changed my config to the following, it worked. It looks like Flume creates a new file as soon as any one of the roll conditions is met, whichever matches first. Since rollCount defaults to 10, it was creating a new file every ten events. I think this causes a lot of problems, because I now have to keep track of and estimate all of these variables. It should roll based only on what is specified in the config: if I only specify rollSize, it shouldn't consider the other options in its logic for creating a new file.
 
foo.sinks.hdfsSink.type = hdfs
foo.sinks.hdfsSink.hdfs.path = hdfs://dsdb1:54310/flume/%{host}
foo.sinks.hdfsSink.hdfs.filePrefix = web
foo.sinks.hdfsSink.hdfs.rollInterval  = 600
foo.sinks.hdfsSink.hdfs.rollCount  = 200000000
foo.sinks.hdfsSink.hdfs.rollSize  = 5000000000
foo.sinks.hdfsSink.hdfs.fileType  = SequenceFile
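
Worth noting: in the HDFS sink, a roll trigger set to 0 is disabled outright, so instead of inflating rollCount and rollSize to huge values, the unused triggers can simply be switched off. A sketch, reusing the sink above, that rolls on the ten-minute interval alone:

# 0 disables a roll trigger entirely; roll only on rollInterval
foo.sinks.hdfsSink.hdfs.rollInterval = 600
foo.sinks.hdfsSink.hdfs.rollCount = 0
foo.sinks.hdfsSink.hdfs.rollSize = 0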


5. Example: http://mapredit.blogspot.de/2012/03/flumeng-evolution.html

