Logstash: collect nginx logs and write them to Kafka

一 Logstash collects nginx logs and writes them to Kafka

Install Logstash

Download

It is recommended to download the latest version from the official site:
https://www.elastic.co/cn/downloads/logstash
This article uses Logstash 7.0.0:
https://artifacts.elastic.co/downloads/logstash/logstash-7.0.0.tar.gz

wget https://artifacts.elastic.co/downloads/logstash/logstash-7.0.0.tar.gz
tar -xzvf logstash-7.0.0.tar.gz
cd logstash-7.0.0
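
A quick sanity check that the extracted package runs before writing any pipeline (assuming a suitable Java runtime is available; the exact output depends on your environment) is to print the version:

./bin/logstash --version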

Read a file and send it directly to Elasticsearch

  • Modify ./config/logstash-sample.conf
# Sample Logstash configuration for a simple
# File -> Logstash -> Elasticsearch pipeline.

input {
  #beats {
   # port => 5044
  #}
  file {
    path => "/var/log/httpd/access_log"
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "access-log-%{+YYYY.MM.dd}"   # the file input does not populate the @metadata fields used by the stock sample, so use a literal prefix
    #user => "elastic"
    #password => "changeme"
  }
}
  • Check that the configuration file is valid (assuming the current directory is /usr/local/logstash/config/):
../bin/logstash -t -f logstash-sample.conf
Start Logstash:
../bin/logstash -f logstash-sample.conf
Start with all configuration files in this directory loaded:
../bin/logstash -f ./
Or start it in the background:
nohup ../bin/logstash -f ./ &
  • Common command-line options
    -f: specify the Logstash configuration file (or directory) to use
    -e: pass the configuration as a string on the command line (an empty string "" defaults to stdin as input and stdout as output); see the example after this list
    -l: where Logstash writes its own log output (by default it goes straight to the console / stdout)
    -t: test whether the configuration file is valid, then exit
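
For example, a minimal sketch of the -e option (any pipeline string of your own works equally well): the one-liner below reads from stdin and echoes each event back to stdout, which is a quick way to confirm Logstash itself starts:

../bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'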

1.1.1 Write the Logstash configuration file

[root@localhost ~]# cat /etc/logstash/conf.d/nginx-kafka.conf
input {
  file {
    path => "/opt/vhosts/fatai/logs/access_json.log"
    start_position => "beginning"
    type => "nginx-accesslog"
    codec => "json"
    stat_interval => "2"
  }
}
output {
  kafka {
    bootstrap_servers => "192.168.10.10:9092"
    topic_id => "nginx-access-kafkaceshi"
    codec => "json"
  }
}
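
The file input above assumes nginx is already writing its access log as one JSON object per line (hence the json codec and the access_json.log path). A minimal sketch of such a log_format, assuming nginx 1.11.8+ for escape=json and with field names chosen purely for illustration, placed in the http {} block of nginx.conf:

log_format access_json escape=json '{"@timestamp":"$time_iso8601",'
                                    '"clientip":"$remote_addr",'
                                    '"request":"$request",'
                                    '"status":"$status",'
                                    '"body_bytes_sent":"$body_bytes_sent"}';
access_log /opt/vhosts/fatai/logs/access_json.log access_json;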

1.1.2 Verify the configuration and restart Logstash

[root@localhost ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx-kafka.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Configuration OK
[root@localhost ~]# systemctl restart logstash.service 
On a Kafka node, confirm that the topic has been created:
[root@DNS-Server tools]# /tools/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.10.10:2181,192.168.10.167:2181,192.168.10.171:2181
nginx-access-kafkaceshi
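
Beyond listing the topic, you can also tail the messages themselves with the console consumer that ships with Kafka (the path and broker address below simply reuse the ones from this setup):

/tools/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.10.10:9092 --topic nginx-access-kafkaceshi --from-beginning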

二 Logstash consumes the Kafka logs and writes them to ELK (Elasticsearch)

1.1.1 Write the Logstash configuration file

[root@Docker ~]# cat /etc/logstash/conf.d/nginx_kafka.conf
input {
    kafka {
      bootstrap_servers => "192.168.10.10:9092"   # Kafka broker address
      topics => ["nginx-access-kafkaceshi"]       # topic(s) to consume
      group_id => "nginx-access-kafkaceshi"       # consumer group name (user-defined)
      codec => "json"                             # codec for decoding the messages
      consumer_threads => 1                       # number of consumer threads
      decorate_events => true                     # add Kafka metadata (topic, partition, offset) to events
    }
}
output {
  if [type] == "nginx-accesslog" {                # type was set by the collecting Logstash instance in part 一
    elasticsearch {
      hosts => ["192.168.10.10:9200"]
      index => "nginx-accesslog-kafka-test-%{+YYYY.MM.dd}"
    }
  }
}
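
Because decorate_events is enabled, each event also carries the Kafka topic, partition and offset under [@metadata][kafka]. As a sketch of an alternative (not what this article uses), the output could derive the index name from the topic instead of hard-coding it:

output {
  elasticsearch {
    hosts => ["192.168.10.10:9200"]
    index => "%{[@metadata][kafka][topic]}-%{+YYYY.MM.dd}"
  }
}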

1.1.2 Test and restart

[root@Docker ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx_kafka.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Configuration OK
[root@Docker ~]# nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/ &

1.1.3 Verify in Elasticsearch
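
A simple way to confirm the index was created (assuming curl is available and Elasticsearch listens on the address used above) is the _cat indices API:

curl 'http://192.168.10.10:9200/_cat/indices?v' | grep nginx-accesslog-kafka-test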
