## EFK、ELK

**author: xiak**
**last update: 2022-10-15 10:12:22**

----

[TOC=3,8]

### Introduction

#### Installing Elasticsearch

Whether you are tracking activity from a specific IP address, analyzing why the number of transaction requests suddenly spiked, or looking for a good restaurant within a one-kilometre radius, **these problems all boil down to search problems**. With Elasticsearch you can store, search, and analyze large volumes of data quickly. [Elastic Stack:Elasticsearch、Kibana、Beats 和 Logstash | Elastic](https://www.elastic.co/cn/elastic-stack/)

```shell
cd /opt
mkdir elastic
cd elastic

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.5.1-linux-x86_64.tar.gz
tar -xvzf elasticsearch-8.5.1-linux-x86_64.tar.gz
cd elasticsearch-8.5.1

bin/elasticsearch
```

Lower the JVM heap if the host is short on memory:

~~~shell
vi config/jvm.options

-Xms500m
-Xmx500m
~~~

Elasticsearch refuses to run as root, so create a dedicated user and start it as that user:

```shell
groupadd elsearch
useradd elsearch -g elsearch -p 123456
chown -R elsearch:elsearch elasticsearch-8.5.1

su elsearch
cd elasticsearch-8.5.1
bin/elasticsearch [-d]
```

https://blog.csdn.net/liuxiangke0210/article/details/113992511

~~~
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Elasticsearch security features have been automatically configured!
✅ Authentication is enabled and cluster connections are encrypted.

ℹ️  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  vw0soI_6WZM2FDi6e1+I

ℹ️  HTTP CA certificate SHA-256 fingerprint:
  b1f6743383728b4d20397c8ab3e14480deb56d48bb77d1c88c1f40e7831f5458

ℹ️  Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjUuMSIsImFkciI6WyIxNzIuMTguNzcuMjI6OTIwMCJdLCJmZ3IiOiJiMWY2NzQzMzgzNzI4YjRkMjAzOTdjOGFiM2UxNDQ4MGRlYjU2ZDQ4YmI3N2QxYzg4YzFmNDBlNzgzMWY1NDU4Iiwia2V5IjoiNUF6TnZJUUJDaEU0MnluTEFreE86ZTNNdDJOMjJROUc4Z0hFYkNEblpzZyJ9

ℹ️  Configure other nodes to join this cluster:
• On this node:
  ⁃ Create an enrollment token with `bin/elasticsearch-create-enrollment-token -s node`.
  ⁃ Uncomment the transport.host setting at the end of config/elasticsearch.yml.
  ⁃ Restart Elasticsearch.
• On other nodes:
  ⁃ Start Elasticsearch with `bin/elasticsearch --enrollment-token <token>`, using the enrollment token that you generated.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
~~~

#### Configuration

Port used: `9200`

~~~
bin/elasticsearch-reset-password -u elastic

Password for the [elastic] user successfully reset.
New value: *******
~~~
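A quick sanity check once the node is up — a minimal sketch, assuming `localhost` and the `elastic` password printed or reset above (in 8.x with security enabled the HTTP endpoint is HTTPS with a self-signed certificate, hence `-k`):

```shell
# prompts for the elastic password; reports cluster status (green/yellow/red)
curl -k -u elastic "https://localhost:9200/_cluster/health?pretty"

# list indices once data starts flowing in
curl -k -u elastic "https://localhost:9200/_cat/indices?v"
```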
~~~
http://106.15.127.163:9200/

curl http://106.15.127.163:9200 -k

name: elastic
password: ******
~~~

~~~json
{
  "name": "iZuf6918brm8qovci6qai3Z",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "dcWP2asXTQeM8VCrNDW2Eg",
  "version": {
    "number": "8.5.1",
    "build_flavor": "default",
    "build_type": "tar",
    "build_hash": "c1310c45fc534583afe2c1c03046491efba2bba2",
    "build_date": "2022-11-09T21:02:20.169855900Z",
    "build_snapshot": false,
    "lucene_version": "9.4.1",
    "minimum_wire_compatibility_version": "7.17.0",
    "minimum_index_compatibility_version": "7.0.0"
  },
  "tagline": "You Know, for Search"
}
~~~

----

vi config/elasticsearch.yml

~~~
# Enable security features
xpack.security.enabled: false

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12

network.host: 0.0.0.0
http.port: 9200
~~~

----

### Installing Kibana

Port used: `5601`

https://www.elastic.co/cn/downloads/kibana

```shell
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.5.2-linux-x86_64.tar.gz
tar -xvzf kibana-8.5.2-linux-x86_64.tar.gz
cd kibana-8.5.2

bin/kibana --allow-root

# start in the background
nohup ./bin/kibana &
```

~~~
http://106.15.127.163:5601/?code=433185
~~~

- [Index Management - Elastic](http://106.15.127.163:5601/app/management/data/index_management/indices)
- [Data Views - Elastic](http://106.15.127.163:5601/app/management/kibana/dataViews)
- [Discover - Elastic](http://106.15.127.163:5601/app/discover)
- [Logs | Stream - Kibana](http://106.15.127.163:5601/app/logs/stream)
- [Console - Dev Tools - Elastic](http://106.15.127.163:5601/app/dev_tools#/console)

----

### Installing Logstash

Port used: `5044`

https://www.elastic.co/cn/downloads/logstash

~~~
wget https://artifacts.elastic.co/downloads/logstash/logstash-8.5.2-linux-x86_64.tar.gz
tar -xvzf logstash-8.5.2-linux-x86_64.tar.gz
cd logstash-8.5.2

# smoke test: read from stdin and print to stdout
bin/logstash -e 'input { stdin { } } output { stdout {} }'
~~~

vim config/sc.conf

~~~
input {
  stdin {}
}

output {
  stdout {
    codec => rubydebug {}
  }
  elasticsearch {
    hosts => "127.0.0.1:9200"
  }
}
~~~

```shell
nohup bin/logstash -f ../config/sc.conf > sc.log 2>&1 &
```

vi config/logstash-sample2.conf

~~~
input {
  file {
    path => ['/opt/elastic/elasticsearch-8.5.1/logs/*.log']
    type => 'es_log'
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "elasticsearch_log-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
~~~

```shell
bin/logstash -f /opt/elastic/logstash-8.5.2/config/logstash-sample2.conf

nohup bin/logstash -f /opt/elastic/logstash-8.5.2/config/logstash-sample3.conf > /opt/logstash.log 2>&1 &
```

> One problem with this setup: whenever Logstash needs an extra plugin, it has to be added on every node, which is a real burden for operations. That is where the Filebeat mentioned above comes in — it uses few resources and does nothing but collect and ship logs, so it stays lightweight, while Logstash is pulled out separately to handle filtering and similar processing. [ELK详细安装教程_壹升茉莉清的博客-CSDN博客_elk安装](https://blog.csdn.net/weixin_40920359/article/details/126240405)
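As a sketch of the Logstash side of that Filebeat/Logstash split — not part of the original configs above; the file name `beats.conf` and the index pattern are illustrative — a pipeline that listens for Beats traffic on port 5044 and forwards events to Elasticsearch:

```shell
# hypothetical pipeline file; run from the logstash-8.5.2 directory
cat > config/beats.conf <<'EOF'
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}
EOF

# check the pipeline syntax without actually starting Logstash
bin/logstash -f config/beats.conf --config.test_and_exit
```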
----

### Installing Filebeat

https://www.elastic.co/cn/downloads/beats/filebeat

```shell
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.5.2-linux-x86_64.tar.gz
tar -xvzf filebeat-8.5.2-linux-x86_64.tar.gz
cd filebeat-8.5.2-linux-x86_64

./filebeat -e -c filebeat.yml
```

```shell
nohup ./filebeat -e -c filebeat3.yml > /opt/filebeat.log 2>&1 &
```

> To fix the problem that, even with nohup, the process still does not stay resident and exits automatically when the terminal is closed, run Filebeat as a systemd service:
> https://www.cnblogs.com/luoyunfei99/articles/16188714.html

~~~
vi /etc/systemd/system/filebeat.service

[Unit]
Description=Filebeat is a lightweight shipper for metrics.
Documentation=https://www.elastic.co/products/beats/filebeat
Wants=network-online.target
After=network-online.target

[Service]
Environment="LOG_OPTS=-e"
Environment="CONFIG_OPTS=-c /opt/filebeat-8.5.2-linux-x86_64/filebeat3.yml"
ExecStart=/opt/filebeat-8.5.2-linux-x86_64/filebeat_0.4.1_linux_amd64 $LOG_OPTS $CONFIG_OPTS
Restart=always
# StandardOutput does not accept a bare path; "append:" needs systemd 240+ (use "file:" or "journal" on older systems)
StandardOutput=append:/opt/filebeat.log

[Install]
WantedBy=multi-user.target
~~~

~~~
chmod +x /etc/systemd/system/filebeat.service

systemctl daemon-reload
systemctl enable filebeat

systemctl start filebeat
systemctl restart filebeat
systemctl stop filebeat
systemctl status filebeat

ps -ef | grep filebeat
~~~

~~~
vi filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/myweb/apps_share_data/admin.api.test.xxx.cn/runtime/log/admin/smartpark/202212/*.log

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["106.15.127.163:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "http"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "****"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["106.15.127.163:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
~~~
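The notes above don't mention it, but Filebeat ships built-in `test` subcommands that are handy before enabling the systemd unit — a quick sketch (substitute `filebeat3.yml` or whichever config file is actually in use):

```shell
cd /opt/filebeat-8.5.2-linux-x86_64

# validate the configuration file (YAML syntax, inputs, paths)
./filebeat test config -c filebeat.yml

# check that the configured output (the Logstash host on port 5044) is reachable
./filebeat test output -c filebeat.yml
```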
----

### Installing logstash-input-pulsar and pulsar-beat-output

[filebeat数据重复和截断及丢失问题分析 - NYC's Blog](http://niyanchun.com/filebeat-truncate-bug.html)

[logstash(filebeat)重复推送数据问题 - Elastic 中文社区](https://elasticsearch.cn/question/4622)

> Found the cause: Filebeat was sending data straight to Logstash, Logstash's I/O hit 100% and it blocked while receiving, so Filebeat never got the acknowledgement for the events it had sent and resent them — hence the duplicates. Do you happen to know how to turn off the acknowledgement mechanism in Filebeat, so that it only sends and never waits for confirmation?

Install logstash-input-pulsar:

```shell
cd /opt/elastic
wget https://github.com/streamnative/logstash-input-pulsar/releases/download/2.7.1/logstash-input-pulsar-2.7.1.zip

cd /opt/elastic/logstash-8.5.2
# bin/logstash-plugin install file:///opt/elastic/logstash-input-pulsar-2.7.1.zip
bin/logstash-plugin install file:///opt/elastic/logstash-input-pulsar-2.10.0.0.zip
```

Install pulsar-beat-output:

```
wget https://github.com/streamnative/pulsar-beat-output/releases/download/v0.4.1/filebeat_0.4.1_linux_amd64

# back up the stock binary and replace it with the pulsar-enabled build
mv filebeat filebeat_old
mv filebeat_0.4.1_linux_amd64 filebeat
```

Install Go: https://go.dev/doc/install

~~~
wget https://go.dev/dl/go1.19.4.linux-amd64.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.19.4.linux-amd64.tar.gz

vi /etc/profile
export PATH=$PATH:/usr/local/go/bin

source /etc/profile
go version
~~~

https://goproxy.cn/ — Qiniu Cloud Goproxy.cn module mirror

~~~
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
~~~

----

### Log Format

> All of these logs are multi-line.

Main application: tp web, daemon, command

~~~
$runtime = /home/myweb/apps_share_data/admin.api.test.xxx.cn/runtime

$runtime/log/admin/smartpark/202212/01.log
$runtime/log/screen/202212/01.log
$runtime/daemon-pulsar-workerman.log
$runtime/log/daemon/iotscene/dispatcher-flow/202212/01_cli.log
$runtime/log/command/system/Apifox/202212/01_cli.log
~~~

gatewayworker

~~~
$runtime = /home/myweb/apps_share_data/yf_iot_gatewayworker/runtime

$runtime/gatewayworker-DeviceApp-gateway-workerman.log
$runtime/gatewayworker-DeviceApp-register-workerman.log
$runtime/gatewayworker-DeviceApp-worker-workerman.log
~~~

~~~
{appname}.api.[test].xxx.cn/[module]/v1.{controller}/{action}

Environments:
[test].xxx.cn
xxx.net

----

tp web:
/home/myweb/apps_share_data/admin*/runtime/log/**/*.log

tp daemon:
/home/myweb/apps_share_data/admin*/runtime/{workerman}.log
/home/myweb/apps_share_data/admin*/runtime/log/daemon/{module}/{worker}/*/*_cli.log

tp command:
/home/myweb/apps_share_data/admin*/runtime/log/command/{module}/{command}/*/*_cli.log

tp pay:
/home/myweb/apps_share_data/admin*/runtime/log/yansongda-pay-log/{channel}-{mch_id}.log

tp sms:
/home/myweb/apps_share_data/admin*/runtime/log/sms/{appname}/easy-sms.log

----

gatewayworker:
/home/myweb/apps_share_data/*gatewayworker/runtime/{gatewayworker}.log
~~~

All of the logs fall into two categories: web and cli.

**web:** (request) host (domain / IP), time, request method, URL (appname, module, controller, action), log level, request id, device id, client IP, request referer

**cli:** host (domain / IP), time, PID, log level, file name (daemon-pulsar-workerman.log, gatewayworker-DeviceApp-gateway-workerman.log), directory (daemon/iotscene/dispatcher-flow, command/system/Apifox)

> Indices are split by application.

~~~
elasticsearch index:

test-web-2022-12
test-cli-2022-12
test-pay-2022-12
test-sms-2022-12

kf-web-2022-12
kf-cli-2022-12

----

Data Views:

test-web
test-cli

kf-web
kf-cli
~~~

Multi-line start patterns per log type:

~~~
web:
^[ YYYY-MM-DD
ignore: ---------------------------------------------------------------

----

cli:
^[

----

workerman:
^YYYY-MM-DD HH:ii:ss
~~~
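A sketch only (the file name `filebeat-multiline-example.yml` is hypothetical; the path globs are the ones listed above) of how those line-start patterns might translate into multiline settings for the same `log` input type used in the filebeat.yml shown earlier:

```shell
cat > filebeat-multiline-example.yml <<'EOF'
filebeat.inputs:
# tp web / cli logs: an event starts at a line beginning with "[";
# any line that does not start with "[" is appended to the previous event.
- type: log
  enabled: true
  paths:
    - /home/myweb/apps_share_data/admin*/runtime/log/**/*.log
  multiline.type: pattern
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after

# workerman logs: an event starts at a "YYYY-MM-DD HH:ii:ss" timestamp.
- type: log
  enabled: true
  paths:
    - /home/myweb/apps_share_data/*gatewayworker/runtime/*.log
  multiline.type: pattern
  multiline.pattern: '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'
  multiline.negate: true
  multiline.match: after

output.logstash:
  hosts: ["106.15.127.163:5044"]
EOF
```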
----

[ELK详细安装教程_壹升茉莉清的博客-CSDN博客_elk安装](https://blog.csdn.net/weixin_40920359/article/details/126240405)

[ELK详细安装部署_妙轩cc的博客-CSDN博客_elk安装部署](https://blog.csdn.net/song12345xiao/article/details/125991833)

[超详细的ELK安装部署 - 墨天轮](https://www.modb.pro/db/109893)

[Logstash配置详解_fyygree的博客-CSDN博客_logstash配置](https://blog.csdn.net/fengyuyeguirenenen/article/details/124036098)

[Filebeat + Logstash 配置_sparks.fly的博客-CSDN博客_filebeat 配置logstash](https://blog.csdn.net/m0_60491538/article/details/121636766)

[filebeat+logstash配置_'煎饼侠的博客-CSDN博客_filebeat logstash](https://blog.csdn.net/Baron_ND/article/details/109351279)

> Logstash runs on the JVM and is slow, but it is feature-rich and can pre-process data. Filebeat, written in Go, is very lightweight but offers few features and cannot pre-process data. The two are therefore usually combined: deploy Filebeat on every node and have it push the monitored logs to a Logstash cluster; when traffic is heavy, Redis or Kafka is typically added as a buffering layer.

[logstash中date的时间处理方式总结 - fat_girl_spring - 博客园](https://www.cnblogs.com/fat-girl-spring/p/13044570.html)

[logstash关于date时间处理的几种方式总结 - 峰哥ge - 博客园](https://www.cnblogs.com/FengGeBlog/p/10559034.html)

[logstash神器之grok - 简书](https://www.jianshu.com/p/d3042a08eb5e)

[regex - 使用grok将日志文件名添加为logstash中的字段 - IT工具网](https://www.coder.work/article/6685349)

[logstash匹配filebeat传递的log.file.path_禅剑一如的博客-CSDN博客_log.file.path](https://blog.csdn.net/zsx18273117003/article/details/106383636/)

[elasticsearch 修改磁盘比例限制 - luzhouxiaoshuai - 博客园](https://www.cnblogs.com/kebibuluan/p/14077043.html)

[Elasticsearch提示low disk watermark [85%] exceeded on [UTyrLH40Q9uIzHzX-yMFXg][Sonofelice][/Users/baidu/Documents/work/soft/data/nodes/0] free: 15.2gb[13.4%], replicas will not be assigned to this node - SonoFelice - 博客园](https://www.cnblogs.com/sonofelice/p/8554887.html)

~~~
[2022-12-04T16:30:02,307][INFO ][o.e.c.r.a.DiskThresholdMonitor] [iZuf6918brm8qovci6qai3Z] low disk watermark [85%] no longer exceeded on [9s7L8PRWRSiDgvnU8LvUYQ][iZuf6918brm8qovci6qai3Z][/opt/elastic/elasticsearch-8.5.1/data] free: 12.5gb[15.9%]
~~~

~~~
https://www.jianshu.com/p/4e4a7450c305

parsers:
- multiline:
    type: pattern
    pattern: '^\['

This says two things:
1. an event's first line starts with "[";
2. every line that does not start with "[" is merged into the preceding line.
~~~

~~~
https://zhuanlan.zhihu.com/p/141439013

nohup ./filebeat -e -c filebeat.yml -path.data=/opt/data/filebeat >/dev/null 2>&1 &

./filebeat -e -c filebeat3.yml -path.data=/opt/data/filebeat

# stop the running Filebeat process
ps -ef | grep filebeat
kill -9 <pid>
~~~

~~~
https://discuss.elastic.co/t/filebeat-filestream-input-rereading-rotated-log-files/300038/6

We had a similar issue when files were resent when Filebeat was restarted and multiple inputs were configured.
The trick was to set an ID. But this issue seems completely unrelated unfortunately.

The id must be configured; otherwise the registry entries conflict, state is lost, and files are collected again (duplicates).
~~~

[Configure inputs | Filebeat Reference [8.5] | Elastic](https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html)

[Grok filter plugin | Logstash Reference [8.5] | Elastic](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
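Tying the last notes together — a sketch only (the ids and file name are illustrative, not from the real config) of filestream inputs that each carry a unique, stable `id`, which is what prevents the registry conflicts and duplicate collection described above:

```shell
cat > filebeat-filestream-example.yml <<'EOF'
filebeat.inputs:
- type: filestream
  id: admin-web-logs          # must be unique per input and must not change later
  paths:
    - /home/myweb/apps_share_data/admin*/runtime/log/**/*.log
  parsers:
    - multiline:
        type: pattern
        pattern: '^\['
        negate: true
        match: after

- type: filestream
  id: gatewayworker-logs
  paths:
    - /home/myweb/apps_share_data/*gatewayworker/runtime/*.log

output.logstash:
  hosts: ["106.15.127.163:5044"]
EOF
```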