ELK 7.15 Installation and Configuration Guide


Environment

Server         IP               Components                                    System
k8s-master01   192.168.126.21   Logstash, Apache                              CentOS 7.4 (64-bit)
k8s-worker01   192.168.126.22   Filebeat, Elasticsearch, Elasticsearch-head   CentOS 7.4 (64-bit)
k8s-worker02   192.168.126.23   Elasticsearch, Kibana                         CentOS 7.4 (64-bit)

Prerequisites: disable the firewall and SELinux, and configure name resolution in /etc/hosts.

Check the Java environment
[root@k8s-worker01 ~]# java -version
openjdk version "1.8.0_402"
OpenJDK Runtime Environment (build 1.8.0_402-b06)
OpenJDK 64-Bit Server VM (build 25.402-b06, mixed mode)

System performance tuning

Adjust the following parameters according to your system's resources.

# Raise the maximum locked memory and the number of open file descriptors
vim /etc/security/limits.conf
Add the following lines:

*  soft    nofile          65536
*  hard    nofile          65536
*  soft    nproc           32000
*  hard    nproc           32000
*  soft    memlock         unlimited
*  hard    memlock         unlimited

vim /etc/systemd/system.conf

DefaultLimitNOFILE=65536
DefaultLimitNPROC=32000
DefaultLimitMEMLOCK=infinity
  • DefaultLimitNOFILE: the maximum number of file descriptors each process may open.
  • DefaultLimitNPROC: the maximum number of processes each user may create.
  • DefaultLimitMEMLOCK: the maximum amount of memory each process may lock.
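To verify the limits after a re-login or reboot, you can run `ulimit -n` or query them programmatically; a minimal sketch using Python's standard `resource` module (Linux/Unix only; the variable names are mine):

```python
import resource

# (soft, hard) pair for the open-file-descriptor limit (nofile)
nofile = resource.getrlimit(resource.RLIMIT_NOFILE)
# (soft, hard) pair for the per-user process limit (nproc)
nproc = resource.getrlimit(resource.RLIMIT_NPROC)

print("nofile soft/hard:", nofile)
print("nproc  soft/hard:", nproc)
```

If the soft values still show the old defaults (e.g. 1024 for nofile), the limits.conf change has not been picked up by the current session.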

vim /etc/sysctl.conf
# Maximum number of memory-map areas a single process may have. Reference values: 262144 for a 2 GB heap, 4194304 for 4 GB, 8388608 for 8 GB.
# As a rule of thumb, give Elasticsearch half of the machine's RAM as heap; above 64 GB of RAM, keep the heap within 32 GB and leave the rest to the OS page cache for Lucene.
# 262144 is the minimum: Elasticsearch hard-requires at least this value.

vm.max_map_count=262144
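The heap-sizing rule of thumb in the comment above (half of RAM, capped at 32 GB so the remainder serves Lucene through the OS page cache) can be written out as a toy calculation; the function is my own illustration, not an Elasticsearch tool:

```python
def recommended_heap_gb(total_ram_gb: int) -> int:
    """Half of RAM for the Elasticsearch heap, capped at 32 GB;
    whatever is left stays with the OS page cache for Lucene."""
    return min(total_ram_gb // 2, 32)

print(recommended_heap_gb(8))    # 4 GB heap on an 8 GB box
print(recommended_heap_gb(128))  # capped at 32 GB
```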

Reboot the system for the settings to take effect.

Introduction to the ELK log-collection stack

Elasticsearch + Logstash + Kibana

1. Log collection: Logstash
A real-time data pipeline that moves data from inputs to outputs and can apply filters along the way to select and transform it. In logging systems it commonly serves as the log collector.

2. Log storage: Elasticsearch
An open-source full-text search and analytics engine that stores, searches, and analyzes data quickly; it can be thought of as a non-relational database. In logging systems it typically serves as the log store and search backend.

3. Log display: Kibana
An open-source analytics and visualization platform, typically used as the front end for Elasticsearch. With Kibana you can search and view the data stored in Elasticsearch and present it in different forms (charts, tables, maps, and so on).

Installing and configuring Elasticsearch

Official Elasticsearch downloads: https://www.elastic.co/cn/downloads/elasticsearch

JVM/Elasticsearch version compatibility matrix:
https://www.elastic.co/cn/support/matrix#matrix_jvm

JDK 1.8 is supported up to Elasticsearch 7.17.x, so any 7.x release works here.

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.15.0-x86_64.rpm

rpm -ivh elasticsearch-7.15.0-x86_64.rpm

Note: make sure port 9100 is not already in use; watch out for this if node-exporter is installed on the host.

sudo systemctl daemon-reload
sudo systemctl enable  --now elasticsearch.service
[root@k8s-worker01 ~]# systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2024-05-19 16:50:36 CST; 21s ago
     Docs: https://www.elastic.co
 Main PID: 981 (java)
   CGroup: /system.slice/elasticsearch.service
           ├─ 981 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negativ...
           └─2053 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

5月 19 16:49:57 k8s-worker01 systemd[1]: Starting Elasticsearch...
5月 19 16:50:36 k8s-worker01 systemd[1]: Started Elasticsearch.

Back up the configuration file

cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak

Elasticsearch cluster deployment

[root@k8s-worker01 ~]# cat  /etc/elasticsearch/elasticsearch.yml | grep -Ev "^(#|$)"
cluster.name: my-elastic     # custom cluster name
node.name: node-1            # custom node name
node.master: true            # master-eligible by default; set to false on non-master nodes
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: false   # do not lock process memory
network.host: 192.168.126.22   # default is 127.0.0.1 (single node); must not be 0.0.0.0
http.port: 9200                # HTTP port to listen on
transport.tcp.port: 9300       # TCP port for inter-node communication
# seed hosts used for cluster discovery
discovery.seed_hosts: ["192.168.126.22", "192.168.126.23"]
# initial master-eligible nodes used to bootstrap the cluster
cluster.initial_master_nodes: ["node-1", "node-2"]

[root@k8s-worker01 ~]# scp /etc/elasticsearch/elasticsearch.yml root@192.168.126.23:/etc/elasticsearch/

Change the following on the second node:

network.host: 192.168.126.23
node.name: node-2

systemctl restart elasticsearch
Restart for the changes to take effect.

Check the node info in a browser:
http://192.168.126.22:9200/
http://192.168.126.23:9200/
View the cluster status:
http://192.168.126.22:9200/_cluster/health?pretty


Inspecting Elasticsearch this way is tedious.

Installing the Elasticsearch-head plugin

Install the Elasticsearch-head browser extension.


# Insert a test document: index index-demo, type test

[root@k8s-worker01 software]# curl -X PUT '192.168.126.22:9200/index-demo/test/1?pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'
{
  "_index" : "index-demo",
  "_type" : "test",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 0,
  "_primary_term" : 1
}

Note that an index is analogous to a MySQL database and a type to a table; the remaining fields make up a row of data.

Two user_name fields under two different types in the same index are actually treated as the same field by ES, so you must define identical field mappings in both types. Otherwise, the same field name across types causes conflicts during processing and degrades Lucene's efficiency. That is why later versions removed the concept of types.

In 5.X, an index may contain multiple types;
In 6.X, an index may contain only one type;
In 7.X, the concept of type was removed entirely; an index no longer has types. If you must write one, use _doc.
In 8.X, the type parameter in URLs is no longer supported.

Troubleshooting: nodes cannot join the cluster, each node stays single

Shut down all nodes.
Completely wipe each node by deleting the contents of its data directory:
rm -fr /var/lib/elasticsearch/*
Set cluster.initial_master_nodes as configured above.
Restart all nodes and verify that they form a single cluster.

curl http://192.168.126.22:9200/_cat/nodes
192.168.126.22 22 79  1 0.28 0.30 0.27 cdfhilmrstw - node-1
192.168.126.23 15 74 84 0.56 0.23 0.22 cdfhilmrstw * node-2

The culprit was network.host: 0.0.0.0
Many tutorials tell you to listen on all interfaces, but with that setting the nodes would not form a cluster (root cause unknown). For security reasons you should not configure it that way anyway.

Installing Elasticsearch-head as a service

Check the Node.js environment

[root@k8s-worker01 software]# node -v
v10.16.3
[root@k8s-worker01 software]# npm -v
6.9.0

Install Elasticsearch-head

tar -xJf node-v10.16.3-linux-x64.tar.xz -C /usr/local

vim /etc/profile

export NODE_HOME=/usr/local/node-v10.16.3-linux-x64
export PATH=$NODE_HOME/bin:$PATH
source /etc/profile

git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head/

Install it:
npm install -g

Start it:
npm run start


The cluster health still shows as unreachable at this point; add the following settings:
vim /etc/elasticsearch/elasticsearch.yml

http.cors.enabled: true
http.cors.allow-origin: "*"

Restart to apply:
systemctl restart elasticsearch.service
The web UI feels less polished than the browser-extension version.

Installing Apache and Logstash

Java environment
[root@k8s-master01 role]# java -version
openjdk version "1.8.0_402"
OpenJDK Runtime Environment (build 1.8.0_402-b06)
OpenJDK 64-Bit Server VM (build 25.402-b06, mixed mode)

Install Apache:
yum -y install httpd
systemctl start httpd
systemctl status httpd
![20240520115424](http://img.shadowwu.club/20240520115424.png)

Official Logstash page: https://www.elastic.co/cn/logstash

Download the package:
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.15.0-x86_64.rpm
It is more reliable to download it first and then upload it to the server. Install:
rpm -ivh logstash-7.15.0-x86_64.rpm

Check the status:
systemctl status logstash.service
![20240520120823](http://img.shadowwu.club/20240520120823.png)
# Use Logstash to write events into Elasticsearch
Logstash is not on the PATH, so call it by its full path:

/usr/share/logstash/bin/logstash  -e 'input { stdin{} } output { elasticsearch { hosts=>["192.168.126.22:9200"] } }'
Type some input here; it takes a moment before the events are shipped over.


A quick look at the Logstash configuration

A Logstash configuration has three main sections: input, filter, and output.

input {
  # Input configuration; many input plugins are available:
  1. file {}    # read from files
  2. stdin {}   # standard input
  3. beats {    # events sent by Filebeat or other Beats
        port => 5044
    }
  4. tcp {      # listen for network data
    port => 41414
  }
  5. redis {
    data_type => "list"          # how the logstash redis plugin reads data
    key => "logstash-test-list"  # key to watch
    host => "127.0.0.1"          # redis address
    port => 6379                 # redis port
  }
  6. kafka {}
  7. syslog {}
}

filter {
  # Filter configuration
  # grok: regex-based field extraction.
  # Logstash ships with about 120 common predefined patterns that you can reference by name.
  # Syntax: %{SYNTAX:SEMANTIC}
  #   SYNTAX: the name of an installed pattern
  #   SEMANTIC: the name given to the content matched from the event
}

output {
  # Output configuration
  1. elasticsearch {}  # send results to Elasticsearch
  2. redis {}          # write into a Redis buffer
  3. file {}           # write to a file
  4. kafka {}
  5. stdout {}         # standard output
}

Collecting Apache logs

Edit the main configuration:
vim /etc/logstash/logstash.yml

#path.config:
path.config: /etc/logstash/conf.d/*.conf

Edit the service unit and change the user and group to root, otherwise Logstash lacks the permissions to read the logs:
vim /etc/systemd/system/logstash.service
Reload the systemd units:
systemctl daemon-reload
Restart Logstash:
systemctl restart logstash.service

grok's predefined patterns, which match common strings, live here:
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.3.1/patterns/legacy/httpd


List the plugins Logstash supports

[root@k8s-master01 software]# /usr/share/logstash/bin/logstash-plugin list
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
logstash-codec-avro
logstash-codec-cef
logstash-codec-collectd
logstash-codec-dots
logstash-codec-edn
...

The Apache collection pipeline, placed under /etc/logstash/conf.d/:

input {
    file {
        type => "apache_access"  
        path => ["/var/log/httpd/access_log"]
        start_position => beginning
        ignore_older => 0
    }
    file {
        type => "apache_error"  
        path => ["/var/log/httpd/error_log"]
        start_position => beginning
        ignore_older => 0
    }
}

filter {
    if [type] == "apache_access"{
        grok {
            match => { "message" => "%{HTTPD_COMMONLOG}"}
        }
        date {
            match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
        }

        useragent {
            source => "agent"
            target => "useragent"
        }
    } else if [type] == "apache_error"{
        grok {
            match => { "message" => "\[(?<mytimestamp>%{DAY:day} %{MONTH:month} %{MONTHDAY} %{TIME} %{YEAR})\] \[%{WORD:module}:%{LOGLEVEL:loglevel}\] \[pid %{NUMBER:pid}:tid %{NUMBER:tid}\]( \(%{POSINT:proxy_errorcode}\)%{DATA:proxy_errormessage}:)?( \[client %{IPORHOST:client}:%{POSINT:clientport}\])? %{DATA:errorcode}: %{GREEDYDATA:message}" }
        }
        date {
            match => [ "mytimestamp" , "EEE MMM dd HH:mm:ss.SSSSSS yyyy" ]
        }
    }
}

output {
    if [type] == "apache_access"{
        elasticsearch {
            hosts => [ "192.168.126.22:9200" ]
            index => "apache-access-log-%{+YYYY.MM}"
        }
    } else if [type] == "apache_error"{
        elasticsearch {
            hosts => [ "192.168.126.22:9200" ]
            index => "apache-error-log"
        }
    }
}

Source log: access_log

192.168.126.1 - - [20/May/2024:13:40:02 +0800] "GET /noindex/css/fonts/Bold/OpenSans-Bold.ttf HTTP/1.1" 404 238 "http://192.168.126.21/noindex/css/open-sans.css" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36 Edg/121.0.0.0"
192.168.126.1 - - [20/May/2024:13:40:03 +0800] "GET /favicon.ico HTTP/1.1" 404 209 "http://192.168.126.21/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36 Edg/121.0.0.0"
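To see roughly what %{HTTPD_COMMONLOG} extracts from lines like these, here is a simplified Python stand-in using named capture groups (an illustrative approximation of the grok pattern, not the real thing):

```python
import re

# Simplified equivalent of grok's HTTPD_COMMONLOG: each named group becomes an event field
COMMONLOG = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+)(?: HTTP/(?P<httpversion>\S+))?" '
    r'(?P<response>\d{3}) (?P<bytes>\d+|-)'
)

line = ('192.168.126.1 - - [20/May/2024:13:40:03 +0800] '
        '"GET /favicon.ico HTTP/1.1" 404 209 '
        '"http://192.168.126.21/" "Mozilla/5.0"')

fields = COMMONLOG.match(line).groupdict()
print(fields["clientip"], fields["verb"], fields["response"])
```

In the real pipeline these captured fields are what the date and useragent filters then operate on.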

error_log

[Mon May 20 11:53:59.656977 2024] [suexec:notice] [pid 45440] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 192.168.126.21. Set the 'ServerName' directive globally to suppress this message
[Mon May 20 11:53:59.667019 2024] [lbmethod_heartbeat:notice] [pid 45440] AH02282: No slotmem from mod_heartmonitor
[Mon May 20 11:53:59.668522 2024] [mpm_prefork:notice] [pid 45440] AH00163: Apache/2.4.6 (CentOS) configured -- resuming normal operations
[Mon May 20 11:53:59.668537 2024] [core:notice] [pid 45440] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
[Mon May 20 13:40:02.477416 2024] [autoindex:error] [pid 45443] [client 192.168.126.1:12199] AH01276: Cannot serve directory /var/www/html/: No matching DirectoryIndex (index.html) found, and server-generated directory index forbidden by Options directive
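The date filter above parses the captured mytimestamp with the Joda-style pattern EEE MMM dd HH:mm:ss.SSSSSS yyyy; the equivalent parse for the first line's timestamp, sketched in Python:

```python
from datetime import datetime

# Joda "EEE MMM dd HH:mm:ss.SSSSSS yyyy" maps roughly to this strptime format
stamp = "Mon May 20 11:53:59.656977 2024"
parsed = datetime.strptime(stamp, "%a %b %d %H:%M:%S.%f %Y")
print(parsed.isoformat())  # 2024-05-20T11:53:59.656977
```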


Installing and configuring Kibana

Official Kibana downloads: https://www.elastic.co/cn/downloads/kibana

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.15.0-x86_64.rpm
rpm -ivh kibana-7.15.0-x86_64.rpm

Edit the configuration file:
vim /etc/kibana/kibana.yml

# line 2: uncomment; Kibana listens on port 5601 by default
server.port: 5601

# line 7: uncomment; the listen address, 0.0.0.0 means all addresses
server.host: "0.0.0.0"

# line 32: uncomment; the Elasticsearch server IPs (for a cluster, list the master-eligible nodes)
elasticsearch.hosts: ["http://192.168.126.22:9200","http://192.168.126.23:9200"] 

# line 37: uncomment; the .kibana index Kibana creates in Elasticsearch
kibana.index: ".kibana"

# line 96: uncomment; Kibana's log file path (create the file manually), otherwise logs end up in /var/log/messages
logging.dest: /var/log/kibana.log

Create the log file and fix its ownership:

touch /var/log/kibana.log
chown kibana:kibana /var/log/kibana.log
systemctl enable --now kibana.service


Switching Kibana to Chinese

Add the locale setting to the configuration file and restart:

i18n.locale: "zh-CN"

Kibana warns that anyone can access the cluster.
Option 1: disable security in the Elasticsearch configuration and restart:

xpack.security.enabled: false

Option 2: configure X-Pack.

Enabling X-Pack in Elasticsearch

This requires certificates for inter-node communication first; then the configuration is updated.

1. Generate the certificate used by the inter-node security policy

cd /usr/share/elasticsearch/bin/
./elasticsearch-certutil ca

The CA file is generated at /usr/share/elasticsearch/elastic-stack-ca.p12 by default.
Generate the node certificate:
cd bin/
./elasticsearch-certutil cert --ca elastic-stack-ca.p12

At the prompts:
1. Enter the password of the elastic-stack-ca.p12 CA certificate.
2. Choose the output location for the certificate.
3. Set the node certificate's password; press Enter for an empty password.
By default, certificates generated by elasticsearch-certutil contain no hostname information. This means the certificate can be used for any node in the cluster, but hostname verification must be disabled.

mv /usr/share/elasticsearch/elastic-* /etc/elasticsearch/
chown -R elasticsearch:elasticsearch /etc/elasticsearch/

Make Elasticsearch aware of the certificate passwords:
./elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
./elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
./elasticsearch-keystore add xpack.security.http.ssl.truststore.secure_password

2. Update the security configuration on each node

Enable security:
vim /etc/elasticsearch/elasticsearch.yml

xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true

systemctl restart elasticsearch.service

http://192.168.126.22:9200/
Now a username and password are required.

Set the passwords for the built-in users. There are two modes: auto generates a random password for each user (be sure to record these somewhere safe), and interactive prompts you to type each password yourself.

Set passwords interactively:
./elasticsearch-setup-passwords interactive

Generate passwords automatically:
./elasticsearch-setup-passwords auto

 ./elasticsearch-setup-passwords interactive

Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
  • elastic: the superuser.
  • kibana: used by the (third-party) Kibana tool to connect to Elasticsearch.
  • logstash_system: used by Logstash when storing monitoring information in Elasticsearch.
  • beats_system: used by Beats when storing monitoring information in Elasticsearch.
  • apm_system: used by the APM server when storing monitoring information in Elasticsearch.
  • remote_monitoring_user: used by Metricbeat when collecting and storing monitoring information in Elasticsearch.

Update the configuration file:

xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12
#http.cors.allow-headers: Authorization    # replace this line with the one below
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type

systemctl restart elasticsearch.service

Copy the certificate to the other node and make the same configuration changes there:
scp /etc/elasticsearch/elastic-certificates.p12 root@192.168.126.23:/etc/elasticsearch/
chmod 777 /etc/elasticsearch/elastic-certificates.p12
[root@k8s-worker02 software]# chown -R elasticsearch:elasticsearch /etc/elasticsearch

Make Elasticsearch aware of the certificate passwords; this step is critical and held me up for a long time:

./elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
./elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
./elasticsearch-keystore add xpack.security.http.ssl.truststore.secure_password

[root@k8s-worker02 software]# systemctl restart elasticsearch.service

Connecting Kibana to SSL-enabled Elasticsearch

Export a CA public-key file from the certificate, for the other applications' configuration files to reference later:
[root@k8s-worker01 elasticsearch]# openssl pkcs12 -in /etc/elasticsearch/elastic-stack-ca.p12 -out newfile.crt.pem -clcerts -nokeys
Enter Import Password:
MAC verified OK

[root@k8s-worker01 elasticsearch]# scp newfile.crt.pem root@192.168.126.23:/etc/kibana

elasticsearch.hosts: ["https://192.168.126.22:9200","https://192.168.126.23:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "123456"
elasticsearch.ssl.verificationMode: none
 # the PEM path here must be absolute, otherwise Kibana will not start
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/newfile.crt.pem"]

systemctl restart kibana.service

Connecting Kibana to Elasticsearch without SSL

elasticsearch.username: "kibana"
elasticsearch.password: "123456"

Note: the HTTP SSL settings must be commented out, otherwise Kibana stays stuck on the "not ready" page:

#xpack.security.http.ssl.enabled: true
#xpack.security.http.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
#xpack.security.http.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12

You cannot log in as the kibana user (insufficient privileges); use the elastic user instead.

You can create users and assign them roles.
The configuration below drops SSL; a fully TLS-enabled setup may come in a later post.

Installing and configuring Filebeat

Official download page: https://www.elastic.co/cn/downloads/beats/filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.15.0-x86_64.rpm
rpm -ivh filebeat-7.15.0-x86_64.rpm

systemctl restart filebeat.service

[root@k8s-worker01 software]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
#set to true to enable this input
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - /var/log/messages
    #- c:\programdata\elasticsearch\logs\*
  tags: ["sys"]         #tag attached to each event, useful for indexing
  fields:               #fields adds custom key/value pairs to the output
    service_name: filebeat
    log_type: syslog
    from: 192.168.126.22
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  hosts: ["192.168.126.21:5044"]

Configure the Logstash host:
mkdir /etc/logstash/pipeconf.d

cat > /etc/logstash/pipeconf.d/filebeat.conf <<EOF
input {
    beats {
        port => "5044"
    }
}

#Filebeat puts each log line into the message field; Logstash's grok filter splits the message into fields with a regular expression
#Kibana ships a grok debugging tool: http://<your kibana IP>:5601/app/kibana#/dev_tools/grokdebugger
# %{IPV6}|%{IPV4} are Logstash's built-in IP patterns
filter {
  grok {
    match => ["message", "(?<remote_addr>%{IPV6}|%{IPV4})[\s\-]+\[(?<logTime>.*)\]\s+\"(?<method>\S+)\s+(?<url_path>.+)\"\s+(?<rev_code>\d+) \d+ \"(?<req_addr>.+)\" \"(?<content>.*)\""]
  }
}

output {
    elasticsearch {
        hosts => ["192.168.126.22:9200","192.168.126.23:9200"]
        index => "%{[fields][service_name]}-%{+YYYY.MM.dd}"
    }
    stdout {
        codec => rubydebug
    }
}
EOF
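The index option above uses Logstash's sprintf references: %{[fields][service_name]} expands to the event field and %{+YYYY.MM.dd} to the event's date. A rough Python illustration of how the final index name resolves (expand_index is my own helper, not a Logstash API):

```python
from datetime import datetime, timezone

def expand_index(event: dict) -> str:
    """Mimic "%{[fields][service_name]}-%{+YYYY.MM.dd}" for a demo event."""
    service = event["fields"]["service_name"]
    # Logstash formats the date reference against the event's UTC @timestamp
    day = datetime.now(timezone.utc).strftime("%Y.%m.%d")
    return f"{service}-{day}"

event = {"fields": {"service_name": "filebeat"}}
index = expand_index(event)
print(index)  # e.g. "filebeat-2024.05.20", depending on the current date
```

Daily indices like this make it easy to expire old logs by dropping whole indices.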

Comment out the default pipeline path:
vim /etc/logstash/logstash.yml

# path.config:
#path.config: /etc/logstash/conf.d/*.conf

Edit the pipeline configuration as follows to add the new pipeline:
vim /etc/logstash/pipelines.yml

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*"
  pipeline.workers: 2
- pipeline.id: filebeat
  path.config: "/etc/logstash/pipeconf.d/filebeat.conf"
  queue.type: persisted

To explain: Logstash runs a single pipeline by default, meaning all configs under conf.d belong to one pipeline, one assembly line processing the data.
To process a new data stream independently, you must add a pipeline, which is done by editing the pipelines configuration file.

systemctl restart logstash.service


Problems and solutions

Kibana server is not ready yet

Either Elasticsearch has SSL configured while Kibana does not, or Kibana has simply not finished starting yet; wait a moment.

[root@k8s-worker01 bin]# ./elasticsearch-setup-passwords interactive

Failed to determine the health of the cluster running at http://192.168.126.22:9200
Unexpected response code [503] from calling GET http://192.168.126.22:9200/_cluster/health?pretty
Cause: master_not_discovered_exception
Shut down the other nodes, then enable security:
vim /etc/elasticsearch/elasticsearch.yml

xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true

systemctl restart elasticsearch.service

because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "swarm-ca" module=nod

The cause was missing permissions on elastic-certificates.p12. Put the certificate under /etc/elasticsearch and fix its owner and group to resolve this.
The certificate path in the configuration file must be updated accordingly:
/etc/elasticsearch/elastic-certificates.p12

failed to load SSL

org.elasticsearch.ElasticsearchSecurityException: failed to load SSL configuration [xpack.security.transport.ssl]
at org.elasticsearch.xpack.core.ssl.SSLService.lambda$loadSSLConfigurations$5(SSLService.java:530) ~[?:?]
at java.util.HashMap.forEach(HashMap.java:1425) ~[?:?]

If the certificate copied over from another node has a password, the same keystore setup is required:

Make Elasticsearch aware of the certificate passwords:
./elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
./elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
./elasticsearch-keystore add xpack.security.http.ssl.truststore.secure_password

Elasticsearch-head npm install error

npm install
fails with:

Phantom installation failed { [Error: EACCES: permission denied, link '/tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2-extract-1716126018158/phantomjs-2.1.1-linux-x86_64' -> '/root/software/elasticsearch-head/node_modules/phantomjs-prebuilt/lib/phantom']
  errno: -13,
  code: 'EACCES',
  syscall: 'link',
  path:
   '/tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2-extract-1716126018158/phantomjs-2.1.1-linux-x86_64',
  dest:
   '/root/software/elasticsearch-head/node_modules/phantomjs-prebuilt/lib/phantom' } Error: EACCES: permission denied, link '/tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2-extract-1716126018158/phantomjs-2.1.1-linux-x86_64' -> '/root/software/elasticsearch-head/node_modules/phantomjs-prebuilt/lib/phantom'
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.0.0 (node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.13: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! phantomjs-prebuilt@2.1.16 install: `node install.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the phantomjs-prebuilt@2.1.16 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2024-05-19T13_40_21_311Z-debug.log

I was running as the root user; switching to the following command fixed it:
npm install -g

There are still some errors at startup, but they do not affect usage, so I left them alone.

Logstash cannot read the configs under /etc/logstash/conf.d/

Insufficient permissions; edit the service file so it runs as root:

[Service]
Type=simple
User=root
Group=root

Warning: 299 Elasticsearch-7.15.0-79d65f6e357953a5b3cbcc5e2c7c21073d89aa29 "Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone"


Disable security in the Elasticsearch configuration file and restart:

xpack.security.enabled: false

