Day 224-225: ELK Stack & Log Collection and Analysis & YARA Rules & Sample Identification, Feature Extraction, and Rule Writing

Deploying an ELK Logging Stack (Docker-based)

Setting up the Docker environment

I'm using a CentOS 8 system here, with Docker installed on top.

Installing Docker:

sudo curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo

yum clean all && yum makecache

yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io -y
systemctl start docker
systemctl enable docker

Configure a registry mirror:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": [
    "https://docker.xuanyuan.me"
  ]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
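A syntax slip in daemon.json will keep Docker from starting after the restart, so it's worth validating the JSON first. A small sketch, shown on a local copy of the file (on the real host, point the check at /etc/docker/daemon.json):

```shell
# Write the mirror config to a local copy and validate it with Python's
# built-in json.tool before restarting Docker.
cat > daemon.json <<'EOF'
{
  "registry-mirrors": [
    "https://docker.xuanyuan.me"
  ]
}
EOF
python3 -m json.tool daemon.json >/dev/null && echo "daemon.json is valid JSON"
```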

Verify:

docker version		# if a version prints, Docker is working


Deploying ELK

1. Create a Docker network

docker network create -d bridge elastic

2. Pull Elasticsearch 8.4.3

docker pull elasticsearch:8.4.3

3. Run the container for the first time

docker run -it \
-p 9200:9200 \
-p 9300:9300 \
--name elasticsearch \
--net elastic \
-e ES_JAVA_OPTS="-Xms1g -Xmx1g" \
-e "discovery.type=single-node" \
-e LANG=C.UTF-8 \
-e LC_ALL=C.UTF-8 \
elasticsearch:8.4.3

Note: do not add the -d flag on this first run, otherwise you won't see the random password and random enrollment token the service generates at first startup.

✅ Elasticsearch security features have been automatically configured!
✅ Authentication is enabled and cluster connections are encrypted.

ℹ️ Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
_wvBo+EkYCSTIRWRzIvP

ℹ️ HTTP CA certificate SHA-256 fingerprint:
651ddf191a2167f3ccc92d3bff05a50ef99f8a62a4a9c06e40ce1f7a7860925b

ℹ️ Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
eyJ2ZXIiOiI4LjQuMyIsImFkciI6WyIxNzIuMTguMC4yOjkyMDAiXSwiZmdyIjoiNjUxZGRmMTkxYTIxNjdmM2NjYzkyZDNiZmYwNWE1MGVmOTlmOGE2MmE0YTljMDZlNDBjZTFmN2E3ODYwOTI1YiIsImtleSI6ImVsRG41NWNCaEltaFRjaTh6NVhqOndHVXo2TEZqUUhXTHpEWmdDQlFSWHcifQ==

ℹ️ Configure other nodes to join this cluster:
• Copy the following enrollment token and start new Elasticsearch nodes with `bin/elasticsearch --enrollment-token <token>` (valid for the next 30 minutes):
eyJ2ZXIiOiI4LjQuMyIsImFkciI6WyIxNzIuMTguMC4yOjkyMDAiXSwiZmdyIjoiNjUxZGRmMTkxYTIxNjdmM2NjYzkyZDNiZmYwNWE1MGVmOTlmOGE2MmE0YTljMDZlNDBjZTFmN2E3ODYwOTI1YiIsImtleSI6ImZGRG41NWNCaEltaFRjaTh6NVhsOkJJNW8zdWIyUngtVGx1d0RnS0hGSHcifQ==

If you're running in Docker, copy the enrollment token and run:
`docker run -e "ENROLLMENT_TOKEN=<token>" docker.elastic.co/elasticsearch/elasticsearch:8.4.3`


4. Create the host directories and copy the config files out of the container

mkdir -p /data/apps/elk8.4.3/elasticsearch
docker cp elasticsearch:/usr/share/elasticsearch/config /data/apps/elk8.4.3/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/data /data/apps/elk8.4.3/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/plugins /data/apps/elk8.4.3/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/logs /data/apps/elk8.4.3/elasticsearch/

5. Remove the container

docker rm -f elasticsearch

6. Edit /data/apps/elk8.4.3/elasticsearch/config/elasticsearch.yml

# add: xpack.monitoring.collection.enabled: true


7. Start Elasticsearch

docker run -it \
-d \
-p 9200:9200 \
-p 9300:9300 \
--name elasticsearch \
--net elastic \
-e ES_JAVA_OPTS="-Xms1g -Xmx1g" \
-e "discovery.type=single-node" \
-e LANG=C.UTF-8 \
-e LC_ALL=C.UTF-8 \
-v /data/apps/elk8.4.3/elasticsearch/config:/usr/share/elasticsearch/config \
-v /data/apps/elk8.4.3/elasticsearch/data:/usr/share/elasticsearch/data \
-v /data/apps/elk8.4.3/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
-v /data/apps/elk8.4.3/elasticsearch/logs:/usr/share/elasticsearch/logs \
elasticsearch:8.4.3

8. Once it's running, visit https://IP:9200 to verify the deployment succeeded

Username: elastic
The password can be found in the output saved from the first startup.
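A quick check from the command line works too. A sketch, substituting your own password (the one below is this walkthrough's example value); -k skips verification of the self-signed certificate:

```shell
# Query the cluster root endpoint; prints the cluster info JSON when ES is up.
curl -sk -u 'elastic:_wvBo+EkYCSTIRWRzIvP' --connect-timeout 5 \
  https://localhost:9200 || echo "Elasticsearch is not reachable yet"
```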


9. Install Kibana

docker pull kibana:8.4.3


10. Start Kibana

Again, do not pass -d, or you won't see the initialization link.

docker run -it \
--restart=always \
--log-driver json-file \
--log-opt max-size=100m \
--log-opt max-file=2 \
--name kibana \
-p 5601:5601 \
--net elastic \
kibana:8.4.3

11. Initialize the Kibana enrollment credentials

Open http://IP:5601/ (the URL returned in the previous step's logs) and you'll see the following screen.


Paste the enrollment information Elasticsearch generated earlier into the text area. Note that the token is only valid for 30 minutes; if it has expired, the only option is to generate a new one inside the container: docker exec -it elasticsearch /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana --url "https://127.0.0.1:9200"

12. Create the Kibana directories and copy out the config

mkdir -p /data/apps/elk8.4.3/kibana
docker cp kibana:/usr/share/kibana/config /data/apps/elk8.4.3/kibana/
docker cp kibana:/usr/share/kibana/data /data/apps/elk8.4.3/kibana/
docker cp kibana:/usr/share/kibana/plugins /data/apps/elk8.4.3/kibana/
docker cp kibana:/usr/share/kibana/logs /data/apps/elk8.4.3/kibana/
sudo chown -R 1000:1000 /data/apps/elk8.4.3/kibana

13. Edit /data/apps/elk8.4.3/kibana/config/kibana.yml

# add: i18n.locale: "zh-CN"   (optional; switches the Kibana UI to Chinese)


14. Remove the container and restart it with the config mounted

docker rm -f kibana


docker run -it \
-d \
--restart=always \
--log-driver json-file \
--log-opt max-size=100m \
--log-opt max-file=2 \
--name kibana \
-p 5601:5601 \
--net elastic \
-v /data/apps/elk8.4.3/kibana/config:/usr/share/kibana/config \
-v /data/apps/elk8.4.3/kibana/data:/usr/share/kibana/data \
-v /data/apps/elk8.4.3/kibana/plugins:/usr/share/kibana/plugins \
-v /data/apps/elk8.4.3/kibana/logs:/usr/share/kibana/logs \
kibana:8.4.3

15. Pull the Logstash image

docker pull logstash:8.4.3

16. Run the container for the first time

docker run -it \
-d \
--name logstash \
-p 9600:9600 \
-p 5044:5044 \
--net elastic \
logstash:8.4.3

17. Create the directories and copy out the config files

mkdir -p /data/apps/elk8.4.3/logstash
docker cp logstash:/usr/share/logstash/config /data/apps/elk8.4.3/logstash/
docker cp logstash:/usr/share/logstash/pipeline /data/apps/elk8.4.3/logstash/
sudo cp -rf /data/apps/elk8.4.3/elasticsearch/config/certs /data/apps/elk8.4.3/logstash/config/certs
sudo chown -R 1000:1000 /data/apps/elk8.4.3/logstash

18. Edit /data/apps/elk8.4.3/logstash/config/logstash.yml (since Logstash joins the same elastic network, the container name elasticsearch can also be used in place of the hard-coded IP 172.18.0.2)

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "https://172.18.0.2:9200" ]
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "_wvBo+EkYCSTIRWRzIvP"
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/usr/share/logstash/config/certs/http_ca.crt"
xpack.monitoring.elasticsearch.ssl.ca_trusted_fingerprint: "651ddf191a2167f3ccc92d3bff05a50ef99f8a62a4a9c06e40ce1f7a7860925b"


19. Edit /data/apps/elk8.4.3/logstash/pipeline/logstash.conf

input {
  beats {
    port => 5044
  }
}

filter {
  json {
    source => "message"
  }
  date {
    # My logs carry a "time" field formatted like 2024-03-14T15:34:03+08:00,
    # hence the two lines below. The json filter runs first so that "time"
    # already exists when the date filter reads it.
    match => [ "time", "ISO8601" ]
    target => "@timestamp"
  }
  mutate {
    remove_field => ["message", "path", "version", "@version", "agent", "cloud", "host", "input", "log", "tags", "_index", "_source", "ecs", "event"]
  }
}

output {
  elasticsearch {
    hosts => ["https://172.18.0.2:9200"]
    index => "douyin-%{+YYYY.MM.dd}"
    ssl => true
    ssl_certificate_verification => false
    cacert => "/usr/share/logstash/config/certs/http_ca.crt"
    ca_trusted_fingerprint => "651ddf191a2167f3ccc92d3bff05a50ef99f8a62a4a9c06e40ce1f7a7860925b"
    user => "elastic"
    password => "_wvBo+EkYCSTIRWRzIvP"
  }
}
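For reference, here's what a line in the format this pipeline expects looks like: JSON carrying an ISO8601 "time" field (the field names other than "time" are made up for illustration):

```shell
# Append a sample JSON log line; in production this file would live under
# /data/apps/douyin/logs/, the host directory mounted into Filebeat later on.
LOG=./test.log
echo '{"time":"2024-03-14T15:34:03+08:00","level":"info","msg":"hello elk"}' >> "$LOG"
cat "$LOG"
```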

20. Remove the container and restart it

docker rm -f logstash


docker run -it \
-d \
--name logstash \
-p 9600:9600 \
-p 5044:5044 \
--net elastic \
-v /data/apps/elk8.4.3/logstash/config:/usr/share/logstash/config \
-v /data/apps/elk8.4.3/logstash/pipeline:/usr/share/logstash/pipeline \
logstash:8.4.3

21. Pull the Filebeat image

sudo docker pull elastic/filebeat:8.4.3

22. Run the container for the first time

docker run -it \
-d \
--name filebeat \
--network host \
-e TZ=Asia/Shanghai \
elastic/filebeat:8.4.3 \
filebeat -e -c /usr/share/filebeat/filebeat.yml

23. Create the directories and copy out the config

mkdir -p /data/apps/elk8.4.3/filebeat
docker cp filebeat:/usr/share/filebeat/filebeat.yml /data/apps/elk8.4.3/filebeat/
docker cp filebeat:/usr/share/filebeat/data /data/apps/elk8.4.3/filebeat/
docker cp filebeat:/usr/share/filebeat/logs /data/apps/elk8.4.3/filebeat/
sudo chown -R 1000:1000 /data/apps/elk8.4.3/filebeat

24. Edit /data/apps/elk8.4.3/filebeat/filebeat.yml

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~

output.logstash:
  enabled: true
  # Filebeat runs with --network host, so localhost reaches the Logstash service
  hosts: ["localhost:5044"]

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/share/filebeat/target/test.log # the log path to collect (a path inside the container)
  scan_frequency: 10s
  # YAML keys must be unique, so both exclusion patterns share one list
  exclude_lines: ['HEAD', 'HTTP/1.1']
  multiline.pattern: '^[[:space:]]+(at|\.{3})\b|Exception|捕获异常'
  multiline.negate: false
  multiline.match: after
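The multiline.pattern above is meant to glue Java stack-trace continuation lines onto the preceding event. Its behavior can be eyeballed with grep (a sketch assuming GNU grep, since \b is a GNU extension in EREs; the stack trace is fabricated):

```shell
# Feed a fake two-line stack trace through the same regex; both lines match
# (the first via "Exception", the second via the leading-whitespace "at ...").
printf 'Exception in thread "main" java.lang.NullPointerException\n    at com.example.Main.run(Main.java:10)\n' \
  | grep -E '^[[:space:]]+(at|\.{3})\b|Exception'
```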

25. Remove the container and restart it

docker rm -f filebeat


docker run -it \
-d \
--name filebeat \
--network host \
-e TZ=Asia/Shanghai \
-v /data/apps/douyin/logs:/usr/share/filebeat/target \
-v /data/apps/elk8.4.3/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml \
-v /data/apps/elk8.4.3/filebeat/data:/usr/share/filebeat/data \
-v /data/apps/elk8.4.3/filebeat/logs:/usr/share/filebeat/logs \
elastic/filebeat:8.4.3 \
filebeat -e -c /usr/share/filebeat/filebeat.yml


Once everything is done, visit IP:5601 to access the stack.


Using ELK


Here I analyze logs by uploading them locally; you can also use the built-in integrations to collect logs automatically.


Case 2: Using YARA Rules - Detection with Rules, Feature Analysis, and Writing Your Own

https://github.com/VirusTotal/yara
Rule collections: https://github.com/Yara-Rules/rules

Skipped here.
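Although the walkthrough itself is skipped above, the shape of a rule is worth showing. A minimal sketch - the rule name and strings are hypothetical, not taken from any official ruleset:

```shell
# Write a toy rule that flags files containing mimikatz marker strings,
# then scan the current directory with it (skipped if yara isn't installed).
cat > demo.yar <<'EOF'
rule Demo_Mimikatz_Strings
{
    meta:
        description = "Toy example: match common mimikatz marker strings"
    strings:
        $s1 = "sekurlsa::logonpasswords" ascii wide
        $s2 = "mimikatz" nocase
    condition:
        any of them
}
EOF
if command -v yara >/dev/null; then yara -r demo.yar .; fi
```

Run against a sample directory, yara prints one line per match: the rule name followed by the file path.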

Automated Log Collection, Viewing, and Analysis - Web Security & Intranet Attack/Defense Tool Projects

Guanxing (观星)


Results are saved under the output directory:


Logs are saved in the following directory:


360 Xingtu (星图)


Usage:
Step 1: open the config file /conf/config.ini and fill in the log path (the log_file option); the other options are optional.
Step 2: run start.bat;
Step 3: when it finishes, the analysis results are in the /result/ folder under the program's root directory.

Note that the log path must not have a leading space and must not be wrapped in quotes.
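In other words, the entry should look like this (the path itself is a hypothetical example):

```ini
; conf/config.ini
log_file=D:\logs\access.log      ; correct: no leading space, no quotes
; log_file="D:\logs\access.log"  ; wrong: the path is quoted
; log_file= D:\logs\access.log   ; wrong: a space precedes the path
```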


Automated system log viewing - LastActivityView (Windows activity analysis)

https://www.nirsoft.net/utils/computer_activity_view.html


Automated system log analysis - Windows logon logs

https://github.com/spaceman-911/WindowsLocalLogAnalysis

Local-search version:


Log-extraction version:


Windows_Log_check

https://github.com/Fheidt12/Windows_Log


Automated system log analysis - identifying threat information in Windows event logs

Project: https://github.com/countercept/chainsaw
Hunt through all the evtx files with Sigma rules to apply the detection logic, writing the output as CSV to a results folder:

chainsaw_x86_64-pc-windows-msvc.exe hunt output/ -s sigma/ --mapping mappings/sigma-event-logs-all.yml -r rules/ --csv --output results


Chainsaw needs the evtx log files present in that directory - so how do you export them from a machine?
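One option is Windows' built-in wevtutil, run from an elevated prompt on the source machine (the output path below is a hypothetical example):

```bat
:: Export the Security event log to an .evtx file that chainsaw can hunt over
wevtutil epl Security C:\logs\Security.evtx
```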


After the analysis, the output is saved to the results directory.


Logon attacks are automatically filtered out for you:


Linux automation projects

https://github.com/grayddq/GScan
https://github.com/enomothem/Whoamifuck
https://github.com/Ashro-one/Ashro_linux