Logstash is an open-source, server-side data processing pipeline that collects data from a variety of sources, transforms it on the fly, and sends it to your desired destination. It is most often used as a data pipeline for Elasticsearch, an open-source analytics and search engine.
Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to Elasticsearch or Logstash for indexing.
Stages:
1. Install and configure Logstash
2. Install and configure Filebeat (on the client servers)
Stage 1:
Install Logstash
$ sudo wget https://artifacts.elastic.co/downloads/logstash/logstash-7.6.0.rpm
$ sudo rpm -ivh logstash-7.6.0.rpm
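To confirm the package installed correctly:
$ rpm -q logstash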
Configuring Logstash
As part of configuring Logstash, we need to generate a self-signed SSL certificate so that Filebeat clients can verify the identity of the ELK server. The certificate can be generated in two ways: i) based on the server's private IP, or ii) based on its domain name (FQDN).
Option 1: Based on Private IP
Add an SSL certificate based on the IP address of the ELK server. First, add the ELK server's private IP to /etc/pki/tls/openssl.cnf.
$ sudo vi /etc/pki/tls/openssl.cnf
Add the following line, with your ELK server's private IP, just below the [ v3_ca ] section:
subjectAltName = IP: 192.168.20.158
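After the edit, that part of the file should look like this (192.168.20.158 is a placeholder; use your own private IP):
[ v3_ca ]
subjectAltName = IP: 192.168.20.158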
GENERATE A SELF-SIGNED CERTIFICATE VALID FOR 3650 DAYS (10 YEARS)
$ cd /etc/pki/tls
$ sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/ssl-logstash-forwarder.key -out certs/ssl-logstash-forwarder.crt
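To verify that the IP was embedded as a Subject Alternative Name, inspect the generated certificate:
$ sudo openssl x509 -in certs/ssl-logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'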
Option 2: Based on Domain (FQDN)
Replace ELK_server_fqdn in the command below with your ELK server's fully qualified domain name.
$ cd /etc/pki/tls
$ sudo openssl req -subj '/CN=ELK_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/ssl-logstash-forwarder.key -out certs/ssl-logstash-forwarder.crt
LOGSTASH INPUT, FILTER, OUTPUT FILES
Logstash configuration files are written in a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs, usually kept in separate files.
INPUT.CONF
Create the input file and insert the following lines into it. This tells Logstash how to process Beats events coming from clients, and it specifies that Logstash will listen on TCP port 5044, the port the log forwarder connects to when sending logs. Make sure the certificate and key paths match the paths used in the previous step:
$ sudo vi /etc/logstash/conf.d/01-beats-input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/ssl-logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/ssl-logstash-forwarder.key"
  }
}
FILTER.CONF
This filter looks for logs labeled with the "syslog" type (set by Filebeat) and uses grok to parse incoming syslog lines into structured, queryable fields.
$ sudo vi /etc/logstash/conf.d/01-beats-filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => {
        "received_at" => "%{@timestamp}"
        "received_from" => "%{host}"
      }
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
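As an illustration of what this grok pattern does, a syslog line such as
Feb 10 14:32:01 web01 sshd[4321]: Accepted publickey for deploy
would be parsed into syslog_timestamp ("Feb 10 14:32:01"), syslog_hostname ("web01"), syslog_program ("sshd"), syslog_pid ("4321"), and syslog_message ("Accepted publickey for deploy"); the date filter then sets @timestamp from syslog_timestamp.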
OUTPUT.CONF
In output.conf we configure Logstash to store the Beats data in Elasticsearch, which runs at localhost:9200, in an index named after the Beat that sent the data (filebeat, in our case). Note that document_type is deprecated in Logstash 7.x, since Elasticsearch 7 removed mapping types; it is kept below only for compatibility with older examples and can be omitted.
$ sudo vi /etc/logstash/conf.d/01-beats-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
START AND ENABLE LOGSTASH
$ sudo systemctl daemon-reload
$ sudo systemctl start logstash
$ sudo systemctl enable logstash
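Optionally, verify the configuration syntax and confirm that Logstash is listening on port 5044 (the config test may take a minute while the JVM starts):
$ sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/
$ sudo ss -tlnp | grep 5044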
Stage 2:
Install Filebeat (on the Client Servers)
Filebeat must be installed on each client server, i.e., every server whose logs we plan to collect. This can be one or more servers; repeat the steps below on all of them.
COPY THE SSL CERTIFICATE FROM THE ELK SERVER TO THE CLIENT(S)
$ sudo scp /etc/pki/tls/certs/ssl-logstash-forwarder.crt root@<client server IP>:/etc/pki/tls/certs/
NOTE
Once the generated certificate has been copied to the client servers, perform the following steps on each client machine (the ones sending logs to the ELK server).
IMPORT THE ELASTICSEARCH PUBLIC GPG KEY
$ sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
CREATE AND EDIT A NEW YUM REPOSITORY FILE FOR FILEBEAT
$ sudo vi /etc/yum.repos.d/filebeat.repo
Add the following repository configuration to filebeat.repo file
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
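Refresh the yum metadata and confirm that the package is now visible from the new repository:
$ sudo yum makecache
$ yum info filebeat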
INSTALL THE FILEBEAT PACKAGE
$ sudo yum -y install filebeat
CONFIGURE FILEBEAT
Edit the Filebeat configuration file and replace 'elk_server_private_ip' with the private IP of your ELK server, i.e. hosts: ["IP:5044"]. The configuration below uses the Filebeat 7.x syntax (filebeat.inputs and output.logstash); the fields/fields_under_root settings attach type: syslog to each event so that the Logstash filter above matches.
$ sudo vi /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/secure
    - /var/log/messages
    # - /var/log/*.log
  fields:
    type: syslog
  fields_under_root: true

filebeat.registry.path: /var/lib/filebeat/registry

output.logstash:
  hosts: ["elk_server_private_ip:5044"]
  bulk_max_size: 1024
  ssl.certificate_authorities: ["/etc/pki/tls/certs/ssl-logstash-forwarder.crt"]

logging.files:
  rotateeverybytes: 10485760 # = 10MB
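Before starting the service, Filebeat can validate its own configuration and test the connection to Logstash (including the TLS handshake):
$ sudo filebeat test config
$ sudo filebeat test output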
START AND ENABLE FILEBEAT
$ sudo systemctl start filebeat
$ sudo systemctl enable filebeat
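If logs do not show up, check the service status and follow its journal:
$ sudo systemctl status filebeat
$ sudo journalctl -u filebeat -f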
Test Filebeat
On your ELK Server, verify that Elasticsearch is receiving the data by querying for the Filebeat index with this command:
$ curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
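If everything is working, the response contains hits whose _source includes the fields extracted by the grok filter (syslog_hostname, syslog_program, syslog_message, and so on). If no hits come back, re-check the Filebeat output settings and the firewall rules for port 5044. You can also list the Filebeat indices directly:
$ curl -XGET 'http://localhost:9200/_cat/indices/filebeat-*?v'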