
Wednesday, July 20, 2022

Logstash Installation and configuration in CentOS

Logstash is a lightweight, open-source, server-side data processing pipeline that allows you to collect data from a variety of sources, transform it on the fly, and send it to your desired destination. It is most often used as a data pipeline for Elasticsearch, an open-source analytics and search engine.

Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to Elasticsearch or Logstash for indexing.

Stages:

 1. Install and configure Logstash

 2. Install and configure Filebeat (on the client servers)

Stage 1:

Install Logstash

$ sudo wget https://artifacts.elastic.co/downloads/logstash/logstash-7.6.0.rpm

$ sudo rpm -ivh logstash-7.6.0.rpm

Configuring Logstash

As part of configuring Logstash, we need to generate a self-signed certificate. The certificate can be generated in two ways: i) based on the server's private IP, or ii) based on its domain name (FQDN).

Option 1: Based on Private IP

Create an SSL certificate based on the IP address of the ELK server. First, add the ELK server's private IP to /etc/pki/tls/openssl.cnf.

$ sudo vi /etc/pki/tls/openssl.cnf

Add the following line, with your server's private IP, just below the [ v3_ca ] section:

subjectAltName = IP: 192.168.20.158

 GENERATE A SELF-SIGNED CERTIFICATE VALID FOR 3650 DAYS

$ cd /etc/pki/tls

$ sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/ssl-logstash-forwarder.key -out certs/ssl-logstash-forwarder.crt
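
To confirm that the SAN from openssl.cnf made it into the certificate, you can optionally inspect it; this check is an addition to the original steps:

$ sudo openssl x509 -in certs/ssl-logstash-forwarder.crt -noout -text | grep -A1 "Subject Alternative Name"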

Option 2: Based on Domain (FQDN)

Replace ELK_server_fqdn in the command below with the ELK server's domain name.

$ cd /etc/pki/tls

$ sudo openssl req -subj '/CN=ELK_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/ssl-logstash-forwarder.key -out certs/ssl-logstash-forwarder.crt


LOGSTASH INPUT, FILTER, OUTPUT FILES

Logstash configuration files use a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs, usually specified as separate files.

INPUT.CONF 

Insert the following lines into it. This input tells Logstash how to process Beats events coming from clients: Logstash will listen on TCP port 5044, and the log forwarder (Filebeat) will connect to this port to send logs. Make sure the certificate and key paths match the files generated in the previous step.

$ sudo vi /etc/logstash/conf.d/01-beats-input.conf

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/ssl-logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/ssl-logstash-forwarder.key"
  }
}

FILTER.CONF

This filter looks for logs labeled as type "syslog" (by Filebeat) and uses grok to parse incoming syslog messages into structured, queryable fields.

$ sudo vi /etc/logstash/conf.d/01-beats-filter.conf

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
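
As an illustration, given a typical syslog line, the grok pattern above would extract fields roughly as follows (a sketch of the mapping, not literal Logstash output):

Input:   Feb  3 12:04:05 web01 sshd[1234]: Accepted password for admin
Fields:  syslog_timestamp = "Feb  3 12:04:05"
         syslog_hostname  = "web01"
         syslog_program   = "sshd"
         syslog_pid       = "1234"
         syslog_message   = "Accepted password for admin"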

OUTPUT.CONF

output.conf configures Logstash to store the Beats data in Elasticsearch, which is running at localhost:9200, in an index named after the Beat used (filebeat, in our case).

$ sudo vi /etc/logstash/conf.d/01-beats-output.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
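
Before starting the service, it is worth validating the three files with Logstash's built-in config test (an optional check, not part of the original steps; paths assume the default RPM layout):

$ sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/ --config.test_and_exit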

START AND ENABLE LOGSTASH

$ sudo systemctl daemon-reload

$ sudo systemctl start logstash

$ sudo systemctl enable logstash

Stage 2:

Install Filebeat (on the Client Servers)

We need to install Filebeat on the client servers, i.e. the servers from which we plan to collect logs. There can be one or more of them, and the steps below must be followed on every client server.

COPY THE SSL CERTIFICATE FROM THE ELK SERVER TO THE CLIENT(S)

$ sudo scp /etc/pki/tls/certs/ssl-logstash-forwarder.crt root@<client server IP>:/etc/pki/tls/certs/

NOTE

Once the generated certificate has been copied to the client servers, perform the following steps on each client machine (the ones sending logs to the ELK server).

IMPORT THE ELASTICSEARCH PUBLIC GPG KEY

$ sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

CREATE AND EDIT A NEW YUM REPOSITORY FILE FOR FILEBEAT

$ sudo vi /etc/yum.repos.d/filebeat.repo

Add the following repository configuration to filebeat.repo file

[elastic-7.x]

name=Elastic repository for 7.x packages

baseurl=https://artifacts.elastic.co/packages/7.x/yum

gpgcheck=1

gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

enabled=1

autorefresh=1

type=rpm-md

INSTALL THE FILEBEAT PACKAGE

$ sudo yum -y install filebeat

CONFIGURE FILEBEAT

Edit the Filebeat configuration file and replace 'elk_server_private_ip' with the private IP of your ELK server, i.e. hosts: ["IP:5044"].

$ sudo vi /etc/filebeat/filebeat.yml

filebeat:
  prospectors:
    -
      paths:
        - /var/log/secure
        - /var/log/messages
      #  - /var/log/*.log
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["elk_server_private_ip:5044"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/ssl-logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
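
Note that the layout above follows the older Filebeat 1.x style. Because the repository configured earlier installs Filebeat 7.x, the equivalent settings in the 7.x syntax would look roughly like the sketch below (consult the Filebeat reference for your exact version; also note that 7.x no longer supports document_type, which the Logstash filter keys off):

filebeat.inputs:
- type: log
  paths:
    - /var/log/secure
    - /var/log/messages

output.logstash:
  hosts: ["elk_server_private_ip:5044"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/ssl-logstash-forwarder.crt"]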

START AND ENABLE FILEBEAT

$ sudo systemctl start filebeat

$ sudo systemctl enable filebeat

Test Filebeat

On your ELK Server, verify that Elasticsearch is receiving the data by querying for the Filebeat index with this command:

$ curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
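
You can also list the Filebeat indices directly to confirm data is arriving (an optional extra check):

$ curl -XGET 'http://localhost:9200/_cat/indices/filebeat-*?v'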


Ref: https://www.elastic.co/logstash/

Ref: https://www.elastic.co/beats/filebeat

Sunday, September 6, 2020

Fluentd Installation in CentOS and RHEL

Fluentd

Fluentd is a cross-platform open-source data collection software project originally developed at Treasure Data. It is written primarily in the Ruby programming language.

Fluentd has two distinct configuration parts. The first, done on the Elasticsearch and Kibana server side, is known as the Fluentd Aggregator configuration. The second, done on the application side, forwards the application logs from the app server/webserver to the Elastic Stack and is known as the Fluentd Forwarder configuration. For the Elastic Stack setup, see the Elasticsearch and Kibana installation posts below.

Fluentd Aggregator configuration

Step 1:  Install the td-agent

         # curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent3.sh | sh

         #  yum -y install gcc libcurl-devel

         # yum groupinstall "Development Tools" kernel-devel kernel-headers -y

         # sudo /opt/td-agent/embedded/bin/fluent-gem install fluent-plugin-elasticsearch

         # wget https://rubygems.org/gems/fluent-plugin-elasticsearch/versions/3.3.0

Step 2:  Edit the /etc/td-agent/td-agent.conf file. Remove the existing lines and replace with the below code.

      # vim /etc/td-agent/td-agent.conf

<source>
  @type forward
  port 24224
</source>

<match *.log>
  @type copy
  <store>
    @type file
    path /var/log/td-agent/httpd/access_forward.log
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
  </store>

  <store>
    @type elasticsearch_dynamic
    host 192.168.0.34
    port 9200
    index_name fluentd-${tag_parts[1]+ "-" + Time.at(time).getlocal("+05:30").strftime(@logstash_dateformat)}
    logstash_format true
    time_format %Y-%m-%dT%H:%M:%S
    timezone +0530
    include_timestamp true
    type_name fluentd
    <buffer>
      flush_interval 5s
      flush_thread_count 3
      chunk_limit_size 64m
    </buffer>
  </store>
</match>
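
Before enabling the service, the file can be syntax-checked with Fluentd's dry-run mode (an optional step; the flag is supported by the fluentd bundled in td-agent):

        # td-agent --dry-run -c /etc/td-agent/td-agent.conf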

Step 3:  Enable and start the td-agent.service

        #  systemctl enable td-agent.service

        #  systemctl start td-agent.service

        #  systemctl status td-agent.service

Step 4:  Check the td-agent log file.

        # tail -f /var/log/td-agent/td-agent.log


Fluentd Forwarder Configuration:

Step 5:  Install the td-agent

         # curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent3.sh | sh

         #  yum -y install gcc libcurl-devel

         # yum groupinstall "Development Tools" kernel-devel kernel-headers -y

Step 6:  Adjust the log file permissions so td-agent can read the logs (see the example commands after this list):

               i. Change the httpd log directory permissions to "og+rx"

               ii. Change the log file permissions in the httpd directory to "og+r"
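
Assuming the default Apache log location /var/log/httpd, the two permission changes could be applied like this (a sketch):

        # chmod og+rx /var/log/httpd
        # chmod og+r /var/log/httpd/*.log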

Step 7:  Edit the /etc/td-agent/td-agent.conf file. Remove the existing lines and replace with the below code.

        # vim /etc/td-agent/td-agent.conf

<match td.*.*>
  @type tdlog
  apikey YOUR_API_KEY
  auto_create_table
  buffer_type file
  buffer_path /var/log/td-agent/buffer/td

  <secondary>
    @type file
    path /var/log/td-agent/failed_records
  </secondary>
</match>

<match debug.**>
  @type stdout
</match>

<source>
  @type forward
  port 24224
</source>

<source>
  @type http
  port 8888
</source>

<source>
  @type debug_agent
  bind 192.168.0.22
  port 24230
</source>

<source>
  @type tail
  path /var/log/httpd/*.log
  pos_file /var/log/td-agent/access.log.pos
  tag access.log
  format none
  time_format %Y-%m-%d %H:%M:%S,%L %z
  timezone +0530
  time_key time
  keep_time_key true
  types time:time
</source>

<match *.log>
  @type copy
  <store>
    @type file
    path /var/log/td-agent/httpd/access_forward.log
  </store>

  <store>
    @type forward
    heartbeat_type tcp
    flush_interval 5s
    <server>
      host 192.168.0.34
    </server>
  </store>
</match>
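
To verify the forwarder accepts events on port 24224, you can optionally emit a test record with the fluent-cat utility bundled in td-agent (debug.test is an arbitrary tag chosen for illustration):

        # echo '{"message":"test"}' | /opt/td-agent/embedded/bin/fluent-cat debug.test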

 Step 8:  Enable and start the td-agent.service

        #  systemctl enable td-agent.service

        #  systemctl start td-agent.service

        #  systemctl status td-agent.service

Step 9:  Check the td-agent log file.

        # tail -f /var/log/td-agent/td-agent.log


Ref: https://www.fluentd.org/


Saturday, September 5, 2020

Kibana Installation in CentOS and RHEL

Kibana

Kibana is an open source data visualization dashboard for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line and scatter plots, or pie charts and maps on top of large volumes of data.

We are going to front the Kibana dashboard with an Nginx webserver for better security. Before setting up Kibana, Elasticsearch must be installed and set up on the same server; for the Elasticsearch setup, see the Elasticsearch installation post below.

Step 1:  Download the Kibana RPM file

         $ sudo wget https://artifacts.elastic.co/downloads/kibana/kibana-7.9.1-x86_64.rpm

Step 2:  Install the downloaded RPM file.

         $ sudo rpm -ivh kibana-7.9.1-x86_64.rpm

Step 3:  Edit the Kibana configuration file and enable the lines below

         $ sudo vim /etc/kibana/kibana.yml
              server.port: 5601
              server.host: "localhost"
              elasticsearch.hosts: ["http://localhost:9200"]

Step 4:  Enable and start Kibana

         $ sudo systemctl enable kibana
         $ sudo systemctl start kibana

Step 5:  Check whether Kibana is running by looking for its listening port (5601) with the command below
        
          $ netstat -lntp
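
Kibana also exposes a status endpoint that can be queried once the service is up (an optional check):

          $ curl -s http://localhost:5601/api/status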

NGINX Installation with Kibana 

Step 6:  Nginx is available in the Epel repository, install epel-release with yum

         $ sudo yum -y install epel-release   

Step 7: Next, install the Nginx and httpd-tools package

         $ sudo yum -y install nginx httpd-tools  

Step 8:  Edit the Nginx configuration file and remove the 'server { }' block, so we can add a new virtual host configuration.

Step 9: Now we need to create a new virtual host configuration file in the conf.d directory.   Create the new file 'kibana.conf'

        $ sudo vim /etc/nginx/conf.d/kibana.conf
               server {
                      listen 80;
                      proxy_connect_timeout       600;
                      proxy_send_timeout          600;
                      proxy_read_timeout          600;
                      send_timeout                600;
                      index index.php index.html index.htm;
                      server_name <localhost or server IP>;
                      auth_basic "Restricted Access";
                      auth_basic_user_file /etc/nginx/htpasswd.users;

                      location / {
                             proxy_pass http://localhost:5601;
                             proxy_http_version 1.1;
                             proxy_set_header Upgrade $http_upgrade;
                             proxy_set_header Connection 'upgrade';
                             proxy_set_header Host $host;
                             proxy_cache_bypass $http_upgrade;
                      }

                      location /nested {
                             alias /var/www/html;
                             #try_files $uri $uri/ @nested;
                      }
               }

Step 10: Then create a new basic authentication file with the htpasswd command. The file path must match the auth_basic_user_file set in kibana.conf above.

         $ sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin
                    <TYPE YOUR PASSWORD>

Step 11: Test the Nginx configuration and make sure there is no error. Then add Nginx to run at the boot time and start Nginx.

         $ sudo nginx -t

         $ sudo systemctl enable nginx

         $ sudo systemctl start nginx
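
With Nginx running, you can confirm that basic authentication is enforced (optional; kibanaadmin is the user created in Step 10, and curl will prompt for its password):

         $ curl -u kibanaadmin http://localhost/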



Thursday, September 3, 2020

ElasticSearch Installation in CentOS and RHEL


What is Elasticsearch?

      Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data for lightning fast search, fine tuned relevancy, and powerful analytics that scale with ease. 

Now we will see how to install Elasticsearch on CentOS and RHEL servers. Java is a prerequisite for Elasticsearch, so first we will install Java and set up the environment variables, and then install Elasticsearch with a basic production configuration.

Prerequisite:

Elasticsearch requires Java, which we can install on the Elasticsearch server using the following steps.

Step 1:  Install OpenJDK 1.8 on the Elasticsearch server
    
     a) Update the server and install Java with the commands below.

          $ sudo yum update 
          $ sudo yum install java-1.8.0-openjdk -y

     b) Once Java is installed, check the version with the command below

          $ sudo java -version

     c) After validating the Java version, configure JAVA_HOME in the .bashrc file so it is available at run time. Note that JAVA_HOME must point to the JRE directory, not the java binary:

          $ sudo update-alternatives --config java

          $ vi ~/.bashrc
           export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre
 
     d) Source .bashrc to apply JAVA_HOME in the current shell

          $ source ~/.bashrc
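
To confirm the variable is set correctly (an optional check):

          $ echo $JAVA_HOME
          $ $JAVA_HOME/bin/java -version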

Elasticsearch Setup:

Once the Java setup is complete, we set up Elasticsearch with the steps below.

Step 2: Before installing Elasticsearch, add the elastic.co key to the server.

         $ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Step 3: Now, download the latest RPM of Elasticsearch (7.9.1, to match the install in Step 4)

       $ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.1-x86_64.rpm

Step 4: Install the Downloaded RPM

       $ rpm -ivh elasticsearch-7.9.1-x86_64.rpm

Step 5:  Now go to the configuration directory and edit the elasticsearch.yml configuration file. Enable the below lines in configuration file

      $ cd /etc/elasticsearch/
      $ vim elasticsearch.yml
             bootstrap.memory_lock: true
             xpack.monitoring.enabled: true
             network.host: localhost
             http.port: 9200
             http.max_content_length: 1.6gb

     And Save the file.

Step 6: Now edit the elasticsearch.service file for the memory lock configuration. Uncomment LimitMEMLOCK line or if it is not there please add this line to limit session

     $ vim /usr/lib/systemd/system/elasticsearch.service

           LimitMEMLOCK=infinity

Step 7: Edit the sysconfig configuration file for Elasticsearch.  Uncomment line 60 and make sure the value is 'unlimited'.

     $ vim /etc/sysconfig/elasticsearch
               MAX_LOCKED_MEMORY=unlimited
               MAX_OPEN_FILES=65535
               MAX_MAP_COUNT=262144

Step 8: Reload systemd, enable Elasticsearch to start at boot time, then start the service.

     $ sudo systemctl daemon-reload
     $ sudo systemctl enable elasticsearch
     $ sudo systemctl start elasticsearch

Step 9: To check whether Elasticsearch is running, look for listening port 9200

     $ netstat -lntp 

Step 10: Then check the memory lock to ensure that mlockall is enabled,

     $ curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
     $ curl -XGET 'localhost:9200/?pretty'
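
In the output of the first command, each node should report the following (the same check is described in the EFK post below):

     "mlockall" : true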




Next: Kibana Installation in CentOS and RHEL

Monday, March 25, 2019

EFK(Elasticsearch, Fluentd, Kibana) installation in CentOS machine

Elasticsearch:
               Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data for lightning fast search, fine tuned relevancy, and powerful analytics that scale with ease. 

Kibana:
                 Kibana is an open source data visualization dashboard for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line and scatter plots, or pie charts and maps on top of large volumes of data.

Fluentd:
             Fluentd is a cross-platform open-source data collection software project originally developed at Treasure Data. It is written primarily in the Ruby programming language. Fluentd has two distinct configuration parts: the Fluentd Aggregator configuration, done on the Elasticsearch and Kibana server side, and the Fluentd Forwarder configuration, done on the application side, which forwards application logs from the app server/webserver to the Elastic Stack.

Pre-requisites:

               System requirements: two virtual machines
                    
                  1. Elasticsearch, Kibana, and the Fluentd aggregator (one system): 192.168.0.34

                  2. The application node, on which the Fluentd forwarder is installed: 192.168.0.22

ElasticSearch Installation:

Step 1:  Before installing Elasticsearch, add the elastic.co key to the server.

$ sudo  rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Step 2: Download the latest RPM of Elasticsearch

$ sudo wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-x86_64.rpm

Step 3:  Install the Downloaded RPM.

$ sudo rpm -ivh elasticsearch-7.2.0-x86_64.rpm

Step 4:  Now go to the configuration directory and edit the elasticsearch.yml configuration file. Enable the below lines in configuration file

$ cd /etc/elasticsearch/
$ sudo vim elasticsearch.yml
        bootstrap.memory_lock: true
        network.host: 192.168.0.34
        http.port: 9200

Step 5:  Now edit the elasticsearch.service file for the memory lock configuration. Uncomment LimitMEMLOCK line or if it is not there please add this line to limit session

$ sudo vim /usr/lib/systemd/system/elasticsearch.service
          LimitMEMLOCK=infinity

Step 6:  Edit the sysconfig configuration file for Elasticsearch.  Uncomment line 60 and make sure the value is 'unlimited'.

$ sudo vim /etc/sysconfig/elasticsearch
            MAX_LOCKED_MEMORY=unlimited

The Elasticsearch configuration is finished.

Step 7:  Reload systemd, enable Elasticsearch to start at boot time, then start the service.

$ sudo systemctl daemon-reload
$ sudo systemctl enable elasticsearch
$ sudo systemctl start elasticsearch

Step 8:  To check the elasticsearch is running or not. Check the listening port with 9200

$ sudo  netstat -lntpu

Step 9:  Then check the memory lock to ensure that mlockall is enabled,

$ sudo curl -XGET '192.168.0.34:9200/_nodes?filter_path=**.mlockall&pretty'

           Result:     Check that the output contains "mlockall" : true

$ sudo  curl -XGET '192.168.0.34:9200/?pretty'

               Check the tagline in the output; it should be "tagline" : "You Know, for Search"


Now, browse Elasticsearch at the address configured in elasticsearch.yml, on port 9200: http://192.168.0.34:9200



Installing Kibana with Nginx:

Step 10:  Download the Kibana RPM file

$ sudo wget https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-x86_64.rpm

Step 11:  Install the downloaded RPM file.

$ sudo rpm -ivh kibana-7.2.0-x86_64.rpm

Step 12:  Edit the Kibana configuration file and enable the lines below

$ sudo vim /etc/kibana/kibana.yml
         server.port: 5601
         server.host: "192.168.0.34"
         elasticsearch.hosts: ["http://192.168.0.34:9200"]

Step 13:  Enable and start the Kibana

$ sudo systemctl enable kibana
$ sudo systemctl start kibana

Step 14:  Check whether Kibana is running.

$ sudo netstat -lntp

The Kibana installation is finished.


Now we need to install Nginx and configure it as reverse proxy to be able to access Kibana from the public IP address.

Step 15:  Nginx is available in the Epel repository, install epel-release with yum.

$ sudo yum -y install epel-release 

Step 16:  Next, install the Nginx and httpd-tools package.

$ sudo yum -y install nginx httpd-tools

Step 17:  Edit the Nginx configuration file and remove the 'server { }' block, so we can add a new virtual host configuration.

$ cd /etc/nginx/
$ sudo vim nginx.conf

           Remove (or comment out) the server { } block, then save the file and exit vim.

Step 18:  Now we need to create a new virtual host configuration file in the conf.d directory.   Create the new file 'kibana.conf'
$ sudo vim /etc/nginx/conf.d/kibana.conf
   Paste the configuration below.
     server {
         listen 80;
         server_name <192.168.0.34 or any server name>;
         auth_basic "Restricted Access";
         auth_basic_user_file /etc/nginx/.kibana-user;

         location / {
             proxy_pass http://192.168.0.34:5601;
             proxy_http_version 1.1;
             proxy_set_header Upgrade $http_upgrade;
             proxy_set_header Connection 'upgrade';
             proxy_set_header Host $host;
             proxy_cache_bypass $http_upgrade;
         }
     }

Step 19:  Then create a new basic authentication file with the htpasswd command.

$ sudo htpasswd -c /etc/nginx/.kibana-user admin    
                   <TYPE YOUR PASSWORD>

Step 20:  Test the Nginx configuration and make sure there is no error. Then add Nginx to run at the boot time and start Nginx.

$ sudo nginx -t
$ sudo systemctl enable nginx
$ sudo systemctl start nginx

Fluentd Installation:

Fluentd Aggregator configuration:

Step 21:  Install the td-agent

$ sudo curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent3.sh | sh
$ sudo yum -y install gcc libcurl-devel
$ sudo yum groupinstall "Development Tools" kernel-devel kernel-headers -y
$ sudo /opt/td-agent/embedded/bin/fluent-gem install fluent-plugin-elasticsearch
$ sudo wget https://rubygems.org/gems/fluent-plugin-elasticsearch/versions/3.3.0

Step 22:  Edit the /etc/td-agent/td-agent.conf file. Remove the existing lines and replace with the below code.

$ sudo vim /etc/td-agent/td-agent.conf
<source>
  @type forward
   port 24224
</source>

<match *.log>
  @type copy
    <store>
    @type file
    path /var/log/td-agent/httpd/access_forward.log
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
  </store>

  <store>
    @type elasticsearch_dynamic
    host 192.168.0.34
    port 9200
    index_name fluentd-${tag_parts[1]+ "-" + Time.at(time).getlocal("+05:30").strftime(@logstash_dateformat)}

    logstash_format true
    time_format %Y-%m-%dT%H:%M:%S
    timezone +0530
    include_timestamp true
    type_name fluentd
    <buffer>
    flush_interval 5s
    flush_thread_count 3
    chunk_limit_size 64m
    </buffer>
  </store>
</match>

Step 23:  Enable and start the td-agent.service

$ sudo systemctl enable td-agent.service
$ sudo systemctl start td-agent.service
$ sudo systemctl status td-agent.service

Step 24:  Check the td-agent log file.

$ sudo tail -f /var/log/td-agent/td-agent.log


Fluentd Forwarder Configuration:

Step 25:  Install the td-agent

$ sudo curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent3.sh | sh
$ sudo yum -y install gcc libcurl-devel
$ sudo yum groupinstall "Development Tools" kernel-devel kernel-headers -y

Step 26:  Edit the Log file permissions

1.     Change the httpd log directory permissions to “og+rx”
2.     Change the  log file permissions to “og+r”  in httpd directory


Step 27:  Edit the /etc/td-agent/td-agent.conf file. Remove the existing lines and replace with the below code.

$ sudo vim /etc/td-agent/td-agent.conf
<match td.*.*>
  @type tdlog
  apikey YOUR_API_KEY
  auto_create_table
  buffer_type file
  buffer_path /var/log/td-agent/buffer/td

  <secondary>
    @type file
    path /var/log/td-agent/failed_records
  </secondary>
</match>

<match debug.**>
  @type stdout
</match>

<source>
  @type forward
  port 24224
</source>

<source>
  @type http
  port 8888
</source>

<source>
  @type debug_agent
  bind 192.168.0.22
  port 24230
</source>

<source>
  @type tail
  path /var/log/httpd/*.log
  pos_file /var/log/td-agent/access.log.pos
  tag access.log
  format none

  time_format %Y-%m-%d %H:%M:%S,%L %z
  timezone +0530
  time_key time
  keep_time_key true
  types time:time
</source>

<match *.log>
   @type copy
   <store>
    @type file
    path /var/log/td-agent/httpd/access_forward.log
  </store>

  <store>
    @type forward
    heartbeat_type tcp
    <server>
    host 192.168.0.34
    </server>
    flush_interval 5s
  </store>
</match>

 Step 28:  Enable and start the td-agent.service

$ sudo systemctl enable td-agent.service
$ sudo systemctl start td-agent.service
$ sudo systemctl status td-agent.service

Step 29:  Check the td-agent log file.

$ sudo tail -f /var/log/td-agent/td-agent.log    




