Showing posts with label YUM. Show all posts

Wednesday, July 20, 2022

Setting Up Azure CLI on Linux for Seamless Azure Management

Introduction

The Azure CLI (Command-Line Interface) is a powerful tool that enables seamless interaction with Azure resources. In this blog post, we'll walk through the process of setting up Azure CLI on a Linux machine. Whether you are an administrator, developer, or an Azure enthusiast, having Azure CLI locally on your Linux computer allows you to execute administrative commands efficiently.

Prerequisites

Before we start, ensure you have the following prerequisites:

  • A Linux computer
  • Administrative privileges on the Linux machine

Step-by-Step Guide

Step 1: Import Microsoft Repository Key

Open your terminal and run the following command to import the Microsoft repository key:

$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
Step 2: Create Local Azure-CLI Repository Information

Create and configure the local Azure CLI repository information by running the following commands:

$ sudo sh -c 'echo -e "[azure-cli]
name=Azure CLI
baseurl=https://packages.microsoft.com/yumrepos/azure-cli
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo'
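The repo definition written above is plain INI, so it can be sanity-checked with Python's configparser before pointing yum at it (here we parse the same text inline rather than reading /etc/yum.repos.d/azure-cli.repo):

```python
import configparser

# The azure-cli.repo contents from the step above, parsed inline as INI
# to sanity-check the section name and keys.
repo_text = """\
[azure-cli]
name=Azure CLI
baseurl=https://packages.microsoft.com/yumrepos/azure-cli
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc
"""
cfg = configparser.ConfigParser()
cfg.read_string(repo_text)
print(cfg["azure-cli"]["baseurl"])  # https://packages.microsoft.com/yumrepos/azure-cli
```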


Step 3: Install Azure CLI

Install Azure CLI using the yum install command:

$ sudo yum install azure-cli -y

Step 4: Check the Installed Version

Verify the successful installation by checking the installed version of Azure CLI:

$ az --version

Conclusion

Congratulations! You've successfully set up Azure CLI on your Linux machine. With Azure CLI, you can now execute a wide range of administrative commands, manage Azure resources, and streamline your interactions with Azure services directly from the command line.

For more details and additional options, you can refer to the official documentation: Install Azure CLI on Linux.

Now you're ready to explore and harness the power of Azure CLI for efficient Azure management on your Linux environment. Happy coding!

Logstash Installation and Configuration in CentOS

Logstash is a light-weight, open-source, server-side data processing pipeline that allows you to collect data from a variety of sources, transform it on the fly, and send it to your desired destination. It is most often used as a data pipeline for Elasticsearch, an open-source analytics and search engine.

Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to Elasticsearch or Logstash for indexing.

Stages:

 1. Install and configure Logstash

 2. Install and configure Filebeat (on the client servers)

Stage 1:

Install Logstash

$ sudo wget https://artifacts.elastic.co/downloads/logstash/logstash-7.6.0.rpm

$ sudo rpm -ivh logstash-7.6.0.rpm

Configuring Logstash

As part of configuring Logstash, we need to generate a self-signed certificate. There are two ways to generate it: (i) based on the server's private IP, or (ii) based on its domain (FQDN).

Option 1: Based on Private IP

Add an SSL certificate based on the IP address of the ELK server by adding the ELK server's private IP to /etc/pki/tls/openssl.cnf.

$ sudo vi /etc/pki/tls/openssl.cnf

Add the following line, with your ELK server's private IP, just below the [ v3_ca ] section:

subjectAltName = IP: 192.168.20.158

 GENERATE A SELF-SIGNED CERTIFICATE VALID FOR 10 YEARS

$ cd /etc/pki/tls

$ sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/ssl-logstash-forwarder.key -out certs/ssl-logstash-forwarder.crt

Option 2: Based on Domain (FQDN)

$ cd /etc/pki/tls

$ sudo openssl req -subj '/CN=ELK_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/ssl-logstash-forwarder.key -out certs/ssl-logstash-forwarder.crt


LOGSTASH INPUT, FILTER, OUTPUT FILES

Logstash configuration files reside in /etc/logstash/conf.d and use Logstash's own JSON-like syntax. The configuration consists of three sections: inputs, filters, and outputs, usually specified as separate files.

INPUT.CONF 

Insert the following lines. This teaches Logstash how to process Beats events coming from clients: it listens on TCP port 5044, the port the log forwarder connects to when sending logs. Make sure the certificate and key paths match the files generated in the previous step:

$ sudo vi /etc/logstash/conf.d/01-beats-input.conf

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/ssl-logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/ssl-logstash-forwarder.key"
  }
}

FILTER.CONF

This filter looks for logs labeled with the "syslog" type (by Filebeat) and uses grok to parse incoming syslog messages into a structured, queryable form.

$ sudo vi /etc/logstash/conf.d/01-beats-filter.conf

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
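To see what the grok pattern actually extracts, here is a rough Python equivalent run against a sample syslog line. This is illustrative only: SYSLOGTIMESTAMP, SYSLOGHOST, DATA, and POSINT are approximated with plain regex, and the log line is made up.

```python
import re

# Approximation of the grok syslog pattern using named groups.
pattern = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[\w./-]+)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

line = "Feb  5 10:15:32 web01 sshd[1234]: Accepted password for admin"
m = pattern.match(line)
print(m.group("syslog_program"), m.group("syslog_pid"))  # sshd 1234
print(m.group("syslog_message"))                         # Accepted password for admin
```

The named groups map one-to-one to the fields grok adds to the event (syslog_timestamp, syslog_hostname, syslog_program, syslog_pid, syslog_message).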

OUTPUT.CONF

In output.conf we configure Logstash to store the Beats data in Elasticsearch, which is running at localhost:9200, in an index named after the beat used (filebeat, in our case).

$ sudo vi /etc/logstash/conf.d/01-beats-output.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
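The index directive uses Logstash's sprintf and date formatting: the beat name plus the event's UTC date. As a quick sketch of what a Filebeat event's index name resolves to (the helper function and sample date below are ours, for illustration only):

```python
from datetime import datetime, timezone

# Sketch of what "%{[@metadata][beat]}-%{+YYYY.MM.dd}" resolves to:
# Logstash substitutes the beat name and the event's UTC date.
def render_index(beat: str, event_time: datetime) -> str:
    return f"{beat}-{event_time.strftime('%Y.%m.%d')}"

print(render_index("filebeat", datetime(2020, 9, 6, tzinfo=timezone.utc)))  # filebeat-2020.09.06
```

This daily-index naming is what lets you query all Filebeat data at once with the filebeat-* wildcard later.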

START AND ENABLE LOGSTASH

$ sudo systemctl daemon-reload

$ sudo systemctl start logstash

$ sudo systemctl enable logstash

Stage 2:

Install Filebeat (on the Client Servers)

Filebeat must be installed on the client servers, that is, the servers from which we plan to collect logs. There can be one or more of them; repeat the steps below on every client server.

COPY THE SSL CERTIFICATE FROM THE ELK SERVER TO THE CLIENT(S)

$ sudo scp /etc/pki/tls/certs/ssl-logstash-forwarder.crt root@<client server IP>:/etc/pki/tls/certs/

NOTE

After copying the generated certificate to the client servers, perform the following steps on each client machine (the ones sending logs to the ELK server).

IMPORT THE ELASTICSEARCH PUBLIC GPG KEY

$ sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

CREATE AND EDIT A NEW YUM REPOSITORY FILE FOR FILEBEAT

$ sudo vi /etc/yum.repos.d/filebeat.repo

Add the following repository configuration to filebeat.repo file

[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

INSTALL THE FILEBEAT PACKAGE

$ sudo yum -y install filebeat

CONFIGURE FILEBEAT

Edit the Filebeat configuration file and replace 'elk_server_private_ip' with the private IP of your ELK server in the hosts line, i.e. hosts: ["IP:5044"]

$ sudo vi /etc/filebeat/filebeat.yml

filebeat:
  prospectors:
    -
      paths:
        - /var/log/secure
        - /var/log/messages
      #  - /var/log/*.log
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["elk_server_private_ip:5044"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/ssl-logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

START AND ENABLE FILEBEAT

$ sudo systemctl start filebeat

$ sudo systemctl enable filebeat

Test Filebeat

On your ELK Server, verify that Elasticsearch is receiving the data by querying for the Filebeat index with this command:

$ curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
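The query returns a JSON document. Below is a trimmed, hypothetical example of its shape, parsed with Python to pull out the hit count and index name (the field values are made up for illustration):

```python
import json

# Trimmed, hypothetical example of a _search response for the filebeat-* index.
sample = '''
{
  "hits": {
    "total": {"value": 42, "relation": "eq"},
    "hits": [
      {"_index": "filebeat-2020.09.06",
       "_source": {"type": "syslog", "message": "example log line"}}
    ]
  }
}
'''
doc = json.loads(sample)
print(doc["hits"]["total"]["value"])     # number of matching documents
print(doc["hits"]["hits"][0]["_index"])  # which daily index the hit lives in
```

If hits.total is greater than zero, Filebeat data is flowing through Logstash into Elasticsearch.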


Ref: https://www.elastic.co/logstash/

Ref: https://www.elastic.co/beats/filebeat

Sunday, September 6, 2020

Fluentd Installation in CentOS and RHEL

Fluentd

Fluentd is a cross-platform, open-source data collection software project originally developed at Treasure Data. It is written primarily in the Ruby programming language.

Fluentd has two configuration parts. The first is done on the Elasticsearch/Kibana server side and is known as the Fluentd Aggregator configuration. The second is done on the application side and forwards the application logs from the app server/webserver to the Elastic Stack; it is known as the Fluentd Forwarder configuration. The Elasticsearch and Kibana setups are covered in the earlier posts.

Fluentd Aggregator configuration

Step 1:  Install the td-agent

         # curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent3.sh | sh

         # yum -y install gcc libcurl-devel

         # yum groupinstall "Development Tools" kernel-devel kernel-headers -y

         # /opt/td-agent/embedded/bin/fluent-gem install fluent-plugin-elasticsearch

         # wget https://rubygems.org/gems/fluent-plugin-elasticsearch/versions/3.3.0

Step 2:  Edit the /etc/td-agent/td-agent.conf file. Remove the existing lines and replace with the below code.

      # vim /etc/td-agent/td-agent.conf

           <source>
             @type forward
             port 24224
           </source>

           <match *.log>
             @type copy
             <store>
               @type file
               path /var/log/td-agent/httpd/access_forward.log
               time_slice_format %Y%m%d
               time_slice_wait 10m
               time_format %Y%m%dT%H%M%S%z
               compress gzip
               utc
             </store>
             <store>
               @type elasticsearch_dynamic
               host 192.168.0.34
               port 9200
               index_name fluentd-${tag_parts[1]+ "-" + Time.at(time).getlocal("+05:30").strftime(@logstash_dateformat)}
               logstash_format true
               time_format %Y-%m-%dT%H:%M:%S
               timezone +0530
               include_timestamp true
               type_name fluentd
               <buffer>
                 flush_interval 5s
                 flush_thread_count 3
                 chunk_limit_size 64m
               </buffer>
             </store>
           </match>
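The index_name expression builds a per-day index from the second tag part and the event time shifted into the +05:30 offset. A small Python sketch of the equivalent computation (the function name, tag, and timestamp below are illustrative, not part of Fluentd):

```python
from datetime import datetime, timedelta, timezone

# Sketch of index_name fluentd-${tag_parts[1] + "-" + Time.at(time)
#   .getlocal("+05:30").strftime(@logstash_dateformat)}:
# second tag part plus the event date rendered in the +05:30 offset.
IST = timezone(timedelta(hours=5, minutes=30))

def index_name(tag: str, event_utc: datetime) -> str:
    part = tag.split(".")[1]           # tag_parts[1]
    local = event_utc.astimezone(IST)  # shift into +05:30
    return f"fluentd-{part}-{local.strftime('%Y.%m.%d')}"

# 20:00 UTC on Sep 5 is already Sep 6 in +05:30, so the event lands in the next day's index.
print(index_name("httpd.access.log", datetime(2020, 9, 5, 20, 0, tzinfo=timezone.utc)))
# fluentd-access-2020.09.06
```

The timezone shift matters: without getlocal("+05:30"), late-evening IST events would be filed under the previous UTC day's index.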

Step 3:  Enable and start the td-agent.service

        #  systemctl enable td-agent.service

        #  systemctl start td-agent.service

        #  systemctl status td-agent.service

Step 4:  Check the td-agent log file.

        # tail -f /var/log/td-agent/td-agent.log


Fluentd Forwarder Configuration:

Step 5:  Install the td-agent

         # curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent3.sh | sh

         #  yum -y install gcc libcurl-devel

         # yum groupinstall "Development Tools" kernel-devel kernel-headers -y

Step 6:  Edit the Log file permissions

               i. Change the httpd log directory permissions to "og+rx" (e.g. chmod og+rx /var/log/httpd)

               ii. Change the log file permissions to "og+r" in the httpd directory (e.g. chmod og+r /var/log/httpd/*.log)

Step 7:  Edit the /etc/td-agent/td-agent.conf file. Remove the existing lines and replace with the below code.

        # vim /etc/td-agent/td-agent.conf

            <match td.*.*>
              @type tdlog
              apikey YOUR_API_KEY
              auto_create_table
              buffer_type file
              buffer_path /var/log/td-agent/buffer/td
              <secondary>
                @type file
                path /var/log/td-agent/failed_records
              </secondary>
            </match>

            <match debug.**>
              @type stdout
            </match>

            <source>
              @type forward
              port 24224
            </source>

            <source>
              @type http
              port 8888
            </source>

            <source>
              @type debug_agent
              bind 192.168.0.22
              port 24230
            </source>

            <source>
              @type tail
              path /var/log/httpd/*.log
              pos_file /var/log/td-agent/access.log.pos
              tag access.log
              format none
              time_format %Y-%m-%d %H:%M:%S,%L %z
              timezone +0530
              time_key time
              keep_time_key true
              types time:time
            </source>

            <match *.log>
              @type copy
              <store>
                @type file
                path /var/log/td-agent/httpd/access_forward.log
              </store>
              <store>
                @type forward
                heartbeat_type tcp
                flush_interval 5s
                <server>
                  host 192.168.0.34
                </server>
              </store>
            </match>
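The time_format in the tail source uses strftime-style tokens; Fluentd's %L stands for milliseconds. A quick Python check of how such a timestamp parses (Python has no %L, so we map it to %f, its fractional-seconds token; the sample timestamp is made up):

```python
from datetime import datetime

# The tail source declares time_format %Y-%m-%d %H:%M:%S,%L %z.
# Swapping the comma for a dot lets Python's %f consume the millisecond digits.
stamp = "2020-09-06 10:15:32,123 +0530"
dt = datetime.strptime(stamp.replace(",", "."), "%Y-%m-%d %H:%M:%S.%f %z")
print(dt.isoformat())  # 2020-09-06T10:15:32.123000+05:30
```

If the format string does not match your httpd log timestamps exactly, Fluentd falls back to the time the record was read, so it is worth validating against a real log line.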

 Step 8:  Enable and start the td-agent.service

        #  systemctl enable td-agent.service

        #  systemctl start td-agent.service

        #  systemctl status td-agent.service

Step 9:  Check the td-agent log file.

        # tail -f /var/log/td-agent/td-agent.log


Ref: https://www.fluentd.org/


Saturday, September 5, 2020

Kibana Installation in CentOS and RHEL

Kibana

Kibana is an open source data visualization dashboard for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line and scatter plots, or pie charts and maps on top of large volumes of data.

We are going to front the Kibana dashboard with the Nginx web server for better security. Before setting up Kibana, install and set up Elasticsearch on the same server (see the Elasticsearch installation post).

Step 1:  Download the Kibana RPM file

         $ sudo wget https://artifacts.elastic.co/downloads/kibana/kibana-7.9.1-x86_64.rpm

Step 2:  Install the downloaded RPM file.

         $ sudo rpm -ivh kibana-7.9.1-x86_64.rpm

Step 3:  Edit the Kibana configuration file and enable (uncomment) the lines below

         $ sudo vim /etc/kibana/kibana.yml
              server.port: 5601
              server.host: "localhost"
              elasticsearch.hosts: ["http://localhost:9200"]

Step 4:  Enable and start Kibana

         $ sudo systemctl enable kibana
         $ sudo systemctl start kibana

Step 5:  Verify that Kibana is running by checking for its listening port (5601) with the command below
        
          $ netstat -lntp

NGINX Installation with Kibana 

Step 6:  Nginx is available in the EPEL repository; install epel-release with yum

         $ sudo yum -y install epel-release   

Step 7: Next, install the Nginx and httpd-tools package

         $ sudo yum -y install nginx httpd-tools  

Step 8:  Edit the Nginx configuration file and remove the 'server { }' block, so we can add a new virtual host configuration.

Step 9: Now we need to create a new virtual host configuration file in the conf.d directory.   Create the new file 'kibana.conf'

        $ sudo vim /etc/nginx/conf.d/kibana.conf
               server {
                      listen 80;
                      server_name localhost;   # or your server's IP/FQDN
                      index index.php index.html index.htm;
                      proxy_connect_timeout       600;
                      proxy_send_timeout          600;
                      proxy_read_timeout          600;
                      send_timeout                600;
                      auth_basic "Restricted Access";
                      auth_basic_user_file /etc/nginx/htpasswd.users;
                      location / {
                             proxy_pass http://localhost:5601;
                             proxy_http_version 1.1;
                             proxy_set_header Upgrade $http_upgrade;
                             proxy_set_header Connection 'upgrade';
                             proxy_set_header Host $host;
                             proxy_cache_bypass $http_upgrade;
                      }
                      location /nested {
                             alias /var/www/html;
                             #try_files $uri $uri/ @nested;
                      }
               }

Step 10: Create the basic authentication file with the htpasswd command. The path must match the auth_basic_user_file directive above; you will be prompted for the password (12345678 in this example).

         $ sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin
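When Nginx challenges a client, the browser replies with an Authorization header carrying base64(user:password). A small Python sketch of the header a client would send for the example credentials above, handy when testing the proxy with curl:

```python
import base64

# Basic auth header a client sends after the htpasswd prompt.
# Credentials are the example ones from this post (kibanaadmin / 12345678).
creds = base64.b64encode(b"kibanaadmin:12345678").decode()
print(f"Authorization: Basic {creds}")
```

Note that Basic auth only encodes, it does not encrypt, which is one reason to put TLS in front of this setup in production.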

Step 11: Test the Nginx configuration and make sure there are no errors. Then enable Nginx to run at boot time and start it.

         $ sudo nginx -t

         $ sudo systemctl enable nginx

         $ sudo systemctl start nginx



Thursday, September 3, 2020

ElasticSearch Installation in CentOS and RHEL

#Elasticsearch  #Installation #CentOS #RHEL

What is Elasticsearch?

      Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data for lightning-fast search, fine-tuned relevancy, and powerful analytics that scale with ease.

Now we will see how to install Elasticsearch on CentOS and RHEL servers. Java is a prerequisite for Elasticsearch, so first we will install Java and set up the environment variables, and then install Elasticsearch with a basic production configuration.

Prerequisite:

Elasticsearch requires Java; install it with the following steps.

Step 1:  Install OpenJDK 1.8 on the Elasticsearch server

     a) Update the server and install Java with the commands below.

          $ sudo yum update
          $ sudo yum install java-1.8.0-openjdk -y

     b) Once Java is installed, check the version with the command below

          $ sudo java -version

     c) After validating the Java version, configure JAVA_HOME for runtime use in the .bashrc file

          $ sudo update-alternatives --config java

          $ sudo vi .bashrc
           export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre
 
     d) Source .bashrc to apply JAVA_HOME

          $ source .bashrc

Elasticsearch Setup:

Once the Java setup is complete, set up Elasticsearch with the following steps.

Step 2: Before installing Elasticsearch, add the elastic.co key to the server.

         $ sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Step 3: Now, download the latest Elasticsearch RPM

       $ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.1-x86_64.rpm

Step 4: Install the Downloaded RPM

       $ sudo rpm -ivh elasticsearch-7.9.1-x86_64.rpm

Step 5:  Now go to the configuration directory and edit the elasticsearch.yml configuration file. Enable the below lines in configuration file

      $ cd /etc/elasticsearch/
      $ vim elasticsearch.yml
             bootstrap.memory_lock: true
             xpack.monitoring.enabled: true
             network.host: localhost
             http.port: 9200
             http.max_content_length: 1.6gb

     And Save the file.

Step 6: Now edit the elasticsearch.service file for the memory lock configuration. Uncomment the LimitMEMLOCK line, or add it if it is not present

     $ vim /usr/lib/systemd/system/elasticsearch.service

           LimitMEMLOCK=infinity

Step 7: Edit the sysconfig configuration file for Elasticsearch. Uncomment the lines below and make sure MAX_LOCKED_MEMORY is set to 'unlimited'.

     $ vim /etc/sysconfig/elasticsearch
               MAX_LOCKED_MEMORY=unlimited
               MAX_OPEN_FILES=65535
               MAX_MAP_COUNT=262144

Step 8: Reload systemd, enable Elasticsearch to start at boot time, then start the service.

     $ sudo systemctl daemon-reload
     $ sudo systemctl enable elasticsearch
     $ sudo systemctl start elasticsearch

Step 9: Check whether Elasticsearch is running by looking for port 9200 among the listening ports

     $ netstat -lntp 

Step 10: Then check the memory lock to ensure that mlockall is enabled

     $ curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
     $ curl -XGET 'localhost:9200/?pretty'
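The first curl returns a small JSON document per node. A trimmed, hypothetical example of its shape, checked with Python (the node id "abc123" is made up):

```python
import json

# Trimmed, hypothetical shape of the _nodes?filter_path=**.mlockall response.
sample = '{"nodes": {"abc123": {"process": {"mlockall": true}}}}'
nodes = json.loads(sample)["nodes"]
print(all(n["process"]["mlockall"] for n in nodes.values()))  # True when every node locked its memory
```

If mlockall comes back false, revisit the LimitMEMLOCK and MAX_LOCKED_MEMORY settings from Steps 6 and 7.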





Friday, April 17, 2020

Install Grafana in CentOS with YUM/Distribution-method/RPM Methods

Grafana Introduction

  • Grafana is a multi-platform open-source analytics and interactive visualisation software
  • It has been available since 2014
  • It provides charts, graphs and alerts for the web when connected to supported data sources.
  • It is expandable through a plug-in system.

Supported operating systems

  • Debian/Ubuntu
  • RPM-Based Linux (CentOS, Fedora, Opensuse, RHEL)
  • MacOS
  • Windows
Installation of Grafana on other operating systems is possible, but not supported.

Hardware Recommendations

  • Grafana does not use a lot of resources and is very lightweight in use of memory and CPU.
  • Minimum recommended memory: 255 MB 
  • Minimum recommended CPU: 1
  • Some features might require more memory or CPUs. Features that require more resources include:
               a. Server side rendering of images
               b. Alerting
               c.  Data source proxy

Installation with YUM repo :

Step 1: Update the server
$ sudo yum update
Step 2: Add the Grafana repo for installing Grafana
$ sudo vi /etc/yum.repos.d/grafana.repo
[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
Step 3: Installing Grafana
$ sudo yum install grafana
Step 4: Start and Enable the grafana service
$ sudo systemctl daemon-reload
$ sudo systemctl start grafana-server
$ sudo systemctl status grafana-server
$ sudo systemctl enable grafana-server

Installation with Distribution Method (Using Binary):
Step 1: Download the Grafana tarball
$ sudo wget https://dl.grafana.com/oss/release/grafana-6.6.0.linux-amd64.tar.gz
Step 2: Extract the tarball
$ tar -zxvf grafana-6.6.0.linux-amd64.tar.gz
Step 3: Start Grafana from the extracted directory (the binary install does not register a systemd service)
$ cd grafana-6.6.0 && ./bin/grafana-server

Installation with RPM:

Step 1: Add the Grafana RPM key
 $ sudo rpm --import https://packages.grafana.com/gpg.key
Step 2: Install some dependencies needed to install Grafana with RPM
 $ sudo yum install initscripts urw-fonts wget
Step 3: Download the Grafana RPM file
 $ sudo wget https://dl.grafana.com/oss/release/grafana-6.6.0-1.x86_64.rpm
Step 4: Install Grafana
 $ sudo yum localinstall grafana-6.6.0-1.x86_64.rpm
Step 5: Start and enable the Grafana server
 $ sudo systemctl daemon-reload
 $ sudo systemctl start grafana-server
 $ sudo systemctl status grafana-server
 $ sudo systemctl enable grafana-server
Ref: https://grafana.com/docs/grafana/latest/installation/


