
Wednesday, July 20, 2022

Azure CLI Installation on Ubuntu (Debian)

The Azure CLI is a cross-platform command-line tool that can be installed locally on Linux computers. You can use the Azure CLI on Linux to connect to Azure and execute administrative commands on Azure resources. The CLI on Linux allows the execution of various commands through a terminal using interactive command-line prompts or a script.

Step 1:  Get packages needed for the install process
   $ sudo apt-get update
   $ sudo apt-get install ca-certificates curl apt-transport-https lsb-release gnupg
   
Step 2: Download and install the Microsoft signing key

$ curl -sL https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/microsoft.asc.gpg > /dev/null
Step 3: Add the Azure CLI software repository

$ AZ_REPO=$(lsb_release -cs)
$ echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | sudo tee /etc/apt/sources.list.d/azure-cli.list
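Note that `AZ_REPO` has to be set in the current shell before the echo runs, otherwise the variable expands to nothing. As a quick illustration of how the repository line is assembled (`jammy` is just an example codename; in practice use `AZ_REPO=$(lsb_release -cs)`):

```shell
# Illustration: how the apt source line is assembled.
# "jammy" is an example codename; on a real system use: AZ_REPO=$(lsb_release -cs)
AZ_REPO=jammy
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ ${AZ_REPO} main"
```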
Step 4: Update repository information and install the azure-cli package
$ sudo apt-get update
$ sudo apt-get install azure-cli
Step 5: Check the Azure CLI version installed
$ az --version

Ref: https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=apt

Logstash Installation and configuration in CentOS

Logstash is a light-weight, open-source, server-side data processing pipeline that allows you to collect data from a variety of sources, transform it on the fly, and send it to your desired destination. It is most often used as a data pipeline for Elasticsearch, an open-source analytics and search engine.

Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to Elasticsearch or Logstash for indexing.

Stages:

 1. Install and configure Logstash

 2. Install and configure Filebeat (on the client servers)

Stage 1:

Install Logstash

$ sudo wget https://artifacts.elastic.co/downloads/logstash/logstash-7.6.0.rpm

$ sudo rpm -ivh logstash-7.6.0.rpm

Configuring Logstash

As part of configuring Logstash, we need to generate a self-signed certificate. It can be generated in two ways: (i) based on the server's private IP, or (ii) based on its domain name (FQDN).

Option 1: Based on Private IP

Create an SSL certificate based on the IP address of the ELK server. Add the ELK server's private IP to /etc/pki/tls/openssl.cnf.

$ sudo vi /etc/pki/tls/openssl.cnf

Add the following line, with your private IP, just below the [ v3_ca ] section:

subjectAltName = IP:192.168.20.158

 GENERATE A SELF-SIGNED CERTIFICATE VALID FOR 3650 DAYS

$ cd /etc/pki/tls

$ sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/ssl-logstash-forwarder.key -out certs/ssl-logstash-forwarder.crt

Option 2: Based on domain (FQDN)

$ cd /etc/pki/tls

$ sudo openssl req -subj '/CN=ELK_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/ssl-logstash-forwarder.key -out certs/ssl-logstash-forwarder.crt
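To confirm the IP SAN actually made it into the certificate, a throwaway test run is handy. A sketch, assuming OpenSSL 1.1.1+ (the -addext flag replaces editing openssl.cnf, and -ext prints a single extension):

```shell
# Throwaway sketch: generate a self-signed cert with an IP SAN in /tmp,
# then print the Subject Alternative Name to verify it.
cd /tmp
openssl req -x509 -days 3650 -batch -nodes -newkey rsa:2048 \
  -subj '/CN=elk-test/' -addext 'subjectAltName = IP:192.168.20.158' \
  -keyout elk-test.key -out elk-test.crt 2>/dev/null
openssl x509 -in elk-test.crt -noout -ext subjectAltName
```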


LOGSTASH INPUT, FILTER, OUTPUT FILES

Logstash configuration files use a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs, usually kept in separate files.

INPUT.CONF 

Insert the following lines into it. This tells Logstash how to process Beats events coming from clients; make sure the certificate and key paths match the files generated in the previous step. It configures Logstash to listen on TCP port 5044, the port the log forwarder connects to when sending logs.

$ sudo vi /etc/logstash/conf.d/01-beats-input.conf

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/ssl-logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/ssl-logstash-forwarder.key"
  }
}

FILTER.CONF

This filter looks for logs labeled with the "syslog" type (by Filebeat) and uses grok to parse incoming syslog lines into a structured, queryable form.

$ sudo vi /etc/logstash/conf.d/01-beats-filter.conf

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
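To get a feel for what the grok pattern extracts, here is a rough shell equivalent (illustration only; grok's SYSLOGTIMESTAMP and SYSLOGHOST patterns are more permissive than this sed regex, and the sample log line is made up):

```shell
# Illustration only: approximate the grok pattern above with sed -E.
# Captures: \1 = host, \2 = program, \3 = optional [pid], \4 = message.
LINE='Jul 20 10:15:01 elk-server sshd[1234]: Accepted password for root'
echo "$LINE" | sed -E \
  's/^[A-Z][a-z]{2} +[0-9]+ [0-9:]{8} ([^ :[]+) ([^ :[]+)(\[[0-9]+\])?: (.*)$/host=\1 program=\2 message=\4/'
```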

OUTPUT.CONF

output.conf configures Logstash to store the Beats data in Elasticsearch, which is running at localhost:9200, in an index named after the beat used (filebeat, in our case).

$ sudo vi /etc/logstash/conf.d/01-beats-output.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

START AND ENABLE LOGSTASH

$ sudo systemctl daemon-reload

$ sudo systemctl start logstash

$ sudo systemctl enable logstash
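Before (or after) starting the service, it is worth validating the pipeline files. A sketch, assuming the default RPM install path for the Logstash binary; --config.test_and_exit only parses the config and reports errors without starting the pipeline:

```shell
# Sketch: parse the pipeline files and report errors without running Logstash.
# Skips gracefully if Logstash is not installed at the default path.
LOGSTASH_BIN=/usr/share/logstash/bin/logstash
if [ -x "$LOGSTASH_BIN" ]; then
  sudo "$LOGSTASH_BIN" --config.test_and_exit -f /etc/logstash/conf.d/
else
  echo "logstash not found at $LOGSTASH_BIN; skipping config check"
fi
```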

Stage 2:

Install Filebeat (on the Client Servers)

Filebeat must be installed on each client server, meaning every server whose logs we want to ship; repeat the steps below on all of them.

COPY THE SSL CERTIFICATE FROM THE ELK SERVER TO THE CLIENT(S)

$ sudo scp /etc/pki/tls/certs/ssl-logstash-forwarder.crt root@<client server IP>:/etc/pki/tls/certs/

NOTE

Once the generated certificate has been copied to the client servers, perform the following steps on each client machine (the ones sending logs to the ELK server).

IMPORT THE ELASTICSEARCH PUBLIC GPG KEY

$ sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

CREATE AND EDIT A NEW YUM REPOSITORY FILE FOR FILEBEAT

$ sudo vi /etc/yum.repos.d/filebeat.repo

Add the following repository configuration to filebeat.repo file

[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

INSTALL THE FILEBEAT PACKAGE

$ sudo yum -y  install filebeat

CONFIGURE FILEBEAT

Edit the Filebeat configuration file and replace 'elk_server_private_ip' with the private IP of your ELK server, i.e. hosts: ["IP:5044"].

$ sudo vi /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  paths:
    - /var/log/secure
    - /var/log/messages
    # - /var/log/*.log
  fields_under_root: true
  fields:
    type: syslog

output.logstash:
  hosts: ["elk_server_private_ip:5044"]
  bulk_max_size: 1024
  ssl.certificate_authorities: ["/etc/pki/tls/certs/ssl-logstash-forwarder.crt"]

logging.files:
  rotateeverybytes: 10485760 # = 10 MB

Note: Filebeat 7.x (which the elastic-7.x repository installs) uses filebeat.inputs and ssl; the older filebeat.prospectors / tls syntax from Filebeat 1.x no longer works.
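Filebeat can check its own configuration and its connection to Logstash before you start the service. A sketch using the filebeat test subcommands (skips gracefully when filebeat is not installed):

```shell
# Sketch: validate the Filebeat config, then test the output connection.
if command -v filebeat >/dev/null 2>&1; then
  sudo filebeat test config -c /etc/filebeat/filebeat.yml
  sudo filebeat test output -c /etc/filebeat/filebeat.yml
else
  echo "filebeat not installed; skipping config test"
fi
```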

START AND ENABLE FILEBEAT

$ sudo systemctl start filebeat

$ sudo systemctl enable filebeat

Test Filebeat

On your ELK Server, verify that Elasticsearch is receiving the data by querying for the Filebeat index with this command:

$ curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
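For a quicker sanity check, the _count endpoint returns just the number of indexed documents. A sketch (the JSON below is a made-up sample response for illustration; in practice pipe `curl -s 'http://localhost:9200/filebeat-*/_count'` into the same filter):

```shell
# Sample response used for illustration; a real run would come from:
#   curl -s 'http://localhost:9200/filebeat-*/_count'
RESPONSE='{"count":42,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0}}'
# Extract the document count from the JSON with grep/cut.
echo "$RESPONSE" | grep -o '"count":[0-9]*' | cut -d: -f2
```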


Ref: https://www.elastic.co/logstash/

Ref: https://www.elastic.co/beats/filebeat

Sunday, April 12, 2020

NFS Server Installation


Important files for NFS Configration

  • /etc/exports: The main configuration file of NFS; all exported files and directories are defined in this file on the NFS server.
  • /etc/fstab: To mount an NFS directory on your system across reboots, we need to make an entry in /etc/fstab.
  • /etc/sysconfig/nfs: Configuration file of NFS that controls on which ports rpc and other services listen.

To setup NFS mounts, we’ll be needing at least two Linux/Unix machines. Here in this tutorial, I’ll be using two servers.

             NFS Server: nfs.example.com with IP-192.168.0.63

             NFS Client : nfs-client.example.com with IP-192.168.0.64

At NFS Server End

Step 1: As the first step, we will install these packages on the CentOS server with yum
#      sudo yum install nfs-utils -y
Step 2:  Now create the directory that will be shared by NFS
#  sudo mkdir /var/nfsshare
Step 3: Change the permissions of the folder as follows:
# chmod -R 755 /var/nfsshare
# chown nobody:nobody /var/nfsshare

NOTE: We use /var/nfsshare as the shared folder. If we instead share an existing directory such as /home, these permission changes would cause massive permission problems and ruin the whole hierarchy, so when sharing /home the permissions must not be changed.

Step 4: Export the shared directory to the client and start the NFS services:
# echo '/var/nfsshare 192.168.0.64(rw,sync,no_root_squash)' >> /etc/exports
# exportfs -r
# systemctl enable --now rpcbind nfs-server
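On the NFS client (nfs-client.example.com, per the setup above), the share is then mounted and made persistent via /etc/fstab. A sketch that assembles the fstab entry; /mnt/nfsshare is an assumed example mount point:

```shell
# Assemble the /etc/fstab entry for the share. The server IP and export
# path come from the setup above; /mnt/nfsshare is an example mount point.
SERVER=192.168.0.63
EXPORT=/var/nfsshare
MOUNTPOINT=/mnt/nfsshare
echo "$SERVER:$EXPORT $MOUNTPOINT nfs defaults 0 0"
# On the client you would then run (as root):
#   mkdir -p /mnt/nfsshare
#   mount -t nfs 192.168.0.63:/var/nfsshare /mnt/nfsshare
#   echo '192.168.0.63:/var/nfsshare /mnt/nfsshare nfs defaults 0 0' >> /etc/fstab
```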






