
Wednesday, July 20, 2022

Kubernetes Introduction - Part 1

 

What is Kubernetes?

  • Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
  • Kubernetes helps to package and distribute applications using container images and container technologies.
  • It groups containers that make up an application into logical units for easy management and discovery.
  • Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.
  • Kubernetes Features

    Automatic bin packing

    • Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability.

    • Mix critical and best-effort workloads in order to drive up utilization and save even more resources.

    Automated rollouts and rollbacks

    • Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time.

    • If something goes wrong, Kubernetes will roll back the change for you.

    • Take advantage of a growing ecosystem of deployment solutions.

    Horizontal scaling

    • Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage.

    Service discovery and load balancing

    • No need to modify your application to use an unfamiliar service discovery mechanism.

    • Kubernetes gives containers their own IP addresses and a single DNS name for a set of containers and can load-balance across them.

    Storage orchestration

    • Automatically mount the storage system of your choice, whether from local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.

    Self-healing

    • Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.

    Secret and configuration management

    • Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration.

    Kubernetes Architecture

    Node Controller/Master

    • The node controller is a Kubernetes master component that manages various aspects of nodes.

    • The node controller has multiple roles in a node’s life: (1) assigning a CIDR block to the node when it is registered (if CIDR assignment is turned on); (2) keeping the node controller’s internal list of nodes up to date with the cloud provider’s list of available machines; (3) monitoring the nodes’ health.

    • The services on a Node Controller/Master Node include kube-apiserver, kube-scheduler, kube-controller-manager and etcd.

    Node/Minions

    • A node is a worker machine in Kubernetes, previously known as a minion.

    • A node may be a VM or physical machine, depending on the cluster.

    • Each node has the services necessary to run pods and is managed by the node components.

    • The services on a Node include Docker, kubelet, kube-proxy, and CNI.





    Node Controller/Master Components

    Kube-ApiServer

    • The Kubernetes API server validates the configuration data stored in etcd and ensures that it is in agreement with the details of the deployed containers.

    • It also provides a RESTful interface to make communication easy.

    Kube-Controller-Manager

    • It is generally responsible for handling cluster-level functions such as replication controllers.

    • Whenever the desired state of the cluster changes, it is written to etcd, and the controller manager then tries to bring the cluster to the desired state.

    Kube-Scheduler

    • It is responsible for assigning tasks to nodes/minions in the cluster.

    Etcd

    • It is an open-source key-value store developed by the CoreOS team. Kubernetes uses etcd to store the configuration data accessed by all nodes (minions and master) in the cluster.

    Node Components

    Kubelet

    • Host-level pod management.

    • It deals with pod specifications that are defined in YAML or JSON format.

    • It is responsible for managing pods and their containers.

    • It is an agent process that runs on each node.

    • Kubelet takes the pod specifications and checks whether the pods are running and healthy.

    Kube-Proxy

    • Every node in the cluster runs a simple network proxy. Using this proxy, the cluster routes requests to the correct container on a node.

    • Manages the container network (IP addresses and ports) based on the network service manifests received from the Kubernetes master.

    Docker

    • An API and framework built around Linux Containers (LXC) that allows for the easy management of containers and their images.

    CNI

    • A network overlay that will allow containers to communicate across multiple hosts.
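
    To see these pieces on a running cluster, kubectl can be used once it is configured to talk to the cluster (a quick illustration, not part of the setup above):

        $ kubectl get nodes -o wide          # lists the master and worker nodes
        $ kubectl get pods -n kube-system    # lists control-plane and node components running as pods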



    Setting Up Azure CLI on Linux for Seamless Azure Management

    Introduction

    The Azure CLI (Command-Line Interface) is a powerful tool that enables seamless interaction with Azure resources. In this blog post, we'll walk through the process of setting up Azure CLI on a Linux machine. Whether you are an administrator, developer, or an Azure enthusiast, having Azure CLI locally on your Linux computer allows you to execute administrative commands efficiently.

    Prerequisites

    Before we start, ensure you have the following prerequisites:

    • A Linux computer
    • Administrative privileges on the Linux machine

    Step-by-Step Guide

    Step 1: Import Microsoft Repository Key

    Open your terminal and run the following command to import the Microsoft repository key:

    $ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc

    Step 2: Create Local Azure-CLI Repository Information

    Create and configure the local Azure CLI repository information by running the following commands:

    $ sudo sh -c 'echo -e "[azure-cli]
    name=Azure CLI
    baseurl=https://packages.microsoft.com/yumrepos/azure-cli
    enabled=1
    gpgcheck=1
    gpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo'


    Step 3: Install Azure CLI

    Install Azure CLI using the yum install command:

    $ sudo yum install azure-cli -y

    Step 4: Check the Installed Version

    Verify the successful installation by checking the installed version of Azure CLI:

    $ az --version

    Conclusion

    Congratulations! You've successfully set up Azure CLI on your Linux machine. With Azure CLI, you can now execute a wide range of administrative commands, manage Azure resources, and streamline your interactions with Azure services directly from the command line.
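
    As a quick illustration (the commands below assume you have an active subscription, and output formats will vary), you can log in and list your subscription and resource groups straight from the shell:

    $ az login
    $ az account show --output table
    $ az group list --output table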

    For more details and additional options, you can refer to the official documentation: Install Azure CLI on Linux.

    Now you're ready to explore and harness the power of Azure CLI for efficient Azure management on your Linux environment. Happy coding!

    Logstash Installation and configuration in CentOS

    Logstash is a light-weight, open-source, server-side data processing pipeline that allows you to collect data from a variety of sources, transform it on the fly, and send it to your desired destination. It is most often used as a data pipeline for Elasticsearch, an open-source analytics and search engine.

    Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to Elasticsearch or Logstash for indexing.

    Stages:

     1. Install and configure Logstash

     2. Install and configure Filebeat (on the client servers)

    Stage 1:

    Install Logstash

    $ sudo wget https://artifacts.elastic.co/downloads/logstash/logstash-7.6.0.rpm

    $ sudo rpm -ivh logstash-7.6.0.rpm

    Configuring Logstash

    As part of configuring Logstash, we need to generate self-signed certificates. We can generate the certificates in two ways: (i) based on a private IP, or (ii) based on a domain (FQDN).

    Option 1: Based on Private IP

    Add an SSL certificate based on the IP address of the ELK server. Add the ELK server’s private IP in /etc/pki/tls/openssl.cnf.

    $ sudo vi /etc/pki/tls/openssl.cnf

    Add the following line, with the private IP, just below the [ v3_ca ] section:

    subjectAltName = IP: 192.168.20.158

     GENERATE A SELF-SIGNED CERTIFICATE VALID FOR 3650 DAYS

    $ cd /etc/pki/tls

    $ sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/ssl-logstash-forwarder.key -out certs/ssl-logstash-forwarder.crt
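
    To confirm the generated certificate carries the expected IP in its Subject Alternative Name, it can be inspected with openssl (a quick optional check):

    $ openssl x509 -in /etc/pki/tls/certs/ssl-logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'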

    Option 2: Based on Domain (FQDN)

    $ cd /etc/pki/tls

    $ sudo openssl req -subj '/CN=ELK_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/ssl-logstash-forwarder.key -out certs/ssl-logstash-forwarder.crt


    LOGSTASH INPUT, FILTER, OUTPUT FILES

    Logstash configuration files use a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs, usually specified as separate files.

    INPUT.CONF 

    Insert the following lines into it. This is necessary for Logstash to “learn” how to process beats coming from clients. Make sure the certificate and key paths match the paths from the previous step. This configuration specifies that Logstash will listen on TCP port 5044, i.e. the log forwarder will connect to this port to send logs.

    $ sudo vi /etc/logstash/conf.d/01-beats-input.conf

    input {
      beats {
        port => 5044
        ssl => true
        ssl_certificate => "/etc/pki/tls/certs/ssl-logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/ssl-logstash-forwarder.key"
      }
    }

    FILTER.CONF

    This filter looks for logs that are labeled as “syslog” type (by Filebeat) and uses grok to parse incoming syslog logs to make them structured and queryable.

    $ sudo vi /etc/logstash/conf.d/01-beats-filter.conf

    filter {
      if [type] == "syslog" {
        grok {
          match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
          add_field => [ "received_at", "%{@timestamp}" ]
          add_field => [ "received_from", "%{host}" ]
        }
        syslog_pri { }
        date {
          match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
      }
    }

    OUTPUT.CONF

    In output.conf we configure Logstash to store the Beats data in Elasticsearch, which is running at localhost:9200, in an index named after the beat used (filebeat, in our case).

    $ sudo vi /etc/logstash/conf.d/01-beats-output.conf

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        sniffing => true
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
    }
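
    Before starting the service, the pipeline files can be validated. A hedged example using the Logstash binary bundled with the RPM (the binary and settings paths may differ on your install):

    $ sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/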

    START AND ENABLE LOGSTASH

    $ sudo systemctl daemon-reload

    $ sudo systemctl start logstash

    $ sudo systemctl enable logstash
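
    Once Logstash is running, confirm that the Beats input is listening on port 5044 (assuming netstat is installed; ss -lntp works equally well):

    $ sudo netstat -lntp | grep 5044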

    Stage 2:

    Install Filebeat (on the Client Servers)

    We need to install Filebeat on the client servers (i.e. the servers from which we plan to collect logs). There can be one or more client servers, and the process below must be followed on each of them.

    COPY THE SSL CERTIFICATE FROM THE ELK SERVER TO THE CLIENT(S)

    $ sudo scp /etc/pki/tls/certs/ssl-logstash-forwarder.crt root@<client server IP>:/etc/pki/tls/certs/

    NOTE

    Once the generated certificate has been copied to the client servers, perform the following steps on each client machine (the ones sending logs to the ELK server).

    IMPORT THE ELASTICSEARCH PUBLIC GPG KEY

    $ sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

    CREATE AND EDIT A NEW YUM REPOSITORY FILE FOR FILEBEAT

    $ sudo vi /etc/yum.repos.d/filebeat.repo

    Add the following repository configuration to filebeat.repo file

    [elastic-7.x]
    name=Elastic repository for 7.x packages
    baseurl=https://artifacts.elastic.co/packages/7.x/yum
    gpgcheck=1
    gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    autorefresh=1
    type=rpm-md

    INSTALL THE FILEBEAT PACKAGE

    $ sudo yum -y install filebeat

    CONFIGURE FILEBEAT

    Edit the Filebeat configuration file and replace ‘elk_server_private_ip‘ with the private IP of your ELK server, i.e. hosts: ["elk_server_private_ip:5044"].

    $ sudo vi /etc/filebeat/filebeat.yml

    filebeat:
      prospectors:
        -
          paths:
            - /var/log/secure
            - /var/log/messages
            # - /var/log/*.log
          input_type: log
          document_type: syslog
      registry_file: /var/lib/filebeat/registry

    output:
      logstash:
        hosts: ["elk_server_private_ip:5044"]
        bulk_max_size: 1024
        tls:
          certificate_authorities: ["/etc/pki/tls/certs/ssl-logstash-forwarder.crt"]

    shipper:

    logging:
      files:
        rotateeverybytes: 10485760 # = 10MB

    START AND ENABLE FILEBEAT

    $ sudo systemctl start filebeat

    $ sudo systemctl enable filebeat

    Test Filebeat

    On your ELK Server, verify that Elasticsearch is receiving the data by querying for the Filebeat index with this command:

    $ curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'


    Ref: https://www.elastic.co/logstash/

    Ref: https://www.elastic.co/beats/filebeat

    Sunday, September 6, 2020

    Jenkins Installation using Binary and Distribution methods in Ubuntu

    Jenkins is an Open Source automation server. It helps to automate the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery. It is a server-based system that runs in servlet containers such as Apache Tomcat ...

    Prerequisites:

    To set up Jenkins, as a prerequisite we need to install Java on the server, and then set up Jenkins.

          Step 1: Set up the OpenJDK repo on the machine

             $ cd /opt
             $ sudo add-apt-repository ppa:openjdk-r/ppa

          Step 2: Update the OS after adding the repo and install JDK 8
             
             $ sudo apt update
             $ sudo apt install openjdk-8-jdk

    Jenkins installation using Binary:

         Step 3:  Download and install the necessary GPG key with the command

             $ wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -

         Step 4: Add the necessary repository with the command

             $  sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'  

        Step 5: Update apt with the command 

            $ sudo apt-get update

        Step 6: Install Jenkins with the command 

            $ sudo apt-get install jenkins -y

        Step 7:  Start the Jenkins Service

            $ sudo systemctl start jenkins
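
        After the service starts, Jenkins listens on port 8080 by default, and the setup wizard asks for the initial admin password, which the package writes to a well-known file (the path below is the standard location for the Debian/Ubuntu package):

            $ sudo systemctl status jenkins
            $ sudo cat /var/lib/jenkins/secrets/initialAdminPassword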


    Jenkins Installation using Distribution method

    Tomcat Installation:    from https://tomcat.apache.org/download-90.cgi

        Step 1: Download Tomcat from the above URL

            $ cd /opt/
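
            For example, the tarball used in the next step can be fetched with wget (the mirror path below is an assumption; adjust it to the version shown on the download page):

            $ sudo wget https://archive.apache.org/dist/tomcat/tomcat-9/v9.0.37/bin/apache-tomcat-9.0.37.tar.gz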

        Step 2: Untar the downloaded file and rename the directory

            $ tar -xvzf apache-tomcat-9.0.37.tar.gz -C /opt

            $ mv /opt/apache-tomcat-9.0.37/ /opt/tomcat

            $ cd tomcat/bin

        Step 3: Change the mode of the scripts, giving the user execute permission in the current bin directory.

            $ sudo chmod u+x *  

    Jenkins Installation:   from https://updates.jenkins-ci.org/

        Step 4: Download the jenkins.war file from the above URL
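
            For example (a sketch; the mirror path may change, and a specific version can be picked from the same site):

            $ sudo wget https://updates.jenkins-ci.org/latest/jenkins.war -P /opt/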

        Step 5: Move the jenkins.war file to /opt/tomcat/webapps

            $ sudo mv /opt/jenkins.war /opt/tomcat/webapps/

            $ sudo apt update

            $ cd /opt/tomcat/bin

        Step 6: Start the jenkins from the current directory (/opt/tomcat/bin) 

            $ sudo sh startup.sh
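
        Once Tomcat is up, Jenkins is served under the /jenkins context on Tomcat's port (8080 by default); a simple smoke test (adjust host/port if you changed Tomcat's defaults):

            $ curl -I http://localhost:8080/jenkins/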



    Fluentd Installation in CentOS and RHEL

    Fluentd

    Fluentd is a cross-platform open-source data collection software project originally developed at Treasure Data. It is written primarily in the Ruby programming language.

    Fluentd has two different configuration parts: the Fluentd Aggregator configuration, which is done on the Elasticsearch and Kibana server side, and the Fluentd Forwarder configuration, which is done on the application side and forwards the application logs from the app server/web server to the Elastic Stack. For the Elastic Stack configuration with Elasticsearch click here, and for Kibana click here.

    Fluentd Aggregator configuration

    Step 1:  Install the td-agent

             # curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent3.sh | sh

             #  yum -y install gcc libcurl-devel

             # yum groupinstall "Development Tools" kernel-devel kernel-headers -y

             # sudo /opt/td-agent/embedded/bin/fluent-gem install fluent-plugin-elasticsearch

             # wget https://rubygems.org/gems/fluent-plugin-elasticsearch/versions/3.3.0

    Step 2:  Edit the /etc/td-agent/td-agent.conf file. Remove the existing lines and replace with the below code.

          # vim /etc/td-agent/td-agent.conf

               <source>
                 @type forward
                 port 24224
               </source>

               <match *.log>
                 @type copy
                 <store>
                   @type file
                   path /var/log/td-agent/httpd/access_forward.log
                   time_slice_format %Y%m%d
                   time_slice_wait 10m
                   time_format %Y%m%dT%H%M%S%z
                   compress gzip
                   utc
                 </store>
                 <store>
                   @type elasticsearch_dynamic
                   host 192.168.0.34
                   port 9200
                   index_name fluentd-${tag_parts[1]+ "-" + Time.at(time).getlocal("+05:30").strftime(@logstash_dateformat)}
                   logstash_format true
                   time_format %Y-%m-%dT%H:%M:%S
                   timezone +0530
                   include_timestamp true
                   type_name fluentd
                   <buffer>
                     flush_interval 5s
                     flush_thread_count 3
                     chunk_limit_size 64m
                   </buffer>
                 </store>
               </match>

    Step 3:  Enable and start the td-agent.service

            #  systemctl enable td-agent.service

            #  systemctl start td-agent.service

            #  systemctl status td-agent.service

    Step 4:  Check the td-agent log file.

            # tail -f /var/log/td-agent/td-agent.log
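
    To verify that the aggregator accepts events on the forward port, a test record can be sent with the fluent-cat utility bundled with td-agent (the path below assumes the default td-agent install location; the tag matches the <match *.log> block above):

            # echo '{"message":"aggregator test"}' | /opt/td-agent/embedded/bin/fluent-cat test.log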


    Fluentd Forwarder Configuration:

    Step 5:  Install the td-agent

             # curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent3.sh | sh

             #  yum -y install gcc libcurl-devel

             # yum groupinstall "Development Tools" kernel-devel kernel-headers -y

    Step 6:  Edit the log file permissions

                   i. Change the httpd log directory permissions to “og+rx”

                   ii. Change the log file permissions to “og+r” in the httpd directory

    Step 7:  Edit the /etc/td-agent/td-agent.conf file. Remove the existing lines and replace with the below code.

            # vim /etc/td-agent/td-agent.conf

                <match td.*.*>
                  @type tdlog
                  apikey YOUR_API_KEY
                  auto_create_table
                  buffer_type file
                  buffer_path /var/log/td-agent/buffer/td
                  <secondary>
                    @type file
                    path /var/log/td-agent/failed_records
                  </secondary>
                </match>

                <match debug.**>
                  @type stdout
                </match>

                <source>
                  @type forward
                  port 24224
                </source>

                <source>
                  @type http
                  port 8888
                </source>

                <source>
                  @type debug_agent
                  bind 192.168.0.22
                  port 24230
                </source>

                <source>
                  @type tail
                  path /var/log/httpd/*.log
                  pos_file /var/log/td-agent/access.log.pos
                  tag access.log
                  format none
                  time_format %Y-%m-%d %H:%M:%S,%L %z
                  timezone +0530
                  time_key time
                  keep_time_key true
                  types time:time
                </source>

                <match *.log>
                  @type copy
                  <store>
                    @type file
                    path /var/log/td-agent/httpd/access_forward.log
                  </store>
                  <store>
                    @type forward
                    heartbeat_type tcp
                    <server>
                      host 192.168.0.34
                    </server>
                    flush_interval 5s
                  </store>
                </match>

     Step 8:  Enable and start the td-agent.service

            #  systemctl enable td-agent.service

            #  systemctl start td-agent.service

            #  systemctl status td-agent.service

    Step 9:  Check the td-agent log file.

            # tail -f /var/log/td-agent/td-agent.log
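
    Finally, on the aggregator/Elasticsearch side you can confirm that forwarded events are being indexed by listing the indices (a hedged check; the host and port are the ones used in the aggregator configuration above, and the index names depend on the logstash_format/index_name settings):

            # curl -XGET 'http://192.168.0.34:9200/_cat/indices?v'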


    Ref: https://www.fluentd.org/


    Thursday, September 3, 2020

    Elasticsearch Installation in CentOS and RHEL

    #Elasticsearch  #Installation #CentOS #RHEL

    What is Elasticsearch?

          Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data for lightning fast search, fine tuned relevancy, and powerful analytics that scale with ease. 

    Now, we will see how to install Elasticsearch on CentOS and RHEL servers. Java is a prerequisite for Elasticsearch, so first we will install Java and set up the environment variables, and then install Elasticsearch with a basic production configuration.

    Prerequisite:

    To set up Elasticsearch, Java needs to be installed on the Elasticsearch server. Using the following steps we can install Java.

    Step 1:  Install OpenJDK 1.8 on the Elasticsearch server
        
         a) Update the server and install Java with the below commands.

              $ sudo yum update 
              $ sudo yum install java-1.8.0-openjdk -y

         b) Once Java is installed, check the Java version using the below command

              $ sudo java -version

         c) After validating the Java version, configure JAVA_HOME for use at runtime in the .bashrc file

              $ sudo update-alternatives --config java

              $ sudo vi .bashrc
               export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre
     
         d) Source the .bashrc to enable JAVA_HOME

              $ source .bashrc

    Elasticsearch Setup:

    Once the Java setup is completed, we need to set up Elasticsearch using the below steps.

    Step 2: Before installing Elasticsearch, add the elastic.co key to the server.

             $ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

    Step 3: Now, Download the Latest RPM of Elasticsearch
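
           For example, the 7.9.1 RPM installed in the next step can be downloaded from the Elastic artifacts site (a sketch; adjust the version to the latest release as needed):

           $ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.1-x86_64.rpm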


    Step 4: Install the Downloaded RPM

           $ rpm -ivh elasticsearch-7.9.1-x86_64.rpm

    Step 5:  Now go to the configuration directory and edit the elasticsearch.yml configuration file. Enable the below lines in the configuration file:

          $ cd /etc/elasticsearch/
          $ vim elasticsearch.yml
                 bootstrap.memory_lock: true
                 xpack.monitoring.enabled: true
                 network.host: localhost
                 http.port: 9200
                 http.max_content_length: 1.6gb

         And Save the file.

    Step 6: Now edit the elasticsearch.service file for the memory-lock configuration. Uncomment the LimitMEMLOCK line, or if it is not there, add it:

         $ vim /usr/lib/systemd/system/elasticsearch.service

               LimitMEMLOCK=infinity

    Step 7: Edit the sysconfig configuration file for Elasticsearch. Uncomment line 60 and make sure the value is 'unlimited'.

         $ vim /etc/sysconfig/elasticsearch
               MAX_LOCKED_MEMORY=unlimited
               MAX_OPEN_FILES=65535
               MAX_MAP_COUNT=262144

    Step 8: Reload systemd, enable Elasticsearch to start at boot time, then start the service.

         $ sudo systemctl daemon-reload
         $ sudo systemctl enable elasticsearch
         $ sudo systemctl start elasticsearch

    Step 9: To check whether Elasticsearch is running, check for the listening port 9200:

         $ netstat -lntp 

    Step 10: Then check the memory lock to ensure that mlockall is enabled,

         $ curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
         $ curl -XGET 'localhost:9200/?pretty'




    Next: Kibana Installation in CentOS

    Friday, April 17, 2020

    Install Grafana in CentOS with YUM/Distribution-method/RPM Methods

    Grafana Introduction

    • Grafana is a multi-platform open source analytics and interactive visualisation software
    • It is available since 2014
    • It provides charts, graphs and alerts for the web when connected to supported data sources.
    • It is expandable through a plug-in system.

    Supported operating systems

    • Debian/Ubuntu
    • RPM-Based Linux (CentOS, Fedora, Opensuse, RHEL)
    • MacOS
    • Windows
    Installation of Grafana on other operating systems is possible, but not supported.

    Hardware Recommendations

    • Grafana does not use a lot of resources and is very lightweight in use of memory and CPU.
    • Minimum recommended memory: 255 MB 
    • Minimum recommended CPU: 1
    • Some features might require more memory or CPU. Features that require more resources include:
                   a. Server-side rendering of images
                   b. Alerting
                   c. Data source proxy

    Installation with YUM repo:

    Step 1: Update the server
    $ sudo yum update
    Step 2: Add the Grafana repo for installing Grafana
    $ sudo vi /etc/yum.repos.d/grafana.repo
    [grafana]
    name=grafana
    baseurl=https://packages.grafana.com/oss/rpm
    repo_gpgcheck=1
    enabled=1
    gpgcheck=1
    gpgkey=https://packages.grafana.com/gpg.key
    sslverify=1
    sslcacert=/etc/pki/tls/certs/ca-bundle.crt
    Step 3: Installing Grafana
    $ sudo yum install grafana
    Step 4: Start and Enable the grafana service
    $ sudo systemctl daemon-reload
    $ sudo systemctl start grafana-server
    $ sudo systemctl status grafana-server
    $ sudo systemctl enable grafana-server
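    Grafana listens on port 3000 by default; a quick way to confirm the service is responding (the default login on first start is admin/admin):
    $ curl -I http://localhost:3000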

    Installation with Distribution Method (Using Binary):
    Step 1: Download the Grafana tarball
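    For example, the 6.6.0 tarball extracted in the next step can be fetched with wget (the URL below assumes Grafana's standard release download location; adjust for your version):
    $ sudo wget https://dl.grafana.com/oss/release/grafana-6.6.0.linux-amd64.tar.gz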
    Step 2: Extract the Grafana tarball
    $ tar -zxvf grafana-6.6.0.linux-amd64.tar.gz
    Step 3: Start and enable the grafana server
    $ sudo systemctl daemon-reload
    $ sudo systemctl start grafana-server
    $ sudo systemctl status grafana-server
    $ sudo systemctl enable grafana-server

    Installation with RPM:

    Step 1: Add the Grafana RPM key
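    For example, the key referenced by gpgkey in the YUM repo above can be imported directly (same URL as in the repo file):
     $ sudo rpm --import https://packages.grafana.com/gpg.key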
    Step 2: Install some dependencies required to install Grafana with RPM
     $ sudo yum install initscripts urw-fonts wget
    Step 3: Download the Grafana RPM file (grafana-6.6.0-1.x86_64.rpm) from the Grafana download page
    Step 4: Install the downloaded RPM, either directly with rpm:
     $ sudo rpm -Uvh grafana-6.6.0-1.x86_64.rpm
    or with yum localinstall:
    $ sudo yum localinstall grafana-6.6.0-1.x86_64.rpm
    Step 5: Start and Enable the Grafana Server
     $ sudo systemctl daemon-reload
     $ sudo systemctl start grafana-server
     $ sudo systemctl status grafana-server
     $ sudo systemctl enable grafana-server
    Referral Page : https://grafana.com/docs/grafana/latest/installation/


