Overview & Install

ELK is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana.
Elasticsearch is a search and analytics engine.

Logstash is a server-side data processing pipeline.

Kibana is the visualization layer on top of Elasticsearch (charts and dashboards).

https://www.elastic.co/fr/what-is/elk-stack

Deploy an Elastic Stack

https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-docker.html

Single Elastic Node with Docker

A test done some years ago (stack version 6.7.0):

---
version: '2.2'
services:
  elasticsearch:
    # 6.7.0
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.0
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - discovery.type=single-node
      # - ES_JAVA_OPTS="-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./esdata1:/usr/share/elasticsearch/data
      #- ./esconf1:/usr/share/elasticsearch/config/
    restart: unless-stopped
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk_net

  kibana:
    image: docker.elastic.co/kibana/kibana:6.7.0
    container_name: kibana
    # environment:
    #   SERVER_NAME: kibana.example.org
    #   ELASTICSEARCH_HOSTS: elasticsearch
    networks:
      - elk_net
    depends_on:
      - elasticsearch
    restart: unless-stopped
    ports:
      - "127.0.0.1:5601:5601"

  logstash:
    image: docker.elastic.co/logstash/logstash:6.7.0
    container_name: logstash
    volumes:
      - ./logstash/data:/usr/share/logstash/config/data
      - ./logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - ./logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipelines.yml:/usr/share/logstash/config/pipelines.yml
    networks:
      - elk_net
    depends_on:
      - elasticsearch
    restart: unless-stopped
    ports:
      - "9600:9600"
      - "5044:5044"
networks:
  elk_net:
    driver: bridge
Pre-Setup sysctl
  sysctl -w vm.max_map_count=262144  # for a persistent change, set it in /etc/sysctl.conf then run sysctl -p

  ## Permissions on the Elasticsearch data folder (uid/gid 1000 inside the container)
  mkdir esdata1
  chmod g+rwx esdata1
  chgrp 1000 esdata1
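
Once the permissions are in place, a quick smoke test after bringing the stack up (standard endpoints exposed by the compose file above):

  docker-compose up -d
  curl -s localhost:9200                  # Elasticsearch banner (name, version)
  curl -s "localhost:9200/_cat/health?v"  # cluster health
  curl -s 127.0.0.1:5601/api/status       # Kibana status API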

Note

Kibana and Logstash are linked to the coordinator/client node:
logstash: port 5044 (beats input)
coordinator service: port 9200 (HTTP)
master service: port 9300 (transport)

Run an HA Elastic Stack Cluster on Docker/K8S

Minimum nodes for a cluster (see the role sketch below):

3 ES nodes: master, data, coordinator
1 Kibana node
1 Logstash node
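
A minimal sketch of the per-node roles, assuming 6.x role settings passed as environment variables to the official image (container names, network, and discovery host are hypothetical):

    # dedicated master
    docker run -d --name es-master --net elk_net \
      -e "cluster.name=docker-cluster" -e "node.master=true" \
      -e "node.data=false" -e "node.ingest=false" \
      docker.elastic.co/elasticsearch/elasticsearch:6.7.0

    # data node
    docker run -d --name es-data --net elk_net \
      -e "cluster.name=docker-cluster" -e "discovery.zen.ping.unicast.hosts=es-master" \
      -e "node.master=false" -e "node.data=true" \
      docker.elastic.co/elasticsearch/elasticsearch:6.7.0

    # coordinating-only node: no roles, exposes the HTTP port for Kibana/Logstash
    docker run -d --name es-coordinator --net elk_net -p 9200:9200 \
      -e "cluster.name=docker-cluster" -e "discovery.zen.ping.unicast.hosts=es-master" \
      -e "node.master=false" -e "node.data=false" -e "node.ingest=false" \
      docker.elastic.co/elasticsearch/elasticsearch:6.7.0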

Basic interactions with ElasticSearch

    # list indexes/indices
    curl "localhost:9200/_cat/indices?v"

    # list shards (an index can be divided into several shards)
    curl "localhost:9200/_cat/shards?v"

    # cluster health, down to shard level
    curl "localhost:9200/_cluster/health?level=shards&pretty"

    # find shards stuck UNASSIGNED, then retry their allocation
    curl -s -XGET "http://localhost:9200/_cat/shards" | grep UNASSIGNED | awk '{print $1}'
    curl -X POST "http://localhost:9200/_cluster/reroute?retry_failed=true"

    # explain why a shard is not allocated
    curl -XGET "http://localhost:9200/_cluster/allocation/explain?pretty"

    # list index data
    curl -X GET "localhost:9200/${index}/_search?pretty=true&q=*:*"

    # Create/Delete index
    # https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html

      # create an empty index
      curl -X PUT "localhost:9200/customer"

      # index a document (this also creates the index if it does not exist yet)
      curl -X PUT "localhost:9200/customer/_doc/1?pretty" -H 'Content-Type: application/json' -d'
      {
        "name": "John Doe"
      }
      '

      # get / delete a single document
      curl -X GET "localhost:9200/${index}/${type}/${id}"
      curl -X DELETE "localhost:9200/${index}/${type}/${id}"

      curl -X GET "localhost:9200/runner-deploy.2019.06/deploy/PJnsemsBN3EZTx6cMHFU"
      curl -X GET "localhost:9200/customer/_doc/1"   # here the type is _doc

      # delete the whole index
      curl -X DELETE "localhost:9200/customer"

      # disable shard allocation (primaries only)
      curl -X PUT -H 'Content-Type: application/json' 'localhost:9200/_cluster/settings' -d'
      {
        "persistent": {
          "cluster.routing.allocation.enable": "primaries"
        }
      }'

      # set replicas to 0 in the settings of all indexes
      curl -X PUT -H 'Content-Type: application/json' 'localhost:9200/_settings' -d '
      {
          "index" : {
              "number_of_replicas" : 0
          }
      }'
Generic template
    PUT _template/General_Delete
    {
      "index_patterns": ["*"],
      "order" : 0,
      "settings": {
        "number_of_replicas": 1
      }
    }
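
To check that the template is in place (standard _template API, using the name from the example above):

    curl -X GET "localhost:9200/_template/General_Delete?pretty"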

Cluster management

  * cluster api: `https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster.html`
  * cluster state: `https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-state.html`

Fix Forbidden - Read-Only

Error: "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"

curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'{"index.blocks.read_only_allow_delete": false}'
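
This block is typically applied automatically when a node crosses the flood-stage disk watermark, so check disk usage per node before (and after) removing it:

curl "localhost:9200/_cat/allocation?v"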

Clean old data with Curator: https://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html

Migrating data (re-indexing) between 2 Elasticsearch instances

To do in Kibana

  • Doc: https://www.elastic.co/guide/en/elasticsearch/reference/current/reindex-upgrade-remote.html
  • Whitelist the old Elasticsearch in the new one's configuration: reindex.remote.whitelist: old.elastic.com:9200

On the target Kibana (Dev Tools):
  # migrating index runner-linter-2019.04
  POST _reindex
  {
    "source": {
      "remote": {
        "host": "old.elastic.com:9200"
      },
      "index": "gitlab-runner.2019.04",
      "query": {
        "match_all": {}
      }
    },
    "dest": {
      "index": "runner-linter-2019.04"
    }
  }
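
The same reindex can be run with curl against the new cluster (a sketch reusing the hosts and index names above):

  curl -X POST "localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
  {
    "source": {
      "remote": { "host": "http://old.elastic.com:9200" },
      "index": "gitlab-runner.2019.04",
      "query": { "match_all": {} }
    },
    "dest": { "index": "runner-linter-2019.04" }
  }'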

Another example of re-indexing between two Elasticsearch instances

#!/bin/bash


# indice.txt contains one document id per line
doc_ids=$(cat indice.txt)

for id in ${doc_ids}
do
    echo -e "\nstart doc: $id\n"
    # fetch the source document and turn numeric fields stored as strings
    # (buildDuration, LEAD_TIME, COMMIT_NUM, errors) into real numbers
    curl --silent -X GET "http://elastic1:9200/poc_deploy.2019.08/deploy/${id}?pretty=true" \
    | jq "._source" \
    | sed 's/^\([ ]*"buildDuration": \)"\([0-9][0-9]*\)"\(,\)$/\1\2\3/g' \
    | sed 's/^\([ ]*"LEAD_TIME": \)"\([0-9][0-9]*\)"\(,\)$/\1\2\3/g' \
    | sed 's/^\([ ]*"COMMIT_NUM": \)"\([0-9][0-9]*\)"$/\1\2/g' \
    | sed 's/^\([ ]*"errors": \)"\([0-9][0-9]*\)"\(,\)$/\1\2\3/g' > data.json

    # post the cleaned document to the target cluster
    curl --silent -X POST "elastic2:9200/poc-deploy-2019-08/deploy/" \
    -H 'Content-Type: application/json' -d @data.json

    echo -e "\nFinished doc: $id\n"

done
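
One way to build indice.txt, assuming jq is available (pulls up to 1000 document ids from the source index):

curl -s "http://elastic1:9200/poc_deploy.2019.08/_search?size=1000&_source=false" \
| jq -r '.hits.hits[]._id' > indice.txt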

Kibana - LB & HA

Disable monitoring in Kibana (Dev Tools):
  PUT _cluster/settings
  {"persistent" : {"xpack.monitoring.collection.enabled": false }}
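
The curl equivalent, useful when Kibana itself is down:

  curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' \
  -d'{"persistent": {"xpack.monitoring.collection.enabled": false}}'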

Kibana With Nginx

Nginx

Kibana listens on port 5601 by default.
You can configure an Nginx reverse proxy to expose it on port 443 and
to enable basic authentication if you are using the free license.
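
The .htpasswd file referenced in the config below can be created with htpasswd (from apache2-utils/httpd-tools; the path matches the compose volume, the user name is an example):

htpasswd -c /var/opt/data/files/nginx/.htpasswd admin   # -c creates the file; omit it for additional users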

nginx.conf

user              root;
worker_processes  1;
error_log         /var/log/nginx/error.log;
pid               /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    sendfile        on;
    server_tokens off;
    keepalive_timeout  420;

    server {
      listen 443 ssl;
      server_name mt_server;
      client_max_body_size 450M;
      client_body_buffer_size 512k;
      ssl_certificate           /etc/nginx/certs/cert.crt;
      ssl_certificate_key       /etc/nginx/private/cert.key;
      ssl_session_timeout   5m;
      ssl_session_cache  builtin:1000  shared:SSL:10m;
      ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
      ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
      ssl_prefer_server_ciphers on;
      access_log            /var/log/nginx/elk.access.log;

      auth_basic "restricted";
      auth_basic_user_file  /etc/nginx/private/.htpasswd;

      location /
      {
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://kibana:5601;
        #proxy_redirect http://127.0.0.1:5601;
      }

    }
}

docker-compose.yml

---
version: '2'
services:

  nginx:
    image: nginx:1.15-alpine
    container_name: nginx
    user: root
    restart: unless-stopped
    ports:
      - "443:443"
    volumes:
      - /var/opt/data/files/nginx/key.pem:/etc/nginx/private/cert.key:Z
      - /var/opt/data/files/nginx/cert.pem:/etc/nginx/certs/cert.crt:Z
      - /var/opt/data/files/nginx/nginx.conf:/etc/nginx/nginx.conf:Z
      - /var/opt/data/files/nginx/.htpasswd:/etc/nginx/private/.htpasswd:Z
    networks:
      - elk_elk_net
networks:
  elk_elk_net:
    # network created by the ELK compose project above (hence the "elk_" project prefix)
    external: true

Math Operation in graph JSON Input

A math operation in a visualization's JSON Input field, e.g. converting a millisecond metric to minutes:

{"script":"_value/60000"}

Logstash Settings

Send Jenkins pipeline build stats with logstash
    # authentication: the Logstash http input provides built-in basic authentication
    post {
        success {
            script {
                currentBuild.result = 'SUCCESS'
                logstashSend failBuild: false, maxLines: 1000
            }
        }
        failure {
            script {
                currentBuild.result = 'FAILURE'
                logstashSend failBuild: false, maxLines: 1000
            }
        }
    }
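
A hypothetical matching pipeline on the Logstash side, using the http input plugin and its built-in basic authentication (file path, port, credentials, and index name are assumptions):

cat <<'EOF' > logstash/pipeline/jenkins.conf
input {
  http {
    port => 8080            # hypothetical port for the Jenkins plugin
    user => "jenkins"       # built-in basic authentication of the http input
    password => "changeme"
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "jenkins-%{+yyyy.MM}"
  }
}
EOF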

Enable logstash on AWX

logstash - awx: https://docs.ansible.com/ansible-tower/latest/html/administration/logging.html

Available loggers: awx, activity_stream, job_events, system_tracking

  • Logstash pipeline conf to manage the data coming from AWX:
input {
    tcp {
        port => 5044
        id => "logstash"
        codec => json
    }
}
output {
    if [type] == "ansible" {
        elasticsearch {
            hosts => ["http://elasticsearch:9200"]
            index => "ansible-%{+yyyy.MM}"
        }
    } else {
        elasticsearch {
            hosts => ["http://elasticsearch:9200"]
            index => "awx-%{+yyyy.MM}"
        }
    }

    stdout {
        codec => rubydebug
    }
}

Or a simpler variant that sends everything to a single index:

input {
    tcp {
        port => 5044
        id => "logstash"
        codec => json
    }
}

output {
    elasticsearch {
        hosts => ["http://elasticsearch:9200"]
        index => "ansible-%{+yyyy.MM}"
    }

    stdout {
        codec => rubydebug
    }
}
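
On the AWX side, logging can be pointed at this tcp input through the settings API (a sketch; URL and credentials are placeholders):

curl -k -u admin:password -X PATCH "https://awx.example.com/api/v2/settings/logging/" \
-H 'Content-Type: application/json' -d'
{
  "LOG_AGGREGATOR_ENABLED": true,
  "LOG_AGGREGATOR_TYPE": "logstash",
  "LOG_AGGREGATOR_HOST": "logstash.example.com",
  "LOG_AGGREGATOR_PORT": 5044,
  "LOG_AGGREGATOR_PROTOCOL": "tcp",
  "LOG_AGGREGATOR_LOGGERS": ["awx", "activity_stream", "job_events", "system_tracking"]
}'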
Conf & Test on AWX
    # Modify template
    message:Activity Stream update entry for job_template
    actor: dwkw0920.....

    # Launch
    logger_name:awx.analytics.activity_stream

    message:Activity Stream update entry for job
    changes.name:dxp_uat_deployment or changes.job_template:dxp_uat_deployment-31
    changes.playbook:ansible/deployment.yml
    changes.job_type:run OR check
    changes.id:1481 //job id

    job_events:
    event_data.pid: common to the events of a run
    event_data.res.msg:Failed to connect to
    event_data.role:clean_service

    event_display:Playbook Complete  // end-of-playbook event
    failed: true                     // playbook KO
    event_display:Playbook Complete AND failed: true/false => run KO/OK

    # field correspondence between the two indexes:
    # activity_stream        job_events
    #   changes.id      ==>  job
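
A hedged query example built from the fields above, counting failed playbook runs in the awx index (index pattern and field names as in these notes):

    curl -s "localhost:9200/awx-*/_search?pretty" -H 'Content-Type: application/json' -d'
    {
      "size": 0,
      "query": {
        "bool": {
          "must": [
            { "match_phrase": { "event_display": "Playbook Complete" } },
            { "term": { "failed": true } }
          ]
        }
      }
    }'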

Plugins Filebeat | Metricbeat

Filebeat on the Jenkins server to ship the syslog and Nginx logs

Install Filebeat
    curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.0-x86_64.rpm
    sudo rpm -vi filebeat-6.7.0-x86_64.rpm

    vim /etc/filebeat/filebeat.yml
      # set the output URLs and the Jenkins log input:

    - type: log
      enabled: true
      paths:
        - /var/log/jenkins/jenkins.log
      multiline.pattern: '^[A-Z]{1}[a-z]{2} {1,2}[0-9]{1,2}, [0-9]{4} {1,2}[0-9]{1,2}:[0-9]{2}:[0-9]{2}'
      multiline.negate: true
      multiline.match: after

    sudo filebeat modules enable system
    sudo filebeat modules enable nginx
    sudo filebeat setup
    sudo service filebeat start
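
Before starting, the config and the connectivity to the outputs can be checked with the built-in test subcommands:

    sudo filebeat test config
    sudo filebeat test output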
Install MetricBeat
    curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.7.0-amd64.deb
    sudo dpkg -i metricbeat-6.7.0-amd64.deb

    vim /etc/metricbeat/metricbeat.yml
      # point the output at the cluster, e.g. elastic.example.com
      # adjust index.number_of_shards: xx in /etc/metricbeat/metricbeat.yml

    # module config: /etc/metricbeat/modules.d/system.yml

    metricbeat modules enable system

    sudo metricbeat setup
    sudo service metricbeat start

Troubleshooting Notes

Foreword

If Kibana (the GUI) is unavailable, first check
whether Elasticsearch is too: ${url}/_cluster/health?pretty
    If elastic is reachable, check these values:
       "status" : "green",
       "number_of_nodes" : 7,
       "number_of_data_nodes" : 2,

     If these 3 values are good, the problem lies only with the Kibana component;
   otherwise
     the Elasticsearch cluster is impacted.
     Note that if Elasticsearch is down then Kibana is too
     (the reverse is not systematically true).


Connect to K8S
** If the problem only concerns Kibana, make sure the associated pod is up and not in error.
Look at the pod logs.

** If Elasticsearch is impacted,
check the state of the pods and the pod logs.

If the URL responds, check the filesystem space of the pods
and the shard allocation:
${url}/_nodes/stats/fs?pretty
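
A compact way to read that output, assuming jq is available (free bytes per node):

curl -s "${url}/_nodes/stats/fs?pretty" | jq '.nodes[] | {name, free_bytes: .fs.total.free_in_bytes}'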



In the docker-compose version (lx016)

When the URL doesn't answer:

connect to the server and run docker ps

Containers usually stop working because of a full filesystem (/var).
In that case, free space on the filesystem in question, then run docker-compose up -d
in the confs directory to restart the services.
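
A typical recovery sequence (the confs path is a placeholder):

df -h /var                 # confirm which filesystem is full
docker system prune -f     # reclaim space from stopped containers and dangling images
cd /path/to/confs && docker-compose up -d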