The Elastic Stack: Beats, ElasticSearch, Logstash, Kibana (BELK)
This documentation is functional but not yet complete.
ELK project: the Elastic Stack
Elastic Stack: https://www.elastic.co/fr/blog/elastic-stack-5-0-0-released
Field reports:
Installation write-ups: http://www.alasta.com/bigdata/2016/05/05/elasticstack-alpha-decouverte.html (2016) - http://magieweb.org/2017/04/tutoriel-mise-en-place-dun-serveur-de-monitoring-avec-elastic-stack-elk/ (2017) - http://blog.kinokocorp.com/?p=191 (2017 - Centos7)
Part 1: The Need
- Inventory
- Indexing
- Correlation
- Actions
- Data security
- Continuous service
- Authentication
- Backup
- XMPP alerting? https://elastalert.readthedocs.io/en/latest/ (see the sketch below)
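ElastAlert has no built-in XMPP alerter, but its command alerter can shell out to a tool such as sendxmpp. A minimal, hypothetical rule sketch (the rule name, thresholds and the alert-xmpp.sh wrapper script are assumptions; the index pattern is the one used later on this page):
  # hypothetical ElastAlert rule: fire when Apache logs many 5xx responses in a short window
  name: apache-5xx-burst
  type: frequency
  index: webtest-logs-*
  num_events: 50
  timeframe:
    minutes: 5
  filter:
  - query:
      query_string:
        query: "apache2.access.response_code: [500 TO 599]"
  alert:
  - command
  # alert-xmpp.sh is an assumed wrapper around sendxmpp
  command: ["/usr/local/bin/alert-xmpp.sh"]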
Part 2: The Tools
- Table of free vs paid features: https://www.elastic.co/fr/subscriptions#request-info
- The free components:
- ElasticSearch: search, analyze and store your data https://www.elastic.co/products/elasticsearch
- Logstash: ingest data https://www.elastic.co/fr/products/logstash
- a pipeline that simultaneously ingests and processes data from a multitude of sources, then transforms it. For shipping, Beats is the preferred solution
- Kibana: visualize your data https://www.elastic.co/fr/products/kibana
- Beats: ship data https://www.elastic.co/fr/products/beats
- Filebeat: log files
- Metricbeat: metrics
- Packetbeat: network data
- Winlogbeat: Windows event logs
- Heartbeat: uptime monitoring
- X-Pack: Search Profiler
- X-Pack: Monitoring
- To work around the LDAP module being paid-only, set up apache/nginx authentication in front: https://mapr.com/blog/how-secure-elasticsearch-and-kibana/
Tools
LibBeats
- Community Beats: https://www.elastic.co/guide/en/beats/libbeat/current/community-beats.html
- systemd.journald / http / apache / mysql / ping / openconfig / nagios
- journald example: https://github.com/mheese/journalbeat
Kafka
- Kafka complements ElasticSearch, acting as a buffer between ElasticSearch and the message senders. This makes it possible, for example, to stop ElasticSearch during an upgrade: Kafka stores the messages and delivers them once the ElasticSearch server is available again (see the topic-creation sketch below)
- Kafka-manager: a web interface for managing the cluster
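Messages are buffered per topic; the WEB-TEST_APACHE topic used later on this page can be created up front once the brokers are running. A minimal sketch with the 0.11 CLI (the partition count is an assumption):
  cd /local/kafka/kafka_2.12-0.11.0.0
  bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 2 --partitions 3 --topic WEB-TEST_APACHE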
elasticsearch-HQ
- elasticsearch-HQ is a web tool for administering an ElasticSearch cluster. It shows the state of the nodes and lets you browse the documents.
Architecture
- kafka1.domaine.fr
- kafka2.domaine.fr
- elasticstack.domaine.fr elasticsearch.domaine.fr kibana.domaine.fr (same machine)
- clientweb1.domaine.fr (filebeat)
- clientdns1.domaine.fr (logstash)
Installation
Prerequisites
- 4 vCPU; 6 GB RAM
- selinux: disabled
- firewalld: disabled
OpenJDK
- Install OpenJDK on the elasticsearch nodes and the kafka nodes
  yum install java-1.8.0-openjdk
RPM key & directory
  rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
  mkdir -p /local/rpm
  cd /local/rpm
ELASTICSEARCH
  wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.2.rpm
  rpm --install elasticsearch-5.5.2.rpm
Configuration
- vi /etc/elasticsearch/elasticsearch.yml
  # memory_lock: disable swapping for Elasticsearch (for JVM handling of large workloads)
  cluster.name: cluster-test
  node.name: ${HOSTNAME}
  bootstrap.memory_lock: true
  path.data: /local/elasticsearch/data
  path.logs: /local/elasticsearch/logs
  network.host: localhost
  http.port: 9200
- vi /usr/lib/systemd/system/elasticsearch.service
  # Uncomment the following line:
  LimitMEMLOCK=infinity
  # Remove the --quiet option from the ExecStart parameter to see elasticsearch events in journalctl
- vi /etc/sysconfig/elasticsearch
  # Uncomment the following line:
  MAX_LOCKED_MEMORY=unlimited
Start
  systemctl daemon-reload
  systemctl enable elasticsearch
  systemctl start elasticsearch
Check
  netstat -ltpn
    tcp6   0   0 127.0.0.1:9200   :::*   LISTEN   2344/java
  curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
    nodes: {......}
  curl -XGET 'localhost:9200/?pretty'
    {
      "name" : "8Y5O47R",
      "cluster_name" : "elasticsearch",
      "cluster_uuid" : "2tt8eL_2TKuUsHVzflH6xQ",
      "version" : {
        "number" : "5.5.2",
        "build_hash" : "b2f0c09",
        "build_date" : "2017-08-14T12:33:14.154Z",
        "build_snapshot" : false,
        "lucene_version" : "6.6.0"
      },
      "tagline" : "You Know, for Search"
    }
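Overall cluster state can be checked the same way; a quick sketch using the standard health endpoint:
  curl -XGET 'localhost:9200/_cluster/health?pretty'
  # "status" should be green (yellow is normal on a single node holding replica shards)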
Kafka
  mkdir /local/kafka
  cd /local/kafka
  wget http://apache.crihan.fr/dist/kafka/0.11.0.0/kafka_2.12-0.11.0.0.tgz
  tar -xvf kafka_2.12-0.11.0.0.tgz
  cd kafka_2.12-0.11.0.0
  groupadd kafka
  useradd kafka -d "/local/kafka/" -s "/bin/sh" -g "kafka" -M
Configuration
- Import the certificates:
  # Convert to pkcs12 format
  openssl pkcs12 -export -in /etc/pki/certs/cert.crt -inkey /etc/pki/certs/cert.key -chain -CAfile /etc/pki/certs/certCA.crt -name "elasticstack" -out elasticstack.p12
  # Import into the keystore
  keytool -importkeystore -deststorepass hhjjkk -destkeystore server.keystore.jks -srckeystore elasticstack.p12 -srcstoretype PKCS12
  # List the keystore:
  keytool -list -keystore server.keystore.jks
  # Certificate authority:
  keytool -keystore server.truststore.jks -alias CARoot -import -file /etc/pki/certs/certCA.crt
- vim config/server.properties
  # Listening ports + fix for the fqdn/certificate problem
  listeners=PLAINTEXT://:9092,SSL://:9093
  advertised.host.name=kafka1.domaine.fr
  advertised.listeners=PLAINTEXT://kafka1.domaine.fr:9092,SSL://kafka1.domaine.fr:9093
  # Replication across the two nodes
  offsets.topic.replication.factor=2
  transaction.state.log.replication.factor=2
  transaction.state.log.min.isr=2
  default.replication.factor=2
  # SSL
  ssl.keystore.location=/local/kafka/kafka_2.12-0.11.0.0/server.keystore.jks
  ssl.keystore.password=hhjjkk
  ssl.key.password=hhjjkk
  ssl.truststore.location=/local/kafka/kafka_2.12-0.11.0.0/server.truststore.jks
  ssl.truststore.password=hhjjkk
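Note that broker.id in the same file must be unique on each node. Once the broker is started (see Start below), the SSL listener can be verified; a sketch assuming the CA file used above:
  openssl s_client -connect kafka1.domaine.fr:9093 -CAfile /etc/pki/certs/certCA.crt </dev/null
  # should print the certificate chain and end with "Verify return code: 0 (ok)"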
- vim config/zookeeper.properties
  dataDir=/tmp/zookeeper
  clientPort=2181
  tickTime=2000
  initLimit=10
  syncLimit=5
  server.1=kafka1.domaine.fr:2888:3888
  server.2=kafka2.domaine.fr:2888:3888
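In a multi-server ensemble, ZooKeeper also requires a myid file in dataDir telling each node which server.N entry it is; a short sketch matching the entries above:
  mkdir -p /tmp/zookeeper
  echo 1 > /tmp/zookeeper/myid   # on kafka1.domaine.fr; use 2 on kafka2.domaine.fr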
Creating the systemd services
- vim /etc/systemd/system/kafka-zookeeper.service
  [Unit]
  Description=Apache Zookeeper server (Kafka)
  Documentation=http://zookeeper.apache.org
  Requires=network.target remote-fs.target
  After=network.target remote-fs.target

  [Service]
  Type=simple
  User=kafka
  Group=kafka
  Environment=JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
  ExecStart=/local/kafka/kafka_2.12-0.11.0.0/bin/zookeeper-server-start.sh /local/kafka/kafka_2.12-0.11.0.0/config/zookeeper.properties
  ExecStop=/local/kafka/kafka_2.12-0.11.0.0/bin/zookeeper-server-stop.sh

  [Install]
  WantedBy=multi-user.target
- vi /etc/systemd/system/kafka.service
  [Unit]
  Description=Apache Kafka server (broker)
  Documentation=http://kafka.apache.org/documentation.html
  Requires=network.target remote-fs.target
  After=network.target remote-fs.target kafka-zookeeper.service

  [Service]
  Type=simple
  User=kafka
  Group=kafka
  Environment=JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
  ExecStart=/local/kafka/kafka_2.12-0.11.0.0/bin/kafka-server-start.sh /local/kafka/kafka_2.12-0.11.0.0/config/server.properties
  ExecStop=/local/kafka/kafka_2.12-0.11.0.0/bin/kafka-server-stop.sh

  [Install]
  WantedBy=multi-user.target
Start
  systemctl daemon-reload
  systemctl start kafka-zookeeper.service
  systemctl start kafka.service
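To check that the brokers actually accept and deliver messages, the console producer and consumer shipped with Kafka can be used; a minimal smoke-test sketch (the topic is the one created earlier):
  cd /local/kafka/kafka_2.12-0.11.0.0
  # terminal 1: type a test line, then Ctrl-C
  bin/kafka-console-producer.sh --broker-list kafka1.domaine.fr:9092 --topic WEB-TEST_APACHE
  # terminal 2: the line should appear
  bin/kafka-console-consumer.sh --bootstrap-server kafka1.domaine.fr:9092 --topic WEB-TEST_APACHE --from-beginning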
Cluster monitoring
  git clone https://github.com/yahoo/kafka-manager.git
  cd kafka-manager/
  ./sbt clean dist
  cd target/universal/
  unzip kafka-manager-1.3.3.13.zip
  cd kafka-manager-1.3.3.13
  ZK_HOSTS=localhost:2181 ./bin/kafka-manager
----
LOGSTASH
  wget https://artifacts.elastic.co/downloads/logstash/logstash-5.5.2.rpm
  rpm -ivh logstash-5.5.2.rpm
Configuration
- Import the certificate into /etc/pki/TERENA
- Convert the key so that logstash can use it:
  error:
    [2017-09-04T15:44:06,011][ERROR][logstash.inputs.beats ] Looks like you either have an invalid key or your private key was not in PKCS8 format. {:exception=>java.lang.IllegalArgumentException: File does not contain valid private key: /etc/pki/certs/cert.key}
  solution:
    openssl pkcs8 -topk8 -inform PEM -outform PEM -in /etc/pki/certs/cert.key -out /etc/pki/certs/cert.pem -nocrypt
  cd /etc/logstash/conf.d/
- webtest.conf (apache logs)
  input {
    kafka {
      bootstrap_servers => 'kafka1.domaine.fr:9092,kafka2.domaine.fr:9092'
      topics => ["WEB-TEST_APACHE"]
      auto_offset_reset => "earliest"   # so that logstash picks up any missed logs
      codec => json {}
    }
  }
  filter {
    grok {
      match => {
        "message" => [
          "%{IPORHOST:[apache2][access][remote_ip]} - %{DATA:[apache2][access][user_name]} \[%{HTTPDATE:[apache2][access][time]}\] \"%{WORD:[apache2][access][method]} %{DATA:[apache2][access][url]} HTTP/%{NUMBER:[apache2][access][http_version]}\" %{NUMBER:[apache2][access][response_code]} %{NUMBER:[apache2][access][body_sent][bytes]}( \"%{DATA:[apache2][access][referrer]}\")?( \"%{DATA:[apache2][access][agent]}\")?",
          "%{IPORHOST:[apache2][access][remote_ip]} - %{DATA:[apache2][access][user_name]} \\[%{HTTPDATE:[apache2][access][time]}\\] \"-\" %{NUMBER:[apache2][access][response_code]} -"
        ]
      }
      remove_field => "message"
    }
    mutate {
      add_field => { "read_timestamp" => "%{@timestamp}" }
    }
    date {
      match => [ "[apache2][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
      remove_field => "[apache2][access][time]"
    }
    useragent {
      source => "[apache2][access][agent]"
      target => "[apache2][access][user_agent]"
      remove_field => "[apache2][access][agent]"
    }
    geoip {
      source => "[apache2][access][remote_ip]"
      target => "[apache2][access][geoip]"
    }
  }
  output {
    elasticsearch {
      index => "webtest-logs-%{+YYYY.MM.dd}"
      hosts => ["localhost:9200"]
      sniffing => false
    }
    stdout { codec => rubydebug }
  }
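- The filebeat-to-logstash example further down points at elasticstack.domaine.fr:5443, so logstash also needs a matching beats input. A minimal sketch, assuming the certificate paths used above and the PKCS8 key produced by the conversion (the file name beats-input.conf is an assumption):
  # /etc/logstash/conf.d/beats-input.conf
  input {
    beats {
      port => 5443
      ssl => true
      ssl_certificate => "/etc/pki/certs/cert.crt"
      ssl_key => "/etc/pki/certs/cert.pem"   # the PKCS8-converted key from above
    }
  }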
- Test the configuration and syntax
  /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/webtest.conf -t
Start
  systemctl enable logstash
  systemctl start logstash
FILEBEAT
  wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.5.2-x86_64.rpm
  rpm -vi filebeat-5.5.2-x86_64.rpm
Configuration
- Import the certificate into /etc/pki/TERENA
  filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/httpd/*log
    document_type: apache
  - input_type: log
    paths:
      - /var/log/*.log
  .............
- Example: output to logstash
  output.logstash:
    # The Logstash hosts
    hosts: ["elasticstack.domaine.fr:5443"]
    # Optional SSL. By default is off.
    # List of root certificates for HTTPS server verifications
    ssl.certificate_authorities: ["/etc/pki/certs/certCA.crt"]
    # Certificate for SSL client authentication
    #ssl.certificate: "/etc/pki/client/cert.pem"
    # Client Certificate Key
    #ssl.key: "/etc/pki/client/cert.key"
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false
- Example: output to kafka
  output.kafka:
    # initial brokers for reading cluster metadata
    #hosts: ["kafka1.domaine.fr:9092","kafka2.domaine.fr:9092"]
    hosts: ["kafka1.domaine.fr:9093","kafka2.domaine.fr:9093"]
    # message topic selection + partitioning
    topic: WEB-TEST_APACHE
    #topic: '%{[type]}'
    partition.round_robin:
      reachable_only: false
    required_acks: 1
    compression: gzip
    max_message_bytes: 1000000
    ssl.certificate_authorities: ["/etc/pki/certs/certCA.crt"]
    ssl.certificate: "/etc/pki/certs/cert.crt"
    ssl.key: "/etc/pki/certs/cert.key"
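Note that filebeat 5.x only uses one output at a time, so pick either the logstash example or the kafka example, not both. The configuration can also be checked before starting the service; a short sketch using the 5.x test flag:
  filebeat -configtest -e -c /etc/filebeat/filebeat.yml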
Start
  systemctl enable filebeat
  systemctl start filebeat
KIBANA
  wget https://artifacts.elastic.co/downloads/kibana/kibana-5.5.2-x86_64.rpm
  rpm -ivh kibana-5.5.2-x86_64.rpm
Configuration
- vim /etc/kibana/kibana.yml
  server.port: 5601
  server.host: "localhost"
  elasticsearch.url: "http://localhost:9200"
Start
  systemctl enable kibana
  systemctl start kibana
ProxyPass
  yum install httpd
  vim /etc/httpd/conf.d/kibana.conf

  <Location "/">
    ProxyPass "http://localhost:5601/"
    ProxyPassReverse "http://localhost:5601/"
    # Add the authentication of your choice (htpasswd, ldap, ...) - see the sketch below
  </Location>
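As one example of that authentication, HTTP basic auth against a htpasswd file; a minimal sketch (the user name and file path are assumptions):
  # create the password file with a first user
  htpasswd -c /etc/httpd/kibana.htpasswd admin

  # then inside the <Location "/"> block of kibana.conf:
  AuthType Basic
  AuthName "Kibana"
  AuthUserFile /etc/httpd/kibana.htpasswd
  Require valid-user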
Usage
- Select the indexes (to do this after the initial configuration: Management > Index Patterns)
- logstash example: filebeat-*
- kafka example: webtest-logs-*
- Time Filter field name : @timestamp
ELASTICSEARCH-HQ
  cd /local/
  git clone https://github.com/royrusso/elasticsearch-HQ.git
Configuration
- vim /etc/elasticsearch/elasticsearch.yml
  .....
  http.cors.enabled: true
  http.cors.allow-origin: "*"   # restrict this to the IPs allowed to administer the cluster
- vim /etc/httpd/conf.d/proxypass.conf
  <Location "/elasticsearch-HQ">
    ProxyPass "!"
  </Location>

  <Directory "/var/www/html/elasticsearch-HQ">
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
  </Directory>
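The clone above lives in /local/ while apache serves /var/www/html/elasticsearch-HQ; a symlink is one way to bridge the two (FollowSymLinks is already enabled above):
  ln -s /local/elasticsearch-HQ /var/www/html/elasticsearch-HQ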
OTHER
Packetbeat
Warning! High CPU usage when there are many requests.
  yum install libpcap
  wget https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-5.6.0-x86_64.rpm
  rpm -vi packetbeat-5.6.0-x86_64.rpm
- Import the dashboards into kibana
  /usr/share/packetbeat/scripts/import_dashboards -es http://elasticsearch.domaine.fr:9200
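- Before starting the service, the sniffing interface and the protocols to decode are set in /etc/packetbeat/packetbeat.yml; a minimal sketch (the port list is an assumption):
  packetbeat.interfaces.device: any
  packetbeat.protocols.http:
    ports: [80, 8080]
  output.elasticsearch:
    hosts: ["elasticsearch.domaine.fr:9200"]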
Grok debugger (useful for testing the patterns above): http://grokdebug.herokuapp.com/