What is the purpose of Elasticsearch logs? How to manage them?


Solution 1

The ES logs contain information about your running cluster, e.g. errors and warnings. If you don't see any problems in your logs, you can safely delete old log files by hand or with logrotate.

To reduce the size of your indexes you have to remove documents from them, because the indexes are where ES stores its data. Do not use logrotate on the index files, or strange things will happen.
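
For example, if you write log data into time-based indices (one index per day), you can drop an entire old index with the delete index API. A minimal sketch, assuming a hypothetical index named logstash-2014.12.17 and a node listening on localhost:9200:

# deleting a whole day's index frees the disk space its documents used
curl -XDELETE 'http://localhost:9200/logstash-2014.12.17'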

Solution 2

In case anyone is interested, here is how I resolved the issue with the logs...

After some investigation, I found that you can set the number of log files to keep in logging.yml (which by default lives in /etc/elasticsearch) by adding a

maxBackupIndex: x

line, like this:

file:
  type: dailyRollingFile
  file: ${path.logs}/${cluster.name}.log
  datePattern: "'.'yyyy-MM-dd"
  maxBackupIndex: 7
  layout:
    type: pattern
    conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

which keeps only the 7 most recent log files.
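
If your Elasticsearch version does not honor maxBackupIndex on the dailyRollingFile appender (see the comments below), a size-based rollingFile appender is an alternative. This is an untested sketch; the 100MB maxFileSize value is just an example:

file:
  type: rollingFile
  file: ${path.logs}/${cluster.name}.log
  # keep at most 7 rolled files of roughly 100MB each
  maxFileSize: 100MB
  maxBackupIndex: 7
  layout:
    type: pattern
    conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"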

But because my log files are too large and the solution above doesn't apply compression, I decided to use logrotate with compression instead. Here is my config, /etc/logrotate.d/elasticsearch:

/var/log/elasticsearch/elasticsearch.log.????-??-?? {
  daily
  missingok
  rotate 1
  compress
  notifempty
}

Short description: once a day, compress the log files that end with a date (Elasticsearch creates one new file each day), keep one rotated copy, skip empty files, and don't report an error if a file is missing.
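
To check what this config would do without actually rotating anything, you can run logrotate in debug mode (assuming the file was saved at the path above; run as root):

# dry run: show what would be rotated without changing anything
logrotate -d /etc/logrotate.d/elasticsearch

# force an immediate rotation to test the config for real
logrotate -f /etc/logrotate.d/elasticsearch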




Comments

  • Rustam A. Gasanov
    Rustam A. Gasanov over 1 year

    As I understand it, the indexes (i.e. the data) are stored in

    /var/lib/elasticsearch

    by default; this folder contains a nodes directory with 0 and 1 subfolders, and their overall size is 376M.

    The logs are stored in

    /var/log/elasticsearch
    
    -rw-r--r-- 1 elasticsearch elasticsearch 1.4G Dec 17 23:59 elasticsearch.log.2014-12-17
    -rw-r--r-- 1 elasticsearch elasticsearch 1.5G Dec 18 19:35 elasticsearch.log.2014-12-18
    -rw-r--r-- 1 elasticsearch elasticsearch 383M Dec 19 20:11 elasticsearch.log.2014-12-19
    -rw-r--r-- 1 elasticsearch elasticsearch 7.2G Dec 30 23:59 elasticsearch.log.2014-12-30
    -rw-r--r-- 1 elasticsearch elasticsearch 9.1G Jan  1 23:59 elasticsearch.log.2015-01-01
    -rw-r--r-- 1 elasticsearch elasticsearch  29G Jan  2 23:59 elasticsearch.log.2015-01-02
    -rw-r--r-- 1 elasticsearch elasticsearch 928K Jan  3 23:59 elasticsearch.log.2015-01-03
    -rw-r--r-- 1 elasticsearch elasticsearch  91M Jan  4 23:59 elasticsearch.log.2015-01-04
    

    As you can see, they use WAY TOO MUCH space; I was even forced to delete a 28G file just to free some space on the server.

    My elasticsearch version is 0.90.7

    According to official docs:

    From version 0.90 onwards, store compression is always enabled.

    In my case I don't see any compression; is it applied to the logs at all? If the data lives in /var/lib/, what is the purpose of the logs? Will my application keep working if I remove all of them? Why should I keep them? And if I should keep them, what do I do about their size? I can't make my indexes any smaller; maybe I can use logrotate?

  • designarti
    designarti almost 9 years
    If you choose to use logrotate, you should change logging.yml to type: file. You don't want two systems rotating your logs. In addition to changing dailyRollingFile to file, you should remove the datePattern parameter as it doesn't apply to the FileAppender log4j class.
  • Funbit
    Funbit over 8 years
    Not sure about previous versions, but in Elasticsearch 1.7.x this solution does not work; the maxBackupIndex property simply does not exist.
  • Nicolas
    Nicolas over 8 years
    Using "type: dailyRollingFile" is incorrect. You should be using "rollingFile" instead.