Elasticsearch index much larger than the actual size of the logs it indexed?


Solution 1

There are a number of reasons why the data inside of Elasticsearch would be much larger than the source data. Generally speaking, Logstash and Lucene are both working to add structure to data that is otherwise relatively unstructured. This carries some overhead.

If you're working with 3 GB of source data and your indexed data is 30 GB, that's about 10x your source data. That's big, but not necessarily unheard of. If you're including the size of replicas in that measurement, then 30 GB could be perfectly reasonable: a single replica alone doubles the on-disk footprint, since every shard's data is stored twice. Based on my own experience and intuition, I might expect something in the 3–5x range relative to source data, depending on the kind of data and the storage and analysis settings you're using in Elasticsearch.

Here are four different settings you can experiment with when trying to slim down an Elasticsearch index.

The _source Field

Elasticsearch keeps a copy of the raw original JSON of each incoming document. It's useful if you ever want to reconstruct the original contents of your index, or for match highlighting in your search results, but it definitely adds up. You may want to create an index template which disables the _source field in your index mappings.

Disabling the _source field may be the single biggest improvement you can make to disk usage, but be aware of the trade-off: without _source, Elasticsearch can no longer return, update, or reindex the original documents.
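
For example, here's a minimal sketch of an index template that disables _source for Logstash's daily indices. The template name is made up, and the syntax matches the 1.x-era Elasticsearch this question concerns, where the _default_ mapping type applied a setting to every document type; newer versions use a different template and mapping format.

    # Create a template that any new logstash-YYYY.MM.DD index will inherit
    curl -XPUT 'localhost:9200/_template/logstash_no_source' -d '{
      "template": "logstash-*",
      "mappings": {
        "_default_": {
          "_source": { "enabled": false }
        }
      }
    }'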

Documentation: Elasticsearch _source field

Individual stored fields

Separately from the _source field, you can also control whether to store the value of each individual field. This is pretty straightforward, and mentioned a few times in the Mapping documentation for core types.

If you want a very small index, then you should only store the bare minimum fields that you need returned in your search responses. That could be as little as just the document ID to correlate with a primary data store.
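
As a sketch under the same assumptions (hypothetical field names, 1.x-era syntax): mark store: true only on the fields you need back in search hits. Fields default to not being stored individually, since they're normally retrieved from _source, so this matters most once _source is disabled.

    # Store only the correlation ID; the message field is searchable but not stored
    curl -XPUT 'localhost:9200/_template/logstash_stored_fields' -d '{
      "template": "logstash-*",
      "mappings": {
        "_default_": {
          "properties": {
            "doc_id":  { "type": "string", "index": "not_analyzed", "store": true },
            "message": { "type": "string", "store": false }
          }
        }
      }
    }'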

Documentation: Elasticsearch mappings for core types

The _all Field

Sometimes you want to find documents that match a given term, and you don't really care which field that term occurs in. For that case, Elasticsearch has a special _all field, into which it shoves all the terms in all the fields in your documents.

It's convenient, but it means every term is indexed a second time. If your searches are fairly well targeted to specific fields, and you're not trying to loosely match anything and everything across your index, you can save that space by disabling the _all field in your mappings.
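
Disabling it looks like this in the same hypothetical template (note that _all was removed entirely in Elasticsearch 6.0 and later, where this setting no longer applies):

    # Stop indexing every field's terms a second time into _all
    curl -XPUT 'localhost:9200/_template/logstash_no_all' -d '{
      "template": "logstash-*",
      "mappings": {
        "_default_": {
          "_all": { "enabled": false }
        }
      }
    }'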

Documentation: Elasticsearch _all field

Analysis in general

This is back to the subject of Lucene adding structure to your otherwise unstructured data. Any fields which you intend to search against will need to be analyzed. This is the process of breaking a blob of unstructured text into tokens, and analyzing each token to normalize it or expand it into many forms. These tokens are inserted into a dictionary, and mappings between the terms and the documents (and fields) they appear in are also maintained.

This all takes space, and for some fields, you may not care to analyze them. Skipping analysis also saves some CPU time when indexing. Some kinds of analysis can really inflate your total terms, like using an n-gram analyzer with liberal settings, which breaks down your original terms into many smaller ones.
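
For instance, here's a sketch marking a hypothetical status field as not_analyzed, so its value is indexed as a single exact term with no tokenization (in modern Elasticsearch this became the keyword field type):

    # Index the raw value as one term; exact matches still work, free-text search does not
    curl -XPUT 'localhost:9200/_template/logstash_no_analysis' -d '{
      "template": "logstash-*",
      "mappings": {
        "_default_": {
          "properties": {
            "status": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }'

In practice you'd combine all of the settings above into a single template rather than four separate ones.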

Documentation: Introduction to Analysis and Analyzers


Solution 2

As the previous answer explained in detail, there are many reasons why log data can grow substantially in size once indexed into Elasticsearch. The blog post originally linked from that answer is now dead because I killed my personal blog, but it lives on at the elastic.co website: https://www.elastic.co/blog/elasticsearch-storage-the-true-story.


Comments

  • Christopher Bruce
    Christopher Bruce almost 2 years

I noticed that Elasticsearch consumed over 30 GB of disk space overnight. By comparison, the total size of all the logs I wanted to index is only 5 GB... well, not even that really, probably more like 2.5-3 GB. Is there any reason for this, and is there a way to re-configure it? I'm running the ELK stack.

  • Christopher Bruce
    Christopher Bruce over 9 years
Do you know how or where to go to disable the _source field for Elasticsearch? I've already done some googling and can't find a clear direction. Since I'm sending files to ES via Logstash, I checked there under the file input type but didn't see anything for it.
  • Nick Zadrozny
    Nick Zadrozny over 9 years
    Good question. You handle most of those settings with the index mappings. But Logstash dynamically creates an index per day. So you'll want to create a Template with the settings you want. New indices which match the template's pattern will inherit its settings.
  • Christopher Bruce
    Christopher Bruce over 9 years
    Cool, looks good as I see the template variable in the elasticsearch output type in the logstash documentation. Thanks for your immense help!
  • Nick Zadrozny
    Nick Zadrozny over 9 years
    Anytime! If you're running on AWS, allow me to plug my own humble bonsai.io for Elasticsearch hosting :)
  • slfan
    slfan over 5 years
This is more like a comment on a previous answer than a new answer.
  • petedogg
    petedogg over 5 years
Sorry, I don't use Stack Overflow very often. I wanted to make a comment, but Stack has this silly requirement of having 50 reputation to make comments.