[circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be larger than the limit


After some more research, I finally found a solution:

  1. We should not disable the circuit breaker, as doing so might result in an OOM error and eventually crash Elasticsearch.
  2. Dynamically increasing the circuit breaker memory percentage works, but it is only a temporary fix, because the increased limit may eventually fill up as well.
  3. Finally, there is a third option: increase the overall JVM heap size, which is 1 GB by default. As recommended, it should be at most around 30-32 GB on production, and it should also be less than 50% of the total available memory.
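For option 3, the heap size is set in `config/jvm.options` (or via the `ES_JAVA_OPTS` environment variable). A minimal sketch; the 4g figure is just an illustrative assumption, not a recommendation for every host, so tune it to your machine:

```
# config/jvm.options -- set minimum and maximum heap to the same value
# 4g is an example; keep it below 50% of total RAM and below ~30-32 GB
-Xms4g
-Xmx4g
```

Elasticsearch recommends that `-Xms` and `-Xmx` be equal so the heap is never resized at runtime.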

For more information on good JVM memory configuration for Elasticsearch in production, see Heap: Sizing and Swapping.

Author: Raghu Chahar

I am a software engineer and avid researcher, currently digging into NodeJs, Elasticsearch, Linux, GCP, and AWS.

Updated on July 09, 2022

Comments

  • Raghu Chahar, almost 2 years ago

    After working smoothly for more than 10 months, I suddenly started getting this error on production while doing simple search queries.

    {
      "error" : {
        "root_cause" : [
          {
            "type" : "circuit_breaking_exception",
            "reason" : "[parent] Data too large, data for [<http_request>] would be [745522124/710.9mb], which is larger than the limit of [745517875/710.9mb]",
            "bytes_wanted" : 745522124,
            "bytes_limit" : 745517875
          }
        ],
        "type" : "circuit_breaking_exception",
        "reason" : "[parent] Data too large, data for [<http_request>] would be [745522124/710.9mb], which is larger than the limit of [745517875/710.9mb]",
        "bytes_wanted" : 745522124,
        "bytes_limit" : 745517875
      },
      "status" : 503
    }
    

    Initially, I was getting this circuit_breaking_exception error while doing simple term queries. To debug it, I tried a _cat/health query on the Elasticsearch cluster, but it returned the same error; even the simplest request, GET localhost:9200, gives the same error. I am not sure what happened to the cluster suddenly. Here is my circuit breaker status:

    "breakers" : {
      "request" : {
        "limit_size_in_bytes" : 639015321,
        "limit_size" : "609.4mb",
        "estimated_size_in_bytes" : 0,
        "estimated_size" : "0b",
        "overhead" : 1.0,
        "tripped" : 0
      },
      "fielddata" : {
        "limit_size_in_bytes" : 639015321,
        "limit_size" : "609.4mb",
        "estimated_size_in_bytes" : 406826332,
        "estimated_size" : "387.9mb",
        "overhead" : 1.03,
        "tripped" : 0
      },
      "in_flight_requests" : {
        "limit_size_in_bytes" : 1065025536,
        "limit_size" : "1015.6mb",
        "estimated_size_in_bytes" : 560,
        "estimated_size" : "560b",
        "overhead" : 1.0,
        "tripped" : 0
      },
      "accounting" : {
        "limit_size_in_bytes" : 1065025536,
        "limit_size" : "1015.6mb",
        "estimated_size_in_bytes" : 146387859,
        "estimated_size" : "139.6mb",
        "overhead" : 1.0,
        "tripped" : 0
      },
      "parent" : {
        "limit_size_in_bytes" : 745517875,
        "limit_size" : "710.9mb",
        "estimated_size_in_bytes" : 553214751,
        "estimated_size" : "527.5mb",
        "overhead" : 1.0,
        "tripped" : 0
      }
    }
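
    For reference, per-breaker statistics like the block above come from the standard node stats API:

    ```
    GET _nodes/stats/breaker
    ```

    This works even when search requests are being rejected, since it is a lightweight request.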
    

    I found a similar GitHub issue that suggests either increasing the circuit breaker memory or disabling it, but I am not sure which to choose. Please help!

    Elasticsearch Version 6.3
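
    For completeness, the "increase the circuit breaker memory" option is a dynamic cluster setting; a hedged sketch (the 80% figure is an assumption — on a 6.x cluster the parent breaker defaults to 70% of the heap):

    ```
    PUT _cluster/settings
    {
      "transient": {
        "indices.breaker.total.limit": "80%"
      }
    }
    ```

    As the accepted answer notes, this only buys temporary headroom; increasing the JVM heap size is the durable fix.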