The value of "spark.yarn.executor.memoryOverhead" setting?

spark.yarn.executor.memoryOverhead

is just the maximum value. The goal is to calculate overhead as a percentage of the real executor memory, as used by RDDs and DataFrames.

--executor-memory/spark.executor.memory

controls the executor heap size, but JVMs can also use some memory off heap, for example for interned Strings and direct byte buffers.

The value of the spark.yarn.executor.memoryOverhead property is added to the executor memory to determine the full memory request to YARN for each executor. It defaults to max(executorMemory * 0.10, 384), i.e. 10% of the executor memory, with a minimum of 384 MiB.
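As a rough sketch, the default overhead and the resulting per-executor YARN request can be computed like this (a minimal Python illustration of the formula above; the function names are mine, not part of Spark's API):

```python
def default_memory_overhead_mb(executor_memory_mb):
    """Default overhead: 10% of executor memory, with a 384 MiB floor."""
    return max(int(executor_memory_mb * 0.10), 384)

def yarn_container_request_mb(executor_memory_mb, overhead_mb=None):
    """Total memory YARN reserves per executor: heap plus overhead."""
    if overhead_mb is None:
        overhead_mb = default_memory_overhead_mb(executor_memory_mb)
    return executor_memory_mb + overhead_mb

# An 8 GiB (8192 MiB) executor gets 819 MiB of overhead,
# so YARN reserves 9011 MiB per executor.
print(yarn_container_request_mb(8192))

# A small 1 GiB executor hits the 384 MiB floor instead of 10%.
print(yarn_container_request_mb(1024))
```

Setting spark.yarn.executor.memoryOverhead explicitly simply replaces the computed default in this calculation.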

Each executor's memory allocation is therefore based on the value of spark.executor.memory plus the overhead defined by spark.yarn.executor.memoryOverhead.

Author: liyong

Updated on March 06, 2020

Comments

  • liyong, about 4 years ago

    Is the value of spark.yarn.executor.memoryOverhead in a Spark job on YARN actually allocated to the app, or is it just a maximum value?