What is spark.driver.maxResultSize?


Assuming that a worker wants to send 4G of data to the driver, then having spark.driver.maxResultSize=1G will cause the worker to send 4 messages (instead of 1 with unlimited spark.driver.maxResultSize).

No. If the estimated size of the data is larger than maxResultSize, the given job will be aborted. The goal here is to protect your application from driver loss, nothing more.
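
To make that concrete, here is a minimal Scala sketch. The app name, the 500-million-row range and the exact exception text are illustrative assumptions; what the documentation confirms is only that the job is aborted once the collected results exceed the limit.

```scala
import org.apache.spark.sql.SparkSession

// The limit has to be in place before the driver starts, e.g. when building
// the session or via spark-submit --conf spark.driver.maxResultSize=1g.
val spark = SparkSession.builder()
  .appName("maxResultSize-demo")              // illustrative app name
  .config("spark.driver.maxResultSize", "1g") // the documented default
  .getOrCreate()

// A dataset whose serialized task results add up to far more than 1 GB
// (500 million longs is an arbitrary, illustrative size).
val big = spark.range(0L, 500000000L)

// collect() ships every partition's result to the driver. Once the running
// total of serialized result sizes exceeds spark.driver.maxResultSize, Spark
// aborts the job (a SparkException along the lines of "Total size of
// serialized results ... is bigger than spark.driver.maxResultSize") rather
// than letting the driver run out of memory.
val rows = big.collect()
```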

If I set it to 1M (the minimum), will it be the most protective approach?

In a sense, yes, but it is obviously not useful in practice. A good value should allow the application to proceed normally while protecting it from unexpected conditions.
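
As an illustration of what a "good value" might look like in practice, here is a hedged sketch; the numbers are assumptions, not recommendations.

```scala
import org.apache.spark.sql.SparkSession

// Illustrative sizing only: the 2g cap and the implied 8 GB driver are
// made-up numbers. The idea is to keep the cap well below spark.driver.memory
// so the driver still has headroom for its own work (scheduling state,
// block-manager metadata, query plans, ...).
val spark = SparkSession.builder()
  .appName("maxResultSize-sizing")
  .config("spark.driver.maxResultSize", "2g")
  .getOrCreate()
```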


Comments

  • gsamaras
    gsamaras almost 2 years

    The ref says:

    Limit of total size of serialized results of all partitions for each Spark action (e.g. collect). Should be at least 1M, or 0 for unlimited. Jobs will be aborted if the total size is above this limit. Having a high limit may cause out-of-memory errors in driver (depends on spark.driver.memory and memory overhead of objects in JVM). Setting a proper limit can protect the driver from out-of-memory errors.

    What does this attribute do exactly? I mean, at first (since I am not battling a job that fails due to out-of-memory errors) I thought I should increase it.

    On second thought, it seems that this attribute defines the max size of the result a worker can send to the driver, so leaving it at the default (1G) would be the best approach to protect the driver.

    But what will happen in this case? Will the worker have to send more messages, so that the only overhead is that the job runs slower?


    If I understand correctly, assuming that a worker wants to send 4G of data to the driver, then having spark.driver.maxResultSize=1G will cause the worker to send 4 messages (instead of 1 with unlimited spark.driver.maxResultSize). If so, then increasing that attribute to protect my driver from being killed by YARN should be wrong.

    But still the question above remains: if I set it to 1M (the minimum), will it be the most protective approach?

  • Tom N Tech
    Tom N Tech about 5 years
    Setting it to 0 for unlimited is highly convenient right up until that makes things crash.
  • Neo
    Neo about 5 years
    Why is setting maxResultSize to the maximum not a good option? How does it make the driver fail?
  • Thomas Decaux
    Thomas Decaux over 4 years
    Because the driver does a lot of things (keeping track of workers, the block manager, etc.), so not enough heap => crash.
  • Wes Hardaker
    Wes Hardaker over 4 years
    So, if you set it to a low value... it crashes too! It's kinda like an assert(): you hit a condition you don't want and either it'll stop because of the assert or it'll crash because it hit a heap limit. If you don't do the assert, in theory it'll take longer before the heap crash (possibly with disk thrashing while swapping).
  • Blaisem
    Blaisem over 3 years
    @ThomasDecaux What does maxResultSize have to do with the driver memory heap? Does setting maxResultSize too high cause the result sizes to grow larger than normal?
  • ijoseph
    ijoseph about 2 years
    Whether increasing the limit makes sense depends on how much memory the driver actually has. These days, the default of 1GB (on top of the driver's other responsibilities, like query-plan storage) doesn't seem like that much, so increasing the default at least somewhat is usually a good idea.
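
Following up on the thread above: since the limit exists to keep oversized results off the driver, the usual alternative to raising it is simply not to collect that much in the first place. A minimal sketch, with a hypothetical dataset and output path:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("avoid-large-collect")
  .getOrCreate()

// Hypothetical large dataset, just for illustration.
val big = spark.range(0L, 500000000L).toDF("id")

// Rather than big.collect(), which funnels every partition's result through
// the driver and can trip spark.driver.maxResultSize (or exhaust driver heap),
// pull only a small sample or keep the result distributed:
val preview = big.take(100)                       // only 100 rows reach the driver
big.write.mode("overwrite").parquet("/tmp/big")   // hypothetical output path
```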