Spark union of multiple RDDs

Solution 1

If these are RDDs, you can use the SparkContext.union method:

rdd1 = sc.parallelize([1, 2, 3])
rdd2 = sc.parallelize([4, 5, 6])
rdd3 = sc.parallelize([7, 8, 9])

rdd = sc.union([rdd1, rdd2, rdd3])
rdd.collect()

## [1, 2, 3, 4, 5, 6, 7, 8, 9]

There is no DataFrame equivalent, but achieving the same thing is a simple one-liner:

from functools import reduce  # For Python 3.x
from pyspark.sql import DataFrame

def unionAll(*dfs):
    return reduce(DataFrame.unionAll, dfs)

df1 = sqlContext.createDataFrame([(1, "foo1"), (2, "bar1")], ("k", "v"))
df2 = sqlContext.createDataFrame([(3, "foo2"), (4, "bar2")], ("k", "v"))
df3 = sqlContext.createDataFrame([(5, "foo3"), (6, "bar3")], ("k", "v"))

unionAll(df1, df2, df3).show()

## +---+----+
## |  k|   v|
## +---+----+
## |  1|foo1|
## |  2|bar1|
## |  3|foo2|
## |  4|bar2|
## |  5|foo3|
## |  6|bar3|
## +---+----+
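
For what it's worth, in Spark 2.0+ unionAll is deprecated in favor of DataFrame.union (same keep-duplicates semantics), and SparkSession replaces SQLContext. A minimal sketch of the same helper against the newer API, assuming a SparkSession named spark is already in scope:

from functools import reduce
from pyspark.sql import DataFrame

def union_all(*dfs):
    # DataFrame.union keeps duplicates, i.e. it behaves like SQL's UNION ALL
    return reduce(DataFrame.union, dfs)

df = union_all(df1, df2, df3)  # called exactly like the version above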

If the number of DataFrames is large, using SparkContext.union on the underlying RDDs and recreating the DataFrame may be a better choice, to avoid issues related to the cost of preparing an execution plan that grows with every union:

def unionAll(*dfs):
    first, *_ = dfs  # Python 3.x, for 2.x you'll have to unpack manually
    # Union at the RDD level, then rebuild a single DataFrame from the first
    # frame's schema (sql_ctx and _sc reach into internals, so this may
    # break across Spark versions)
    return first.sql_ctx.createDataFrame(
        first.sql_ctx._sc.union([df.rdd for df in dfs]),
        first.schema
    )
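
Usage is unchanged; a quick sanity check, assuming the df1, df2, df3 frames from above are still in scope:

unionAll(df1, df2, df3).show()

## same table as the reduce-based version above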

Solution 2

You can also use the addition operator for a UNION between RDDs (RDD.__add__ simply delegates to union):

rdd = sc.parallelize([1, 1, 2, 3])
(rdd + rdd).collect()
## [1, 1, 2, 3, 1, 1, 2, 3]

Comments

  • user3803714 almost 2 years

    In my pig code I do this:

    all_combined = UNION relation1, relation2,
        relation3, relation4, relation5, relation6;
    

    I want to do the same with Spark. However, unfortunately, it seems I have to keep doing it pairwise:

    first = rdd1.union(rdd2)
    second = first.union(rdd3)
    third = second.union(rdd4)
    # .... and so on
    

    Is there a union operator that will let me operate on multiple RDDs at a time:

    e.g. union(rdd1, rdd2, rdd3, rdd4, rdd5, rdd6)

    It is a matter of convenience.

  • Sivaprasanna Sethuraman almost 7 years
    What is the purpose of *rest here? It is not used anywhere.
  • drkostas almost 6 years
    I want to perform about 3000 unions between one-row DFs. Using the first option, it gets exponentially slower after the 100th iteration (I am testing this with tqdm). Using the second option, it starts really slow from the beginning and keeps slowing down linearly. Is there any better way of doing this?
  • HarshMarshmallow over 5 years
    There is more than one way to union tables in Spark; this comment is incorrect. See zero323's comment above.
  • Gramatik almost 5 years
    @drkostas It may not be the best way, but I solved that by saving off the RDD, then loading it back and continuing the loop. Doing so kills the history of the RDD; you are slowing down because each new loop reruns every step in the RDD's history before it. Spark doesn't like looping (see the sketch after these comments).
  • drkostas almost 5 years
    @Gramatik Yes, I solved it the same way, by saving every dataframe to parquet with the append option and then loading the parquet into a new dataframe.
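
A minimal sketch of the lineage-truncation workaround described in the last two comments, assuming a SparkSession named spark is in scope; the path and the one_row_dfs iterable are hypothetical stand-ins, and the write/read round trip is what resets the ever-growing plan:

acc_path = "/tmp/union_acc"  # hypothetical scratch location

for df in one_row_dfs:  # one_row_dfs: your iterable of small DataFrames
    # appending to parquet persists the rows without keeping their lineage
    df.write.mode("append").parquet(acc_path)

# reading back yields a single DataFrame with a flat, one-step plan
combined = spark.read.parquet(acc_path)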