Including null values in an Apache Spark Join

Solution 1

Spark provides a special NULL safe equality operator:

numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .drop(lettersDf("numbers"))
  .show()

+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|   null|    zzz|
|       |    hhh|
+-------+-------+

Be careful when using it with Spark 1.5 or earlier: prior to Spark 1.6, a null-safe join required a Cartesian product (SPARK-11111 - Fast null-safe join).
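
If you are stuck on an older version, you can check which plan Spark actually chooses with explain(); a quick sanity check, reusing the join from above:

// on Spark <= 1.5 this null-safe join appears as a CartesianProduct in the
// physical plan; on 1.6+ it is planned as a regular equi-join
numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .explain()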

In Spark 2.3.0 or later you can use Column.eqNullSafe in PySpark:

numbers_df = sc.parallelize([
    ("123", ), ("456", ), (None, ), ("", )
]).toDF(["numbers"])

letters_df = sc.parallelize([
    ("123", "abc"), ("456", "def"), (None, "zzz"), ("", "hhh")
]).toDF(["numbers", "letters"])

numbers_df.join(letters_df, numbers_df.numbers.eqNullSafe(letters_df.numbers)).show()

+-------+-------+-------+
|numbers|numbers|letters|
+-------+-------+-------+
|    456|    456|    def|
|   null|   null|    zzz|
|       |       |    hhh|
|    123|    123|    abc|
+-------+-------+-------+

and %<=>% in SparkR:

numbers_df <- createDataFrame(data.frame(numbers = c("123", "456", NA, "")))
letters_df <- createDataFrame(data.frame(
  numbers = c("123", "456", NA, ""),
  letters = c("abc", "def", "zzz", "hhh")
))

head(join(numbers_df, letters_df, numbers_df$numbers %<=>% letters_df$numbers))
  numbers numbers letters
1     456     456     def
2    <NA>    <NA>     zzz
3                     hhh
4     123     123     abc

With SQL (Spark 2.2.0+) you can use IS NOT DISTINCT FROM:

SELECT * FROM numbers JOIN letters 
ON numbers.numbers IS NOT DISTINCT FROM letters.numbers
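
For the SQL query to see the DataFrames, they first have to be registered as temporary views; a minimal sketch, assuming the numbersDf and lettersDf from the question:

// register both DataFrames as temp views so Spark SQL can resolve them
numbersDf.createOrReplaceTempView("numbers")
lettersDf.createOrReplaceTempView("letters")

spark.sql(
  """SELECT * FROM numbers JOIN letters
    |ON numbers.numbers IS NOT DISTINCT FROM letters.numbers""".stripMargin
).show()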

This can be used with the DataFrame API as well:

numbersDf.alias("numbers")
  .join(lettersDf.alias("letters"))
  .where("numbers.numbers IS NOT DISTINCT FROM letters.numbers")

Solution 2

// rename the join columns so they can be disambiguated in the join
// (the $"..." column syntax requires import spark.implicits._)
val numbers2 = numbersDf.withColumnRenamed("numbers", "num1")
val letters2 = lettersDf.withColumnRenamed("numbers", "num2")
// null-safe outer join: the values match, or both sides are null
val joinedDf = numbers2.join(letters2, $"num1" === $"num2" || ($"num1".isNull && $"num2".isNull), "outer")
// rename the column back to its original name
joinedDf.select("num1", "letters").withColumnRenamed("num1", "numbers").show()

Solution 3

Based on K L's idea, you can use foldLeft to generate the join column expression:

def nullSafeJoin(rightDF: DataFrame, columns: Seq[String], joinType: String)(leftDF: DataFrame): DataFrame = {
  // null-safe comparison on the first join column...
  val colExpr: Column = leftDF(columns.head) <=> rightDF(columns.head)
  // ...ANDed with a null-safe comparison for each remaining column
  val fullExpr = columns.tail.foldLeft(colExpr) { (acc, p) =>
    acc && (leftDF(p) <=> rightDF(p))
  }
  leftDF.join(rightDF, fullExpr, joinType)
}

Then you can call this function like this:

aDF.transform(nullSafeJoin(bDF, columns, joinType))
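
For instance, with the DataFrames from the question (a hypothetical concrete call; any join type accepted by join works):

// inner null-safe join on the shared "numbers" column; the result keeps
// one "numbers" column from each side
val joined = numbersDf.transform(nullSafeJoin(lettersDf, Seq("numbers"), "inner"))
joined.show()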

Solution 4

Complementing the other answers: in PySpark < 2.3.0 you have neither Column.eqNullSafe nor IS NOT DISTINCT FROM.

You can still build the <=> operator with a SQL expression and include it in the join, as long as you define aliases for the joined DataFrames:

from pyspark.sql.types import StringType
import pyspark.sql.functions as F

numbers_df = spark.createDataFrame(["123", "456", None, ""], StringType()).toDF("numbers")
letters_df = spark.createDataFrame(
    [("123", "abc"), ("456", "def"), (None, "zzz"), ("", "hhh")]
).toDF("numbers", "letters")

joined_df = (
    numbers_df.alias("numbers")
    .join(letters_df.alias("letters"), F.expr("numbers.numbers <=> letters.numbers"))
    .select("letters.*")
)
joined_df.show()

+-------+-------+
|numbers|letters|
+-------+-------+
|    456|    def|
|   null|    zzz|
|       |    hhh|
|    123|    abc|
+-------+-------+

Updated on January 27, 2021

Comments

  • Powers
    Powers over 3 years

    I would like to include null values in an Apache Spark join. By default, Spark doesn't include rows whose join keys are null.

    Here is the default Spark behavior.

    val numbersDf = Seq(
      ("123"),
      ("456"),
      (null),
      ("")
    ).toDF("numbers")
    
    val lettersDf = Seq(
      ("123", "abc"),
      ("456", "def"),
      (null, "zzz"),
      ("", "hhh")
    ).toDF("numbers", "letters")
    
    val joinedDf = numbersDf.join(lettersDf, Seq("numbers"))
    

    Here is the output of joinedDf.show():

    +-------+-------+
    |numbers|letters|
    +-------+-------+
    |    123|    abc|
    |    456|    def|
    |       |    hhh|
    +-------+-------+
    

    This is the output I would like:

    +-------+-------+
    |numbers|letters|
    +-------+-------+
    |    123|    abc|
    |    456|    def|
    |       |    hhh|
    |   null|    zzz|
    +-------+-------+
    
  • Powers
    Powers over 7 years
    Thanks. This is another good answer that uses the <=> operator. If you're doing a multi-column join, the conditions can be chained with the && operator, as sketched below.
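
    A minimal sketch (df1, df2 and the key columns k1 and k2 are hypothetical):

    // chain one null-safe comparison per join column with &&
    df1.join(df2, (df1("k1") <=> df2("k1")) && (df1("k2") <=> df2("k2")))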
  • Av Pinzur
    Av Pinzur almost 6 years
    In my experience (Spark 2.2.1 on Amazon Glue), the SQL syntax is the same as the Scala: SELECT * FROM numbers JOIN letters ON numbers.numbers <=> letters.numbers
  • BiS
    BiS over 4 years
    This method has a problem: it will drop the leftDF columns at the end, which is wrong for right joins. I proposed an edit with a TODO; I think it will work as it is (I'm using it now), but anyone who copies it should verify that too.
  • BiS
    BiS over 4 years
    The edit was rejected... god knows why. The following "code" should fix it on the last foreach: columns.foreach(column => { if (joinType.contains("right")) { joinedDF = joinedDF.drop(leftDF(column)) } else { joinedDF = joinedDF.drop(rightDF(column)) } })
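
    A cleaned-up sketch of that fix (hypothetical: it assumes a variant of nullSafeJoin whose result joinedDF still carries both copies of the join columns):

    // drop the duplicated join columns, keeping the side that carries
    // values for unmatched rows under this join type
    val cleanedDF = columns.foldLeft(joinedDF) { (df, column) =>
      if (joinType.contains("right")) df.drop(leftDF(column))
      else df.drop(rightDF(column))
    }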
  • Admin
    Admin over 4 years
    Very true -- or you could call it with the DataFrames swapped, so that left and right are switched.
  • Egor Ignatenkov
    Egor Ignatenkov over 4 years
    Is there a way to use eqNullSafe if I am passing a list of columns to join's on parameter?
  • user2441441
    user2441441 about 4 years
    @zero323 I have a similar question, but I want to do it with Seq. Can you help? Link is stackoverflow.com/questions/61128618/…