Join two DataFrames where the join key is different and only select some columns


Solution 1

Your pseudocode is basically correct. This slightly modified version would work if the id column existed in both DataFrames (the alias() calls are needed so that the qualified names "A.*", "B.b1", and "B.b2" resolve):

A_B = A.alias("A").join(B.alias("B"), on="id").select("A.*", "B.b1", "B.b2")

From the docs for pyspark.sql.DataFrame.join():

If on is a string or a list of strings indicating the name of the join column(s), the column(s) must exist on both sides, and this performs an equi-join.
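
As a quick illustration, here is a minimal, self-contained sketch of that equi-join; the toy data and column names are invented for this example:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Both DataFrames share an "id" column, so a plain equi-join on that name works.
A = spark.createDataFrame([(1, "x"), (2, "y")], ["id", "a1"])
B = spark.createDataFrame([(1, 10, 100), (2, 20, 200)], ["id", "b1", "b2"])

A_B = A.alias("A").join(B.alias("B"), on="id").select("A.*", "B.b1", "B.b2")
# Resulting columns: id, a1, b1, b2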

Since the keys are different, you can just use withColumn() (or withColumnRenamed()) to create a column with the same name in both DataFrames:

from pyspark.sql.functions import col

A_B = A.withColumn("id", col("a_id")).alias("A")\
    .join(B.withColumn("id", col("b_id")).alias("B"), on="id")\
    .select("A.*", "B.b1", "B.b2")
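
The withColumnRenamed() variant mentioned above looks much the same; a sketch, using the a_id/b_id key names from the question:

A_B = A.withColumnRenamed("a_id", "id").alias("A")\
    .join(B.withColumnRenamed("b_id", "id").alias("B"), on="id")\
    .select("A.*", "B.b1", "B.b2")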

If your DataFrames have long complicated names, you could also use alias() to make things easier:

A_B = long_data_frame_name1.withColumn("id", col("a_id")).alias("A")\
    .join(long_data_frame_name2.withColumn("id", col("b_id")).alias("B"), on="id")\
    .select("A.*", "B.b1", "B.b2")
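
Putting it together, a self-contained sketch of this workaround (the toy data and names are invented for illustration):

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

A = spark.createDataFrame([(1, "x"), (2, "y")], ["a_id", "a1"])
B = spark.createDataFrame([(1, 10, 100), (2, 20, 200)], ["b_id", "b1", "b2"])

A_B = A.withColumn("id", col("a_id")).alias("A")\
    .join(B.withColumn("id", col("b_id")).alias("B"), on="id")\
    .select("A.*", "B.b1", "B.b2")
# Resulting columns: a_id, a1, id, b1, b2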

Solution 2

Try this solution:

from pyspark.sql.functions import col

A_B = A.alias('A').join(B.alias('B'), col('A.a_id') == col('B.b_id'))\
    .select([col('A.' + xx) for xx in A.columns] + [col('B.other1'), col('B.other2')])

The two lists inside the select() do the trick of selecting all columns from A and two columns from B:

[col('A.' + xx) for xx in A.columns] : all columns of A

[col('B.other1'), col('B.other2')] : selected columns of B
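
A self-contained sketch of this approach, using the question's a_id/b_id keys and invented toy data:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

A = spark.createDataFrame([(1, "x")], ["a_id", "a1"])
B = spark.createDataFrame([(1, "u", "v", "w")], ["b_id", "other1", "other2", "other3"])

selected = [col("A." + c) for c in A.columns] + [col("B.other1"), col("B.other2")]
A_B = A.alias("A").join(B.alias("B"), col("A.a_id") == col("B.b_id")).select(selected)
# Resulting columns: a_id, a1, other1, other2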

Solution 3

I think the easiest solution is just to join table A to table B, selecting only the columns of table B that you want. Here is sample code to do this:

joined_tables = table_A.join(table_B.select('id', 'col1', 'col2', 'col3'), ['id'])

The code above joins all columns from table_A with columns col1, col2, and col3 from table_B. Note that the join key id must be included in the select(), since the join needs to find it on both sides.
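
A self-contained sketch (toy data and column names invented), with the join key id kept in the select():

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

table_A = spark.createDataFrame([(1, "x")], ["id", "a1"])
table_B = spark.createDataFrame([(1, "u", "v", "w", "z")],
                                ["id", "col1", "col2", "col3", "col4"])

joined_tables = table_A.join(table_B.select("id", "col1", "col2", "col3"), ["id"])
# Resulting columns: id, a1, col1, col2, col3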


Comments

  • ASU_TY over 3 years

    What I would like to do is:

    Join two DataFrames A and B using their respective id columns a_id and b_id. I want to select all columns from A and two specific columns from B.

    I tried something like what I put below with different quotation marks, but it is still not working. I feel that in PySpark there should be a simple way to do this.

    A_B = A.join(B, A.id == B.id).select(A.*, B.b1, B.b2)
    

    I know you could write

    A_B = sqlContext.sql("SELECT A.*, B.b1, B.b2 FROM A JOIN B ON A.a_id = B.b_id")
    

    to do this, but I would like to do it more like the pseudocode above. (A runnable sketch of this SQL route appears after the comments below.)

  • ASU_TY about 6 years
    Hi pault, thank you for your help! However, the two id columns in my A and B tables have different names (e.g. a_id and b_id). I wonder if your approach will still work? I will check myself today, but I feel like I tried a similar approach in your format, just without on=, and it did not seem to work.
  • pault about 6 years
    @ASU_TY I have updated the answer with a workaround for your issue.
  • ASU_TY about 6 years
    I tried it, but it does not work for me. It seems my console does not like the format ("A.*", "B.b1", "B.b2") for some reason.
  • pault about 6 years
    Can you edit your question and show the exact code you used and the full traceback? Try to provide a minimal reproducible example if possible. Also, please include which versions of Spark and Python you are using.
  • Kirk Broadhurst about 3 years
    This is the right answer - you shouldn't have to rename columns to perform a join!
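
For reference, the SQL route mentioned in the question works once the DataFrames are registered as temp views. A minimal sketch with invented toy data, using the modern spark.sql entry point in place of the older sqlContext:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Toy data; the column names follow the question.
A = spark.createDataFrame([(1, "x")], ["a_id", "a1"])
B = spark.createDataFrame([(1, 10, 100)], ["b_id", "b1", "b2"])

# The SQL query can only see DataFrames registered as temp views.
A.createOrReplaceTempView("A")
B.createOrReplaceTempView("B")

A_B = spark.sql("SELECT A.*, B.b1, B.b2 FROM A JOIN B ON A.a_id = B.b_id")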