Using reduceByKey to group a list of values
Solution 1
Use aggregateByKey:
import scala.collection.mutable.ListBuffer

sc.parallelize(Array(("red", "zero"), ("yellow", "one"), ("red", "two")))
  .aggregateByKey(ListBuffer.empty[String])(
    (numList, num) => { numList += num; numList },
    (numList1, numList2) => { numList1.appendAll(numList2); numList1 })
  .mapValues(_.toList)
  .collect()
res0: Array[(String, List[String])] = Array((yellow,List(one)), (red,List(zero, two)))
See this answer for the details on aggregateByKey, and this link for the rationale behind using a mutable ListBuffer.
EDIT:
The above is actually worse in performance; please see the comments by @zero323 for the details.
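To see what the two function arguments to aggregateByKey do without spinning up Spark, here is a minimal plain-Scala sketch. The seqOp/combOp names and the two-partition simulation are mine, not part of the answer:

```scala
import scala.collection.mutable.ListBuffer

// seqOp folds each value into a per-partition buffer;
// combOp merges buffers coming from different partitions.
val seqOp  = (buf: ListBuffer[String], v: String) => { buf += v; buf }
val combOp = (b1: ListBuffer[String], b2: ListBuffer[String]) => { b1.appendAll(b2); b1 }

// Simulate two partitions that both hold values for key "red".
val part1  = Seq("zero").foldLeft(ListBuffer.empty[String])(seqOp)
val part2  = Seq("two").foldLeft(ListBuffer.empty[String])(seqOp)
val merged = combOp(part1, part2).toList
// merged == List("zero", "two")
```

Because the zero value and intermediate buffers are mutable, each partition appends in place instead of allocating a new collection per element.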
Solution 2
sc.parallelize(Array(("red", "zero"), ("yellow", "one"), ("red", "two")))
  .map(t => (t._1, List(t._2)))
  .reduceByKey(_ ::: _)
  .collect()
Array[(String, List[String])] = Array((red,List(zero, two)), (yellow,List(one)))
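The reduce step can likewise be illustrated in plain Scala; note that wrapping every value in a one-element List and concatenating with ::: allocates an intermediate list per merge, which is part of why this approach can be slower than the mutable-buffer version. The groupBy below is only a local stand-in for Spark's shuffle, not how Spark implements it:

```scala
// Plain-Scala sketch of Solution 2's logic: wrap each value in a
// one-element List, then reduce values that share a key with ':::'.
val pairs = Seq(("red", "zero"), ("yellow", "one"), ("red", "two"))
val grouped = pairs
  .map { case (k, v) => (k, List(v)) }
  .groupBy(_._1)                                          // stand-in for the shuffle
  .map { case (k, kvs) => (k, kvs.map(_._2).reduce(_ ::: _)) }
// grouped("red") == List("zero", "two")
```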
Author: sikara tijuhara
Updated on June 04, 2022

Comments

- sikara tijuhara, almost 2 years ago:
I want to group a list of values per key and was doing something like this:
sc.parallelize(Array(("red", "zero"), ("yellow", "one"), ("red", "two")))
  .groupByKey()
  .collect
  .foreach(println)

(red,CompactBuffer(zero, two))
(yellow,CompactBuffer(one))
But I noticed a blog post from Databricks that recommends not using groupByKey for large datasets.
Is there a way to achieve the same result using reduceByKey?
I tried this, but it concatenates all the values. By the way, in my case both keys and values are of string type.
sc.parallelize(Array(("red", "zero"), ("yellow", "one"), ("red", "two")))
  .reduceByKey(_ ++ _)
  .collect
  .foreach(println)

(red,zerotwo)
(yellow,one)