DataFrame sample in Apache Spark | Scala


Solution 1

The fraction parameter represents the approximate fraction of the dataset that will be returned. For instance, if you set it to 0.1, roughly 10% (1/10) of the rows will be returned. For your case, I believe you want to do the following:

val newSample = df1.sample(true, 1D*noOfSamples/df1.count)

However, you may notice that newSample.count returns a different number each time you run it. That's because the fraction acts as a threshold against a randomly generated value for each row, so the size of the resulting dataset can vary.
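For instance, a quick illustrative check of the run-to-run variance (the counts in the comment are hypothetical, assuming df1 has around 1,000 rows):

// Each run draws fresh random values per row, so the count fluctuates
// around fraction * df1.count, e.g. 97, 104, 99 for a fraction of 0.1
(1 to 3).foreach(_ => println(df1.sample(true, 0.1).count()))

A workaround can be: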

val newSample = df1.sample(true, 2D*noOfSamples/df1.count).limit(noOfSamples)

Some scalability observations

You may note that df1.count can be expensive, as it evaluates the whole DataFrame, and you lose one of the benefits of sampling in the first place.

Therefore, depending on the context of your application, you may want to use an already known total row count, or an approximation of it:

val newSample = df1.sample(true, 1D*noOfSamples/knownNoOfSamples)

Or, assuming your DataFrame is huge, I would still use a guessed fraction and use limit to force the number of samples (note that if the fraction undershoots, limit simply returns however many rows were sampled):

val guessedFraction = 0.1
val newSample = df1.sample(true, guessedFraction).limit(noOfSamples)

As for your questions:

Can it be greater than 1?

Not when sampling without replacement: in that case it represents a fraction between 0 and 1, and setting it to 1 brings back roughly 100% of the rows, so a larger value wouldn't make sense. With replacement, a fraction greater than 1 is accepted (see Solution 3 below).

Also, is there any way we can specify the number of rows to be sampled?

You can specify a fraction that yields more rows than you need and then use limit to trim it, as in the second example above. Maybe there is another way, but this is the approach I use.

Solution 2

To answer your question, is there any way we can specify the number of rows to be sampled?

I recently needed to sample a certain number of rows from a Spark DataFrame and followed the process below (the snippets here are PySpark):

  1. Convert the Spark DataFrame to an RDD. Example: df_test.rdd

  2. RDDs have a takeSample method that lets you request an exact number of samples, with an optional seed. Example: df_test.rdd.takeSample(withReplacement, numSamples, seed)

  3. Note that takeSample returns a local collection of rows, not an RDD; convert it back to a Spark DataFrame using sqlContext.createDataFrame()

The above process combined into a single step:

The DataFrame (or population) I needed to sample from has around 8,000 records: df_grp_1

test1 = sqlContext.createDataFrame(df_grp_1.rdd.takeSample(False,125,seed=115))

The test1 DataFrame will have 125 sampled records.
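For a Scala version of the same idea, here is a minimal sketch (sampleExact is a hypothetical helper name, and spark is assumed to be a SparkSession; since takeSample collects the rows to the driver, this only makes sense when the requested sample fits in driver memory):

import org.apache.spark.sql.{DataFrame, Row, SparkSession}

def sampleExact(spark: SparkSession, df: DataFrame, n: Int, seed: Long): DataFrame = {
  // takeSample collects exactly n rows to the driver as an Array[Row]
  val rows: Array[Row] = df.rdd.takeSample(withReplacement = false, num = n, seed = seed)
  // Rebuild a distributed DataFrame with the original schema
  spark.createDataFrame(spark.sparkContext.parallelize(rows.toSeq), df.schema)
}

Usage: val test1 = sampleExact(spark, df_grp_1, 125, 115L)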

Solution 3

To answer whether the fraction can be greater than 1: yes, it can be, if withReplacement is true. If a value greater than 1 is provided with withReplacement set to false, the following exception occurs:

java.lang.IllegalArgumentException: requirement failed: Upper bound (2.0) must be <= 1.0.
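A minimal sketch of both cases, df being any DataFrame:

// With replacement, the fraction is the expected number of times each
// row is selected, so values above 1.0 are accepted
val overSampled = df.sample(true, 2.0)

// Without replacement, the fraction is a per-row probability and must be
// in [0, 1]; uncommenting this line reproduces the exception above
// val failing = df.sample(false, 2.0)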

Solution 4

The code below works if you want a random 70%/30% split of a DataFrame df. Note that randomSplit normalizes the weights if they do not sum to 1, and the resulting split sizes are approximate:

val Array(trainingDF, testDF) = df.randomSplit(Array(0.7, 0.3), seed = 12345)

Solution 5

I too find the lack of sample-by-count functionality disturbing. If you are not picky about creating a temp view, the code below is useful (df is your DataFrame, count is the sample size):

val tableName = s"table_to_sample_${System.currentTimeMillis}"
df.createOrReplaceTempView(tableName)
// Tag each row with a random key, order by it, and keep the first `count` rows
val sampled = sqlContext
  .sql(s"select *, rand() as random from ${tableName} order by random limit ${count}")
  .drop("random")
sqlContext.dropTempTable(tableName)

It returns an exact count as long as the current row count is at least as large as your sample size.
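The same idea can be expressed directly with the DataFrame API, without a temp view (a sketch; orderBy(rand()) still sorts the whole dataset, so it is just as expensive on large inputs):

import org.apache.spark.sql.functions.rand

// Shuffle all rows by a random key and keep the first `count` of them
val sampled = df.orderBy(rand()).limit(count)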


Comments

  • hbabbar, over 3 years ago:

    I'm trying to take samples from two DataFrames while keeping the ratio of their counts, e.g.:

    df1.count() = 10
    df2.count() = 1000
    
    noOfSamples = 10
    

    I want to sample the data in such a way that I get 10 samples of size 101 each (1 from df1 and 100 from df2).

    Now, while doing so:

    var newSample = df1.sample(true, df1.count() / noOfSamples)
    println(newSample.count())
    

    What does the fraction here imply? Can it be greater than 1? I checked this and this but wasn't able to comprehend it fully.

    Also, is there any way we can specify the number of rows to be sampled?