NumPy or SciPy to calculate weighted median


Solution 1

If I understood your problem correctly, what we can do is sum up the observations; dividing that total by 2 gives us the observation number corresponding to the median. From there we need to figure out which observation that number falls on.

One trick here is to calculate the observation sums with np.cumsum, which gives us a running cumulative sum.

Example:
np.cumsum([1,2,3,4]) -> [ 1, 3, 6, 10]
Each element is the sum of all previous elements and itself. We have 10 observations here, so the median would be the 5th observation (we get 5 by dividing the last element by 2).
Looking at the cumsum result, we can easily see that this must be the observation between the second and third elements (cumulative counts 3 and 6).

So all we need to do is figure out the index where the median position (5) would fit.
np.searchsorted does exactly what we need: it finds the index at which to insert an element into a sorted array so that the array stays sorted.
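For the toy data above, that looks like:

```python
import numpy as np

c = np.cumsum([1, 2, 3, 4])            # array([ 1,  3,  6, 10])
idx = np.searchsorted(c, c[-1] / 2.0)  # where would 5.0 be inserted?
print(idx)                             # -> 2, i.e. between cumulative counts 3 and 6
```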

The code to do it looks like this:

import numpy as np
#my test data
freq_count = np.array([[30, 191, 9, 0], [10, 20, 300, 10], [10,20,30,40], [100,10,10,10], [1,1,1,100]])

c = np.cumsum(freq_count, axis=1)
indices = [np.searchsorted(row, row[-1] / 2.0) for row in c]
masses = np.array(indices) * 10  # correct if the masses are indeed 0, 10, 20, ...

# This is just for explanation.
print("median masses is:", masses)
print(freq_count)
print(np.hstack((c, c[:, -1, np.newaxis] / 2.0)))

Output will be:

median masses is: [10 20 20  0 30]  
[[ 30 191   9   0]  <- The test data
 [ 10  20 300  10]  
 [ 10  20  30  40]  
 [100  10  10  10]  
 [  1   1   1 100]]  
[[  30.   221.   230.   230.   115. ]  <- cumsum results with half the total count
 [  10.    30.   330.   340.   170. ]     appended; you can see from this where
 [  10.    30.    60.   100.    50. ]     the median position fits in.
 [ 100.   110.   120.   130.    65. ]  
 [   1.     2.     3.   103.    51.5]]  

Solution 2

wquantiles is a small Python package that will do exactly what you need. It just uses np.cumsum() and np.interp() under the hood.
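If adding a dependency is not an option, here is a minimal sketch of that same cumsum-plus-interp idea (assuming, as above, that each row of weights is a frequency count for the given mass values; the exact conventions wquantiles uses may differ slightly, since interpolation can return a value between bins):

```python
import numpy as np

def weighted_median(values, weights):
    """Interpolated weighted median: find where half the total
    weight falls on the cumulative-weight curve."""
    c = np.cumsum(weights, dtype=float)
    return np.interp(c[-1] / 2.0, c, values)

masses = np.array([0, 10, 20, 30])
print(weighted_median(masses, [10, 20, 30, 40]))  # between 10 and 20
```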

Solution 3

Sharing some code that I got a hand with. It lets you run stats on each column of an Excel spreadsheet.

import itertools

import numpy as np
import xlrd

book = xlrd.open_workbook('/filepath/workbook.xlsx')
sh = book.sheet_by_name("Sheet1")
ofile = '/outputfilepath/workbook.csv'

masses = sh.col_values(0, start_rowx=1)  # first column has mass
ages = sh.row_values(0, start_colx=1)    # first row has age ranges

# read one frequency column per age range
freqs = []
count = 1
for a in ages:
    freqs.append(sh.col_values(count, start_rowx=1))
    count += 1

stats = []
for freq in freqs:
    # pair each mass with its frequency count
    age_mass = zip(masses, freq)

    # replicate element[0] for element[1] times
    expanded = [list(itertools.repeat(am[0], int(am[1]))) for am in age_mass]

    # flatten into one big list
    medianlist = [x for t in expanded for x in t]

    # convert to array and mask out zeroes
    npa = np.ma.masked_equal(np.array(medianlist), 0)

    median = np.ma.median(npa)
    meanMass = np.ma.average(npa)
    maxMass = npa.max()
    minMass = npa.min()
    stdev = npa.std()

    stats1 = [median, meanMass, maxMass, minMass, stdev]
    print(stats1)

    stats.append(stats1)

np.savetxt(ofile, stats, fmt="%d")
Author: Car

Updated on July 10, 2022

Comments

  • Car, almost 2 years

    I'm trying to automate a process that JMP does (Analyze->Distribution, entering column A as the "Y value", using subsequent columns as the "weight" value). In JMP you have to do this one column at a time - I'd like to use Python to loop through all of the columns and create an array showing, say, the median of each column.

    For example, if the mass array is [0, 10, 20, 30], and the weight array for column 1 is [30, 191, 9, 0], the weighted median of the mass array should be 10. However, I'm not sure how to arrive at this answer.

    So far I've

    1. imported the csv showing the weights as an array, masking values of 0, and
    2. created an array of the "Y value" the same shape and size as the weights array (113x32). I'm not entirely sure I need to do this, but thought it would be easier than a for loop for the purpose of weighting.

    I'm not sure exactly where to go from here. Basically the "Y value" is a range of masses, and all of the columns in the array represent the number of data points found for each mass. I need to find the median mass, based on the frequency with which they were reported.

    I'm not an expert in Python or statistics, so if I've omitted any details that would be useful let me know!

    Update: here's some code for what I've done so far:

    #Boilerplate & Import files
    import csv
    import scipy as sp
    from scipy import stats
    from scipy.stats import norm
    import numpy as np
    from numpy import genfromtxt
    import pandas as pd
    import matplotlib.pyplot as plt
    
    inputFile = '/Users/cl/prov.csv'
    origArray = genfromtxt(inputFile, delimiter = ",")
    nArray = np.array(origArray)
    dimensions = nArray.shape
    shape = np.asarray(dimensions)
    
    #Mask values ==0
    maTest = np.ma.masked_equal(nArray,0)
    
    #Create array of masses the same shape as the weights (nArray)
    fieldLength = shape[0]
    rowLength = shape[1]
    
    massArr = []
    for i in range(rowLength):
        createArr = np.arange(0, fieldLength*10, 10)
        massArr.append(createArr)
    nmassArr = np.array(massArr).transpose()
    
  • Car, over 10 years
    Thanks so much for your explanation! I'm getting close but am not quite there yet. I don't think I articulated my problem quite right - basically, the median should always be a number within the range of masses - the frequencies of [30, 191, 9, 0] correspond with masses [0, 10, 20, 30], respectively (i.e. mass in range 0-10 showed up 30 times, mass of 10-20 showed up 191 times, etc.). With your answer above it looks like I'm getting the median of the frequency count instead, right?
  • M4rtini, over 10 years
    Yes, it finds the median of the frequency count and then relates that to the masses, using the fact that the ranges of masses map directly onto the elements of the frequency count. Do you need it to find the true median, or the range that contains the median? This will find the range containing the median.
  • M4rtini, over 10 years
    Could you try to either give more examples of inputs and outputs, or check the "test data" I used and say what the output should be for it?
  • Car, over 10 years
    Ideally I'd find the true median, but the range would also be fine. Using your test data, I found medians of [20, 25, 25, 25, 25], respectively. Here's some actual data: [30, 191, 30, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 99, 256, 254, 82, 5, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 65, 205, 189, 249, 120, 72, 40, 2, 0, 0, 0], [0, 0, 0, 0, 0, 1, 59, 192, 324, 204, 188, 127, 104, 29]. These correspond with masses from 0-130, counting by 10s. The medians using JMP: [10, 30, 65, 90].
  • Car, over 10 years
    The medians using your edits are [125.5, 348, 471, and 614]. This looks like it's getting there - they're getting consecutively larger, which follows the same pattern as JMP. I'll tinker around with it to see if there's a small tweak that will get it the rest of the way, but would appreciate any more input you've got! At a glance it may be something with the indices formula - instead of 0-130 by 10's, I'm getting 0, 10, 50, 80 as the output (after modifying it to (i-1)*10 to start at 0).
  • Mad Physicist, over 3 years
    I've had a proper implementation of weighted introselect on my back burner for a couple of years now :(