How do I calculate r-squared using Python and Numpy?


Solution 1

From the numpy.polyfit documentation, it is fitting a linear regression. Specifically, numpy.polyfit with degree 'd' fits a linear regression with the mean function

E(y|x) = p_d * x**d + p_{d-1} * x**(d-1) + ... + p_1 * x + p_0

So you just need to calculate the R-squared for that fit. The Wikipedia page on linear regression gives full details. You are interested in R^2, which you can calculate in a couple of ways, the easiest probably being

SST = Sum(i=1..n) (y_i - y_bar)^2
SSReg = Sum(i=1..n) (y_ihat - y_bar)^2
Rsquared = SSReg/SST

Here I use 'y_bar' for the mean of the y's and 'y_ihat' for the fitted value at each point.

I'm not terribly familiar with numpy (I usually work in R), so there is probably a tidier way to calculate your R-squared, but the following should be correct

import numpy

# Polynomial Regression
def polyfit(x, y, degree):
    results = {}

    coeffs = numpy.polyfit(x, y, degree)

    # Polynomial Coefficients
    results['polynomial'] = coeffs.tolist()

    # r-squared
    p = numpy.poly1d(coeffs)
    # fit values, and mean
    yhat = p(x)                         # or [p(z) for z in x]
    ybar = numpy.sum(y)/len(y)          # or sum(y)/len(y)
    ssreg = numpy.sum((yhat-ybar)**2)   # or sum([ (yihat - ybar)**2 for yihat in yhat])
    sstot = numpy.sum((y - ybar)**2)    # or sum([ (yi - ybar)**2 for yi in y])
    results['determination'] = ssreg / sstot

    return results
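
For example, a hypothetical call on a quadratic fit (the data below is made up purely to illustrate the returned dictionary):

x = [1, 2, 3, 4, 5]
y = [1.2, 4.1, 9.3, 15.6, 25.4]
results = polyfit(x, y, 2)
print(results['polynomial'])     # fitted coefficients, highest degree first
print(results['determination'])  # R-squared of the quadratic fit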

Solution 2

A very late reply, but just in case someone needs a ready function for this:

scipy.stats.linregress

i.e.

slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(x, y)

as in @Adam Marples's answer.
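
For completeness, a minimal self-contained sketch (hypothetical data, just to show the call and that r_value must be squared to get R^2):

from scipy import stats

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
r_squared = r_value**2  # linregress returns r, not r^2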

Solution 3

From yanl (yet another library), sklearn.metrics has an r2_score function:

from sklearn.metrics import r2_score

coefficient_of_determination = r2_score(y, p(x))
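
Here p is taken to be a fitted numpy.poly1d as in Solution 1. A fuller sketch with hypothetical data:

import numpy as np
from sklearn.metrics import r2_score

# Hypothetical data; fit a quadratic and score the predictions against y
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 4.1, 9.3, 15.6, 25.4])
p = np.poly1d(np.polyfit(x, y, 2))
print(r2_score(y, p(x)))  # r2_score(y_true, y_pred)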

Solution 4

I have been using this successfully, where x and y are array-like.

Note: for linear regression only

import scipy.stats

def rsquared(x, y):
    """Return R^2 where x and y are array-like."""
    slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(x, y)
    return r_value**2
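
Usage (hypothetical data):

x = [1, 2, 3, 4, 5]
y = [2.2, 4.1, 5.9, 8.3, 9.9]
print(rsquared(x, y))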

Solution 5

I originally posted the benchmarks below with the purpose of recommending numpy.corrcoef, foolishly not realizing that the original question already uses corrcoef and was in fact asking about higher order polynomial fits. I've added an actual solution to the polynomial r-squared question using statsmodels, and I've left the original benchmarks, which while off-topic, are potentially useful to someone.


statsmodels has the capability to calculate the r^2 of a polynomial fit directly; here are two methods...

import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Construct the columns for the different powers of x
def get_r2_statsmodels(x, y, k=1):
    xpoly = np.column_stack([x**i for i in range(k+1)])    
    return sm.OLS(y, xpoly).fit().rsquared

# Use the formula API and construct a formula describing the polynomial
def get_r2_statsmodels_formula(x, y, k=1):
    formula = 'y ~ 1 + ' + ' + '.join('I(x**{})'.format(i) for i in range(1, k+1))
    data = {'x': x, 'y': y}
    return smf.ols(formula, data).fit().rsquared # or rsquared_adj
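
For example, both helpers should report the same R^2 for the same polynomial degree (hypothetical data with a quadratic trend plus noise):

x = np.linspace(0, 10, 50)
y = 3 * x**2 - 2 * x + 1 + np.random.randn(50)
print(get_r2_statsmodels(x, y, k=2))
print(get_r2_statsmodels_formula(x, y, k=2))  # should agree with the line above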

To further take advantage of statsmodels, one should also look at the fitted model summary, which can be printed or displayed as a rich HTML table in Jupyter/IPython notebook. The results object provides access to many useful statistical metrics in addition to rsquared.

model = sm.OLS(y, xpoly)
results = model.fit()
results.summary()

Below is my original Answer where I benchmarked various linear regression r^2 methods...

The corrcoef function used in the Question calculates the correlation coefficient, r, only for a single linear regression, so it doesn't address the question of r^2 for higher order polynomial fits. However, for what it's worth, I've come to find that for linear regression, it is indeed the fastest and most direct method of calculating r.

def get_r2_numpy_corrcoef(x, y):
    return np.corrcoef(x, y)[0, 1]**2
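
As a quick sanity check, this should agree with scipy's linregress on the same (hypothetical) data, since both compute Pearson's r for a simple linear fit:

import numpy as np
from scipy import stats

x = np.arange(10, dtype=float)
y = 2 * x + 1 + np.random.randn(10)
print(get_r2_numpy_corrcoef(x, y))
print(stats.linregress(x, y).rvalue**2)  # should match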

These were my timeit results from comparing a bunch of methods for 1000 random (x, y) points:

  • Pure Python (direct r calculation)
    • 1000 loops, best of 3: 1.59 ms per loop
  • Numpy polyfit (applicable to n-th degree polynomial fits)
    • 1000 loops, best of 3: 326 µs per loop
  • Numpy Manual (direct r calculation)
    • 10000 loops, best of 3: 62.1 µs per loop
  • Numpy corrcoef (direct r calculation)
    • 10000 loops, best of 3: 56.6 µs per loop
  • Scipy (linear regression with r as an output)
    • 1000 loops, best of 3: 676 µs per loop
  • Statsmodels (can do n-th degree polynomial and many other fits)
    • 1000 loops, best of 3: 422 µs per loop

The corrcoef method narrowly beats calculating r^2 "manually" using numpy methods. It is >5X faster than the polyfit method and ~12X faster than scipy.linregress. Just to reinforce what numpy is doing for you, it's 28X faster than pure Python. I'm not well-versed in things like numba and pypy, so someone else would have to fill those gaps, but I find this plenty convincing that corrcoef is the best tool for calculating r for a simple linear regression.

Here's my benchmarking code. I copy-pasted from a Jupyter Notebook (hard not to call it an IPython Notebook...), so I apologize if anything broke on the way. The %timeit magic command requires IPython.

import numpy as np
from scipy import stats
import statsmodels.api as sm
import math

n = 1000
x = np.random.rand(n) * 10
x.sort()
y = 10 * x + np.random.randn(n) * 10

x_list = list(x)
y_list = list(y)

def get_r2_numpy(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    r_squared = 1 - (sum((y - (slope * x + intercept))**2) / ((len(y) - 1) * np.var(y, ddof=1)))
    return r_squared
    
def get_r2_scipy(x, y):
    _, _, r_value, _, _ = stats.linregress(x, y)
    return r_value**2
    
def get_r2_statsmodels(x, y):
    return sm.OLS(y, sm.add_constant(x)).fit().rsquared
    
def get_r2_python(x_list, y_list):
    n = len(x_list)
    x_bar = sum(x_list)/n
    y_bar = sum(y_list)/n
    x_std = math.sqrt(sum([(xi-x_bar)**2 for xi in x_list])/(n-1))
    y_std = math.sqrt(sum([(yi-y_bar)**2 for yi in y_list])/(n-1))
    zx = [(xi-x_bar)/x_std for xi in x_list]
    zy = [(yi-y_bar)/y_std for yi in y_list]
    r = sum(zxi*zyi for zxi, zyi in zip(zx, zy))/(n-1)
    return r**2
    
def get_r2_numpy_manual(x, y):
    zx = (x-np.mean(x))/np.std(x, ddof=1)
    zy = (y-np.mean(y))/np.std(y, ddof=1)
    r = np.sum(zx*zy)/(len(x)-1)
    return r**2
    
def get_r2_numpy_corrcoef(x, y):
    return np.corrcoef(x, y)[0, 1]**2
    
print('Python')
%timeit get_r2_python(x_list, y_list)
print('Numpy polyfit')
%timeit get_r2_numpy(x, y)
print('Numpy Manual')
%timeit get_r2_numpy_manual(x, y)
print('Numpy corrcoef')
%timeit get_r2_numpy_corrcoef(x, y)
print('Scipy')
%timeit get_r2_scipy(x, y)
print('Statsmodels')
%timeit get_r2_statsmodels(x, y)

7/28/21 Benchmark results. (Python 3.7, numpy 1.19, scipy 1.6, statsmodels 0.12)

Python
2.41 ms ± 180 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Numpy polyfit
318 µs ± 44.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Numpy Manual
79.3 µs ± 4.05 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Numpy corrcoef
83.8 µs ± 1.37 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Scipy
221 µs ± 7.12 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Statsmodels
375 µs ± 3.63 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Comments

  • Travis Beale
    Travis Beale over 2 years

    I'm using Python and Numpy to calculate a best fit polynomial of arbitrary degree. I pass a list of x values, y values, and the degree of the polynomial I want to fit (linear, quadratic, etc.).

    This much works, but I also want to calculate r (coefficient of correlation) and r-squared (coefficient of determination). I am comparing my results with Excel's best-fit trendline capability and the r-squared value it calculates. Using this, I know I am calculating r-squared correctly for a linear best fit (degree equals 1). However, my function does not work for polynomials with degree greater than 1.

    Excel is able to do this. How do I calculate r-squared for higher-order polynomials using Numpy?

    Here's my function:

    import numpy
    
    # Polynomial Regression
    def polyfit(x, y, degree):
        results = {}
    
        coeffs = numpy.polyfit(x, y, degree)
        # Polynomial Coefficients
        results['polynomial'] = coeffs.tolist()

        correlation = numpy.corrcoef(x, y)[0,1]

        # r
        results['correlation'] = correlation
        # r-squared
        results['determination'] = correlation**2
    
        return results
    
    • Nick Dandoulakis
      Nick Dandoulakis almost 15 years
      Note: you use the degree only in the calculation of coeffs.
    • nTraum
      nTraum almost 15 years
      tydok is correct. You are calculating the correlation of x and y and r-squared for y=p_0 + p_1 * x. See my answer below for some code that should work. If you don't mind me asking, what is your ultimate goal? Are you doing model selection (choosing what degree to use)? Or something else?
    • habarnam
      habarnam over 3 years
      side question: doesn't pandas' corr() function return the r^2 Pearson coefficient?
  • Travis Beale
    Travis Beale almost 15 years
    This seems to be the root of my problem. How does Excel get a different r-squared value for a polynomial fit vs. a linear regression then?
  • Travis Beale
    Travis Beale almost 15 years
    It's part of the graphing functions of Excel. You can plot some data, right-click on it, then choose from several different types of trend lines. There is the option to see the equation of the line as well as an r-squared value for each type. The r-squared value is also different for each type.
  • nTraum
    nTraum almost 15 years
    @Travis Beale -- you are going to get a different r-squared for each different mean function you try (unless two models are nested and the extra coefficients in the larger model all work out to be 0). So of course Excel gives different r-squared values. @Baltimark -- this is linear regression so it is r-squared.
  • Josef
    Josef over 13 years
    I just want to point out that using the numpy array functions instead of list comprehensions will be much faster and easier to read, e.g. numpy.sum((yi - ybar)**2)
  • 象嘉道
    象嘉道 over 12 years
    It's reasonable to analyze with coefficient of correlation, and then to do the bigger job, regression.
  • LWZ
    LWZ about 11 years
    According to wiki page en.wikipedia.org/wiki/Coefficient_of_determination, the most general definition of R^2 is R^2 = 1 - SS_err/SS_tot, with R^2 = SS_reg/SS_tot being just a special case.
  • Tickon
    Tickon about 9 years
    Here's a good description of the issue with R2 for non-linear regression: blog.minitab.com/blog/adventures-in-statistics/…
  • tashuhka
    tashuhka almost 9 years
    This reply only works for linear regression, which is the simplest polynomial regression
  • Josef
    Josef over 8 years
    You are comparing 3 methods with fitting a slope and regression with 3 methods without fitting a slope.
  • flutefreak7
    flutefreak7 over 8 years
    Yeah, I knew that much... but now I feel silly for not reading the original question and seeing that it uses corrcoef already and is specifically addressing r^2 for higher order polynomials... now I feel silly for posting my benchmarks which were for a different purpose. Oops...
  • flutefreak7
    flutefreak7 over 8 years
    I've updated my answer with a solution to the original question using statsmodels, and apologized for the needless benchmarking of linear regression r^2 methods, which I kept as interesting, yet off-topic info.
  • Josef
    Josef over 8 years
    I still find the benchmark interesting because I didn't expect scipy's linregress to be slower than statsmodels which does more generic work.
  • Josef
    Josef over 8 years
    Note, np.column_stack([x**i for i in range(k+1)]) can be vectorized in numpy with x[:,None]**np.arange(k+1) or using numpy's vander functions which have reversed order in columns.
  • Franck Dernoncourt
    Franck Dernoncourt almost 7 years
    (Beware: "Default value corresponds to ‘variance_weighted’, this behaviour is deprecated since version 0.17 and will be changed to ‘uniform_average’ starting from 0.19")
  • Qinqing Liu
    Qinqing Liu over 6 years
    r2_score in sklearn could be negative value, which is not the normal case.
  • Vladimir Lukin
    Vladimir Lukin over 4 years
    Caution: r_value here is a Pearson's correlation coefficient, not R-squared. r_squared = r_value**2
  • Fabian Schn.
    Fabian Schn. almost 4 years
    There is a typo, it should be n = len(x_list) for pure Python
  • flutefreak7
    flutefreak7 over 3 years
    Thanks @FabianSchn. - fixed
  • c z
    c z over 3 years
    Why is r2_score([1,2,3],[4,5,7]) = -16?
  • Merlin
    Merlin over 3 years
    One thing I like is it doesn't require training the model -- often I'm computing metrics from models trained in different environment.
  • flutefreak7
    flutefreak7 almost 3 years
    For what it's worth @Josef 2016 comment about linregress being slower than statsmodels is not true in 2021. I'm assuming linregress got faster. My 2021 benchmark results have been added to the answer.
  • russian_spy
    russian_spy over 2 years
    This formula gives a different answer than the numpy module for non-trivial data. This is likely because r_squared is an optimization problem with multiple solutions for the slope and offset of the best fit line.
  • russian_spy
    russian_spy over 2 years
    I posted this solution because the Wikipedia article formula gives a different result than the numpy solution. I believe the numpy module is correct because the Wikipedia formula does not consider that multiple solutions exist (different slopes and offsets of the best-fit line) and numpy apparently solves an actual optimization problem rather than just calculating a ratio of sums. Evidence of the [simple] Wikipedia formula being wrong is that it produces negative r_squared values, which means it's coming up with the wrong slope for the best-fit line for non-trivial data.
  • Michel Floyd
    Michel Floyd over 2 years
    The function above applies to any model: linear, nonlinear, ML, etc. It only looks at the differences between the predicted values and the actual values. Each model will typically produce a different R^2. Fitting a given model involves minimizing the sum of squared residuals (and thereby maximizing R^2) by varying the parameters of the model. A straight-line fit for a curve with one independent variable and one dependent variable has a unique solution (the local minimum is the global minimum). More complicated models, particularly with additional independent variables, may have many local minima, and finding the global minimum may be very difficult.
  • liorr
    liorr over 2 years
    This is not Pearson's coefficient of determination, but the square of the correlation coefficient - something else entirely.
  • Adam Marples
    Adam Marples over 2 years
    @liorr It's my understanding that the coefficient of determination is the square of the coefficient of correlation
  • liorr
    liorr over 2 years
    I think this is only true when using linear regression: en.wikipedia.org/wiki/Coefficient_of_determination "One class of such cases includes that of simple linear regression where r2 is used instead of R2. When only an intercept is included, then r2 is simply the square of the sample correlation coefficient (i.e., r) between the observed outcomes and the observed predictor values."
  • bonCodigo
    bonCodigo over 2 years
    What about R squared for a non-linear least square function?
  • Adam Marples
    Adam Marples over 2 years
    @liorr I am using r**2 from linear regression in my answer, scipy.stats.linregress, so it is correct
  • liorr
    liorr over 2 years
    I think you are confusing linear regression, which can fit a polynomial of any degree, with fitting a linear model. You are using linear regression - but only to fit a polynomial of the first degree. So, you are not really answering the question, which was about the "...best fit polynomial of arbitrary degree". I suggest you edit your answer to reflect that it is only useful for polynomials of first degree (i.e., model function a + bx).
  • Adam Marples
    Adam Marples over 2 years
    Ah yes I did not properly read the question. In my defence it was 9 years ago and I still haven't.