Linear Regression and Gradient Descent in scikit-learn?

scikit-learn provides two approaches to linear regression:

  1. The LinearRegression object uses the ordinary least squares (OLS) solver from scipy, since linear regression is one of the few models with a closed-form solution. Despite what the ML course suggests, you can actually learn this model by just inverting and multiplying a few matrices (a normal-equations sketch follows below).

  2. SGDRegressor, which is an implementation of stochastic gradient descent, a very generic one where you can choose your penalty terms. To obtain linear regression you choose the loss to be squared error and the penalty to be none (plain linear regression) or L2 (ridge regression), as shown in the sketch right after this list.
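
A minimal sketch of both options on synthetic data. The data and hyperparameters here are illustrative assumptions; the loss name "squared_error" and penalty=None follow recent scikit-learn releases (older ones used "squared_loss" and penalty='none'):

```python
# Sketch only: data and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression, SGDRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.randn(200)

# 1. Closed-form OLS solution
ols = LinearRegression().fit(X, y)

# 2. Stochastic gradient descent: squared loss, no penalty = plain linear regression
sgd = SGDRegressor(loss="squared_error", penalty=None, max_iter=1000, tol=1e-3)
sgd.fit(X, y)

print(ols.coef_)  # exact OLS coefficients
print(sgd.coef_)  # SGD estimate: close, but not identical
```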

There is no "typical gradient descent" because it is rarely used in practise. If you can decompose your loss function into additive terms, then stochastic approach is known to behave better (thus SGD) and if you can spare enough memory - OLS method is faster and easier (thus first solution).
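
To make the closed-form claim in point 1 concrete, here is a minimal sketch of the normal equations, beta = (X^T X)^(-1) X^T y. The helper name ols_closed_form is made up for illustration and is not a scikit-learn API:

```python
# Sketch of the normal equations; assumes X^T X is invertible.
# In practice, np.linalg.lstsq or a pseudo-inverse is numerically safer.
import numpy as np

def ols_closed_form(X, y):  # hypothetical helper, not a scikit-learn API
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend an intercept column
    # Solve (X^T X) beta = X^T y rather than explicitly inverting the matrix
    return np.linalg.solve(Xb.T @ Xb, Xb.T @ y)
```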

Author: Netro, updated on April 28, 2021

Comments

  • Netro about 3 years

    In the Coursera machine learning course https://share.coursera.org/wiki/index.php/ML:Linear_Regression_with_Multiple_Variables#Gradient_Descent_for_Multiple_Variables, it says gradient descent should converge.

    I'm using LinearRegression from scikit-learn. It doesn't provide any gradient descent info. I have seen many questions on Stack Overflow about implementing linear regression with gradient descent.

    How do we use LinearRegression from scikit-learn in the real world? Or why doesn't scikit-learn provide gradient descent info in its linear regression output?

    • David Maust over 8 years
      One note: LogisticRegression does provide an argument called solver where you can select which optimizer it will use, and it will show debug information for the optimizer if you set verbose=1. scikit-learn.org/stable/modules/generated/…
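      A minimal sketch of that point; solver and verbose are real LogisticRegression parameters, while the toy dataset here is an illustrative assumption:

      ```python
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression

      # Toy data, purely for illustration
      X, y = make_classification(n_samples=100, random_state=0)
      # verbose=1 prints optimizer progress for solvers that support it (lbfgs, liblinear)
      clf = LogisticRegression(solver="lbfgs", verbose=1, max_iter=200).fit(X, y)
      ```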
  • Vivek Puurkayastha over 5 years
    @lejlot do you mean SGDRegressor?