Normal Equation Implementation in Python / Numpy

Solution 1

Your implementation is correct. You have simply swapped X and y (look closely at how the tutorial defines x and y), which is why you get a different result.

The call normalEquation(y, X) gives [24.96601443  3.30576144], as it should.
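For reference, a minimal sketch of the corrected call, reusing the asker's normalEquation and data file from the question below:

import numpy as np

# The unpacked columns come out swapped relative to what normalEquation
# expects, so the arguments are reversed in the call:
X, y = np.loadtxt('ex1data3.txt', delimiter=',', unpack=True)
p = normalEquation(y, X)  # note the reversed argument order
print(p)  # [24.96601443  3.30576144]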

Solution 2

You can implement the normal equation as below:

import numpy as np

# Generate synthetic data: y = 4 + 3x plus Gaussian noise
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)

X_b = np.c_[np.ones((100, 1)), X]  # add x0 = 1 to each instance

# Normal equation: theta = inv(X^T X) . X^T . y
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)

# Predict at x = 0 and x = 2
X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new]  # add x0 = 1 to each instance
y_predict = X_new_b.dot(theta_best)
print(y_predict)  # approximately [[4.], [10.]]
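As a sanity check, np.linalg.lstsq solves the same least-squares problem without explicitly forming the inverse, which is more numerically stable; it should reproduce theta_best:

# Assumes X_b and y from the snippet above are in scope.
theta_lstsq, residuals, rank, sv = np.linalg.lstsq(X_b, y, rcond=None)
print(theta_lstsq)  # matches theta_best up to floating-point error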

Solution 3

This assumes X is an m × (n+1) matrix whose first column x_0 is always 1, and y is an m-dimensional vector.

import numpy as np

step1 = np.dot(X.T, X)         # X^T X
step2 = np.linalg.pinv(step1)  # pseudo-inverse tolerates a singular X^T X
step3 = np.dot(step2, X.T)     # pinv(X^T X) . X^T
theta = np.dot(step3, y)       # if y is m x 1; if 1 x m, use y.T instead
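When X has full column rank, np.linalg.pinv can also be applied to X directly, collapsing the three steps into one (a sketch under that assumption):

# pinv(X) equals inv(X^T X) . X^T when X has full column rank
theta = np.linalg.pinv(X).dot(y)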

Comments

  • PS94 almost 2 years

    I've written some beginner code to calculate the coefficients of a simple linear model using the normal equation.

    # Modules
    import numpy as np
    
    # Loading data set
    X, y = np.loadtxt('ex1data3.txt', delimiter=',', unpack=True)
    
    data = np.genfromtxt('ex1data3.txt', delimiter=',')
    
    def normalEquation(X, y):
        m = int(np.size(data[:, 1]))
    
        # This is the parameter vector (2 values) that will
        # contain my minimized values
        theta = []
    
        # I create a bias_vector to add to my newly created X vector
        bias_vector = np.ones((m, 1))
    
        # I need to reshape my original X(m,) vector so that I can
        # manipulate it with my bias_vector; they need to share the same
        # dimensions.
        X = np.reshape(X, (m, 1))
    
        # I combine these two vectors together to get a (m, 2) matrix
        X = np.append(bias_vector, X, axis=1)
    
        # Normal Equation:
        # theta = inv(X^T * X) * X^T * y
    
        # For convenience I create a new, transposed X matrix
        X_transpose = np.transpose(X)
    
        # Calculating theta
        theta = np.linalg.inv(X_transpose.dot(X))
        theta = theta.dot(X_transpose)
        theta = theta.dot(y)
    
        return theta
    
    p = normalEquation(X, y)
    
    print(p)
    

    Using the small data set found here:

    http://www.lauradhamilton.com/tutorial-linear-regression-with-octave

    I get the coefficients [-0.34390603; 0.2124426] using the above code instead of [24.9660; 3.3058]. Could anyone help clarify where I am going wrong?

    • jeremycg over 6 years
      You have your X and y around the wrong way from the example! If I reverse them, I get the answers you suggest.
  • PS94 over 6 years
    Oh, the shame! Thank you both for your responses.
  • Brad Solomon over 6 years
    You can also use add_constant for this (see the sketch after these comments).
  • PS94 over 6 years
    Thanks for the advice @Maxim
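The add_constant mentioned above is presumably statsmodels' sm.add_constant (the comment does not name the library). A minimal sketch, assuming statsmodels is installed; it prepends a column of ones, replacing the manual bias_vector / np.append step in the question:

import numpy as np
import statsmodels.api as sm

X = np.arange(5, dtype=float)  # hypothetical feature vector, shape (m,)
X_b = sm.add_constant(X)       # prepends a column of ones -> shape (m, 2)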