Python sklearn show loss values during training


Solution 1

So I couldn't find very good documentation on directly fetching the loss values per iteration, but I hope this will help someone in the future:

import sys
from io import StringIO

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import SGDClassifier

# Redirect stdout so the verbose training output can be captured.
old_stdout = sys.stdout
sys.stdout = mystdout = StringIO()

# kwargs, X_tr and y_tr are assumed to be defined by the caller.
clf = SGDClassifier(**kwargs, verbose=1)
clf.fit(X_tr, y_tr)

# Restore stdout, then pull the loss value out of each verbose line.
sys.stdout = old_stdout
loss_history = mystdout.getvalue()
loss_list = []
for line in loss_history.split('\n'):
    if "loss: " not in line:
        continue
    loss_list.append(float(line.split("loss: ")[-1]))

# Label the axes before saving, so the labels end up in the file.
plt.figure()
plt.plot(np.arange(len(loss_list)), loss_list)
plt.xlabel("Time in epochs")
plt.ylabel("Loss")
plt.savefig("warmstart_plots/pure_SGD:" + str(kwargs) + ".png")
plt.close()

This code works with a normal SGDClassifier (and just about any linear classifier that supports verbose=1): it redirects stdout to capture the verbose training output, then splits each printed line to get the loss value. Obviously this is slower than running silently, but it gives us the loss at every iteration.

Solution 2

Use model.loss_curve_.

You can use the verbose option to print the values on each iteration, but if you want the actual values this is not the best way to proceed, because you will need to do some hacky parsing to recover them.

It's true, the documentation doesn't mention anything about this attribute, but if you check the source code you may notice that one of MLPClassifier's base classes (BaseMultilayerPerceptron) actually defines a loss_curve_ attribute where it stores the value from each iteration.

As you get all the values in a list, plotting should be trivial using any library.

Notice that this attribute is only present when using a stochastic solver (i.e. sgd or adam).
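
For example, a minimal sketch using a placeholder dataset (the network size and iteration count here are illustrative, not from the original answer):

from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification
import matplotlib.pyplot as plt

# Placeholder data; substitute your own features and labels.
X, y = make_classification(n_samples=500, random_state=0)

# loss_curve_ is only populated by the stochastic solvers ('sgd' or 'adam').
clf = MLPClassifier(hidden_layer_sizes=(32,), solver='adam', max_iter=200)
clf.fit(X, y)

# One loss value per iteration, ready to plot.
plt.plot(clf.loss_curve_)
plt.xlabel("Iteration")
plt.ylabel("Loss")
plt.show()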

Solution 3

I just adapted and updated the answer from @OneRaynyDay. Using a context manager is much more elegant.

Defining the context manager:

import sys
import io
import numpy as np
import matplotlib.pyplot as plt

class DisplayLossCurve(object):
    """Capture verbose training output and plot the loss curve.

    Make sure the model's verbose is set to 1.
    """

    def __init__(self, print_loss=False):
        self.print_loss = print_loss

    def __enter__(self):
        self.old_stdout = sys.stdout
        sys.stdout = self.mystdout = io.StringIO()

    def __exit__(self, *args, **kwargs):
        # Restore stdout, then pull the loss value out of each verbose line.
        sys.stdout = self.old_stdout
        loss_history = self.mystdout.getvalue()
        loss_list = []
        for line in loss_history.split('\n'):
            if "loss: " not in line:
                continue
            loss_list.append(float(line.split("loss: ")[-1]))

        plt.figure()
        plt.plot(np.arange(len(loss_list)), loss_list)
        plt.xlabel("Epoch")
        plt.ylabel("Loss")

        if self.print_loss:
            print("=============== Loss Array ===============")
            print(np.array(loss_list))

        # Return False so exceptions raised inside the block propagate
        # instead of being silently swallowed.
        return False

Usage:

from sklearn.linear_model import SGDRegressor

# verbose=1 is required so the loss values are printed (and captured).
model = SGDRegressor(verbose=1)

with DisplayLossCurve():
    model.fit(X, Y)

# OR

with DisplayLossCurve(print_loss=True):
    model.fit(X, Y)

Comments

  • OneRaynyDay over 1 year

    I want to check my loss values during training so that I can observe the loss at each iteration. So far I haven't found an easy way for scikit-learn to give me a history of loss values, nor have I found functionality already within scikit-learn to plot the loss for me.

    If there's no way to plot this, it'd be great if I could simply fetch the final loss values at the end of classifier.fit.

    Note: I am aware that some solutions are closed form. I'm using several classifiers which do not have analytical solutions, such as logistic regression and SVM.

    Does anyone have any suggestions?

  • haneulkim about 3 years
    Do you have an implementation with logistic regression?
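
Since SGDClassifier with a logistic loss is logistic regression trained by SGD, the stdout-capture approach above applies unchanged. A minimal sketch, assuming the DisplayLossCurve helper from Solution 3 is in scope and using a placeholder dataset (the loss name is "log_loss" in recent scikit-learn releases, "log" in older ones):

from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

# Placeholder data; substitute your own features and labels.
X, y = make_classification(n_samples=500, random_state=0)

# A logistic loss turns SGDClassifier into logistic regression.
model = SGDClassifier(loss="log_loss", verbose=1)

with DisplayLossCurve(print_loss=True):
    model.fit(X, y)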