How to tell Keras stop training based on loss value?
Solution 1
I found the answer. I looked into the Keras sources and found the code for EarlyStopping, then made my own callback based on it:
import warnings
from keras.callbacks import Callback

class EarlyStoppingByLossVal(Callback):
    def __init__(self, monitor='val_loss', value=0.00001, verbose=0):
        super(EarlyStoppingByLossVal, self).__init__()
        self.monitor = monitor
        self.value = value
        self.verbose = verbose

    def on_epoch_end(self, epoch, logs={}):
        current = logs.get(self.monitor)
        if current is None:
            warnings.warn("Early stopping requires %s available!" % self.monitor, RuntimeWarning)
        elif current < self.value:
            if self.verbose > 0:
                print("Epoch %05d: early stopping THR" % epoch)
            self.model.stop_training = True
And usage:
callbacks = [
    EarlyStoppingByLossVal(monitor='val_loss', value=0.00001, verbose=1),
    # EarlyStopping(monitor='val_loss', patience=2, verbose=0),
    ModelCheckpoint(kfold_weights_path, monitor='val_loss', save_best_only=True, verbose=0),
]
model.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
          shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),
          callbacks=callbacks)
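The callback's stopping logic can be exercised without a full training run. Below is a minimal stand-alone sketch of the same logic; the StubModel class is a hypothetical stand-in for a real Keras model, used here only so the example runs without Keras installed:

```python
import warnings

class StubModel:
    """Hypothetical stand-in for a Keras model, for illustration only."""
    def __init__(self):
        self.stop_training = False

class EarlyStoppingByLossVal:
    # Same stopping logic as the callback above, minus the Keras base class.
    def __init__(self, monitor='val_loss', value=0.00001, verbose=0):
        self.monitor = monitor
        self.value = value
        self.verbose = verbose
        self.model = StubModel()

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        current = logs.get(self.monitor)
        if current is None:
            warnings.warn("Early stopping requires %s available!" % self.monitor,
                          RuntimeWarning)
        elif current < self.value:
            if self.verbose > 0:
                print("Epoch %05d: early stopping THR" % epoch)
            self.model.stop_training = True

# Simulate a validation loss curve that drops below the threshold:
cb = EarlyStoppingByLossVal(value=0.01)
for epoch, loss in enumerate([0.5, 0.1, 0.009]):
    cb.on_epoch_end(epoch, {'val_loss': loss})
    if cb.model.stop_training:
        break

print(epoch)  # 2 -- the third epoch is the first with loss below 0.01
```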
Solution 2
The keras.callbacks.EarlyStopping callback does have a min_delta argument. From Keras documentation:
min_delta: minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than min_delta will count as no improvement.
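In other words, min_delta changes what counts as "improvement" for the patience counter; it does not stop training at an absolute loss level. A rough sketch of the comparison for a monitored loss (an illustrative re-implementation, not the actual Keras source):

```python
def improved(best, current, min_delta=0.0):
    # For a monitored loss, an epoch counts as an improvement only if it
    # drops below the best value so far by more than min_delta.
    return best - current > min_delta

# With min_delta=0.01, a drop from 0.500 to 0.495 counts as no improvement:
print(improved(0.500, 0.495, min_delta=0.01))  # False
print(improved(0.500, 0.480, min_delta=0.01))  # True
```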
Solution 3
One solution is to call model.fit(nb_epoch=1, ...) inside a for loop; then you can put a break statement inside the loop and do whatever other custom control flow you want.
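A sketch of that loop, with a fake train-one-epoch function standing in for model.fit (the decreasing loss curve here is made up purely for illustration):

```python
THR = 0.01  # stop once validation loss falls below this threshold

def train_one_epoch(epoch):
    # Stand-in for model.fit(nb_epoch=1, ...) followed by evaluation;
    # here we simply fake a decreasing validation loss curve.
    return 1.0 / (epoch + 1) ** 2

for epoch in range(100):
    val_loss = train_one_epoch(epoch)
    if val_loss < THR:
        break  # custom control flow: stop as soon as the threshold is met

print(epoch)  # 10 -- the first epoch with val_loss below THR
```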
Solution 4
I solved the same problem using a custom callback.
In the custom callback code below, assign THR the value at which you want to stop training, and add the callback to your model.
from keras.callbacks import Callback
class stopAtLossValue(Callback):
    def on_batch_end(self, batch, logs={}):
        THR = 0.03  # Assign THR the value at which you want to stop training.
        if logs.get('loss') <= THR:
            self.model.stop_training = True
Solution 5
While I was taking the TensorFlow in Practice specialization, I learned a very elegant technique. It is only slightly modified from the accepted answer.
Let's set the example with our favorite MNIST data.
import tensorflow as tf
class new_callback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('accuracy') > 0.90:  # select the accuracy
            print("\n !!! 90% accuracy, no further training !!!")
            self.model.stop_training = True

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # normalize

callbacks = new_callback()

# model = tf.keras.models.Sequential([...])  # define your model here
model.compile(optimizer=tf.optimizers.Adam(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])
So, here I set metrics=['accuracy'], and thus in the callback class the condition is set to 'accuracy' > 0.90.
You can choose any metric and monitor the training like in this example. Most importantly, you can set different conditions for different metrics and use them simultaneously.
Hopefully this helps!
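The last point, combining conditions on several metrics, can be sketched as a single check over the logs dict (the metric names and thresholds below are illustrative, not from the original answer):

```python
# Stop only when every monitored condition holds at the end of an epoch.
conditions = {
    'accuracy': lambda v: v > 0.90,  # accuracy high enough
    'loss':     lambda v: v < 0.05,  # loss low enough
}

def should_stop(logs):
    return all(check(logs[name]) for name, check in conditions.items())

print(should_stop({'accuracy': 0.93, 'loss': 0.04}))  # True
print(should_stop({'accuracy': 0.93, 'loss': 0.20}))  # False
```

Inside on_epoch_end you would then set self.model.stop_training = should_stop(logs).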
ZFTurbo
Updated on August 18, 2020

Comments
-
ZFTurbo over 3 years
Currently I use the following code:
callbacks = [
    EarlyStopping(monitor='val_loss', patience=2, verbose=0),
    ModelCheckpoint(kfold_weights_path, monitor='val_loss', save_best_only=True, verbose=0),
]
model.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
          shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),
          callbacks=callbacks)
It tells Keras to stop training when the loss didn't improve for 2 epochs. But I want to stop training after the loss becomes smaller than some constant "THR":
if val_loss < THR: break
I've seen in the documentation that there is a possibility to make your own callback: http://keras.io/callbacks/ But I found nothing about how to stop the training process. I need advice.
-
Honesty over 7 years: It'd be nice if they made a callback that takes in a single function that can do that.
-
QtRoS about 7 years: Just in case it's useful for someone - in my case I used monitor='loss', it worked well.
-
jkdev almost 7 years: It seems Keras has been updated. The EarlyStopping callback function has min_delta built into it now. No need to hack the source code anymore, yay! stackoverflow.com/a/41459368/3345375
-
jkdev almost 7 years: For reference, here are the docs for an earlier version of Keras (1.1.0) in which the min_delta argument was not yet included: faroit.github.io/keras-docs/1.1.0/callbacks/#earlystopping
-
jkdev almost 7 years: Upon re-reading the question and answers, I need to correct myself: min_delta means "Stop early if there is not enough improvement per epoch (or per multiple epochs)." However, the OP asked how to "Stop early when the loss gets below a certain level."
-
zyxue about 6 years: How could I make it not stop until min_delta persists over multiple epochs?
-
devin about 6 years: There's another parameter to EarlyStopping called patience: the number of epochs with no improvement after which training will be stopped.
-
alyssaeliyah over 5 years: NameError: name 'Callback' is not defined... How will I fix it?
-
ZFTurbo over 5 years: Eliyah, try this: from keras.callbacks import Callback
-
xarion over 3 years: The function name should be on_epoch_end
-
Cathy about 3 years: One correction: it should be elif current < self.value:
-
NeStack over 2 years: @jkdev min_delta doesn't quite address the question of early stopping by an absolute value. Instead, min_delta works as a difference between values.