
3. Automatically terminate training

2022-06-25 08:51:00 Beyond proverb

Occasionally, once the loss function has already reached the desired value, it makes sense to stop training early: this saves time and also helps prevent overfitting.
Here, training is stopped as soon as the loss falls below 0.4.

from tensorflow import keras
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np


class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        # Stop training once the epoch's loss drops below 0.4
        if logs and logs.get('loss') < 0.4:
            print("\nLoss is low so cancelling training!")
            self.model.stop_training = True


callbacks = myCallback()
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()

# Scale pixel values into [0, 1]
training_images = training_images / 255.0
test_images = test_images / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5, callbacks=[callbacks])
"""
Colocations handled automatically by placer.
Epoch 1/5
60000/60000 [==============================] - 12s 194us/sample - loss: 0.4729 - acc: 0.8303
Epoch 2/5
59712/60000 [============================>.] - ETA: 0s - loss: 0.3570 - acc: 0.8698
Loss is low so cancelling training!
60000/60000 [==============================] - 11s 190us/sample - loss: 0.3570 - acc: 0.8697
"""
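The stopping mechanism itself is simple enough to exercise without running a full training job. Below is a minimal pure-Python sketch of the same hook pattern: a callback object whose `on_epoch_end` is called after each epoch with the logged loss, and a fake training loop that checks its `stop_training` flag. The names `ThresholdStop` and `train` are illustrative helpers for this sketch, not Keras APIs.

```python
class ThresholdStop:
    """Mimics the Keras callback above: stop once loss < threshold."""
    def __init__(self, threshold=0.4):
        self.threshold = threshold
        self.stop_training = False

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        loss = logs.get('loss')
        if loss is not None and loss < self.threshold:
            self.stop_training = True


def train(losses, callback):
    """Fake training loop: one 'epoch' per recorded loss value."""
    completed = 0
    for epoch, loss in enumerate(losses):
        completed += 1
        callback.on_epoch_end(epoch, {'loss': loss})
        if callback.stop_training:
            break
    return completed


# Loss falls below 0.4 at the second epoch, so training halts there,
# just like the real run above stopped during epoch 2 of 5
epochs_run = train([0.47, 0.36, 0.30], ThresholdStop(0.4))
print(epochs_run)  # 2
```

For the common case of stopping when a monitored metric stops improving (rather than crossing a fixed threshold), Keras also provides the built-in `tf.keras.callbacks.EarlyStopping` callback.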

Copyright notice
This article was written by [Beyond proverb]; please include a link to the original when reposting. Thank you.
https://yzsam.com/2022/176/202206250755452315.html