ReduceLROnPlateau
class paddle.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.1, patience=10, verbose=1, mode='auto', min_delta=0.0001, cooldown=0, min_lr=0)
Reduce the learning rate when a monitored metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity and, if no improvement is seen for a ‘patience’ number of epochs, reduces the learning rate. A minimal sketch of this logic follows the parameter list below.
Parameters
monitor (str, optional) – Quantity to be monitored. Default: ‘loss’.
factor (float, optional) – Factor by which the learning rate will be reduced: new_lr = lr * factor. Default: 0.1.
patience (int, optional) – Number of epochs with no improvement after which the learning rate will be reduced. Default: 10.
verbose (int, optional) – The verbosity mode. 0: quiet, 1: update messages. Default: 1.
mode (str, optional) – One of {‘auto’, ‘min’, ‘max’}. In ‘min’ mode, the learning rate will be reduced when the monitored quantity has stopped decreasing; in ‘max’ mode, it will be reduced when the monitored quantity has stopped increasing. In ‘auto’ mode, the mode is inferred from the name of the monitored quantity: if ‘acc’ appears in monitor, the mode is set to ‘max’, otherwise to ‘min’. Default: ‘auto’.
min_delta (int|float, optional) – Threshold for measuring a new optimum; only changes larger than min_delta count as improvement. Default: 0.0001.
cooldown (int, optional) – Number of epochs to wait before resuming normal operation after the learning rate has been reduced. Default: 0.
min_lr (float, optional) – Lower bound on the learning rate. Default: 0.
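To make the interaction of these parameters concrete, here is a minimal sketch of the plateau logic in ‘min’ mode. This is an illustrative reimplementation, not Paddle's actual source; the class name PlateauSketch and its per-epoch step method are invented for illustration.

    # Minimal sketch (not Paddle's exact implementation) of the plateau
    # logic in 'min' mode. `metric` is the monitored value, observed once
    # per epoch; `step` returns the (possibly reduced) learning rate.
    class PlateauSketch:
        def __init__(self, factor=0.1, patience=10, min_delta=1e-4,
                     cooldown=0, min_lr=0.0):
            self.factor, self.patience = factor, patience
            self.min_delta, self.cooldown = min_delta, cooldown
            self.min_lr = min_lr
            self.best = float('inf')   # best value seen so far ('min' mode)
            self.wait = 0              # epochs since the last improvement
            self.cooldown_counter = 0  # epochs left in the cooldown period

        def step(self, metric, lr):
            if self.cooldown_counter > 0:
                # After a reduction, wait `cooldown` epochs before
                # counting non-improving epochs again.
                self.cooldown_counter -= 1
                self.wait = 0
            if metric < self.best - self.min_delta:
                # Improvement larger than min_delta: reset patience.
                self.best = metric
                self.wait = 0
            elif self.cooldown_counter <= 0:
                self.wait += 1
                if self.wait >= self.patience:
                    # Plateau detected: reduce lr, clamped at min_lr,
                    # and enter the cooldown period.
                    lr = max(lr * self.factor, self.min_lr)
                    self.cooldown_counter = self.cooldown
                    self.wait = 0
            return lr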
Examples
>>> import paddle
>>> from paddle import Model
>>> from paddle.static import InputSpec
>>> from paddle.vision.models import LeNet
>>> from paddle.vision.datasets import MNIST
>>> from paddle.metric import Accuracy
>>> from paddle.nn.layer.loss import CrossEntropyLoss
>>> import paddle.vision.transforms as T

>>> sample_num = 200
>>> transform = T.Compose(
...     [T.Transpose(), T.Normalize([127.5], [127.5])])
>>> train_dataset = MNIST(mode='train', transform=transform)
>>> val_dataset = MNIST(mode='test', transform=transform)

>>> net = LeNet()
>>> optim = paddle.optimizer.Adam(
...     learning_rate=0.001, parameters=net.parameters())

>>> inputs = [InputSpec([None, 1, 28, 28], 'float32', 'x')]
>>> labels = [InputSpec([None, 1], 'int64', 'label')]
>>> model = Model(net, inputs=inputs, labels=labels)
>>> model.prepare(
...     optim,
...     loss=CrossEntropyLoss(),
...     metrics=[Accuracy()])

>>> callbacks = paddle.callbacks.ReduceLROnPlateau(patience=3, verbose=1)
>>> model.fit(train_dataset,
...           val_dataset,
...           batch_size=64,
...           log_freq=200,
...           save_freq=10,
...           epochs=20,
...           callbacks=[callbacks])
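As a variation on the example above (reusing the same model and data setup), the callback can monitor validation accuracy instead of loss. The monitor key 'acc' matches the default name of paddle.metric.Accuracy in the evaluation logs, so ‘auto’ mode would also resolve to ‘max’ here; the factor and patience values are illustrative.

>>> callbacks = paddle.callbacks.ReduceLROnPlateau(
...     monitor='acc', mode='max', factor=0.5, patience=2, verbose=1)
>>> model.fit(train_dataset,
...           val_dataset,
...           batch_size=64,
...           epochs=20,
...           callbacks=[callbacks])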
on_train_begin(logs=None)

Called at the start of training.

Parameters

logs (dict) – A dict of logs, or None.
on_eval_end(logs=None)

Called at the end of evaluation.

Parameters

logs (dict) – A dict of logs, or None. The logs passed by paddle.Model is a dict that contains the ‘loss’, the metrics, and the ‘batch_size’ of the last batch of the validation dataset.
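For instance, with the model prepared in the example above, the logs received here might look like the following (the values are illustrative; the metric key matches the metric's name, ‘acc’ for paddle.metric.Accuracy):

    # Illustrative shape of the logs dict seen by on_eval_end.
    logs = {'loss': [0.0321], 'acc': 0.9812, 'batch_size': 64}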