MultiplicativeDecay

class paddle.optimizer.lr.MultiplicativeDecay ( learning_rate, lr_lambda, last_epoch=-1, verbose=False ) [source]

Multiply the learning rate of the optimizer by the factor given by the function lr_lambda .

The algorithm can be described as the code below.

learning_rate = 0.5        # initial learning rate
lr_lambda = lambda epoch: 0.95

learning_rate = 0.5        # epoch 0
learning_rate = 0.475      # epoch 1, 0.5 * 0.95
learning_rate = 0.45125    # epoch 2, 0.475 * 0.95
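
For illustration (this snippet is not part of the original example), the same values can be read back from a scheduler instance through get_lr(), which is documented among the methods below:

import paddle

# Minimal sketch: a standalone scheduler, not yet attached to an optimizer.
scheduler = paddle.optimizer.lr.MultiplicativeDecay(
    learning_rate=0.5, lr_lambda=lambda epoch: 0.95
)
print(scheduler.get_lr())   # 0.5     (epoch 0)
scheduler.step()
print(scheduler.get_lr())   # 0.475   (epoch 1, 0.5 * 0.95)
scheduler.step()
print(scheduler.get_lr())   # 0.45125 (epoch 2, 0.475 * 0.95)
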
Parameters
  • learning_rate (float) – The initial learning rate. It is a python float number.

  • lr_lambda (function) – A function which computes a multiplicative factor from the epoch, and then multiplies the last learning rate by this factor.

  • last_epoch (int, optional) – The index of the last epoch. Can be set to restart training. Default: -1, which means the initial learning rate.

  • verbose (bool, optional) – If True, prints a message to stdout for each update. Default: False.

Returns

A MultiplicativeDecay instance to schedule the learning rate.

Examples

import paddle

# train on default dynamic graph mode
linear = paddle.nn.Linear(10, 10)
scheduler = paddle.optimizer.lr.MultiplicativeDecay(learning_rate=0.5, lr_lambda=lambda x:0.95, verbose=True)
sgd = paddle.optimizer.SGD(learning_rate=scheduler, parameters=linear.parameters())
for epoch in range(20):
    for batch_id in range(5):
        x = paddle.uniform([10, 10])
        out = linear(x)
        loss = paddle.mean(out)
        loss.backward()
        sgd.step()
        sgd.clear_gradients()
        scheduler.step()    # If you update learning rate each step
    # scheduler.step()      # If you update learning rate each epoch
get_lr ( )

Subclasses of LRScheduler (the base class) should provide a custom implementation of get_lr(); otherwise, a NotImplementedError exception is raised.
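
As a hedged sketch of what such an override can look like (the subclass name and its decay rule below are assumptions for illustration, not part of Paddle's API):

import paddle

class HalveEveryTwoEpochs(paddle.optimizer.lr.LRScheduler):
    # Hypothetical subclass: halves the learning rate every two epochs.
    def get_lr(self):
        # base_lr and last_epoch are maintained by the LRScheduler base class.
        return self.base_lr * (0.5 ** (self.last_epoch // 2))

scheduler = HalveEveryTwoEpochs(learning_rate=0.5)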

set_dict ( state_dict )

Loads the scheduler's state.

set_state_dict ( state_dict )

Loads the scheduler's state.

state_dict ( )

Returns the state of the scheduler as a dict, which is a subset of self.__dict__ .
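
state_dict() and set_state_dict() pair up when checkpointing training. The snippet below is a minimal in-memory sketch (not taken from the original docs); in practice the state is typically saved to disk together with the model and optimizer state.

import paddle

# Continues from the training example above: capture the current state
# (by default: last_epoch and last_lr).
state = scheduler.state_dict()

# Later, restore it on a scheduler created with the same arguments.
scheduler = paddle.optimizer.lr.MultiplicativeDecay(
    learning_rate=0.5, lr_lambda=lambda epoch: 0.95
)
scheduler.set_state_dict(state)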

state_keys ( )

For subclasses of LRScheduler (the base class). By default, last_epoch and last_lr are saved through self.keys = ['last_epoch', 'last_lr'] .

last_epoch is the current epoch number, and last_lr is the current learning rate.

If you want to change the default behavior, provide a custom implementation of _state_keys() to redefine self.keys .
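
For example, a subclass that keeps an extra attribute can add it to self.keys; the class below and its warmup_steps attribute are hypothetical, used only to illustrate the override:

import paddle

class WarmupScheduler(paddle.optimizer.lr.LRScheduler):
    # Hypothetical scheduler that ramps the rate up linearly over warmup_steps epochs.
    def __init__(self, learning_rate, warmup_steps, last_epoch=-1, verbose=False):
        self.warmup_steps = warmup_steps
        super().__init__(learning_rate, last_epoch, verbose)

    def get_lr(self):
        scale = min(1.0, (self.last_epoch + 1) / self.warmup_steps)
        return self.base_lr * scale

    def _state_keys(self):
        # Persist warmup_steps in addition to the defaults.
        self.keys = ['last_epoch', 'last_lr', 'warmup_steps']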

step ( epoch=None )

step() should be called after optimizer.step(). It updates the learning rate in the optimizer according to the current epoch. The new learning rate takes effect on the next call to optimizer.step().

Parameters

epoch (int, optional) – specify the current epoch explicitly. Default: None, in which case the epoch auto-increments from last_epoch=-1.

Returns

None
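
As a short illustration (the epoch index 10 below is just an assumed resume point), step() can either auto-increment the epoch or be given one explicitly:

scheduler.step()          # auto-increment: advance to the next epoch
scheduler.step(epoch=10)  # set the epoch explicitly, e.g. when resuming training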