StepDecay

class paddle.fluid.dygraph.learning_rate_scheduler.StepDecay(learning_rate, step_size, decay_rate=0.1) [source]
Api_attr: imperative

Decays the learning rate of the optimizer by decay_rate every step_size epochs.

The algorithm can be described by the code below.

learning_rate = 0.5
step_size = 30
decay_rate = 0.1

learning_rate = 0.5     if epoch < 30
learning_rate = 0.05    if 30 <= epoch < 60
learning_rate = 0.005   if 60 <= epoch < 90
...
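
The same schedule can also be written in closed form. The following is a minimal illustrative sketch of that rule, not the library's internal implementation:

learning_rate = 0.5
step_size = 30
decay_rate = 0.1

def step_decay_lr(epoch):
    # the learning rate is multiplied by decay_rate once every step_size epochs
    return learning_rate * decay_rate ** (epoch // step_size)

print(step_decay_lr(0))   # 0.5
print(step_decay_lr(30))  # 0.05
print(step_decay_lr(60))  # about 0.005 (floating-point rounding)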
Parameters
  • learning_rate (float|int) – The initial learning rate. It can be a Python float or int.

  • step_size (int) – Period of learning rate decay.

  • decay_rate (float, optional) – The ratio by which the learning rate is reduced: new_lr = origin_lr * decay_rate. It should be less than 1.0. Default: 0.1.

Returns

None.

Examples

import paddle.fluid as fluid
import numpy as np
with fluid.dygraph.guard():
    x = np.random.uniform(-1, 1, [10, 10]).astype("float32")
    linear = fluid.dygraph.Linear(10, 10)
    input = fluid.dygraph.to_variable(x)
    scheduler = fluid.dygraph.StepDecay(0.5, step_size=3)
    adam = fluid.optimizer.Adam(learning_rate=scheduler, parameter_list=linear.parameters())

    for epoch in range(9):
        for batch_id in range(5):
            out = linear(input)
            loss = fluid.layers.reduce_mean(out)
            adam.minimize(loss)
        scheduler.epoch()

        print("epoch:{}, current lr is {}" .format(epoch, adam.current_step_lr()))
        # epoch:0, current lr is 0.5
        # epoch:1, current lr is 0.5
        # epoch:2, current lr is 0.5
        # epoch:3, current lr is 0.05
        # epoch:4, current lr is 0.05
        # epoch:5, current lr is 0.05
        # epoch:6, current lr is 0.005
        # epoch:7, current lr is 0.005
        # epoch:8, current lr is 0.005
create_lr_var(lr)

Convert lr from a Python float to a Variable.

Parameters

lr (float) – The learning rate to convert.

Returns

The learning rate as a Variable.
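
A minimal usage sketch; create_lr_var is mostly used internally by the scheduler, so calling it directly as below is only for illustration:

import paddle.fluid as fluid

with fluid.dygraph.guard():
    scheduler = fluid.dygraph.StepDecay(0.5, step_size=3)
    # wrap a plain Python float into a framework Variable
    lr_var = scheduler.create_lr_var(0.05)
    print(lr_var.numpy())  # [0.05]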

epoch(epoch=None)

Compute the learning rate and update it when invoked.
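
A short sketch of both calling conventions. Calling epoch() with no argument once per epoch matches the example above; that passing an explicit value sets the epoch counter is an assumption based on the signature:

import paddle.fluid as fluid

with fluid.dygraph.guard():
    scheduler = fluid.dygraph.StepDecay(0.5, step_size=3)
    scheduler.epoch()   # no argument: advance the internal epoch counter by one
    scheduler.epoch(7)  # explicit argument: set the epoch counter to 7
    # with step_size=3 and decay_rate=0.1, epoch 7 yields 0.5 * 0.1 ** (7 // 3) = 0.005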

set_dict(state_dict)

Loads the scheduler's state.

set_state_dict(state_dict)

Loads the scheduler's state.

state_dict()

Returns the state of the scheduler as a dict.

It is a subset of self.__dict__.
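
A save/restore sketch using state_dict together with set_state_dict; the exact keys in the returned dict (such as the epoch counter) are implementation details:

import paddle.fluid as fluid

with fluid.dygraph.guard():
    scheduler = fluid.dygraph.StepDecay(0.5, step_size=3)
    for _ in range(4):
        scheduler.epoch()           # advance the epoch counter

    state = scheduler.state_dict()  # snapshot of the scheduler's state

    restored = fluid.dygraph.StepDecay(0.5, step_size=3)
    restored.set_state_dict(state)  # resume from the saved state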