InverseTimeDecay

class paddle.fluid.dygraph.learning_rate_scheduler.InverseTimeDecay(learning_rate, decay_steps, decay_rate, staircase=False, begin=0, step=1, dtype='float32')
Api_attr

imperative

Applies inverse time decay to the initial learning rate.

The algorithm can be described as follows. If staircase is set to False, then:

\[decayed\_learning\_rate = \frac{learning\_rate}{1 + decay\_rate * \frac{global\_step}{decay\_step}}\]

If staircase is set to True, then:

\[decayed\_learning\_rate = \frac{learning\_rate}{1 + decay\_rate * \left\lfloor \frac{global\_step}{decay\_step} \right\rfloor}\]
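
For intuition, the decay can be computed directly in plain Python. The snippet below is a minimal sketch of the two formulas above; the names mirror the equations (global_step, decay_steps, decay_rate) and it does not use any Paddle API.

import math

def inverse_time_decay(learning_rate, global_step, decay_steps, decay_rate,
                       staircase=False):
    # Fraction of the decay cycle that has elapsed.
    ratio = global_step / decay_steps
    if staircase:
        # Decay only at discrete intervals of decay_steps.
        ratio = math.floor(ratio)
    return learning_rate / (1 + decay_rate * ratio)

# With staircase=True the rate stays at 0.1 until step 10000, then drops to 0.1 / 1.5.
print(inverse_time_decay(0.1, 9999, 10000, 0.5, staircase=True))   # 0.1
print(inverse_time_decay(0.1, 10000, 10000, 0.5, staircase=True))  # 0.0666...
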
Parameters
  • learning_rate (Variable|float) – The initial learning rate. If the type is Variable, it is a tensor with shape [1] and data type float32 or float64. It can also be set to a Python int.

  • decay_steps (int) – The decay step size. It determines the decay cycle.

  • decay_rate (float) – The decay rate.

  • staircase (bool, optional) – If set to True, decay the learning rate at discrete intervals. The default value is False.

  • begin (int, optional) – The begin step. The initial value of global_step described above. The default value is 0.

  • step (int, optional) – The step size used to calculate the new global_step in the description above. The default value is 1.

  • dtype (str, optional) – The data type used to create the learning rate variable. The data type can be ‘float32’, ‘float64’. The default value is ‘float32’.

Returns

None.

Examples

import paddle.fluid as fluid

base_lr = 0.1
with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])
    # The scheduler is passed directly as the optimizer's learning rate;
    # with staircase=True the rate drops at discrete 10000-step intervals.
    sgd_optimizer = fluid.optimizer.SGD(
        learning_rate=fluid.dygraph.InverseTimeDecay(
            learning_rate=base_lr,
            decay_steps=10000,
            decay_rate=0.5,
            staircase=True),
        parameter_list=emb.parameters())
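
To see the schedule advance, the example can be continued inside the same guard with a single training step. The lines below are a hedged sketch, not part of the original example; the toy int64 lookup input is an assumption.

    import numpy as np

    # Hypothetical toy input: two ids looked up in the 10x10 embedding table.
    ids = fluid.dygraph.to_variable(np.array([[1], [2]], dtype='int64'))
    loss = fluid.layers.reduce_mean(emb(ids))
    loss.backward()
    # Each minimize() call advances global_step by `step` (default 1),
    # so the learning rate returned by the scheduler decays as training proceeds.
    sgd_optimizer.minimize(loss)
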
create_lr_var(lr)

Convert lr from a Python float into a learning rate Variable.

Parameters

lr – learning rate

Returns

learning rate variable
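
As a hedged illustration (this helper is normally invoked internally by the scheduler rather than by user code), converting a plain float might look like this:

import paddle.fluid as fluid

with fluid.dygraph.guard():
    scheduler = fluid.dygraph.InverseTimeDecay(
        learning_rate=0.1, decay_steps=10000, decay_rate=0.5)
    # Converts the Python float into a learning rate Variable with the
    # scheduler's dtype ('float32' by default).
    lr_var = scheduler.create_lr_var(0.1)
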

set_dict(state_dict)

Loads the scheduler's state.

set_state_dict(state_dict)

Loads the scheduler's state.

state_dict()

Returns the state of the scheduler as a dict.

It is a subset of self.__dict__.
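
A hedged sketch of checkpointing the scheduler with the methods above; the contents of the returned dict are treated as opaque here:

import paddle.fluid as fluid

with fluid.dygraph.guard():
    scheduler = fluid.dygraph.InverseTimeDecay(
        learning_rate=0.1, decay_steps=10000, decay_rate=0.5)

    # Snapshot the scheduler's state (a subset of its __dict__).
    state = scheduler.state_dict()

    # Restore that state into a freshly constructed scheduler; per the
    # descriptions above, set_dict(state) loads the same dictionary.
    new_scheduler = fluid.dygraph.InverseTimeDecay(
        learning_rate=0.1, decay_steps=10000, decay_rate=0.5)
    new_scheduler.set_state_dict(state)
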