Lamb
- class paddle.optimizer.Lamb ( learning_rate=0.001, lamb_weight_decay=0.01, beta1=0.9, beta2=0.999, epsilon=1e-06, parameters=None, grad_clip=None, exclude_from_weight_decay_fn=None, multi_precision=False, name=None )
-
LAMB (Layer-wise Adaptive Moments optimizer for Batch training) Optimizer.
LAMB is designed to scale up the batch size of training without losing accuracy; it supports adaptive element-wise updating and accurate layer-wise correction. For more information, please refer to Large Batch Optimization for Deep Learning: Training BERT in 76 minutes.
The updating of parameters follows:
\[
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1) g_t \\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2) g_t^2 \\
m_t &= \frac{m_t}{1 - \beta_1^t} \\
v_t &= \frac{v_t}{1 - \beta_2^t} \\
r_t &= \frac{m_t}{\sqrt{v_t} + \epsilon} \\
w_t &= w_{t-1} - \eta_t \frac{\left\| w_{t-1} \right\|}{\left\| r_t + \lambda w_{t-1} \right\|} (r_t + \lambda w_{t-1})
\end{aligned}
\]
where \(m\) is the 1st moment, \(v\) the 2nd moment, \(\eta\) the learning rate, and \(\lambda\) the LAMB weight decay rate.
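As an illustration of the update rule above, the following NumPy sketch applies a single LAMB step to one parameter tensor. It is not Paddle's implementation; the function name and all variables are illustrative only.

import numpy as np

def lamb_update(w, g, m, v, t, lr=0.001, beta1=0.9, beta2=0.999,
                epsilon=1e-06, lamb_weight_decay=0.01):
    # Moment estimates (same recurrences as Adam).
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    # Bias correction.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Adam-style direction plus the LAMB weight decay term.
    r = m_hat / (np.sqrt(v_hat) + epsilon)
    update = r + lamb_weight_decay * w
    # Layer-wise trust ratio: rescale the step by ||w|| / ||update||.
    w_norm = np.linalg.norm(w)
    u_norm = np.linalg.norm(update)
    trust_ratio = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    w = w - lr * trust_ratio * update
    return w, m, v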
- Parameters
-
learning_rate (float|Variable, optional) – The learning rate used to update parameters. Can be a float value or a Variable with data type float32. Default 0.001.
lamb_weight_decay (float, optional) – The LAMB weight decay rate. Default 0.01. Note that weight_decay should be None.
beta1 (float, optional) – The exponential decay rate for the 1st moment estimates. Default 0.9.
beta2 (float, optional) – The exponential decay rate for the 2nd moment estimates. Default 0.999.
epsilon (float, optional) – A small float value for numerical stability. Default 1e-6.
parameters (Iterable, optional) – Iterable of Variable names to update to minimize loss. This parameter is required in dygraph mode. You can also specify different options (such as the learning rate or weight decay) for different parameter groups by passing a list of dicts; note that the learning_rate in a parameter group is a scale factor applied to the base learning_rate (see the parameter-group sketch after the example below). The default value is None in static graph mode, in which case all parameters will be updated.
grad_clip (GradientClipBase, optional) – Gradient clipping strategy, an instance of a class derived from GradientClipBase. There are three clipping strategies (paddle.nn.ClipGradByGlobalNorm, paddle.nn.ClipGradByNorm, paddle.nn.ClipGradByValue). If you want better convergence, it is recommended to use ClipGradByGlobalNorm. Default None, meaning there is no gradient clipping.
name (str|None) – For detailed information, please refer to Name. Usually name does not need to be set and is None by default.
Examples
import paddle

inp = paddle.uniform(shape=[10, 10], dtype='float32', min=-0.1, max=0.1)
linear = paddle.nn.Linear(10, 10)
out = linear(inp)
loss = paddle.mean(out)
lamb = paddle.optimizer.Lamb(learning_rate=0.002, parameters=linear.parameters(), lamb_weight_decay=0.01)
loss.backward()
lamb.step()
lamb.clear_grad()
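As mentioned in the parameters description, parameters also accepts a list of dicts to set per-group options. The sketch below assumes a simple two-layer model and uses the generic paddle.optimizer parameter-group keys ('params', 'learning_rate', 'beta1'); treat it as an illustration rather than a canonical recipe.

import paddle

linear_1 = paddle.nn.Linear(10, 10)
linear_2 = paddle.nn.Linear(10, 10)
inp = paddle.uniform(shape=[10, 10], dtype='float32', min=-0.1, max=0.1)
loss = paddle.mean(linear_2(linear_1(inp)))

# The second group scales the base learning rate by 0.1 and overrides beta1.
lamb = paddle.optimizer.Lamb(
    learning_rate=0.002,
    lamb_weight_decay=0.01,
    parameters=[
        {'params': linear_1.parameters()},
        {'params': linear_2.parameters(), 'learning_rate': 0.1, 'beta1': 0.8},
    ],
    grad_clip=paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0),
)

loss.backward()
lamb.step()
lamb.clear_grad()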
-
append_regularization_ops ( parameters_and_grads, regularization=None )
-
Create and add backward regularization operators.
Creates and adds backward regularization operators in the BlockDesc. This adds the gradients of the regularizer function to the gradients of the parameters and returns these modified gradients, which is equivalent to implementing weight decay in the optimizer for regularization.
- Parameters
-
parameters_and_grads – A list of (parameters, gradients) pairs that need to be regularized.
regularization – A global regularizer. If a parameter has no regularizer of its own, this global regularizer will be applied to it.
- Returns
-
A list of (parameters, gradients) pairs with the regularized gradients.
- Return type
-
list[(Variable, Variable)]
- Raises
-
Exception – Unknown regularization type
-
clear_grad ( set_to_zero=True )
-
Clear the gradients of all optimized parameters of the model.
If the gradients are not cleared, new gradients will accumulate on top of the previous ones.
There are two ways to clear the gradients: set them to zero or delete them, controlled by set_to_zero (see the sketch after the example below).
- Parameters
-
set_to_zero (bool, optional) – Whether to set the gradients to zero (True) or delete them (False). Default is True.
- Returns
-
None
Examples
import paddle

a = paddle.arange(26, dtype="float32").reshape([2, 13])
linear = paddle.nn.Linear(13, 5)
# This can be any optimizer supported by dygraph.
adam = paddle.optimizer.Adam(learning_rate=0.01, parameters=linear.parameters())
out = linear(a)
out.backward()
adam.step()
adam.clear_grad()
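For the delete-grad variant mentioned above, passing set_to_zero=False releases the gradient storage instead of zero-filling it; a minimal sketch of the same example with this flag:

import paddle

a = paddle.arange(26, dtype="float32").reshape([2, 13])
linear = paddle.nn.Linear(13, 5)
adam = paddle.optimizer.Adam(learning_rate=0.01, parameters=linear.parameters())
out = linear(a)
out.backward()
adam.step()
# Delete the gradients instead of setting them to zero.
adam.clear_grad(set_to_zero=False)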
-
get_lr ( )
-
Get the current learning rate of the optimizer. If no LRScheduler is used, the returned value is the same for every call. If an LRScheduler is used, the returned value is the current scheduled learning rate.
- Returns
-
The current learning rate of the optimizer.
- Return type
-
float
Examples
# train on default dynamic graph mode
import paddle
import numpy as np

emb = paddle.nn.Embedding(10, 3)

## example1: LRScheduler is not used, the returned value is always the same
adam = paddle.optimizer.Adam(0.01, parameters=emb.parameters())
for batch in range(10):
    input = paddle.randint(low=0, high=5, shape=[5])
    out = emb(input)
    out.backward()
    print("Learning rate of step{}: {}".format(batch, adam.get_lr()))  # 0.01
    adam.step()

## example2: StepDecay is used, the returned value is the scheduled learning rate
scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.5, step_size=2, gamma=0.1)
adam = paddle.optimizer.Adam(scheduler, parameters=emb.parameters())
for batch in range(10):
    input = paddle.randint(low=0, high=5, shape=[5])
    out = emb(input)
    out.backward()
    print("Learning rate of step{}: {}".format(batch, adam.get_lr()))  # 0.5->0.05...
    adam.step()
    scheduler.step()

# train on static graph mode
paddle.enable_static()
main_prog = paddle.static.Program()
start_prog = paddle.static.Program()
with paddle.static.program_guard(main_prog, start_prog):
    x = paddle.static.data(name='x', shape=[None, 10])
    z = paddle.static.nn.fc(x, 100)
    loss = paddle.mean(z)
    scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.5, step_size=2, gamma=0.1)
    adam = paddle.optimizer.Adam(learning_rate=scheduler)
    adam.minimize(loss)

exe = paddle.static.Executor()
exe.run(start_prog)
for batch in range(10):
    print("Learning rate of step{}: {}".format(batch, adam.get_lr()))  # 0.5->0.05->0.005...
    out = exe.run(main_prog, feed={'x': np.random.randn(3, 10).astype('float32')})
    scheduler.step()
-
minimize ( loss, startup_program=None, parameters=None, no_grad_set=None )
-
Add operations to minimize loss by updating parameters.
- Parameters
-
loss (Tensor) – A Tensor containing the value to minimize.
startup_program (Program, optional) – The Program for initializing the parameters in parameters. The default value is None, in which case the default startup program will be used.
parameters (list, optional) – List of Tensor or Tensor.name to update to minimize loss. The default value is None, in which case all parameters will be updated.
no_grad_set (set, optional) – Set of Tensor or Tensor.name that don't need to be updated. The default value is None.
- Returns
-
tuple (optimize_ops, params_grads): a list of operators appended by minimize and a list of (param, grad) tensor pairs, where param is the Parameter and grad is the gradient value corresponding to the parameter. In static graph mode, the returned tuple can be passed to fetch_list in Executor.run() to indicate program pruning; if so, the program will be pruned by feed and fetch_list before running (see details in Executor, and the static-graph sketch after the example below).
. - Return type
-
tuple
Examples
import paddle

linear = paddle.nn.Linear(10, 10)
input = paddle.uniform(shape=[10, 10], min=-0.1, max=0.1)
out = linear(input)
loss = paddle.mean(out)
adam = paddle.optimizer.Adam(learning_rate=0.1, parameters=linear.parameters(), weight_decay=0.01)
loss.backward()
adam.minimize(loss)
adam.clear_grad()
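In static graph mode, minimize returns (optimize_ops, params_grads), which, as noted in the Returns section, can additionally be used with fetch_list to prune the program. A minimal static-graph sketch, assuming a toy network:

import numpy as np
import paddle

paddle.enable_static()
main_prog = paddle.static.Program()
start_prog = paddle.static.Program()
with paddle.static.program_guard(main_prog, start_prog):
    x = paddle.static.data(name='x', shape=[None, 10], dtype='float32')
    z = paddle.static.nn.fc(x, 100)
    loss = paddle.mean(z)
    adam = paddle.optimizer.Adam(learning_rate=0.01)
    # minimize appends the optimize ops and returns (param, grad) pairs.
    optimize_ops, params_grads = adam.minimize(loss)

exe = paddle.static.Executor()
exe.run(start_prog)
# params_grads could also be appended to fetch_list to indicate pruning.
loss_val, = exe.run(main_prog,
                    feed={'x': np.random.randn(3, 10).astype('float32')},
                    fetch_list=[loss])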
-
set_lr ( value )
-
Set the value of the learning rate of the optimizer manually (imperative/dygraph mode only). If the optimizer uses an LRScheduler, this API cannot be invoked, because the two would conflict.
- Parameters
-
value (float) – the value of the learning rate
- Returns
-
None
Examples
import paddle

linear = paddle.nn.Linear(10, 10)
adam = paddle.optimizer.Adam(0.1, parameters=linear.parameters())

# set learning rate manually by python float value
lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
    adam.set_lr(lr_list[i])
    lr = adam.get_lr()
    print("current lr is {}".format(lr))
# Print:
#    current lr is 0.2
#    current lr is 0.3
#    current lr is 0.4
#    current lr is 0.5
#    current lr is 0.6
-
set_state_dict ( state_dict )
-
Load the optimizer state dict. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If an LRScheduler has been used, global_step will be changed as well.
- Parameters
-
state_dict (dict) – Dict containing all the Tensors needed by the optimizer.
- Returns
-
None
Examples
import paddle

emb = paddle.nn.Embedding(10, 10)

layer_state_dict = emb.state_dict()
paddle.save(layer_state_dict, "emb.pdparams")

scheduler = paddle.optimizer.lr.NoamDecay(
    d_model=0.01, warmup_steps=100, verbose=True)
adam = paddle.optimizer.Adam(
    learning_rate=scheduler,
    parameters=emb.parameters())
opt_state_dict = adam.state_dict()
paddle.save(opt_state_dict, "adam.pdopt")

opti_state_dict = paddle.load("adam.pdopt")
adam.set_state_dict(opti_state_dict)
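To resume training from the files saved above, the layer parameters can be restored alongside the optimizer state; a minimal sketch continuing the example (file names as above):

import paddle

emb = paddle.nn.Embedding(10, 10)
# Restore the layer parameters saved earlier.
emb.set_state_dict(paddle.load("emb.pdparams"))

scheduler = paddle.optimizer.lr.NoamDecay(d_model=0.01, warmup_steps=100)
adam = paddle.optimizer.Adam(learning_rate=scheduler, parameters=emb.parameters())
# Restore the optimizer state (moments, beta powers, scheduler step).
adam.set_state_dict(paddle.load("adam.pdopt"))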
-
state_dict ( )
-
Get the state dict from the optimizer. It contains all the Tensors used by the optimizer. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If an LRScheduler has been used, global_step will be included in the state dict. If the optimizer has never been called (via the minimize function), the state_dict is empty.
- Parameters
-
None
- Returns
-
A dict containing all the Tensors used by the optimizer.
- Return type
-
state_dict(dict)
Examples
import paddle

emb = paddle.nn.Embedding(10, 10)
adam = paddle.optimizer.Adam(0.001, parameters=emb.parameters())
state_dict = adam.state_dict()
-
step ( )
-
Execute the optimizer and update parameters once.
- Returns
-
None
Examples
import paddle

a = paddle.arange(26, dtype="float32").reshape([2, 13])
linear = paddle.nn.Linear(13, 5)
# This can be any optimizer supported by dygraph.
adam = paddle.optimizer.Adam(learning_rate=0.01, parameters=linear.parameters())
out = linear(a)
out.backward()
adam.step()
adam.clear_grad()