decorate

paddle.amp.decorate(models, optimizers=None, level='O1', master_weight=None, save_dtype=None) [source]

Decorate models and optimizers for auto mixed precision (AMP). When level is 'O1' (AMP), decorate does nothing. When level is 'O2' (pure FP16), decorate casts all parameters of the models to FP16, except those of BatchNorm and LayerNorm layers.

Commonly, it is used together with auto_cast to achieve pure FP16 training in imperative mode.
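For illustration, here is a minimal sketch (it assumes a GPU and is not part of the original examples) of the dtype split this implies at the 'O2' level: the Conv2D weight is cast to FP16, while the BatchNorm parameters stay in FP32.

# required: gpu
import paddle

conv = paddle.nn.Conv2D(3, 2, 3, bias_attr=False)
bn = paddle.nn.BatchNorm2D(2)
net = paddle.nn.Sequential(conv, bn)
opt = paddle.optimizer.SGD(parameters=net.parameters())

net, opt = paddle.amp.decorate(models=net, optimizers=opt, level='O2')

print(conv.weight.dtype)  # paddle.float16: Conv2D parameters are cast
print(bn.weight.dtype)    # paddle.float32: BatchNorm is kept in FP32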

Parameters
  • models (Layer|list of Layer) – The models defined by the user; must be either a single model or a list of models.

  • optimizers (Optimizer|list of Optimizer, optional) – The optimizers defined by the user; must be either a single optimizer or a list of optimizers. Default is None.

  • level (str, optional) – The auto mixed precision level. Accepted values are “O1” and “O2”: O1 means mixed precision, for which the decorator does nothing; O2 means pure FP16, for which the decorator casts all parameters of the models to FP16, except those of BatchNorm and LayerNorm. Default is “O1”.

  • master_weight (bool, optional) – For level=’O2’, whether to keep an FP32 master copy of the weights (multi-precision) during the weight update. If master_weight is None, the optimizer uses multi-precision at the O2 level. Default is None.

  • save_dtype (str, optional) – The dtype of the model parameters saved by paddle.save or paddle.jit.save; it should be float16, float32, float64 or None. save_dtype does not change the dtype of the model parameters themselves; it only changes the dtype of the saved state_dict. When save_dtype is None, parameters are saved in the model's own dtype. Default is None. A sketch combining master_weight and save_dtype follows this list.
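The sketch below (hedged: the saved-file name is hypothetical, and the FP32-on-save behavior follows the save_dtype description above) combines the two O2-specific options: master_weight=True keeps an FP32 master copy for the optimizer update, and save_dtype='float32' makes the saved state_dict FP32 even though the live parameters are FP16.

# required: gpu
import paddle

model = paddle.nn.Linear(4, 4)
optimizer = paddle.optimizer.SGD(parameters=model.parameters())
model, optimizer = paddle.amp.decorate(
    models=model, optimizers=optimizer, level='O2',
    master_weight=True, save_dtype='float32')

print(model.weight.dtype)  # paddle.float16: the live parameter stays FP16
# Per save_dtype='float32', the state_dict tensors are saved as FP32.
paddle.save(model.state_dict(), 'linear.pdparams')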

Examples

# required: gpu
# Demo1: single model and optimizer:
import paddle

model = paddle.nn.Conv2D(3, 2, 3, bias_attr=False)
optimizer = paddle.optimizer.SGD(parameters=model.parameters())

model, optimizer = paddle.amp.decorate(models=model, optimizers=optimizer, level='O2')

data = paddle.rand([10, 3, 32, 32])

with paddle.amp.auto_cast(enable=True, custom_white_list=None, custom_black_list=None, level='O2'):
    output = model(data)
    print(output.dtype) # FP16
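For a complete O2 training step, the forward pass above is typically combined with loss scaling; the following hedged sketch uses paddle.amp.GradScaler, which is standard Paddle AMP usage but not part of the original example.

# required: gpu
scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

with paddle.amp.auto_cast(enable=True, level='O2'):
    output = model(data)
    loss = output.mean()

scaled = scaler.scale(loss)          # scale the loss to avoid FP16 gradient underflow
scaled.backward()                    # gradients are computed from the scaled loss
scaler.minimize(optimizer, scaled)   # unscale, skip the step on inf/nan, then update
optimizer.clear_grad()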

# required: gpu
# Demo2: multiple models and optimizers:
model2 = paddle.nn.Conv2D(3, 2, 3, bias_attr=False)
optimizer2 = paddle.optimizer.Adam(parameters=model2.parameters())

models, optimizers = paddle.amp.decorate(models=[model, model2], optimizers=[optimizer, optimizer2], level='O2')

data = paddle.rand([10, 3, 32, 32])

with paddle.amp.auto_cast(enable=True, custom_white_list=None, custom_black_list=None, level='O2'):
    output = models[0](data)
    output2 = models[1](data)
    print(output.dtype) # FP16
    print(output2.dtype) # FP16
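By contrast, at the default level 'O1' decorate is a no-op, as noted above; a minimal sketch (not part of the original examples):

# required: gpu
model3 = paddle.nn.Conv2D(3, 2, 3, bias_attr=False)
optimizer3 = paddle.optimizer.SGD(parameters=model3.parameters())
model3, optimizer3 = paddle.amp.decorate(models=model3, optimizers=optimizer3, level='O1')
print(model3.weight.dtype)  # paddle.float32: parameters are left unchanged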
