save
paddle.save(obj, path, protocol=4, **configs)
Save an object to the specified path.
Note
Now supports saving the state_dict of Layer/Optimizer, Tensor, nested structures containing Tensor, and Program.

Note
Different from paddle.jit.save, the result of paddle.save is a single file, so there is no need to distinguish multiple saved files by adding a suffix. The argument path of paddle.save is used directly as the saved file name rather than as a prefix. To keep saved file names uniform, we recommend the standard Paddle suffixes: 1. for Layer.state_dict, use .pdparams; 2. for Optimizer.state_dict, use .pdopt. For concrete usage, refer to the API code examples below.

Parameters
obj (Object) – The object to be saved.
path (str|BytesIO) – The path/buffer of the object to be saved. If saved in the current directory, the input path string will be used as the file name.
protocol (int, optional) – The protocol version of the pickle module; must be greater than 1 and less than 5. Default: 4.
**configs (dict, optional) – Optional keyword arguments. The following option is currently supported: use_binary_format (bool): when the saved object is a static graph variable, set use_binary_format=True to save it in C++ binary format; otherwise it is saved in pickle format. Default: False.
Returns
None
Examples
>>> # example 1: dynamic graph
>>> import paddle
>>> emb = paddle.nn.Embedding(10, 10)
>>> layer_state_dict = emb.state_dict()
>>> # save state_dict of emb
>>> paddle.save(layer_state_dict, "emb.pdparams")
>>> scheduler = paddle.optimizer.lr.NoamDecay(
...     d_model=0.01, warmup_steps=100, verbose=True)
>>> adam = paddle.optimizer.Adam(
...     learning_rate=scheduler,
...     parameters=emb.parameters())
>>> opt_state_dict = adam.state_dict()
>>> # save state_dict of optimizer
>>> paddle.save(opt_state_dict, "adam.pdopt")
>>> # save weight of emb
>>> paddle.save(emb.weight, "emb.weight.pdtensor")
>>> # example 2: save multiple state_dicts at the same time
>>> import paddle
>>> from paddle import nn
>>> from paddle.optimizer import Adam
>>> layer = paddle.nn.Linear(3, 4)
>>> adam = Adam(learning_rate=0.001, parameters=layer.parameters())
>>> obj = {'model': layer.state_dict(), 'opt': adam.state_dict(), 'epoch': 100}
>>> path = 'example/model.pdparams'
>>> paddle.save(obj, path)
>>> # example 3: static graph
>>> import paddle
>>> import paddle.static as static
>>> paddle.enable_static()
>>> # create network
>>> x = paddle.static.data(name="x", shape=[None, 224], dtype='float32')
>>> z = paddle.static.nn.fc(x, 10)
>>> place = paddle.CPUPlace()
>>> exe = paddle.static.Executor(place)
>>> exe.run(paddle.static.default_startup_program())
>>> prog = paddle.static.default_main_program()
>>> for var in prog.list_vars():
...     if list(var.shape) == [224, 10]:
...         tensor = var.get_value()
...         break
>>> # save tensor
>>> path_tensor = 'temp/tensor.pdtensor'
>>> paddle.save(tensor, path_tensor)
>>> # save state_dict
>>> path_state_dict = 'temp/model.pdparams'
>>> paddle.save(prog.state_dict("param"), path_state_dict)
>>> # example 4: save program
>>> import paddle
>>> paddle.enable_static()
>>> data = paddle.static.data(
...     name='x_static_save', shape=(None, 224), dtype='float32')
>>> y_static = z = paddle.static.nn.fc(data, 10)
>>> main_program = paddle.static.default_main_program()
>>> path = "example/main_program.pdmodel"
>>> paddle.save(main_program, path)
>>> # example 5: save object to memory
>>> from io import BytesIO
>>> import paddle
>>> from paddle.nn import Linear
>>> paddle.disable_static()
>>> linear = Linear(5, 10)
>>> state_dict = linear.state_dict()
>>> byio = BytesIO()
>>> paddle.save(state_dict, byio)
>>> paddle.seed(2023)
>>> tensor = paddle.randn([2, 3], dtype='float32')
>>> paddle.save(tensor, byio)