Program
- class paddle.static.Program [source]
-
Create a Python Program. It has at least one Block; when a control flow op such as conditional_block or While (api_paddle_base_layers_While) is included, it will contain nested blocks.
Please refer to framework.proto for details.
A set of Programs usually contains a startup program and a main program. The startup program holds initialization work, e.g. initializing the
Parameter
s, while the main program holds the network structure and the variables for training. A set of Programs can be used for training or testing: in a training program, Paddle keeps all the content needed to build the training network; in a testing program, Paddle prunes content irrelevant to testing, e.g. backward ops and variables.
- Notes:
-
We have api_paddle_base_framework_default_startup_program and api_paddle_base_framework_default_main_program by default, and the pair shares parameters. The api_paddle_base_framework_default_startup_program runs only once to initialize the parameters, while api_paddle_base_framework_default_main_program runs in every mini-batch and updates the weights.
- Returns
-
An empty Program.
- Return type
-
Program
Examples
>>> import paddle
>>> import paddle.static as static

>>> paddle.enable_static()

>>> main_program = static.Program()
>>> startup_program = static.Program()
>>> with static.program_guard(main_program=main_program, startup_program=startup_program):
...     x = static.data(name="x", shape=[-1, 784], dtype='float32')
...     y = static.data(name="y", shape=[-1, 1], dtype='int32')
...     z = static.nn.fc(name="fc", x=x, size=10, activation="relu")

>>> print("main program is: {}".format(main_program))
>>> print("start up program is: {}".format(startup_program))
-
global_seed(seed=0)
-
Set the global seed for this Program.
- Returns
-
None.
Examples
>>> import paddle
>>> import paddle.static as static

>>> paddle.enable_static()

>>> prog = static.default_main_program()
>>> print(prog.random_seed)
0
>>> # the default random seed is 0

>>> prog.global_seed(102)
>>> prog1 = static.default_main_program()
>>> print(prog1.random_seed)
102
>>> # the random seed is 102
-
to_string(throw_on_error, with_details=False)
-
Convert the Program to a debug string.
- Parameters
-
throw_on_error (bool) – raise ValueError when any required field is not set.
with_details (bool) – True if more details about variables and parameters, e.g., trainable, optimize_attr, need to be printed.
- Returns
-
The debug string describing the current Program.
- Return type
-
str
- Raises
-
ValueError – If any required field is not set and throw_on_error is True.
Examples
>>> import paddle
>>> import paddle.static as static

>>> paddle.enable_static()

>>> prog = static.default_main_program()
>>> x = static.data(name="X", shape=[2, 3], dtype="float32")
>>> pred = static.nn.fc(x, size=3)
>>> prog_string = prog.to_string(throw_on_error=True, with_details=False)
>>> prog_string_with_details = prog.to_string(throw_on_error=False, with_details=True)
>>> print("program string without detail: {}".format(prog_string))
>>> print("program string with detail: {}".format(prog_string_with_details))
-
clone(for_test=False)
-
Create a new Program with only the forward content of the original one when for_test=True, or a new Program identical to the original one when for_test=False.
Some operators, e.g., api_paddle_base_layers_batch_norm, behave differently between training and testing. They have an attribute, is_test, to control this behaviour. This method will change their is_test attribute to True when for_test=True.
Set for_test to False when you want to clone the program for training. Set for_test to True when you want to clone the program for testing. We will prune the backward and optimize parts of the program when you use clone after Optimizer.minimize, but we still recommend you use clone before using Optimizer.minimize.
Examples
>>> import paddle
>>> import paddle.static as static

>>> paddle.enable_static()

>>> img = static.data(name='image', shape=[None, 784])
>>> pred = static.nn.fc(x=img, size=10, activation='relu')
>>> loss = paddle.mean(pred)
>>> # Here we use clone before Momentum
>>> test_program = static.default_main_program().clone(for_test=True)
>>> optimizer = paddle.optimizer.Momentum(learning_rate=0.01, momentum=0.9)
>>> optimizer.minimize(loss)
- Parameters
-
for_test (bool) – True if the is_test attribute of the operators should be changed to True and the backward and optimize parts of the program pruned. The default value is False.
- Returns
-
A new Program with only the forward content of the original one when for_test=True; a new Program identical to the original one when for_test=False.
- Return type
-
Program
Examples
Note
The Program's op order may be different after clone, and this will not affect your training or testing progress. In the following example we give you a simple method print_prog(program) to print the Program descriptions, in order to make sure you have the same print result after clone:

>>> import paddle

>>> def print_prog(prog):
...     for name, value in sorted(prog.block(0).vars.items()):
...         print(value)
...     for op in prog.block(0).ops:
...         print("op type is {}".format(op.type))
...         print("op inputs are {}".format(op.input_arg_names))
...         print("op outputs are {}".format(op.output_arg_names))
...         for key, value in sorted(op.all_attrs().items()):
...             if key not in ['op_callstack', 'op_role_var']:
...                 print(" [ attrs: {}: {} ]".format(key, value))
-
- To clone a test program, the sample code is:
-
>>> import paddle
>>> import paddle.static as static
>>> import paddle.utils as utils
>>> import paddle.nn.functional as F

>>> paddle.enable_static()

>>> def print_prog(prog):
...     for name, value in sorted(prog.block(0).vars.items()):
...         print(value)
...     for op in prog.block(0).ops:
...         print("op type is {}".format(op.type))
...         print("op inputs are {}".format(op.input_arg_names))
...         print("op outputs are {}".format(op.output_arg_names))
...         for key, value in sorted(op.all_attrs().items()):
...             if key not in ['op_callstack', 'op_role_var']:
...                 print(" [ attrs: {}: {} ]".format(key, value))

>>> train_program = static.Program()
>>> startup_program = static.Program()

>>> # startup_program is used to do some parameter init work,
>>> # and main program is used to hold the network
>>> with static.program_guard(train_program, startup_program):
...     with utils.unique_name.guard():
...         img = static.data(name='image', shape=[None, 784])
...         hidden = static.nn.fc(x=img, size=200, activation='relu')
...         hidden = F.dropout(hidden, p=0.5)
...         loss = F.cross_entropy(
...             input=static.nn.fc(x=hidden, size=10, activation='softmax'),
...             label=static.data(name='label', shape=[1], dtype='int64'))
...         avg_loss = paddle.mean(loss)
...         test_program = train_program.clone(for_test=True)
>>> print_prog(test_program)

>>> # Due to parameter sharing between train and test, we need to use the startup
>>> # program of train instead of the test startup program, which contains nothing.
>>> # In Paddle we share weights by using the same Tensor name. In the train and test
>>> # programs all parameters have the same name, which makes the two programs share
>>> # parameters; that's why we need to use the startup program of train. The startup
>>> # program of test has nothing in it, since it is a new program.

>>> with static.program_guard(train_program, startup_program):
...     with utils.unique_name.guard():
...         sgd = paddle.optimizer.SGD(learning_rate=1e-3)
...         sgd.minimize(avg_loss)
-
- The clone method can be avoided if you create the program for training and the program for testing individually.
-
>>> import paddle
>>> import paddle.static as static
>>> import paddle.utils as utils
>>> import paddle.nn.functional as F

>>> paddle.enable_static()

>>> def print_prog(prog):
...     for name, value in sorted(prog.block(0).vars.items()):
...         print(value)
...     for op in prog.block(0).ops:
...         print("op type is {}".format(op.type))
...         print("op inputs are {}".format(op.input_arg_names))
...         print("op outputs are {}".format(op.output_arg_names))
...         for key, value in sorted(op.all_attrs().items()):
...             if key not in ['op_callstack', 'op_role_var']:
...                 print(" [ attrs: {}: {} ]".format(key, value))

>>> def network():
...     img = static.data(name='image', shape=[None, 784])
...     hidden = static.nn.fc(x=img, size=200, activation='relu')
...     hidden = F.dropout(hidden, p=0.5)
...     loss = F.cross_entropy(
...         input=static.nn.fc(x=hidden, size=10, activation='softmax'),
...         label=static.data(name='label', shape=[1], dtype='int64'))
...     avg_loss = paddle.mean(loss)
...     return avg_loss

>>> train_program_2 = static.Program()
>>> startup_program_2 = static.Program()
>>> test_program_2 = static.Program()
>>> with static.program_guard(train_program_2, startup_program_2):
...     with utils.unique_name.guard():
...         avg_loss = network()
...         sgd = paddle.optimizer.SGD(learning_rate=1e-3)
...         sgd.minimize(avg_loss)

>>> # the test startup program is not used.
>>> with static.program_guard(test_program_2, startup_program_2):
...     with utils.unique_name.guard():
...         avg_loss = network()

>>> print_prog(test_program_2)
The two code snippets above will generate and print the same programs.
-
static parse_from_string(binary_str)
-
Note
All information about parameters will be lost after serialization.
This API has no effect in Dygraph mode.
Deserialize a Program from a protobuf binary string. This method is commonly used for saving and loading models.
- Parameters
-
binary_str (str) – the binary protobuf string.
- Returns
-
A deserialized Program.
- Return type
-
Program
Examples
>>> import paddle
>>> import paddle.static as static

>>> paddle.enable_static()

>>> startup_prog = static.Program()
>>> main_prog = static.Program()
>>> with static.program_guard(main_prog, startup_prog):
...     x = static.data(name='X', shape=[1000, 784], dtype='float32')
...     y = static.data(name='Y', shape=[784, 100], dtype='float32')
...     z = paddle.matmul(x=x, y=y)
...     binary_str = static.default_main_program().desc.serialize_to_string()
...     prog_restored = static.default_main_program().parse_from_string(binary_str)
...     print(static.default_main_program())
...     print(prog_restored)
- property num_blocks
-
The number of Blocks in this Program.
Note
This API has no effect in Dygraph mode.
- Returns
-
The number of Blocks in the current Program.
- Return type
-
int (platform-dependent size)
Examples
>>> import paddle
>>> import paddle.static as static

>>> paddle.enable_static()

>>> prog = static.default_main_program()
>>> num_blocks = prog.num_blocks
>>> print(num_blocks)
1
- property random_seed
-
The default random seed for random operators in this Program. 0 means the random seed is taken from a random device.
Note
It must be set before any operators have been added.
- Returns
-
Random seed in current Program
- Return type
-
int64
Examples
>>> import paddle
>>> import paddle.static as static
>>> import paddle.nn.functional as F

>>> paddle.enable_static()

>>> prog = static.default_main_program()
>>> random_seed = prog.random_seed
>>> x_var = static.data(name="X", shape=[3, 3], dtype="float32")
>>> print(random_seed)
0
>>> # the default random seed is 0

>>> # Here we need to set random seed before we use paddle.nn.functional.dropout
>>> prog.random_seed = 1
>>> z_var = F.dropout(x_var, 0.7)

>>> print(prog.random_seed)
1
>>> # the random seed is changed to 1
-
global_block()
-
Note
This API has no effect in Dygraph mode.
Get the first Block of this Program.
Examples
>>> import paddle
>>> import paddle.static as static

>>> paddle.enable_static()

>>> prog = static.default_main_program()
>>> gb_block = prog.global_block()
>>> print(gb_block)
-
block(index)
-
Note
This API has no effect in Dygraph mode.
Get the Block at the given index of this Program.
Examples
>>> import paddle
>>> import paddle.static as static

>>> paddle.enable_static()

>>> prog = static.default_main_program()
>>> block_0 = prog.block(0)
>>> print(block_0)
-
current_block()
-
Note
This API has no effect in Dygraph mode.
Get the current Block. The current Block is the Block to which operators are appended.
Examples
>>> import paddle
>>> import paddle.static as static

>>> paddle.enable_static()

>>> prog = static.default_main_program()
>>> current_blk = prog.current_block()
>>> print(current_blk)
-
list_vars()
-
Get all Tensors from this Program. An iterable object is returned.
- Returns
-
The generator will yield every Tensor in this Program.
- Return type
-
iterable Tensors
Examples
>>> import paddle
>>> import paddle.static as static

>>> paddle.enable_static()

>>> prog = static.default_main_program()
>>> img = static.data(name='img', shape=[None, 1, 28, 28], dtype='float32')
>>> label = static.data(name='label', shape=[None, 1], dtype='int64')
>>> for var in prog.list_vars():
...     print(var)

>>> # var img : LOD_TENSOR.shape(-1, 1, 28, 28).dtype(float32).stop_gradient(True)
>>> # var label : LOD_TENSOR.shape(-1, 1).dtype(int64).stop_gradient(True)
-
all_parameters()
-
Get all Model Parameters from this Program. A list object is returned.
- Returns
-
The list contains all parameters in this program.
- Return type
-
list[ Model Parameters ]
Examples
>>> import paddle
>>> import paddle.static as static

>>> paddle.enable_static()

>>> program = static.default_main_program()
>>> data = static.data(name='x', shape=[None, 13], dtype='float32')
>>> hidden = static.nn.fc(x=data, size=10)
>>> loss = paddle.mean(hidden)
>>> paddle.optimizer.SGD(learning_rate=0.01).minimize(loss)

>>> for param in program.all_parameters():
...     print(param)

>>> # This will print all parameters in the current program; in this example,
>>> # the result is like:
>>> #
>>> # persist trainable param fc_0.w_0 : LOD_TENSOR.shape(13, 10).dtype(float32).stop_gradient(False)
>>> # persist trainable param fc_0.b_0 : LOD_TENSOR.shape(10,).dtype(float32).stop_gradient(False)
>>> #
>>> # Here print(param) prints out all the properties of a parameter,
>>> # including name, type and persistable; you can access a specific
>>> # property of a parameter, such as param.name, param.type
-
state_dict(mode='all', scope=None)
-
Get parameters and persistable buffers of program as a dict. The key is the name of the parameter or the name of the buffer. The value is the tensor of this variable in the given scope.
Note
This function MUST be called after running the startup program.
- Parameters
-
mode (str, optional) – Source of the obtained parameters and buffers. 'opt': the return value only contains the variables in the optimizer. 'param': the return value only contains the variables in the network, not those in the optimizer. 'all': the return value contains the variables in both the network and the optimizer. Default: 'all'.
scope (Scope, optional) – If scope is None, the values are obtained from the global scope returned by paddle.static.global_scope(); otherwise they are obtained from the given scope. Default: None.
- Returns
-
A dict containing the parameters and persistable buffers.
- Return type
-
dict
Examples
>>> import paddle
>>> import paddle.static as static

>>> paddle.enable_static()

>>> x = static.data(name="x", shape=[10, 10], dtype='float32')
>>> y = static.nn.fc(x, 10)
>>> z = static.nn.fc(y, 10)

>>> place = paddle.CPUPlace()
>>> exe = static.Executor(place)
>>> exe.run(static.default_startup_program())
>>> prog = static.default_main_program()

>>> path = "./temp/model.pdparams"
>>> paddle.save(prog.state_dict(), path)
-
set_state_dict(state_dict, scope=None)
-
Set the parameters and persistable buffers in state_dict to this program. An exception will be thrown if the shape or dtype of the parameters does not match.
Note
This function MUST be called after running the startup program.
- Parameters
-
state_dict (dict) – the dict store parameters and persistable buffers. The key is the name of the parameter or the name of the buffer. The value is the tensor of this variable in the given scope.
scope (Scope, optional) – If scope is None, the values are set to the global scope returned by paddle.static.global_scope(); otherwise they are set to the given scope. Default: None.
- Returns
-
None
Examples
>>> import paddle
>>> import paddle.static as static

>>> paddle.enable_static()

>>> x = static.data(name="x", shape=[10, 10], dtype='float32')
>>> y = static.nn.fc(x, 10)
>>> z = static.nn.fc(y, 10)

>>> place = paddle.CPUPlace()
>>> exe = static.Executor(place)
>>> exe.run(static.default_startup_program())
>>> prog = static.default_main_program()

>>> path = "./temp/model.pdparams"
>>> paddle.save(prog.state_dict(), path)
>>> state_dict_load = paddle.load(path)
>>> prog.set_state_dict(state_dict_load)