Fleet¶
- class paddle.distributed.fleet.Fleet [source]
-
Unified API for distributed training of PaddlePaddle. Please refer to https://github.com/PaddlePaddle/FleetX for details.
- Returns
-
A Fleet instance
- Return type
-
Fleet
Example for collective training:
import paddle
paddle.enable_static()
import paddle.distributed.fleet as fleet

fleet.init(is_collective=True)

strategy = fleet.DistributedStrategy()
optimizer = paddle.optimizer.SGD(learning_rate=0.001)
optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)

# do distributed training
Example for parameter server training:
import paddle
paddle.enable_static()
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()
fleet.init(strategy=strategy)

optimizer = paddle.optimizer.SGD(learning_rate=0.001)
optimizer = fleet.distributed_optimizer(optimizer)

if fleet.is_first_worker():
    print("this is first worker")

print("current node index: {}".format(fleet.worker_index()))
print("total number of worker num: {}".format(fleet.worker_num()))

if fleet.is_worker():
    print("this is worker")
print("worker endpoints: {}".format(fleet.worker_endpoints(to_string=True)))

print("server num: {}".format(fleet.server_num()))
print("server endpoints: {}".format(fleet.server_endpoints(to_string=True)))

if fleet.is_server():
    print("this is server")
fleet.stop_worker()
-
init(role_maker=None, is_collective=False, strategy=None) [source]
init¶
-
Initialize role_maker in Fleet.
This function determines the distributed architecture that your code will run under.
- Parameters
-
role_maker (RoleMakerBase, optional) – A RoleMakerBase instance containing the configuration of environment variables related to distributed training. If you do not initialize the role maker yourself, it will be automatically initialized to PaddleCloudRoleMaker. The default value is None.
is_collective (bool, optional) – A boolean flag determining whether the program runs distributed training on CPU or GPU. False means distributed training on CPU, and True means GPU. The default value is False.
strategy (DistributedStrategy, optional) – Extra properties for distributed training. For details, please refer to paddle.distributed.fleet.DistributedStrategy. Default: None.
- Returns
-
None
Example 1:
import paddle.distributed.fleet as fleet
fleet.init()
Example 2:
import paddle.distributed.fleet as fleet
fleet.init(is_collective=True)
Example 3:
import paddle.distributed.fleet as fleet
role = fleet.PaddleCloudRoleMaker()
fleet.init(role)
Example 4:
import paddle.distributed.fleet as fleet
strategy = fleet.DistributedStrategy()
fleet.init(strategy=strategy)
-
is_first_worker() [source]
is_first_worker¶
-
Check whether the node is the first instance of worker.
- Returns
-
True if this is the first worker node, False if not.
- Return type
-
bool
Examples
import paddle.distributed.fleet as fleet

fleet.init()
fleet.is_first_worker()
-
worker_index() [source]
worker_index¶
-
Get current worker index.
- Returns
-
node id
- Return type
-
int
Examples
import paddle.distributed.fleet as fleet

fleet.init()
fleet.worker_index()
-
worker_num() [source]
worker_num¶
-
Get current total worker number.
- Returns
-
the total number of workers
- Return type
-
int
Examples
import paddle.distributed.fleet as fleet

fleet.init()
fleet.worker_num()
-
is_worker() [source]
is_worker¶
-
Check whether the node is an instance of worker.
- Returns
-
True if this is a worker node, False if not.
- Return type
-
bool
Examples
import paddle.distributed.fleet as fleet

fleet.init()
fleet.is_worker()
-
worker_endpoints(to_string=False) [source]
worker_endpoints¶
-
Get current worker endpoints, such as ["127.0.0.1:1001", "127.0.0.1:1002"].
- Returns
-
worker endpoints
- Return type
-
list/string
Examples
import paddle.distributed.fleet as fleet

fleet.init()
fleet.worker_endpoints()
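A brief note on the to_string argument, which explains the list/string return type above. This is a minimal sketch: the actual endpoint values depend on how the job is launched, and the comma-separated joining of the string form is an assumption based on the return type rather than something stated on this page.
import paddle.distributed.fleet as fleet

fleet.init()

# Default: a list of endpoints, e.g. ["127.0.0.1:1001", "127.0.0.1:1002"]
endpoints = fleet.worker_endpoints()

# With to_string=True: the same endpoints returned as a single string,
# e.g. "127.0.0.1:1001,127.0.0.1:1002"
endpoints_str = fleet.worker_endpoints(to_string=True)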
-
server_num() [source]
server_num¶
-
Get current total server number.
- Returns
-
server number
- Return type
-
int
Examples
import paddle.distributed.fleet as fleet

fleet.init()
fleet.server_num()
-
server_index() [source]
server_index¶
-
Get current server index.
- Returns
-
node id
- Return type
-
int
Examples
import paddle.distributed.fleet as fleet

fleet.init()
fleet.server_index()
-
server_endpoints(to_string=False) [source]
server_endpoints¶
-
Get current server endpoints, such as ["127.0.0.1:1001", "127.0.0.1:1002"].
- Returns
-
server endpoints
- Return type
-
list/string
Examples
import paddle.distributed.fleet as fleet

fleet.init()
fleet.server_endpoints()
-
is_server() [source]
is_server¶
-
Check whether the node is an instance of server.
- Returns
-
True if this is a server node, False if not.
- Return type
-
bool
Examples
import paddle.distributed.fleet as fleet

fleet.init()
fleet.is_server()
-
init_worker() [source]
init_worker¶
-
Initialize the Communicator for parameter server training.
- Returns
-
None
Examples
import paddle.distributed.fleet as fleet

fleet.init()

# build net
# fleet.distributed_optimizer(...)

fleet.init_worker()
-
init_server(*args, **kwargs) [source]
init_server¶
-
init_server runs the executor to initialize the startup program. If args is not empty, it will also run load_persistables for incremental training.
- Returns
-
None
Examples
import paddle.distributed.fleet as fleet

fleet.init()

# build net
# fleet.distributed_optimizer(...)

fleet.init_server()
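As described above, passing a non-empty argument triggers load_persistables for incremental training. A hedged sketch of that usage follows; the model directory path is a hypothetical placeholder, not a value taken from this page.
import paddle.distributed.fleet as fleet

fleet.init()

# build net
# fleet.distributed_optimizer(...)

# Passing a previously saved model directory loads persistable
# parameters before serving, for incremental training.
fleet.init_server("/path/to/saved/model")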
-
load_model(path, mode) [source]
load_model¶
-
Load the fleet model from the given path.
- Returns
-
None
Examples
import paddle.distributed.fleet as fleet

fleet.init()

# build net
# fleet.distributed_optimizer(...)

# mode is an integer flag selecting the load mode
fleet.load_model("path", mode=0)
-
run_server() [source]
run_server¶
-
run_server runs the parameter server main program with the executor.
- Returns
-
None
Examples
import paddle.distributed.fleet as fleet

fleet.init()

# build net
# fleet.distributed_optimizer(...)

if fleet.is_server():
    fleet.init_server()
    fleet.run_server()
-
stop_worker() [source]
stop_worker¶
-
Stop the Communicator and notify the parameter server that training is complete.
- Returns
-
None
Examples
import paddle.distributed.fleet as fleet

fleet.init()

# build net
# fleet.distributed_optimizer(...)

fleet.stop_worker()
-
save_inference_model(executor, dirname, feeded_var_names, target_vars, main_program=None, export_for_deployment=True, mode=0) [source]
save_inference_model¶
-
Save the inference model for inference deployment.
- Returns
-
None
Examples
import paddle
paddle.enable_static()
import paddle.distributed.fleet as fleet

fleet.init()

# build net
# fleet.distributed_optimizer(...)

exe = paddle.static.Executor(paddle.CPUPlace())
# the feed names and fetch variables below are placeholders for your network's variables
fleet.save_inference_model(exe, "dirname", ["feed_var_name"], [target_var])
-
save_persistables(executor, dirname, main_program=None, mode=0) [source]
save_persistables¶
-
Saves all persistable tensors from main_program to the folder dirname. The dirname is used to specify the folder where persistable tensors are going to be saved. If you would like to save tensors in separate files, set filename to None.
- Parameters
-
executor (Executor) – The executor to run for saving persistable tensors. You can refer to Executor for more details.
dirname (str, optional) – The saving directory path. When you need to save the parameters to memory, set it to None.
main_program (Program, optional) – The program whose persistable tensors will be saved. Default: None.
- Returns
-
None
Examples
import paddle
paddle.enable_static()
import paddle.distributed.fleet as fleet

fleet.init()

# build net
# fleet.distributed_optimizer(...)

exe = paddle.static.Executor(paddle.CPUPlace())
fleet.save_persistables(exe, "dirname", paddle.static.default_main_program())
-
distributed_optimizer(optimizer, strategy=None) [source]
distributed_optimizer¶
-
Optimizer for distributed training.
For distributed training, this method rebuilds a new instance of DistributedOptimizer, which has the basic Optimizer functionality plus special features for distributed training.
- Parameters
-
optimizer (Optimizer) – The optimizer to be used in distributed training.
strategy (DistributedStrategy) – Extra properties for distributed optimizer. It is recommended to use DistributedStrategy in fleet.init(). The strategy here is for compatibility. If the strategy in fleet.distributed_optimizer() is not None, then it will overwrite the DistributedStrategy in fleet.init(), which will take effect in distributed training.
- Returns
-
instance of fleet.
- Return type
-
Fleet
Examples
import paddle
import paddle.distributed.fleet as fleet

fleet.init(is_collective=True)

strategy = fleet.DistributedStrategy()
optimizer = paddle.optimizer.SGD(learning_rate=0.001)
optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)
-
distributed_model(model) [source]
distributed_model¶
-
Return a distributed data parallel model. (Only works in dygraph mode)
- Parameters
-
model (Layer) – the user-defined model which inherits Layer.
- Returns
-
distributed data parallel model which inherits Layer.
Examples
import paddle
import paddle.nn as nn
from paddle.distributed import fleet

class LinearNet(nn.Layer):
    def __init__(self):
        super(LinearNet, self).__init__()
        self._linear1 = nn.Linear(10, 10)
        self._linear2 = nn.Linear(10, 1)

    def forward(self, x):
        return self._linear2(self._linear1(x))

# 1. initialize fleet environment
fleet.init(is_collective=True)

# 2. create layer & optimizer
layer = LinearNet()
loss_fn = nn.MSELoss()
adam = paddle.optimizer.Adam(
    learning_rate=0.001, parameters=layer.parameters())

# 3. get data_parallel model using fleet
adam = fleet.distributed_optimizer(adam)
dp_layer = fleet.distributed_model(layer)

# 4. run layer
inputs = paddle.randn([10, 10], 'float32')
outputs = dp_layer(inputs)
labels = paddle.randn([10, 1], 'float32')
loss = loss_fn(outputs, labels)
print("loss:", loss.numpy())

loss.backward()
adam.step()
adam.clear_grad()
-
state_dict() [source]
state_dict¶
-
Get state dict information from the optimizer. (Only works in dygraph mode)
- Returns
-
a dict containing all the Tensors used by the optimizer
- Return type
-
state_dict(dict)
Examples
import numpy as np
import paddle
from paddle.distributed import fleet

fleet.init(is_collective=True)

value = np.arange(26).reshape(2, 13).astype("float32")
a = paddle.to_tensor(value)

layer = paddle.nn.Linear(13, 5)
adam = paddle.optimizer.Adam(learning_rate=0.01, parameters=layer.parameters())

adam = fleet.distributed_optimizer(adam)
dp_layer = fleet.distributed_model(layer)

state_dict = adam.state_dict()
-
set_state_dict(state_dict) [source]
set_state_dict¶
-
Load the optimizer state dict. (Only works in dygraph mode)
- Parameters
-
state_dict (dict) – Dict containing all the Tensors needed by the optimizer.
- Returns
-
None
Examples
import numpy as np
import paddle
from paddle.distributed import fleet

fleet.init(is_collective=True)

value = np.arange(26).reshape(2, 13).astype("float32")
a = paddle.to_tensor(value)

layer = paddle.nn.Linear(13, 5)
adam = paddle.optimizer.Adam(learning_rate=0.01, parameters=layer.parameters())

adam = fleet.distributed_optimizer(adam)
dp_layer = fleet.distributed_model(layer)

state_dict = adam.state_dict()
paddle.save(state_dict, "paddle_dy")

para_state_dict = paddle.load("paddle_dy")
adam.set_state_dict(para_state_dict)
-
set_lr(value) [source]
set_lr¶
-
Set the value of the learning rate manually in the optimizer. (Only works in dygraph mode)
- Parameters
-
value (float|Tensor) – the value of the learning rate
- Returns
-
None
Examples
import numpy as np
import paddle
from paddle.distributed import fleet

fleet.init(is_collective=True)

value = np.arange(26).reshape(2, 13).astype("float32")
a = paddle.to_tensor(value)

layer = paddle.nn.Linear(13, 5)
adam = paddle.optimizer.Adam(learning_rate=0.01, parameters=layer.parameters())

adam = fleet.distributed_optimizer(adam)
dp_layer = fleet.distributed_model(layer)

lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
    adam.set_lr(lr_list[i])
    lr = adam.get_lr()
    print("current lr is {}".format(lr))
# Print:
#    current lr is 0.2
#    current lr is 0.3
#    current lr is 0.4
#    current lr is 0.5
#    current lr is 0.6
-
get_lr() [source]
get_lr¶
-
Get the learning rate of the current step. (Only works in dygraph mode)
- Returns
-
The learning rate of the current step.
- Return type
-
float
Examples
import numpy as np
import paddle
from paddle.distributed import fleet

fleet.init(is_collective=True)

value = np.arange(26).reshape(2, 13).astype("float32")
a = paddle.to_tensor(value)

layer = paddle.nn.Linear(13, 5)
adam = paddle.optimizer.Adam(learning_rate=0.01, parameters=layer.parameters())

adam = fleet.distributed_optimizer(adam)
dp_layer = fleet.distributed_model(layer)

lr = adam.get_lr()
print(lr)  # 0.01
-
step() [source]
step¶
-
Execute the optimizer once. (Only works in dygraph mode)
- Returns
-
None
Examples
import paddle
import paddle.nn as nn
from paddle.distributed import fleet

class LinearNet(nn.Layer):
    def __init__(self):
        super(LinearNet, self).__init__()
        self._linear1 = nn.Linear(10, 10)
        self._linear2 = nn.Linear(10, 1)

    def forward(self, x):
        return self._linear2(self._linear1(x))

# 1. initialize fleet environment
fleet.init(is_collective=True)

# 2. create layer & optimizer
layer = LinearNet()
loss_fn = nn.MSELoss()
adam = paddle.optimizer.Adam(
    learning_rate=0.001, parameters=layer.parameters())

# 3. get data_parallel model using fleet
adam = fleet.distributed_optimizer(adam)
dp_layer = fleet.distributed_model(layer)

# 4. run layer
inputs = paddle.randn([10, 10], 'float32')
outputs = dp_layer(inputs)
labels = paddle.randn([10, 1], 'float32')
loss = loss_fn(outputs, labels)
print("loss:", loss.numpy())

loss.backward()
adam.step()
adam.clear_grad()
-
clear_grad() [source]
clear_grad¶
-
Clear the gradients of all optimized parameters of the model. (Only works in dygraph mode)
- Returns
-
None
Examples
import paddle
import paddle.nn as nn
from paddle.distributed import fleet

class LinearNet(nn.Layer):
    def __init__(self):
        super(LinearNet, self).__init__()
        self._linear1 = nn.Linear(10, 10)
        self._linear2 = nn.Linear(10, 1)

    def forward(self, x):
        return self._linear2(self._linear1(x))

# 1. initialize fleet environment
fleet.init(is_collective=True)

# 2. create layer & optimizer
layer = LinearNet()
loss_fn = nn.MSELoss()
adam = paddle.optimizer.Adam(
    learning_rate=0.001, parameters=layer.parameters())

# 3. get data_parallel model using fleet
adam = fleet.distributed_optimizer(adam)
dp_layer = fleet.distributed_model(layer)

# 4. run layer
inputs = paddle.randn([10, 10], 'float32')
outputs = dp_layer(inputs)
labels = paddle.randn([10, 1], 'float32')
loss = loss_fn(outputs, labels)
print("loss:", loss.numpy())

loss.backward()
adam.step()
adam.clear_grad()
-
get_loss_scaling()
get_loss_scaling¶
-
Return the real-time loss scaling factor.
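No example is given for this method above; the following is a minimal, hedged sketch of where the loss scaling factor comes from, assuming AMP is enabled through DistributedStrategy. The amp and amp_configs fields (and their keys) are taken from the DistributedStrategy documentation rather than from this page, and the final query is shown commented out because it is only meaningful after the network has been built and minimize has been called.
import paddle
paddle.enable_static()
import paddle.distributed.fleet as fleet

fleet.init(is_collective=True)

strategy = fleet.DistributedStrategy()
strategy.amp = True
strategy.amp_configs = {
    "init_loss_scaling": 32768.0,
    "use_dynamic_loss_scaling": True,
}

optimizer = paddle.optimizer.SGD(learning_rate=0.001)
optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)

# build net, then:
# optimizer.minimize(avg_cost)

# After minimize has been called, the current loss scaling factor
# can be queried from the distributed optimizer:
# loss_scaling = optimizer.get_loss_scaling()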
-
amp_init(place, scope=None, test_program=None, use_fp16_test=False)
amp_init¶
-
Initialize AMP training, e.g. cast fp32 parameters to fp16 type.
- Parameters
-
place (CUDAPlace) – place is used to initialize fp16 parameters with fp32 values.
scope (Scope) – The scope is used to find fp32 parameters.
test_program (Program) – The program is used for testing.
use_fp16_test (bool) – Whether to use fp16 testing.
Examples
import numpy as np
import paddle
import paddle.nn.functional as F
paddle.enable_static()

def run_example_code():
    place = paddle.CUDAPlace(0)
    exe = paddle.static.Executor(place)
    data = paddle.static.data(name='X', shape=[None, 1, 28, 28], dtype='float32')
    conv2d = paddle.static.nn.conv2d(input=data, num_filters=6, filter_size=3)

    # 1) Use fp16_guard to control the range of fp16 kernels used.
    with paddle.static.amp.fp16_guard():
        bn = paddle.static.nn.batch_norm(input=conv2d, act="relu")
        pool = F.max_pool2d(bn, kernel_size=2, stride=2)
        hidden = paddle.static.nn.fc(pool, size=10)
        loss = paddle.mean(hidden)

    # 2) Create the optimizer and set `multi_precision` to True.
    # Setting `multi_precision` to True helps avoid poor accuracy or slow convergence.
    optimizer = paddle.optimizer.Momentum(learning_rate=0.01, multi_precision=True)

    # 3) The ops in `custom_black_list` will be kept in the float32 computation type.
    amp_list = paddle.static.amp.CustomOpLists(
        custom_black_list=['pool2d'])

    # 4) The entry point of Paddle AMP.
    # Enable pure fp16 training by setting `use_pure_fp16` to True.
    optimizer = paddle.static.amp.decorate(
        optimizer,
        amp_list,
        init_loss_scaling=128.0,
        use_dynamic_loss_scaling=True,
        use_pure_fp16=True)

    # If you don't use the default_startup_program(), you should pass
    # your defined `startup_program` into `minimize`.
    optimizer.minimize(loss)
    exe.run(paddle.static.default_startup_program())

    # 5) Use `amp_init` after FP32 parameter initialization (such as `exe.run(startup_program)`).
    # If you want to perform the testing process, you should pass `test_program` into `amp_init`.
    optimizer.amp_init(place, scope=paddle.static.global_scope())

if paddle.is_compiled_with_cuda() and len(paddle.static.cuda_places()) > 0:
    run_example_code()
-
minimize(loss, startup_program=None, parameter_list=None, no_grad_set=None) [source]
minimize¶
-
Add distributed operations to minimize loss by updating parameter_list.
- Parameters
-
loss (Tensor) – A Tensor containing the value to minimize.
startup_program (Program, optional) – The Program for initializing parameters in parameter_list. The default value is None, in which case the default startup program will be used.
parameter_list (Iterable, optional) – Iterable of Tensor or Tensor.name to update in order to minimize loss. The default value is None, in which case all parameters will be updated.
no_grad_set (set, optional) – Set of Tensor or Tensor.name that don't need to be updated. The default value is None.
- Returns
-
tuple (optimize_ops, params_grads): a list of operators appended by minimize and a list of (param, grad) tensor pairs, where param is a Parameter and grad is the gradient value corresponding to the parameter. The returned tuple can be passed to fetch_list in Executor.run() to indicate program pruning. If so, the program will be pruned by feed and fetch_list before running; see details in Executor.
- Return type
-
tuple
Examples
import paddle
paddle.enable_static()
import paddle.distributed.fleet as fleet
import paddle.nn.functional as F

hid_dim = 10
label_dim = 2

input_x = paddle.static.data(name='x', shape=[None, 13], dtype='float32')
input_y = paddle.static.data(name='y', shape=[None, 1], dtype='int64')

fc_1 = paddle.static.nn.fc(x=input_x, size=hid_dim, activation='tanh')
fc_2 = paddle.static.nn.fc(x=fc_1, size=hid_dim, activation='tanh')
prediction = paddle.static.nn.fc(x=[fc_2], size=label_dim, activation='softmax')
cost = F.cross_entropy(input=prediction, label=input_y)
avg_cost = paddle.mean(x=cost)

fleet.init(is_collective=True)

strategy = fleet.DistributedStrategy()
optimizer = paddle.optimizer.SGD(learning_rate=0.001)
optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)
optimizer.minimize(avg_cost)

# for more examples, please refer to https://github.com/PaddlePaddle/FleetX