all_gather
paddle.distributed.all_gather(tensor_list, tensor, group=None, use_calc_stream=True)
Gather tensors from all participants, so that every participant receives the gathered result.
Parameters
tensor_list (list) – A list of output Tensors. Every element in the list must be a Tensor whose data type should be float16, float32, float64, int32 or int64.
tensor (Tensor) – The Tensor to send. Its data type should be float16, float32, float64, int32 or int64.
group (Group, optional) – The group instance returned by new_group, or None for the global default group. Default is None.
use_calc_stream (bool, optional) – Whether to use the calculation stream (True) or the communication stream (False). Defaults to True.
Returns
None.
Examples
import numpy as np
import paddle
from paddle.distributed import init_parallel_env

paddle.set_device('gpu:%d' % paddle.distributed.ParallelEnv().dev_id)
init_parallel_env()
tensor_list = []
if paddle.distributed.ParallelEnv().local_rank == 0:
    np_data1 = np.array([[4, 5, 6], [4, 5, 6]])
    np_data2 = np.array([[4, 5, 6], [4, 5, 6]])
    data1 = paddle.to_tensor(np_data1)
    data2 = paddle.to_tensor(np_data2)
    # rank 0 contributes data1
    paddle.distributed.all_gather(tensor_list, data1)
else:
    np_data1 = np.array([[1, 2, 3], [1, 2, 3]])
    np_data2 = np.array([[1, 2, 3], [1, 2, 3]])
    data1 = paddle.to_tensor(np_data1)
    data2 = paddle.to_tensor(np_data2)
    # every other rank contributes data2
    paddle.distributed.all_gather(tensor_list, data2)
# after the call, tensor_list on every rank holds one tensor per rank, in rank order
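For illustration, a minimal sketch (not part of the original example) of passing the group argument explicitly, assuming exactly two GPUs launched via python -m paddle.distributed.launch; the rank ids and group membership below are assumptions for this sketch:

import paddle
from paddle.distributed import init_parallel_env

paddle.set_device('gpu:%d' % paddle.distributed.ParallelEnv().dev_id)
init_parallel_env()

# assumed: a two-process launch, so this group happens to match the
# default group; the point is only to show how a Group instance is passed
group = paddle.distributed.new_group(ranks=[0, 1])

data = paddle.to_tensor([paddle.distributed.ParallelEnv().local_rank])
tensor_list = []
paddle.distributed.all_gather(tensor_list, data, group=group)
# every rank in the group sees the same result, ordered by rank:
# tensor_list == [Tensor([0]), Tensor([1])]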