ipu_shard_guard
- paddle.static.ipu_shard_guard(index=-1, stage=-1) [source]
Used to shard the graph on IPUs. Specifies which IPU each Op runs on in sharding, and in which stage it runs in pipelining.
- Parameters
index (int, optional) – Specify which IPU the Tensor is computed on (such as 0, 1, 2, 3). The default value is -1, which means the Op runs only on IPU 0.
stage (int, optional) – Specify the computation order of the sharded model (such as 0, 1, 2, 3). The sharded model is computed from the smallest stage to the largest. The default value is -1, which means there is no pipelining order and Ops run in graph order.
Note
The 'index' can be set to a value other than -1 only if enable_manual_shard=True. The 'stage' can be set to a value other than -1 only if enable_pipelining=True. Please refer to IpuStrategy for both options. An index may be matched with no stage or with one stage, while a stage may only be matched with a new or an already-used index.
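Because both options above must first be enabled through IpuStrategy, a typical session configures the strategy before using the guard. The following is a minimal sketch, assuming a Paddle build with IPU support and the IpuStrategy methods set_graph_config and set_pipelining_config; the specific argument values (num_ipus=2, batches_per_step=4) are illustrative only:

```python
# Sketch: enabling manual sharding and pipelining via IpuStrategy
# (assumption: requires a Paddle build compiled with IPU support).
import paddle

paddle.enable_static()

ipu_strategy = paddle.static.IpuStrategy()
# enable_manual_shard=True lets ipu_shard_guard(index=...) take effect
ipu_strategy.set_graph_config(num_ipus=2, is_training=False,
                              enable_manual_shard=True)
# enable_pipelining=True lets ipu_shard_guard(stage=...) take effect
ipu_strategy.set_pipelining_config(enable_pipelining=True,
                                   batches_per_step=4)
```

The strategy would then be passed to the IPU compilation step so that the index and stage values set inside ipu_shard_guard are honored.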
Examples
>>> import paddle
>>> paddle.device.set_device('ipu')
>>> paddle.enable_static()
>>> a = paddle.static.data(name='data', shape=[None, 1], dtype='int32')
>>> with paddle.static.ipu_shard_guard(index=0, stage=0):
...     b = a + 1
>>> with paddle.static.ipu_shard_guard(index=1, stage=1):
...     c = b + 1
>>> with paddle.static.ipu_shard_guard(index=0, stage=2):
...     d = c + 1