batch_norm¶
- paddle.nn.functional.batch_norm(x, running_mean, running_var, weight, bias, training=False, momentum=0.9, epsilon=1e-05, data_format='NCHW', name=None)
It is recommended to use nn.BatchNorm1D, nn.BatchNorm2D, or nn.BatchNorm3D, which call this function internally.
See BatchNorm1D for details.
Parameters¶
x (Tensor) - The input Tensor; the data type is float32 or float64.
running_mean (Tensor) - Tensor of running means.
running_var (Tensor) - Tensor of running variances.
weight (Tensor) - Weight Tensor.
bias (Tensor) - Bias Tensor.
training (bool, optional) - Whether the operator runs in training mode; if False, the running statistics are used for normalization. Default: False.
momentum (float, optional) - Value used to update moving_mean and moving_var. Default: 0.9. See BatchNorm1D for the update formula.
epsilon (float, optional) - Value added to the denominator for numerical stability. Default: 1e-05.
data_format (string, optional) - Specifies the format of the input data; it can be "NC", "NCL", "NCHW", or "NCDHW". Default: "NCHW".
name (str, optional) - For details, please refer to Name. Generally, no setting is required. Default: None.
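As a minimal sketch of the role of momentum described above: in training mode, each running statistic is a weighted blend of its previous value and the statistic of the current mini-batch. The function name `update_running_stat` below is illustrative, not part of the Paddle API.

```python
# Illustrative only: the exponential-moving-average update driven by
# `momentum`. A momentum close to 1 makes the running statistic change slowly.
def update_running_stat(running, batch_stat, momentum=0.9):
    # New running value = momentum * old value + (1 - momentum) * batch value
    return momentum * running + (1.0 - momentum) * batch_stat

running_mean = 0.0
for batch_mean in [1.0, 1.0, 1.0]:
    running_mean = update_running_stat(running_mean, batch_mean)
print(round(running_mean, 3))  # prints 0.271: drifting toward the batch mean
```

With momentum=0.9, three identical batches move the running mean only about a quarter of the way toward the batch statistic, which is why inference-time statistics stabilize slowly.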
Returns¶
Tensor, the output of batch normalization, with the same shape and data type as x.
Code Example¶
import paddle
x = paddle.arange(12, dtype="float32").reshape([2, 1, 2, 3])
print(x)
# Tensor(shape=[2, 1, 2, 3], dtype=float32, place=Place(gpu:0), stop_gradient=True,
# [[[[0. , 1. , 2. ],
# [3. , 4. , 5. ]]],
# [[[6. , 7. , 8. ],
# [9. , 10., 11.]]]])
running_mean = paddle.to_tensor([0], dtype="float32")
running_variance = paddle.to_tensor([1], dtype="float32")
weight = paddle.to_tensor([2], dtype="float32")
bias = paddle.to_tensor([1], dtype="float32")
# With training=False (the default), the running statistics are used directly:
# out = (x - running_mean) / sqrt(running_var + epsilon) * weight + bias
batch_norm_out = paddle.nn.functional.batch_norm(x, running_mean,
                                                 running_variance, weight, bias)
print(batch_norm_out)
# Tensor(shape=[2, 1, 2, 3], dtype=float32, place=Place(gpu:0), stop_gradient=True,
# [[[[1. , 2.99998999 , 4.99997997 ],
# [6.99996948 , 8.99995995 , 10.99994946]]],
# [[[12.99993896, 14.99992943, 16.99991989],
# [18.99990845, 20.99989891, 22.99988937]]]])
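The inference-mode arithmetic in the example above can be checked by hand. The sketch below (plain Python, no Paddle; the helper name `batch_norm_scalar` is hypothetical) applies the same formula to a single element with the example's running_mean=0, running_var=1, weight=2, bias=1:

```python
import math

# Hedged sketch of batch_norm's inference-mode formula on one scalar:
# (x - mean) / sqrt(var + epsilon) * weight + bias
def batch_norm_scalar(x, mean=0.0, var=1.0, w=2.0, b=1.0, eps=1e-5):
    return (x - mean) / math.sqrt(var + eps) * w + b

out = batch_norm_scalar(1.0)
print(round(out, 5))  # prints 2.99999, matching the 2.99998999 in the output above
```

The small deviation from exactly 3.0 comes from epsilon=1e-05 in the denominator, which slightly shrinks the effective scale.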