prelu
- paddle.static.nn.prelu(x, mode, param_attr=None, data_format='NCHW', name=None)
prelu activation.
\[prelu(x) = \max(0, x) + \alpha \cdot \min(0, x)\]

There are three modes for the activation (illustrated in the sketch below):

- all: All elements share the same alpha.
- channel: Elements in the same channel share the same alpha.
- element: Each element has its own alpha.
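As a hedged illustration (not Paddle's internal implementation), the following NumPy sketch shows how alpha broadcasts against an NCHW input in each mode; the helper name, shapes, and values are illustrative assumptions:

import numpy as np

def prelu_ref(x, alpha):
    # prelu(x) = max(0, x) + alpha * min(0, x)
    return np.maximum(0.0, x) + alpha * np.minimum(0.0, x)

x = np.random.randn(2, 3, 4, 4).astype("float32")      # N=2, C=3, H=4, W=4

alpha_all = np.float32(0.25)                            # 'all': one scalar shared everywhere
alpha_channel = np.full((1, 3, 1, 1), 0.25, "float32")  # 'channel': one alpha per channel
alpha_element = np.full(x.shape, 0.25, "float32")       # 'element': one alpha per element

out = prelu_ref(x, alpha_channel)                       # same shape as x: (2, 3, 4, 4)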
- Parameters
x (Tensor) – The input Tensor or LoDTensor with data type float32.
mode (str) – The mode for weight sharing. It must be 'all', 'channel', or 'element'.
param_attr (ParamAttr|None, optional) – The parameter attribute for the learnable weight alpha. For detailed information, please refer to api_fluid_ParamAttr. Default: None. See the sketch after this list for example usage.
data_format (str, optional) – Data format that specifies the layout of the input. It may be "NC", "NCL", "NCHW", "NCDHW", "NLC", "NHWC" or "NDHWC". Default: "NCHW".
name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
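For example, a minimal static-graph sketch of configuring param_attr and data_format (the parameter name, input shape, and initializer value are assumptions for illustration):

import paddle

paddle.enable_static()

# Placeholder input in NCHW layout; the channel count (3) is fixed
# here so that 'channel' mode can size one alpha per channel.
x = paddle.static.data(name="x", shape=[None, 3, 32, 32], dtype="float32")

# Learnable alpha initialized to a constant 0.25 (illustrative value).
param = paddle.ParamAttr(
    name="prelu_alpha",
    initializer=paddle.nn.initializer.Constant(0.25),
)
out = paddle.static.nn.prelu(x, mode="channel", param_attr=param, data_format="NCHW")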
- Returns
A tensor with the same shape and data type as x.
- Return type
Tensor
Examples
import paddle

x = paddle.to_tensor([-1., 2., 3.])
param = paddle.ParamAttr(initializer=paddle.nn.initializer.Constant(0.2))
out = paddle.static.nn.prelu(x, 'all', param)
# out: [-0.2, 2., 3.]
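With alpha fixed at 0.2 by the constant initializer, the negative entry evaluates to max(0, -1) + 0.2 * min(0, -1) = -0.2, while the positive entries pass through unchanged.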