conv1d
paddle.nn.functional.conv1d(x, weight, bias=None, stride=1, padding=0, dilation=1, groups=1, data_format='NCL', name=None) [source]
The conv1d function calculates the output based on the input, filter, and the strides, paddings, dilations, and groups parameters. Input and output are in NCL format, where N is the batch size, C is the number of channels, and L is the length of the feature. The filter is in MCK format, where M is the number of output channels, C is the number of input channels, and K is the size of the kernel. If groups is greater than 1, C equals the number of input channels divided by groups. If a bias attribute and an activation type are provided, the bias is added to the output of the convolution, and the corresponding activation function is applied to the final result.
For each input \(X\), the equation is:
\[Out = \sigma (W \ast X + b)\]
Where:
\(X\): Input value, a tensor with NCL format.
\(W\): Kernel value, a tensor with MCK format.
\(\ast\): Convolution operation.
\(b\): Bias value, a 2-D tensor with shape [M, 1].
\(\sigma\): Activation function.
\(Out\): Output value, the shape of \(Out\) and \(X\) may be different.
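To make the formula concrete, here is a minimal, unofficial sketch (the sizes N, C_in, L_in, C_out, K and the random values are arbitrary choices for illustration) that recomputes the convolution with plain Python loops and compares it against paddle.nn.functional.conv1d:

import paddle
import paddle.nn.functional as F

paddle.seed(2024)
N, C_in, L_in = 1, 3, 8          # input X in NCL format
C_out, K = 2, 3                  # kernel W in MCK format
x = paddle.rand([N, C_in, L_in], dtype="float32")
w = paddle.rand([C_out, C_in, K], dtype="float32")
b = paddle.rand([C_out], dtype="float32")

ref = F.conv1d(x, w, bias=b)     # stride=1, padding=0, dilation=1

L_out = L_in - K + 1             # output length under the default settings
rows = []
for m in range(C_out):           # one output channel per filter
    row = []
    for l in range(L_out):
        window = x[0, :, l:l + K]                        # [C_in, K] slice of X
        row.append(float((window * w[m]).sum() + b[m]))  # W * X + b at one position
    rows.append(row)
manual = paddle.to_tensor([rows], dtype="float32")

print(paddle.allclose(ref, manual))   # expected: True (up to float32 rounding)

Note that \(\sigma\) is the identity here, since conv1d itself applies no activation.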
Example
Input:
Input shape: \((N, C_{in}, L_{in})\)
Filter shape: \((C_{out}, C_{in}, L_f)\)
Output:
Output shape: \((N, C_{out}, L_{out})\)
Where
\[L_{out} = \frac{(L_{in} + 2 * padding - (dilation * (L_f - 1) + 1))}{stride} + 1\]
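For instance, with the values used in the Examples section below (\(L_{in} = 4\), \(L_f = 3\), padding = 0, dilation = 1, stride = 1):
\[L_{out} = \frac{(4 + 2 * 0 - (1 * (3 - 1) + 1))}{1} + 1 = 1 + 1 = 2\]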
Parameters
x (Tensor) – The input is a 3-D Tensor with shape [N, C, L]; the data type of the input is float16, float32 or float64.
weight (Tensor) – The convolution kernel with shape [M, C/g, K], where M is the number of output channels, g is the number of groups, K is the kernel’s size.
bias (Tensor, optional) – The bias with shape [M,]. Default: None.
stride (int|list|tuple, optional) – The stride size. If stride is a list/tuple, it must contain one integer, (stride_size). Default: 1.
padding (int|str|tuple|list, optional) – The padding size. Padding could be in one of the following forms:
1. a string in ['valid', 'same'];
2. an int, which means the feature map is zero padded by the size of padding on both sides;
3. a list[int] or tuple[int] whose length is 1, which means the feature map is zero padded by the size of padding[0] on both sides;
4. a list[int] or tuple[int] whose length is 2, in the form [pad_before, pad_after];
5. a list or tuple of pairs of ints, in the form [[pad_before, pad_after], [pad_before, pad_after], ...]. Note that the batch dimension and channel dimension are also included; each pair of integers corresponds to the amount of padding for a dimension of the input, and padding in the batch and channel dimensions must be [0, 0] or (0, 0).
The default value is 0. Several of these forms are illustrated in the sketch after this parameter list.
dilation (int|list|tuple, optional) – The dilation size. If dilation is a list/tuple, it must contain one integer, (dilation_size). Default: 1.
groups (int, optional) – The groups number of the conv1d function. According to grouped convolution in Alex Krizhevsky's Deep CNN paper: when groups=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels (also illustrated in the sketch after this list). Default: 1.
data_format (str, optional) – Specify the data format of the input, and the data format of the output will be consistent with that of the input. An optional string from: “NCL”, “NLC”. The default is “NCL”. When it is “NCL”, the data is stored in the order of: [batch_size, input_channels, feature_length].
name (str, optional) – For detailed information, please refer to Name. Generally, there is no need to set this parameter; it is None by default.
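The padding forms and the groups argument can be illustrated with a short, unofficial sketch (the shapes below are arbitrary; the expected output lengths in the comments follow from the \(L_{out}\) formula above):

import paddle
import paddle.nn.functional as F

x = paddle.rand([1, 4, 10], dtype="float32")   # NCL: batch 1, 4 channels, length 10
w = paddle.rand([8, 4, 3], dtype="float32")    # 8 filters, kernel size 3

# Equivalent spellings of symmetric padding of 1 (forms 2-4 above):
print(F.conv1d(x, w, padding=1).shape)         # [1, 8, 10]
print(F.conv1d(x, w, padding=[1]).shape)       # [1, 8, 10]
print(F.conv1d(x, w, padding=[1, 1]).shape)    # [1, 8, 10]

# String forms: 'same' keeps the length for stride=1, 'valid' adds no padding.
print(F.conv1d(x, w, padding='same').shape)    # [1, 8, 10]
print(F.conv1d(x, w, padding='valid').shape)   # [1, 8, 8]

# Grouped convolution: with groups=2 the kernel's channel dimension is C/g = 2,
# and each half of the 8 filters sees only half of the input channels.
w_grouped = paddle.rand([8, 2, 3], dtype="float32")
print(F.conv1d(x, w_grouped, padding=1, groups=2).shape)   # [1, 8, 10]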
Returns
A tensor representing the conv1d result, whose data type is the same as the input's.
Examples
>>> import paddle
>>> import paddle.nn.functional as F

>>> x = paddle.to_tensor([[[4, 8, 1, 9],
...                        [7, 2, 0, 9],
...                        [6, 9, 2, 6]]], dtype="float32")
>>> w = paddle.to_tensor([[[9, 3, 4],
...                        [0, 0, 7],
...                        [2, 5, 6]],
...                       [[0, 3, 4],
...                        [2, 9, 7],
...                        [5, 6, 8]]], dtype="float32")

>>> y = F.conv1d(x, w)
>>> print(y)
Tensor(shape=[1, 2, 2], dtype=float32, place=Place(cpu), stop_gradient=True,
[[[133., 238.],
  [160., 211.]]])
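As an additional sketch (this variant is not part of the original example; it only rearranges the tensors defined above), the same input and kernel can be fed through the NLC layout described under data_format. The output length is still 2, but the length axis now precedes the channel axis:

>>> x_nlc = paddle.transpose(x, perm=[0, 2, 1])     # reorder NCL -> NLC, shape [1, 4, 3]
>>> y_nlc = F.conv1d(x_nlc, w, data_format='NLC')   # output is also NLC: [N, L_out, C_out]
>>> print(y_nlc.shape)
[1, 2, 2]

The values are the same as in y, with the channel and length axes swapped.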