conv2d_transpose
- paddle.static.nn.conv2d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, data_format='NCHW') [source]
The 2-D convolution transpose layer calculates the output based on the input, filter, dilations, strides, and paddings. Input and output are in NCHW or NHWC format, where N is the batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. The parameters dilations, strides, and paddings each contain two elements, which represent the height and width dimensions, respectively. For details of the convolution transpose layer, please refer to the explanation below and the references therein. If a bias attribute and an activation type are provided, the bias is added to the output of the convolution and the corresponding activation function is applied to the final result.
For each input \(X\), the equation is:
\[Out = \sigma (W \ast X + b)\]
Where:
\(X\): Input value, a 4-D Tensor with NCHW or NHWC format.
\(W\): Filter value, a 4-D Tensor with MCHW format.
\(\ast\): Convolution operation.
\(b\): Bias value, a 2-D Tensor with shape [M, 1].
\(\sigma\): Activation function.
\(Out\): Output value, a 4-D Tensor with data format ‘NCHW’ or ‘NHWC’, the shape of \(Out\) and \(X\) may be different.
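As an informal check of the equation above, the following is a minimal NumPy sketch (an assumption for illustration, not Paddle's actual kernel) of a transposed convolution for a single example with one group, stride 1, no padding, and no dilation; the weight layout (C_in, C_out, H_f, W_f) follows the Filter shape in the Example below.
>>> import numpy as np
>>> def conv2d_transpose_ref(x, w, b):
...     # x: (C_in, H_in, W_in), w: (C_in, C_out, H_f, W_f), b: (C_out,)
...     c_in, h_in, w_in = x.shape
...     _, c_out, h_f, w_f = w.shape
...     # scatter-add each input value times the kernel into the output
...     out = np.zeros((c_out, h_in + h_f - 1, w_in + w_f - 1))
...     for ci in range(c_in):
...         for i in range(h_in):
...             for j in range(w_in):
...                 out[:, i:i + h_f, j:j + w_f] += x[ci, i, j] * w[ci]
...     return out + b[:, None, None]
>>> conv2d_transpose_ref(np.ones((3, 32, 32)), np.ones((3, 2, 3, 3)), np.zeros(2)).shape
(2, 34, 34)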
Example
Input:
Input shape: \((N, C_{in}, H_{in}, W_{in})\)
Filter shape: \((C_{in}, C_{out}, H_f, W_f)\)
Output:
Output shape: \((N, C_{out}, H_{out}, W_{out})\)
Where
\[\begin{split}H^\prime_{out} &= (H_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (H_f - 1) + 1 \\ W^\prime_{out} &= (W_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (W_f - 1) + 1 \\ H_{out} &\in [ H^\prime_{out}, H^\prime_{out} + strides[0] ] \\ W_{out} &\in [ W^\prime_{out}, W^\prime_{out} + strides[1] ]\end{split}\]
If padding = “SAME”:
\[\begin{split}H^\prime_{out} &= \frac{(H_{in} + stride[0] - 1)}{stride[0]} \\ W^\prime_{out} &= \frac{(W_{in} + stride[1] - 1)}{stride[1]}\end{split}\]
If padding = “VALID”:
\[\begin{split}H^\prime_{out} &= (H_{in} - 1) * strides[0] + dilations[0] * (H_f - 1) + 1 \\ W^\prime_{out} &= (W_{in} - 1) * strides[1] + dilations[1] * (W_f - 1) + 1\end{split}\]
If output_size is None, \(H_{out} = H^\prime_{out}, W_{out} = W^\prime_{out}\); otherwise, \(H_{out}\) of the output size must be between \(H^\prime_{out}\) and \(H^\prime_{out} + strides[0]\), and \(W_{out}\) of the output size must be between \(W^\prime_{out}\) and \(W^\prime_{out} + strides[1]\).
Because transposed convolution can be treated as the inverse of convolution, and because, according to the input-output formula of convolution, input feature maps of different sizes may correspond to an output feature map of the same size, the output size of a transposed convolution for a fixed input size is not unique.
If output_size is specified, conv2d_transpose can compute the kernel size automatically.
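To make the output-size rule concrete, here is a minimal sketch in plain Python (not part of the Paddle API; the helper name and its defaults are assumptions) that evaluates the explicit-padding formula above for one spatial dimension and returns the range of valid output sizes.
>>> # illustrative helper (plain Python, not a Paddle API): evaluates the
>>> # explicit-padding formula above for one spatial dimension
>>> def deconv_out_size_range(in_size, filter_size, stride=1, padding=0, dilation=1):
...     out_min = (in_size - 1) * stride - 2 * padding + dilation * (filter_size - 1) + 1
...     # output_size may lie anywhere in [out_min, out_min + stride]
...     return out_min, out_min + stride
>>> deconv_out_size_range(32, 3)
(34, 35)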
- Parameters
input (Tensor) – 4-D Tensor with [N, C, H, W] or [N, H, W, C] format where N is the batch_size, C is the input_channels, H is the input_height and W is the input_width. Its data type is float32 or float64.
num_filters (int) – The number of filters. It is the same as the number of output image channels.
output_size (int|tuple, optional) – The output image size. If output_size is a tuple, it must contain two integers, (image_height, image_width). If output_size is None, filter_size, padding, and stride are used to calculate the output size. If output_size and filter_size are specified at the same time, they should satisfy the formulas above. Default: None. output_size and filter_size should not both be None.
filter_size (int|tuple, optional) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_height, filter_size_width). Otherwise, filter_size_height = filter_size_width = filter_size. If filter_size is None, output_size is used to calculate the filter size. Default: None. filter_size and output_size should not both be None.
padding (str|int|list|tuple, optional) – The padding size. It means the number of zero-paddings on both sides for each dimension. If padding is a string, it must be either ‘VALID’ or ‘SAME’, which is the padding algorithm. If padding is a tuple or list, it can take one of three forms: (1) it contains four [pad_before, pad_after] pairs: when data_format is “NCHW”, padding can be in the form [[0,0], [0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right]]; when data_format is “NHWC”, padding can be in the form [[0,0], [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right], [0,0]]. (2) It contains four integers: [pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]. (3) It contains two integers: [pad_height, pad_width], in which case padding_height_top = padding_height_bottom = padding_height and padding_width_left = padding_width_right = padding_width. If padding is an integer, padding_height = padding_width = padding. Default: padding = 0. See the sketch after this parameter list for the integer and list forms.
stride (int|tuple, optional) – The stride size. It means the stride in transposed convolution. If stride is a tuple, it must contain two integers, (stride_height, stride_width). Otherwise, stride_height = stride_width = stride. Default: stride = 1.
dilation (int|tuple, optional) – The dilation size. It means the spacing between the kernel points. If dilation is a tuple, it must contain two integers, (dilation_height, dilation_width). Otherwise, dilation_height = dilation_width = dilation. Default: dilation = 1.
groups (int, optional) – The number of groups of the conv2d transpose layer. Inspired by grouped convolution in Alex Krizhevsky’s Deep CNN paper, in which, when group=2, the first half of the filters is connected only to the first half of the input channels, while the second half of the filters is connected only to the second half of the input channels. Default: groups = 1.
param_attr (ParamAttr, optional) – The parameter attribute for learnable parameters/weights of conv2d_transpose. If it is set to None or one attribute of ParamAttr, conv2d_transpose will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|bool, optional) – Specifies the object for the bias parameter attribute. The default value is None, which means that the default bias parameter attribute is used. For detailed information, please refer to ParamAttr. The default bias initialization for the conv2d_transpose operator is 0.0.
use_cudnn (bool, optional) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True.
act (str, optional) – Activation type, if it is set to None, activation is not appended. Default: None.
name (str, optional) – For detailed information, please refer to Name. Usually the name does not need to be set, and it is None by default.
data_format (str, optional) – Specify the data format of the input, and the data format of the output will be consistent with that of the input. An optional string from: “NCHW”, “NHWC”. The default is “NCHW”. When it is “NCHW”, the data is stored in the order of: [batch_size, input_channels, input_height, input_width].
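As a hedged illustration of the integer and list padding forms described above (the variable names and concrete sizes are assumptions made for the example, not requirements):
>>> import paddle
>>> paddle.enable_static()
>>> x = paddle.static.data(name='x', shape=[None, 3, 32, 32], dtype='float32')
>>> # single integer: pad_height = pad_width = 1
>>> y_a = paddle.static.nn.conv2d_transpose(input=x, num_filters=4, filter_size=3, padding=1)
>>> # two integers: [pad_height, pad_width]
>>> y_b = paddle.static.nn.conv2d_transpose(input=x, num_filters=4, filter_size=3, padding=[1, 2])
>>> # four integers: [pad_height_top, pad_height_bottom, pad_width_left, pad_width_right]
>>> y_c = paddle.static.nn.conv2d_transpose(input=x, num_filters=4, filter_size=3, padding=[1, 1, 2, 2])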
- Returns
A Tensor representing the conv2d_transpose result, whose data type is the same as the input and whose shape is (num_batches, channels, out_h, out_w) or (num_batches, out_h, out_w, channels). If act is None, the tensor stores the transposed convolution result; if act is not None, the tensor stores the result of the transposed convolution followed by the non-linear activation.
- Raises
ValueError – If the type of use_cudnn is not bool.
ValueError – If data_format is not “NCHW” or “NHWC”.
ValueError – If padding is a string, but not “SAME” or “VALID”.
ValueError – If padding is a tuple, but the element corresponding to the input’s batch size is not 0 or the element corresponding to the input’s channel is not 0.
ValueError – If output_size and filter_size are None at the same time.
ShapeError – If the input is not 4-D Tensor.
ShapeError – If the input’s dimension size and the filter’s dimension size are not equal.
ShapeError – If the dimension size of input minus the size of stride is not 2.
ShapeError – If the number of input channels is not equal to filter’s channels.
ShapeError – If the size of output_size is not equal to that of stride.
Examples
>>> import paddle
>>> paddle.enable_static()
>>> data = paddle.static.data(name='data', shape=[None, 3, 32, 32], dtype='float32')
>>> conv2d_transpose = paddle.static.nn.conv2d_transpose(input=data, num_filters=2, filter_size=3)
>>> print(conv2d_transpose.shape)
(-1, 2, 34, 34)
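A hedged variant of the example above: instead of filter_size, a desired output_size can be given and the kernel size is derived automatically. The sizes used here are illustrative and must satisfy the range formulas above.
>>> out = paddle.static.nn.conv2d_transpose(input=data, num_filters=2, output_size=(34, 34), stride=1)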