fused_linear
- paddle.incubate.nn.functional.fused_linear(x, weight, bias=None, transpose_weight=False, name=None)
Fully-connected linear transformation operator. This method requires CUDA version >= 11.6.
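Conceptually, the operator fuses the matrix multiplication and the bias addition into a single kernel. As a minimal reference sketch (an assumption for illustration, not part of the official description), the result is expected to match the unfused matmul-plus-bias path:

>>> import paddle
>>> x = paddle.randn([3, 4])
>>> weight = paddle.randn([4, 5])
>>> bias = paddle.randn([5])
>>> # unfused reference: plain matmul followed by a broadcast bias add
>>> ref = paddle.matmul(x, weight) + bias
>>> print(ref.shape)
[3, 5]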
- Parameters
x (Tensor) – the input Tensor to be multiplied.
weight (Tensor) – the weight Tensor to be multiplied. Its rank must be 2.
bias (Tensor, optional) – the input bias Tensor. If it is None, no bias addition is performed. Otherwise, the bias is added to the matrix multiplication result. Default: None.
transpose_weight (bool, optional) – Whether to transpose weight before multiplication (see the sketch after the example below). Default: False.
name (str, optional) – For detailed information, please refer to Name. Usually name does not need to be set and is None by default.
- Returns
the output Tensor.
- Return type
Tensor
Examples
>>> import paddle
>>> from paddle.incubate.nn.functional import fused_linear
>>> paddle.set_device('gpu')
>>> x = paddle.randn([3, 4])
>>> weight = paddle.randn([4, 5])
>>> bias = paddle.randn([5])
>>> out = fused_linear(x, weight, bias)
>>> print(out.shape)
[3, 5]
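A hedged sketch of transpose_weight=True, assuming it simply transposes weight before the multiply as the parameter description states; here the weight is stored as [out_features, in_features]:

>>> import paddle
>>> from paddle.incubate.nn.functional import fused_linear
>>> paddle.set_device('gpu')
>>> x = paddle.randn([3, 4])
>>> weight_t = paddle.randn([5, 4])   # stored transposed: [out_features, in_features]
>>> bias = paddle.randn([5])
>>> # assumed equivalent to multiplying x by the transpose of weight_t, then adding bias
>>> out = fused_linear(x, weight_t, bias, transpose_weight=True)
>>> print(out.shape)
[3, 5]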