LogSoftmax
class paddle.nn.LogSoftmax(axis=-1, name=None)
This operator implements the log_softmax layer. The calculation process is as follows:
\[\begin{split}\begin{array}{rcl}
Out[i, j] &=& \log(\mathrm{softmax}(x)) \\
          &=& \log\left(\frac{\exp(X[i, j])}{\sum_j \exp(X[i, j])}\right)
\end{array}\end{split}\]

Parameters
- axis (int, optional) – The axis along which to perform log_softmax calculations. It should be in the range [-D, D), where D is the number of dimensions of the input Tensor. If axis < 0, it works the same way as \(axis + D\). Default is -1.
- name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
Shape:
- input: Tensor with any shape.
- output: Tensor with the same shape as input.
Examples
```python
import paddle

x = [[[-2.0, 3.0, -4.0, 5.0],
      [3.0, -4.0, 5.0, -6.0],
      [-7.0, -8.0, 8.0, 9.0]],
     [[1.0, -2.0, -3.0, 4.0],
      [-5.0, 6.0, 7.0, -8.0],
      [6.0, 7.0, 8.0, 9.0]]]

m = paddle.nn.LogSoftmax()
x = paddle.to_tensor(x)
out = m(x)
# [[[ -7.1278396   -2.1278396   -9.127839    -0.12783948]
#   [ -2.1270514   -9.127051    -0.12705144 -11.127051  ]
#   [-16.313261   -17.313261    -1.3132617   -0.31326184]]
#  [[ -3.0518122   -6.051812    -7.051812    -0.051812  ]
#   [-12.313267    -1.3132664   -0.3132665  -15.313267  ]
#   [ -3.4401896   -2.4401896   -1.4401896   -0.44018966]]]
```
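Since log_softmax(x) is mathematically x - logsumexp(x) along the reduced axis, the output can be cross-checked with a short sketch. This sketch assumes paddle.logsumexp and paddle.allclose are available, as in Paddle 2.x:

```python
import paddle

x = paddle.to_tensor([[-2.0, 3.0, -4.0, 5.0],
                      [3.0, -4.0, 5.0, -6.0]])

m = paddle.nn.LogSoftmax(axis=-1)
out = m(x)

# Equivalent formulation of the formula above; keepdim=True keeps the
# reduced axis so the result broadcasts against x.
ref = x - paddle.logsumexp(x, axis=-1, keepdim=True)

print(paddle.allclose(out, ref))  # Tensor([True])
```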
forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses.
Parameters
- *inputs (tuple) – unpacked tuple arguments
- **kwargs (dict) – unpacked dict arguments
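In normal use, forward is not called directly: calling the layer instance dispatches to it. A minimal sketch (the direct forward call is shown only to illustrate the dispatch; it bypasses any registered layer hooks):

```python
import paddle

m = paddle.nn.LogSoftmax(axis=-1)
x = paddle.to_tensor([[1.0, 2.0, 3.0]])

out1 = m(x)          # idiomatic: calling the layer dispatches to forward
out2 = m.forward(x)  # direct call; bypasses layer hooks

print(paddle.allclose(out1, out2))  # Tensor([True])
```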
extra_repr()
Extra representation of this layer. You can provide a custom implementation in your own layer.
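A minimal sketch of how extra_repr is typically overridden (ScaledLogSoftmax below is a hypothetical layer, not part of Paddle): a subclass of paddle.nn.Layer returns a string from extra_repr, and that string is embedded in the layer's printed representation.

```python
import paddle

class ScaledLogSoftmax(paddle.nn.Layer):
    """Hypothetical layer: LogSoftmax followed by a constant scale."""

    def __init__(self, axis=-1, scale=1.0):
        super().__init__()
        self.axis = axis
        self.scale = scale
        self.log_softmax = paddle.nn.LogSoftmax(axis=axis)

    def forward(self, x):
        return self.scale * self.log_softmax(x)

    def extra_repr(self):
        # The returned string is embedded in the printed representation.
        return 'axis={}, scale={}'.format(self.axis, self.scale)

layer = ScaledLogSoftmax(axis=1, scale=2.0)
print(layer)  # the printed repr includes "axis=1, scale=2.0"
```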