log_softmax
paddle.nn.functional.log_softmax(x, axis=-1, dtype=None, name=None)
This operator implements the log_softmax layer. The calculation process is as follows:
\[\begin{split}\begin{aligned} log\_softmax[i, j] &= log(softmax(x)) \\ &= log\left(\frac{\exp(X[i, j])}{\sum_j \exp(X[i, j])}\right) \end{aligned}\end{split}\]
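For reference, the same computation can be written as a minimal NumPy sketch (using the standard max-shifted form for numerical stability; the function name is illustrative, not part of the Paddle API):

import numpy as np

def log_softmax_ref(x, axis=-1):
    # log(exp(x) / sum(exp(x))) == (x - m) - log(sum(exp(x - m))),
    # where m = max(x) along `axis`; shifting by m avoids overflow in exp.
    shifted = x - np.max(x, axis=axis, keepdims=True)
    return shifted - np.log(np.sum(np.exp(shifted), axis=axis, keepdims=True))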
Parameters
- x (Tensor) – The input Tensor with data type float32 or float64.
- axis (int, optional) – The axis along which to perform log_softmax calculations. It should be in the range [-D, D), where D is the number of dimensions of x. If axis < 0, it works the same way as \(axis + D\); see the check after this list. Default is -1.
- dtype (str|np.dtype|core.VarDesc.VarType, optional) – The desired data type of the output tensor. If dtype is specified, x is cast to dtype before the operation is performed, which is useful for preventing data type overflows. Supported dtypes: float32, float64. If dtype is None, the output Tensor has the same dtype as x. Default is None.
- name (str, optional) – For details, please refer to Name. Generally, no setting is required. Default: None.
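The axis and dtype behaviors described above can be verified directly; below is a small illustrative check (assuming a 3-D input), not part of the reference itself:

import paddle
import paddle.nn.functional as F

x = paddle.rand([2, 3, 4])            # D = 3 dimensions
# A negative axis resolves to axis + D, so axis=-1 and axis=2 match here.
a = F.log_softmax(x, axis=-1)
b = F.log_softmax(x, axis=2)
print(bool(paddle.allclose(a, b)))    # True

# Passing dtype casts x to that type before the computation runs.
c = F.log_softmax(x, dtype='float64')
print(c.dtype)                        # paddle.float64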
Returns
A Tensor with the same shape and data type (use dtype if it is specified) as x.
Examples
import paddle
import paddle.nn.functional as F

x = [[[-2.0, 3.0, -4.0, 5.0],
      [3.0, -4.0, 5.0, -6.0],
      [-7.0, -8.0, 8.0, 9.0]],
     [[1.0, -2.0, -3.0, 4.0],
      [-5.0, 6.0, 7.0, -8.0],
      [6.0, 7.0, 8.0, 9.0]]]
x = paddle.to_tensor(x)
out1 = F.log_softmax(x)
out2 = F.log_softmax(x, dtype='float64')
# out1's data type is float32; out2's data type is float64
# out1 and out2's value is as follows:
# [[[ -7.1278396   -2.1278396   -9.127839    -0.12783948]
#   [ -2.1270514   -9.127051    -0.12705144 -11.127051  ]
#   [-16.313261   -17.313261    -1.3132617   -0.31326184]]
#  [[ -3.0518122   -6.051812    -7.051812    -0.051812  ]
#   [-12.313267    -1.3132664   -0.3132665  -15.313267  ]
#   [ -3.4401896   -2.4401896   -1.4401896   -0.44018966]]]
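As a quick sanity check (illustrative, not part of the original example), exponentiating the result recovers ordinary softmax, so the values along axis sum to one:

import paddle
import paddle.nn.functional as F

x = paddle.to_tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
out = F.log_softmax(x)                 # axis=-1 by default
print(paddle.exp(out).sum(axis=-1))    # each row sums to ~1.0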