softmax

paddle.sparse.nn.functional.softmax(x, axis=-1, name=None) [source]

Sparse softmax activation. The input x must be a SparseCooTensor or a SparseCsrTensor.

Note

Only axis=-1 is supported for SparseCsrTensor, since CSR storage makes reading data by row (axis=-1) fast.

From the point of view of the dense matrix, for each row \(i\) and each column \(j\) in the matrix, we have:

\[softmax_{ij} = \frac{\exp(x_{ij} - \max_j(x_{ij}))}{\sum_j \exp(x_{ij} - \max_j(x_{ij}))}\]
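As a quick numeric check, here is a minimal plain-Python sketch of this formula applied to the stored entries of one row (the values are taken from the first row of the CSR example below; this is an illustration, not the library's implementation):

import math

row = [0.83438963, 0.70008713, 0.88831252]
m = max(row)  # subtract the row max before exponentiating, for numerical stability
exps = [math.exp(v - m) for v in row]
s = sum(exps)
print([e / s for e in exps])
# approximately [0.34133, 0.29843, 0.36024], matching the first row of the CSR output below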
Parameters
  • x (Tensor) – The input tensor. It can be a SparseCooTensor or a SparseCsrTensor. The data type can be float32 or float64.

  • axis (int, optional) – The axis along which to perform softmax calculations. Only -1 is supported for SparseCsrTensor. Default is -1.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

A SparseCooTensor or SparseCsrTensor whose layout is the same as that of x.

Return type

Tensor

Examples

import paddle
paddle.seed(100)

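# Randomly zero out about half of the entries so x is effectively sparse.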
mask = paddle.rand((3, 4)) < 0.5
x = paddle.rand((3, 4)) * mask
print(x)
# Tensor(shape=[3, 4], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [[0.83438963, 0.70008713, 0.        , 0.88831252],
#         [0.02200012, 0.        , 0.75432241, 0.65136462],
#         [0.96088767, 0.82938021, 0.35367414, 0.86653489]])

csr = x.to_sparse_csr()
print(csr)
# Tensor(shape=[3, 4], dtype=paddle.float32, place=Place(gpu:0), stop_gradient=True,
#        crows=[0, 3, 6, 10],
#        cols=[0, 1, 3, 0, 2, 3, 0, 1, 2, 3],
#        values=[0.83438963, 0.70008713, 0.88831252, 0.02200012, 0.75432241,
#                0.65136462, 0.96088767, 0.82938021, 0.35367414, 0.86653489])

out = paddle.sparse.nn.functional.softmax(csr)
print(out)
# Tensor(shape=[3, 4], dtype=paddle.float32, place=Place(gpu:0), stop_gradient=True,
#        crows=[0, 3, 6, 10],
#        cols=[0, 1, 3, 0, 2, 3, 0, 1, 2, 3],
#        values=[0.34132850, 0.29843223, 0.36023921, 0.20176248, 0.41964680,
#                0.37859070, 0.30015594, 0.26316854, 0.16354506, 0.27313042])
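To inspect the result densely, the output can be converted back with to_dense() (a usage sketch). Since the softmax normalizes only over the stored values, each row of values above sums to 1 and the implicit zeros stay zero:

print(out.to_dense())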

coo = x.to_sparse_coo(sparse_dim=2)
print(coo)
# Tensor(shape=[3, 4], dtype=paddle.float32, place=Place(gpu:0), stop_gradient=True,
#        indices=[[0, 0, 0, 1, 1, 1, 2, 2, 2, 2],
#                 [0, 1, 3, 0, 2, 3, 0, 1, 2, 3]],
#        values=[0.83438963, 0.70008713, 0.88831252, 0.02200012, 0.75432241,
#                0.65136462, 0.96088767, 0.82938021, 0.35367414, 0.86653489])

out = paddle.sparse.nn.functional.softmax(coo)
print(out)
# Tensor(shape=[3, 4], dtype=paddle.float32, place=Place(gpu:0), stop_gradient=True,
#        indices=[[0, 0, 0, 1, 1, 1, 2, 2, 2, 2],
#                 [0, 1, 3, 0, 2, 3, 0, 1, 2, 3]],
#        values=[0.34132853, 0.29843226, 0.36023924, 0.20176250, 0.41964683,
#                0.37859073, 0.30015597, 0.26316857, 0.16354507, 0.27313042])
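Because the sparse softmax normalizes only over the stored entries of each row, applying the dense paddle.nn.functional.softmax to one row's stored values reproduces the corresponding row of the sparse output. A quick sanity sketch, with the row values copied from the example above:

row0 = paddle.to_tensor([0.83438963, 0.70008713, 0.88831252])
print(paddle.nn.functional.softmax(row0))
# approximately [0.34133, 0.29843, 0.36024], the first row of the COO output above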