nll_loss

paddle.nn.functional.nll_loss(input, label, weight=None, ignore_index=-100, reduction='mean', name=None)

This API computes the negative log likelihood loss. See NLLLoss for more details.
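
With reduction set to 'none', the loss for sample \(n\) with label \(y_n\) follows the usual NLLLoss definition, \(\ell_n = -w_{y_n} \, x_{n, y_n}\), where \(x\) is the input of log probabilities (e.g. the output of LogSoftmax) and \(w\) is the class weight (all ones when weight is None).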

Parameters
  • input (Tensor) – Input tensor, the shape is \([N, C]\), where C is the number of classes. In the K-dimensional case, the shape is \([N, C, d_1, d_2, ..., d_K]\). The data type is float32 or float64.

  • label (Tensor) – Label tensor, the shape is \([N,]\) or, in the K-dimensional case, \([N, d_1, d_2, ..., d_K]\). The data type is int64.

  • weight (Tensor, optional) – A manual rescaling weight given to each class. If given, it has to be a 1D Tensor of size \([C,]\); otherwise, every class is treated as having weight one. The data type is float32 or float64. Default is None.

  • ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. Default is -100.

  • reduction (str, optional) – Indicates how to reduce the loss; the candidates are 'none' | 'mean' | 'sum'. If reduction is 'mean', the mean loss is returned; if reduction is 'sum', the summed loss is returned; if reduction is 'none', no reduction is applied and the per-sample losses are returned. Default is 'mean'. See the sketch after this parameter list.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
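
As a sketch of how weight, ignore_index, and reduction interact (the values below are purely illustrative):

import paddle
import paddle.nn.functional as F

# Log probabilities for 3 samples over 2 classes.
log_probs = F.log_softmax(
    paddle.to_tensor([[1.0, 0.0], [0.0, 1.0], [2.0, 0.0]], "float32"), axis=1)
# The third label equals the default ignore_index, so that sample is skipped.
label = paddle.to_tensor([0, 1, -100], "int64")

# reduction='none' keeps one loss per sample; the ignored one contributes 0.
per_sample = F.nll_loss(log_probs, label, reduction='none')

# A per-class weight rescales each sample's loss by the weight of its label.
w = paddle.to_tensor([0.5, 2.0], "float32")
weighted_mean = F.nll_loss(log_probs, label, weight=w, reduction='mean')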

Returns

Tensor, the value of the negative log likelihood loss. With reduction 'none', the shape is the same as label; with 'mean' or 'sum', the result is a scalar.

Examples

import paddle
from paddle.nn.functional import nll_loss

# nll_loss expects log probabilities, so apply LogSoftmax to the raw input first.
log_softmax = paddle.nn.LogSoftmax(axis=1)

input = paddle.to_tensor([[0.88103855, 0.9908683 , 0.6226845 ],
                          [0.53331435, 0.07999352, 0.8549948 ],
                          [0.25879037, 0.39530203, 0.698465  ],
                          [0.73427284, 0.63575995, 0.18827209],
                          [0.05689114, 0.0862954 , 0.6325046 ]], "float32")
log_out = log_softmax(input)
label = paddle.to_tensor([0, 2, 1, 1, 0], "int64")
result = nll_loss(log_out, label)  # reduction='mean' by default
print(result)
# Tensor(shape=[], dtype=float32, place=CPUPlace, stop_gradient=True, 1.07202101)
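
For the K-dimensional case described above (e.g. per-pixel classification), a minimal sketch with random data, assuming an input of shape \([N, C, d_1, d_2]\) and a label of shape \([N, d_1, d_2]\):

import paddle
import paddle.nn.functional as F

# N=2 samples, C=3 classes over a 4x5 grid.
log_probs = F.log_softmax(paddle.randn([2, 3, 4, 5]), axis=1)
label = paddle.randint(0, 3, [2, 4, 5], dtype="int64")

# With reduction='none' the loss has the same shape as label.
loss_map = F.nll_loss(log_probs, label, reduction='none')
print(loss_map.shape)  # [2, 4, 5]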