kldiv_loss

paddle.fluid.layers.loss.kldiv_loss(x, target, reduction='mean', name=None)

This operator calculates the Kullback-Leibler divergence loss between Input(X) and Input(Target). Note that Input(X) is the log-probability and Input(Target) is the probability.

KL divergence loss is calculated as follows:

l(x, y) = y * (log(y) - x)

where x is Input(X) and y is Input(Target).
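
For illustration, here is a minimal NumPy sketch of the pointwise formula (not the operator itself; it assumes x already holds log-probabilities and y holds strictly positive probabilities):

import numpy as np

x = np.log(np.array([0.2, 0.3, 0.5], dtype='float32'))  # log-probabilities
y = np.array([0.1, 0.4, 0.5], dtype='float32')          # probabilities

pointwise_loss = y * (np.log(y) - x)  # one loss value per element
print(pointwise_loss)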

When reduction is 'none', the output loss has the same shape as Input(X): the loss at each point is calculated separately and no reduction is applied.

When reduction is 'mean', the output loss has shape [1] and its value is the mean of all pointwise losses.

When reduction is 'sum', the output loss has shape [1] and its value is the sum of all pointwise losses.

When reduction is 'batchmean', the output loss has shape [1] and its value is the sum of all pointwise losses divided by the batch size.
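
The four reduction modes then correspond to the following operations over the pointwise loss (again a NumPy sketch under the same assumptions, not the operator's implementation):

import numpy as np

x = np.random.uniform(0.1, 1.0, size=(3, 4)).astype('float32')       # batch size N = 3
target = np.random.uniform(0.1, 1.0, size=(3, 4)).astype('float32')

pointwise = target * (np.log(target) - x)      # shape [3, 4]

loss_none = pointwise                          # 'none': same shape as x
loss_mean = pointwise.mean()                   # 'mean': mean over all elements
loss_sum = pointwise.sum()                     # 'sum': sum over all elements
loss_batchmean = pointwise.sum() / x.shape[0]  # 'batchmean': sum / batch size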

Parameters
  • x (Tensor) – The input tensor of the KL divergence loss operator. This is a tensor with shape [N, *], where N is the batch size and * means any number of additional dimensions. The data type is float32 or float64.

  • target (Tensor) – The target tensor of the KL divergence loss operator. This is a tensor with the same shape as Input(X). The data type is the same as Input(X).

  • reduction (str, optional) – The reduction type to apply to the output; available types are ‘none’ | ‘batchmean’ | ‘mean’ | ‘sum’. ‘none’ applies no reduction, ‘batchmean’ returns the sum of the output divided by the batch size, ‘mean’ returns the mean of all output elements, and ‘sum’ returns the sum of the output. Default is ‘mean’.

  • name (str, optional) – For detailed information, please refer to Name. Usually there is no need to set this parameter; it is None by default.

Returns

The KL divergence loss. The data type is the same as the input tensor's.

Return type

Tensor

Examples

import paddle
import paddle.fluid as fluid

x = paddle.rand(shape=[3,4,2,2], dtype='float32')
target = paddle.rand(shape=[3,4,2,2], dtype='float32')

# 'batchmean' reduction, loss shape will be [1]
loss = fluid.layers.kldiv_loss(x=x, target=target, reduction='batchmean')
print(loss.shape) # shape=[1]

# 'mean' reduction, loss shape will be [1]
loss = fluid.layers.kldiv_loss(x=x, target=target, reduction='mean')
print(loss.shape) # shape=[1]

# 'sum' reduction, loss shape will be [1]
loss = fluid.layers.kldiv_loss(x=x, target=target, reduction='sum')
print(loss.shape) # shape=[1]

# 'none' reduction, loss shape is the same as the input shape
loss = fluid.layers.kldiv_loss(x=x, target=target, reduction='none')
print(loss.shape) # shape=[3, 4, 2, 2]
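
As a sanity check (a sketch assuming dynamic graph mode and a strictly positive target, so that log(target) is finite), the 'none' output can be compared against the formula computed directly with paddle ops:

# Recompute the pointwise loss manually and compare with the operator.
manual = target * (paddle.log(target) - x)
loss = fluid.layers.kldiv_loss(x=x, target=target, reduction='none')
print(paddle.allclose(manual, loss))  # expected: True for positive targets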