KLDivLoss
class paddle.nn.KLDivLoss(reduction='mean') [source]
This interface calculates the Kullback-Leibler divergence loss between Input(X) and Input(Target). Note that Input(X) is the log-probability and Input(Target) is the probability.
KL divergence loss is calculated as follows:
$$l(x, y) = y \cdot (\log(y) - x)$$
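For concreteness, here is a minimal sketch (not part of the official examples; the probability values are made up for illustration) that checks this elementwise formula against KLDivLoss with reduction='none':

import paddle
import paddle.nn as nn

# x holds log-probabilities and y holds probabilities, as documented above.
x = paddle.log(paddle.to_tensor([[0.2, 0.3, 0.5]]))
y = paddle.to_tensor([[0.1, 0.6, 0.3]])

loss = nn.KLDivLoss(reduction='none')(x, y)
manual = y * (paddle.log(y) - x)       # l(x, y) = y * (log(y) - x)
print(paddle.allclose(loss, manual))   # expected: True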
- Parameters
reduction (str) – Indicates how to reduce the loss; the candidates are 'none' | 'batchmean' | 'mean' | 'sum'. If reduction is 'mean', the reduced mean loss is returned; if reduction is 'batchmean', the sum of the loss divided by the batch size is returned; if reduction is 'sum', the reduced sum loss is returned; if reduction is 'none', no reduction is applied. Default is 'mean'. (The sketch below relates each mode to the 'none' output.)
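The following sketch (an illustration of the documented semantics on random data, not the internal implementation) reproduces each reduced value from the elementwise 'none' output:

import paddle
import paddle.nn as nn

x = paddle.rand([5, 20])
y = paddle.rand([5, 20])
elementwise = nn.KLDivLoss(reduction='none')(x, y)

# Relate each reduced mode to the elementwise loss:
print(paddle.allclose(nn.KLDivLoss(reduction='batchmean')(x, y),
                      elementwise.sum() / x.shape[0]))  # sum / batch size
print(paddle.allclose(nn.KLDivLoss(reduction='mean')(x, y),
                      elementwise.mean()))              # mean over all elements
print(paddle.allclose(nn.KLDivLoss(reduction='sum')(x, y),
                      elementwise.sum()))               # sum over all elements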
Shape:
- input (Tensor): (N, *), where * means any number of additional dimensions.
- label (Tensor): (N, *), same shape as input.
- output (Tensor): [1] if reduction is 'batchmean', 'mean' or 'sum'; same shape as input if reduction is 'none'.
Examples
import paddle
import numpy as np
import paddle.nn as nn

shape = (5, 20)
x = np.random.uniform(-10, 10, shape).astype('float32')
target = np.random.uniform(-10, 10, shape).astype('float32')

# 'batchmean' reduction, loss shape will be [1]
kldiv_criterion = nn.KLDivLoss(reduction='batchmean')
pred_loss = kldiv_criterion(paddle.to_tensor(x), paddle.to_tensor(target))
# shape=[1]

# 'mean' reduction, loss shape will be [1]
kldiv_criterion = nn.KLDivLoss(reduction='mean')
pred_loss = kldiv_criterion(paddle.to_tensor(x), paddle.to_tensor(target))
# shape=[1]

# 'sum' reduction, loss shape will be [1]
kldiv_criterion = nn.KLDivLoss(reduction='sum')
pred_loss = kldiv_criterion(paddle.to_tensor(x), paddle.to_tensor(target))
# shape=[1]

# 'none' reduction, loss shape is same with X shape
kldiv_criterion = nn.KLDivLoss(reduction='none')
pred_loss = kldiv_criterion(paddle.to_tensor(x), paddle.to_tensor(target))
# shape=[5, 20]
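If a one-off call is preferable to constructing a layer, the functional counterpart paddle.nn.functional.kl_div computes the same loss; a brief usage sketch:

import paddle
import paddle.nn.functional as F

x = paddle.rand([5, 20])
target = paddle.rand([5, 20])
pred_loss = F.kl_div(x, target, reduction='batchmean')  # same semantics as the layer
# shape=[1]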
forward(input, label)
Defines the computation performed at every call. Should be overridden by all subclasses.
- Parameters
*inputs (tuple) – unpacked tuple arguments
**kwargs (dict) – unpacked dict arguments
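In practice forward is not called directly; invoking the layer instance runs it, as in this brief sketch:

import paddle
import paddle.nn as nn

kldiv_criterion = nn.KLDivLoss()  # reduction defaults to 'mean'
x = paddle.rand([5, 20])
label = paddle.rand([5, 20])
loss = kldiv_criterion(x, label)  # calling the instance dispatches to forward(input, label)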