binary_cross_entropy¶
- paddle.nn.functional.binary_cross_entropy(input, label, weight=None, reduction='mean', name=None) [source]
-
Measure the binary cross entropy loss between input predictions input and target labels label. The binary cross entropy loss can be described as:

If weight is set, the loss is:

\[Out = -1 * weight * (label * log(input) + (1 - label) * log(1 - input))\]

If weight is None, the loss is:

\[Out = -1 * (label * log(input) + (1 - label) * log(1 - input))\]

If reduction is set to 'none', the interface returns the original loss Out.

If reduction is set to 'mean', the reduced mean loss is:

\[Out = MEAN(Out)\]

If reduction is set to 'sum', the reduced sum loss is:

\[Out = SUM(Out)\]

Note that the input predictions input should always be the output of a sigmoid, and the target labels label should be numbers between 0 and 1.
- Parameters
-
- input (Tensor) – The input predictions tensor. 2-D tensor with shape [N, *], where N is the batch size and * means any number of additional dimensions. The input should always be the output of a sigmoid. Available dtypes are float16, float32 and float64.
- label (Tensor) – The target labels tensor. 2-D tensor with the same shape as input. The target label values should be numbers between 0 and 1. Available dtypes are float16, float32 and float64.
- weight (Tensor, optional) – A manual rescaling weight given to the loss of each batch element. If given, it has to be a Tensor of size nbatch with data type float32 or float64. Default is None.
- reduction (str, optional) – Indicates how to reduce the loss over the batch; the candidates are 'none' | 'mean' | 'sum'. If reduction is 'none', the unreduced loss is returned; if reduction is 'mean', the mean loss is returned; if reduction is 'sum', the summed loss is returned. Default is 'mean'.
- name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
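To make the reduction semantics concrete, here is a minimal pure-Python sketch of how the three reduction modes relate (this is an illustration of the documented formulas, not the Paddle implementation; the helper name bce is hypothetical):

```python
import math

def bce(inputs, labels, reduction='mean'):
    # Per-element loss: Out = -(label*log(input) + (1-label)*log(1-input))
    out = [-(l * math.log(p) + (1 - l) * math.log(1 - p))
           for p, l in zip(inputs, labels)]
    if reduction == 'none':
        return out                   # unreduced, same length as inputs
    if reduction == 'sum':
        return sum(out)              # SUM(Out)
    return sum(out) / len(out)       # 'mean' (default): MEAN(Out)

preds = [0.5, 0.6, 0.7]
labels = [1.0, 0.0, 1.0]
per_elem = bce(preds, labels, reduction='none')
```

With reduction='none' the result keeps one loss value per element, while 'sum' and 'mean' collapse it to a single scalar, which is why the Returns section below distinguishes the output shapes.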
- Returns
-
Tensor. If reduction is 'none', the shape of the output is the same as input; otherwise the output is a scalar.
Examples
>>> import paddle
>>> input = paddle.to_tensor([0.5, 0.6, 0.7], 'float32')
>>> label = paddle.to_tensor([1.0, 0.0, 1.0], 'float32')
>>> output = paddle.nn.functional.binary_cross_entropy(input, label)
>>> print(output)
Tensor(shape=[], dtype=float32, place=Place(cpu), stop_gradient=True,
       0.65537095)
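As a cross-check of the formulas above, the documented result can be reproduced directly from the unweighted definition with 'mean' reduction (a plain-Python sketch, independent of Paddle):

```python
import math

preds = [0.5, 0.6, 0.7]
labels = [1.0, 0.0, 1.0]

# Out = -(label*log(input) + (1-label)*log(1-input)), then MEAN(Out)
losses = [-(l * math.log(p) + (1 - l) * math.log(1 - p))
          for p, l in zip(preds, labels)]
mean_loss = sum(losses) / len(losses)
print(round(mean_loss, 8))  # 0.65537095, matching the example output
```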