teacher_student_sigmoid_loss
- paddle.fluid.layers.loss.teacher_student_sigmoid_loss(input, label, soft_max_up_bound=15.0, soft_max_lower_bound=-15.0)
-
Teacher Student Log Loss Layer
This layer accepts input predictions and target labels and returns the teacher_student loss. Here z indicates whether a click occurred and z' is the value of the teacher loss; the label encodes both and takes values in {-2, -1, [0, 2]}: when z' does not exist and clk = 0, label = -2; when z' does not exist and clk = 1, label = -1; when z' exists and clk = 0, label = 0 + z'; when z' exists and clk = 1, label = 1 + z' (an illustrative decoding is sketched after the return type below).
\[loss = \max(x, 0) - x \cdot z + \log(1 + e^{-|x|}) + \max(x, 0) - x \cdot z' + \log(1 + e^{-|x|})\]
- Parameters
-
input (Variable|list) – a 2-D tensor with shape [N x 1], where N is the batch size. This input is a probability computed by the previous operator.
label (Variable|list) – the ground truth, a 2-D tensor with shape [N x 1], where N is the batch size.
soft_max_up_bound (float) – if input > soft_max_up_bound, the input is bounded (clipped) to soft_max_up_bound.
soft_max_lower_bound (float) – if input < soft_max_lower_bound, the input is bounded (clipped) to soft_max_lower_bound.
- Returns
-
A 2-D tensor with shape [N x 1], the teacher_student_sigmoid_loss.
- Return type
-
Variable
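For intuition, here is a minimal NumPy sketch of one way to read the label convention and loss formula above. The decoding of label into clk and z', and the use of the bound parameters as a clip on the input, are assumptions drawn from this description rather than the operator's actual kernel, and the function name is hypothetical.

import numpy as np

def teacher_student_sigmoid_loss_ref(x, label, up_bound=15.0, lower_bound=-15.0):
    # Illustrative reading of the documented formula, not the real kernel.
    x = np.clip(x, lower_bound, up_bound)  # assumed meaning of the bound parameters

    # Decode the combined label: clk = 1 for label == -1 or label >= 1, else 0;
    # the teacher value z' exists only when label >= 0, and then label = clk + z'.
    z = np.where((label == -1.0) | (label >= 1.0), 1.0, 0.0)
    has_teacher = label >= 0.0
    z_teacher = np.where(has_teacher, label - z, 0.0)

    def sigmoid_xent(logits, targets):
        # Numerically stable sigmoid cross-entropy: max(x, 0) - x*t + log(1 + exp(-|x|)).
        return np.maximum(logits, 0.0) - logits * targets + np.log1p(np.exp(-np.abs(logits)))

    # The click term always contributes; the teacher term only when z' exists.
    loss = sigmoid_xent(x, z)
    loss = loss + np.where(has_teacher, sigmoid_xent(x, z_teacher), 0.0)
    return loss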
Examples
import paddle
import paddle.fluid as fluid

paddle.enable_static()  # fluid layers build a static program

batch_size = 64
# Combined click/teacher label and the student's predicted similarity.
label = fluid.data(name="label", shape=[batch_size, 1], dtype="int64")
similarity = fluid.data(name="similarity", shape=[batch_size, 1], dtype="float32")

# Per-sample teacher_student loss with shape [batch_size, 1].
cost = fluid.layers.teacher_student_sigmoid_loss(input=similarity, label=label)
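The returned cost is a per-sample [N x 1] tensor. The continuation below is a sketch of how it might be folded into a training program by reducing it to a scalar and attaching an optimizer; it uses the standard fluid mean/SGD APIs and is not part of the official example.

# Reduce the per-sample loss to a scalar and minimize it (illustrative continuation).
avg_cost = fluid.layers.mean(cost)
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(avg_cost)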