MetricBase
class paddle.fluid.metrics.MetricBase(name)
When evaluating a deep neural network, we usually have to split the test data into mini-batches, so we need to collect the evaluation result of each mini-batch and aggregate those results into the final one. paddle.fluid.metrics is designed to make this kind of evaluation convenient.
paddle.fluid.metrics provides several evaluation metrics, such as precision and recall, and most of them offer the following functions:
1. take the prediction results and the corresponding labels of a mini-batch as input, then compute the evaluation result for that mini-batch;
2. aggregate the existing evaluation results into the overall performance.
The class MetricBase is the base class for all classes in paddle.fluid.metrics; it defines the fundamental APIs shared by all metric classes, including:
1. update(preds, labels): given the prediction results (preds) and the labels (labels) of a mini-batch, compute and memorize the evaluation result of that mini-batch.
2. eval(): aggregate all evaluation results in the memory and return the overall performance across different mini-batches.
3. reset(): empty the memory (see the sketch below).
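The following is a minimal sketch of a custom metric built on these three APIs. The class name MeanAbsoluteError and its state members are illustrative assumptions, not part of paddle.fluid.metrics; only the update/eval/reset contract described above is relied on:

    import numpy as np
    from paddle.fluid.metrics import MetricBase

    class MeanAbsoluteError(MetricBase):
        def __init__(self, name=None):
            super(MeanAbsoluteError, self).__init__(name)
            # states live in members without a "_" prefix (see get_config)
            self.total_error = 0.0
            self.total_count = 0

        def update(self, preds, labels):
            # memorize this mini-batch's statistics; no score is returned
            self.total_error += float(np.abs(preds - labels).sum())
            self.total_count += labels.size

        def eval(self):
            # aggregate the memorized statistics into the overall score
            return self.total_error / self.total_count

        def reset(self):
            # empty the evaluation memory of previous mini-batches
            self.total_error = 0.0
            self.total_count = 0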
reset()
Empty the evaluation memory of previous mini-batches.
- Parameters: None
- Returns: None
- Return type: None
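A hedged usage sketch, assuming the MeanAbsoluteError class from the sketch above; reset() lets one metric instance be reused for a fresh evaluation pass:

    import numpy as np

    metric = MeanAbsoluteError(name="mae")
    metric.update(np.ones(4, dtype="float32"), np.zeros(4, dtype="float32"))
    metric.reset()  # the memorized statistics are emptied
    # the instance can now accumulate a new round of mini-batches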
get_config()
Get the metric and its current states. The states are the members whose names do not have a “_” prefix.
- Parameters: None
- Returns: a Python dict that contains the inner states of the metric instance
- Return type: dict
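A hedged sketch, again assuming the MeanAbsoluteError class above; since its states are kept in members without a "_" prefix, get_config() exposes them (the exact layout of the returned dict is an implementation detail):

    metric = MeanAbsoluteError(name="mae")
    print(metric.get_config())
    # a dict describing the metric and its states, e.g. covering
    # total_error and total_count for the sketch above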
update(preds, labels)
Given the prediction results (preds) and the labels (labels) of a mini-batch, compute and memorize the evaluation result of that mini-batch. Note that update only memorizes the evaluation result and does not return a score; to get the evaluation result, call eval().
- Parameters:
  - preds (numpy.array) – the prediction results of the current mini-batch
  - labels (numpy.array) – the labels of the current mini-batch
- Returns: None
- Return type: None
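A hedged sketch of feeding one mini-batch into a metric, assuming the MeanAbsoluteError class above and dummy numpy data:

    import numpy as np

    metric = MeanAbsoluteError(name="mae")
    preds = np.array([0.2, 0.8, 0.4], dtype="float32")
    labels = np.array([0.0, 1.0, 1.0], dtype="float32")
    metric.update(preds, labels)  # memorizes the batch; returns None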
eval()
Aggregate all evaluation results in the memory and return the overall performance across different mini-batches.
- Parameters: None
- Returns: the overall performance across different mini-batches
- Return type: float | list(float) | numpy.array
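A hedged end-to-end sketch, assuming the MeanAbsoluteError class above: update() is called once per mini-batch and eval() once at the end to obtain the aggregated score:

    import numpy as np

    metric = MeanAbsoluteError(name="mae")
    for _ in range(10):  # ten dummy mini-batches
        preds = np.random.rand(32).astype("float32")
        labels = np.random.rand(32).astype("float32")
        metric.update(preds, labels)
    print(metric.eval())  # overall mean absolute error across all batches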