ChunkEvaluator
- class paddle.fluid.metrics.ChunkEvaluator(name=None) [source]

Accumulate the counter numbers output by chunk_eval from mini-batches and compute the precision, recall and F1-score using the accumulated counters. ChunkEvaluator has three states: num_infer_chunks, num_label_chunks and num_correct_chunks, which correspond to the number of inferred chunks, the number of labeled chunks, and the number of correctly identified chunks. For some basics of chunking, please refer to Chunking with Support Vector Machines. ChunkEvaluator computes the precision, recall, and F1-score of chunk detection, and supports IOB, IOE, IOBES and IO (also known as plain) tagging schemes.
- Parameters
name (str, optional) – Metric name. For details, please refer to Name. Default is None.
Examples

import paddle.fluid as fluid

# init the chunk-level evaluation manager
metric = fluid.metrics.ChunkEvaluator()

# suppose the model predicts 10 chunks, of which 8 are correct,
# while the ground truth has 9 chunks
num_infer_chunks = 10
num_label_chunks = 9
num_correct_chunks = 8
metric.update(num_infer_chunks, num_label_chunks, num_correct_chunks)
numpy_precision, numpy_recall, numpy_f1 = metric.eval()
print("precision: %.2f, recall: %.2f, f1: %.2f" % (numpy_precision, numpy_recall, numpy_f1))

# the next batch, predicting 3 perfectly correct chunks
num_infer_chunks = 3
num_label_chunks = 3
num_correct_chunks = 3
metric.update(num_infer_chunks, num_label_chunks, num_correct_chunks)
numpy_precision, numpy_recall, numpy_f1 = metric.eval()
print("precision: %.2f, recall: %.2f, f1: %.2f" % (numpy_precision, numpy_recall, numpy_f1))
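As a quick sanity check on the numbers above: the scores follow the standard chunk-level definitions, precision = num_correct_chunks / num_infer_chunks, recall = num_correct_chunks / num_label_chunks, and F1 = 2PR / (P + R), evaluated on the accumulated counters. A minimal sketch of that arithmetic in plain Python, independent of the ChunkEvaluator API (the helper name chunk_scores is made up for illustration):

def chunk_scores(num_infer, num_label, num_correct):
    # precision/recall/F1 from accumulated chunk counters
    precision = num_correct / num_infer if num_infer else 0.0
    recall = num_correct / num_label if num_label else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# first mini-batch alone: 8 correct out of 10 inferred, 9 labeled
print(chunk_scores(10, 9, 8))    # ~ (0.80, 0.89, 0.84)

# after accumulating the second batch (3, 3, 3): totals are 13, 12, 11
print(chunk_scores(13, 12, 11))  # ~ (0.85, 0.92, 0.88)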
- update(num_infer_chunks, num_label_chunks, num_correct_chunks)
This function takes (num_infer_chunks, num_label_chunks, num_correct_chunks) as input, to accumulate and update the corresponding states of the ChunkEvaluator object (see the usage sketch after the parameter list below). The update rule is as follows:

\begin{array}{l}
\text{self.num\_infer\_chunks} \mathrel{+}= \text{num\_infer\_chunks} \\
\text{self.num\_label\_chunks} \mathrel{+}= \text{num\_label\_chunks} \\
\text{self.num\_correct\_chunks} \mathrel{+}= \text{num\_correct\_chunks}
\end{array}

- Parameters
num_infer_chunks (int|numpy.array) – The number of chunks in Inference on the given mini-batch.
num_label_chunks (int|numpy.array) – The number of chunks in Label on the given mini-batch.
num_correct_chunks (int|float|numpy.array) – The number of chunks both in Inference and Label on the given mini-batch.
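Because the counters are accumulated rather than replaced, calling update repeatedly simply adds to the running totals. A minimal sketch under the assumption that per-batch counters arrive as one-element numpy arrays (the arrays below are hard-coded stand-ins for values a chunking model would produce, not real fetch results):

import numpy as np
import paddle.fluid as fluid

metric = fluid.metrics.ChunkEvaluator()

# per-batch (num_infer, num_label, num_correct) counters; illustrative values
batch_counters = [
    (np.array([10]), np.array([9]), np.array([8])),
    (np.array([3]), np.array([3]), np.array([3])),
]

for infer, label, correct in batch_counters:
    # each call adds to the internal running totals
    metric.update(num_infer_chunks=infer,
                  num_label_chunks=label,
                  num_correct_chunks=correct)

precision, recall, f1 = metric.eval()  # computed from the accumulated totals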
- eval()
This function returns the mean precision, recall and F1-score for all accumulated mini-batches.
- Returns
mean precision, recall and F1-score.
- Return type
a tuple of three float values (precision, recall, F1-score)
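Note that, per the class description, the scores are computed from the accumulated counters rather than by averaging per-batch scores, so the result is effectively a micro-average over all chunks. A minimal sketch of the distinction, using the counters from the example above (plain Python, not part of the API):

# micro-average (what the accumulated counters give): sum totals first, then divide
total_correct, total_infer = 8 + 3, 10 + 3
micro_precision = total_correct / total_infer   # 11 / 13 ~ 0.846

# naive per-batch averaging would give a different number
macro_precision = ((8 / 10) + (3 / 3)) / 2      # 0.9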
- get_config()
Get the metric and its current states. The states are the members whose names do not have an "_" prefix.
- Parameters
None
- Returns
a python dict, which contains the inner states of the metric instance
- Return type
a python dict
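For instance, after one update the returned dict should contain the three counter states named in the class description (the dict shown in the comment is an assumption based on those state names, not captured output):

import paddle.fluid as fluid

metric = fluid.metrics.ChunkEvaluator()
metric.update(10, 9, 8)
states = metric.get_config()
# expected to look roughly like:
# {'num_infer_chunks': 10, 'num_label_chunks': 9, 'num_correct_chunks': 8}
print(states)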
- reset()
The reset function empties the evaluation memory for previous mini-batches.
- Parameters
None
- Returns
None
- Return type
None
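A typical place to call reset is at the start of each evaluation pass, so that scores from one pass do not leak into the next. A minimal sketch (the two passes and their counter values are illustrative):

import paddle.fluid as fluid

metric = fluid.metrics.ChunkEvaluator()

# pass 1
metric.update(10, 9, 8)
print(metric.eval())   # scores for pass 1 only

# start pass 2 from a clean slate
metric.reset()
metric.update(3, 3, 3)
print(metric.eval())   # reflects only the batches seen since reset()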