plexus.analysis.metrics.precision module

Precision metric implementation.

This module provides a precision metric that calculates the ratio of true positives to all positive predictions.

class plexus.analysis.metrics.precision.Precision(positive_labels=None)

Bases: Metric

Implementation of precision metric for binary classification tasks.

Precision is calculated as the number of true positives divided by the total number of items predicted as positive (true positives + false positives). It represents the ability of a classifier to avoid labeling negative samples as positive.

For binary classification, labels must be strings like 'yes'/'no' or 'true'/'false'. The first label in self.positive_labels is considered the "positive" class.
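
As an illustration, a minimal sketch of the calculation (not the library's actual implementation), assuming the default positive labels described below:

    def precision_sketch(references, predictions, positive_labels=None):
        """Toy precision: true positives / all predicted positives."""
        if positive_labels is None:
            positive_labels = ['yes', 'true', '1', 1, True]
        true_positives = sum(
            1 for ref, pred in zip(references, predictions)
            if pred in positive_labels and ref in positive_labels
        )
        predicted_positives = sum(1 for pred in predictions if pred in positive_labels)
        return true_positives / predicted_positives if predicted_positives else 0.0

    # 2 true positives out of 3 predicted positives -> 2 / 3
    print(precision_sketch(['yes', 'no', 'yes', 'no'],
                           ['yes', 'yes', 'yes', 'no']))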

__init__(positive_labels=None)

Initialize the Precision metric with specified positive labels.

Args:
positive_labels: List of values to consider as the positive class.
If None, defaults to ['yes', 'true', '1', 1, True].
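
A minimal usage sketch for construction; the import path follows the module name above, and the 'pass' label is a hypothetical example:

    from plexus.analysis.metrics.precision import Precision

    # Default positive labels: ['yes', 'true', '1', 1, True]
    metric = Precision()

    # Hypothetical custom positive class: treat 'pass' as positive
    custom_metric = Precision(positive_labels=['pass'])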

calculate(input_data: Input) → Result

Calculate precision between prediction and reference data.

Args:
input_data: Metric.Input containing reference and prediction lists

Returns:
Metric.Result with the precision value and metadata
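
A hedged sketch of calling calculate(); the import path for the Metric base class and the field names on Metric.Input and Metric.Result (references, predictions, value) are assumptions based on this page, not confirmed signatures:

    from plexus.analysis.metrics.precision import Precision
    from plexus.analysis.metrics.metric import Metric  # assumed location of the Metric base class

    metric = Precision()
    # Field names are assumed; consult Metric.Input for the actual definition.
    result = metric.calculate(Metric.Input(
        references=['yes', 'no', 'yes'],
        predictions=['yes', 'yes', 'yes'],
    ))
    print(result.value)  # expected: 2 / 3, since one 'yes' prediction is a false positive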