plexus.analysis.metrics.recall module

Recall metric implementation.

This module provides a recall metric that calculates the ratio of true positives to all actual positive instances.

class plexus.analysis.metrics.recall.Recall(positive_labels=None)

Bases: Metric

Implementation of recall metric for binary classification tasks.

Recall is calculated as the number of true positives divided by the total number of actual positive instances (true positives + false negatives). It represents the ability of a classifier to find all positive samples.
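The computation described above can be sketched as a small standalone function (a minimal illustration of the formula, not the library's implementation; the function name and default labels mirror this module's documented defaults):

```python
def recall_score(references, predictions,
                 positive_labels=("yes", "true", "1", 1, True)):
    """Recall = TP / (TP + FN) over paired reference/prediction labels."""
    positives = set(positive_labels)
    # True positives: actual positive, predicted positive.
    tp = sum(1 for ref, pred in zip(references, predictions)
             if ref in positives and pred in positives)
    # False negatives: actual positive, predicted negative.
    fn = sum(1 for ref, pred in zip(references, predictions)
             if ref in positives and pred not in positives)
    # Guard against division by zero when there are no actual positives.
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0

# 3 actual positives, 2 found -> recall = 2/3
print(recall_score(["yes", "yes", "no", "yes"],
                   ["yes", "no", "no", "yes"]))
```

Note that only false negatives (missed positives) lower recall; false positives do not affect it.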

For binary classification, labels must be strings like 'yes'/'no' or 'true'/'false'. The first label in self.positive_labels is considered the "positive" class.

Initialize the Recall metric with specified positive labels.

Args:
positive_labels: List of values to consider as positive class.

If None, defaults to ['yes', 'true', '1', 1, True]

__init__(positive_labels=None)

Initialize the Recall metric with specified positive labels.

Args:
positive_labels: List of values to consider as positive class.

If None, defaults to ['yes', 'true', '1', 1, True]

calculate(input_data: Input) → Result

Calculate recall between prediction and reference data.

Args:

input_data: Metric.Input containing reference and prediction lists

Returns:

Metric.Result with the recall value and metadata
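A self-contained sketch of how this interface fits together, with simple dataclasses standing in for Metric.Input and Metric.Result (the field names `references`, `predictions`, `value`, and `metadata` are assumptions based on the descriptions above; the real Metric base class may differ):

```python
from dataclasses import dataclass, field

@dataclass
class Input:
    """Stand-in for Metric.Input: paired reference and prediction labels."""
    references: list
    predictions: list

@dataclass
class Result:
    """Stand-in for Metric.Result: a metric value plus metadata."""
    value: float
    metadata: dict = field(default_factory=dict)

class Recall:
    def __init__(self, positive_labels=None):
        # Defaults mirror the documented fallback labels.
        self.positive_labels = positive_labels or ["yes", "true", "1", 1, True]

    def calculate(self, input_data: Input) -> Result:
        pos = set(self.positive_labels)
        pairs = list(zip(input_data.references, input_data.predictions))
        tp = sum(1 for r, p in pairs if r in pos and p in pos)
        fn = sum(1 for r, p in pairs if r in pos and p not in pos)
        value = tp / (tp + fn) if (tp + fn) > 0 else 0.0
        return Result(value=value,
                      metadata={"true_positives": tp, "false_negatives": fn})

result = Recall().calculate(Input(references=["yes", "no", "yes"],
                                  predictions=["yes", "no", "no"]))
print(result.value)  # 1 TP, 1 FN -> 0.5
```

Returning the TP/FN counts in metadata alongside the scalar value lets callers aggregate or inspect the result without recomputing it.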