plexus.analysis package
Analysis module for Plexus.
This module provides various analysis tools and metrics for evaluating agreement and performance.
- class plexus.analysis.Accuracy
Bases: Metric
Implementation of accuracy metric for classification tasks.
Accuracy is calculated as the number of correct predictions divided by the total number of predictions, expressed as a value between 0 and 1.
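The Metric base interface is not shown on this page, so as a minimal standalone sketch of the computation the docstring describes (the function name and signature here are illustrative, not the class's actual API):

```python
def accuracy(predictions, references):
    """Fraction of predictions that exactly match their reference labels."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must have the same length")
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# accuracy(["yes", "no", "yes"], ["yes", "yes", "yes"]) == 2 / 3
```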
- class plexus.analysis.GwetAC1
Bases: Metric
Implementation of Gwet's AC1 statistic for measuring inter-rater agreement.
Gwet’s AC1 is an alternative to Cohen’s Kappa and Fleiss’ Kappa that is more robust to the “Kappa paradox” where high observed agreement can result in low or negative Kappa values when there is high class imbalance.
References:
- Gwet, K. L. (2008). Computing inter-rater reliability and its variance in the presence of high agreement. British Journal of Mathematical and Statistical Psychology, 61(1), 29-48.
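For two raters, the statistic is AC1 = (Pa − Pe) / (1 − Pe), where Pa is the observed agreement and Pe is Gwet's chance-agreement term. A minimal standalone sketch of that computation, following Gwet (2008) for the two-rater case (the function name is illustrative, not the class's actual API):

```python
from collections import Counter

def gwet_ac1(rater_a, rater_b):
    """Gwet's AC1 for two raters labeling the same items (>= 2 categories)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items both raters labeled identically.
    pa = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement per Gwet (2008): mean category proportion pi_k across
    # both raters, combined as sum(pi_k * (1 - pi_k)) / (Q - 1).
    counts = Counter(rater_a) + Counter(rater_b)
    pe = sum(
        (counts[k] / (2 * n)) * (1 - counts[k] / (2 * n)) for k in categories
    ) / (len(categories) - 1)
    return (pa - pe) / (1 - pe)
```

Unlike Cohen's Kappa, Pe here stays small under heavy class imbalance, so AC1 tracks the observed agreement instead of collapsing toward zero, which is the "Kappa paradox" the docstring mentions.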
- class plexus.analysis.Precision(positive_labels=None)
Bases: Metric
Implementation of precision metric for binary classification tasks.
Precision is calculated as the number of true positives divided by the total number of items predicted as positive (true positives + false positives). It represents the ability of a classifier to avoid labeling negative samples as positive; a standalone sketch of this computation appears after this class entry.
For binary classification, labels must be strings like ‘yes’/’no’ or ‘true’/’false’. The first label in self.positive_labels is considered the “positive” class.
Initialize the Precision metric with specified positive labels.
- Args:
- positive_labels: List of values to consider as positive class. If None, defaults to ['yes', 'true', '1', 1, True]
- __init__(positive_labels=None)
Initialize the Precision metric with specified positive labels.
- Args:
- positive_labels: List of values to consider as positive class. If None, defaults to ['yes', 'true', '1', 1, True]
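As referenced above, a minimal standalone sketch of the precision computation, assuming membership in positive_labels marks the positive class (the function name and signature are illustrative, not the class's actual API):

```python
def precision(predictions, references, positive_labels=("yes", "true", "1", 1, True)):
    """True positives / all items predicted positive."""
    positives = set(positive_labels)  # note: 1 == True in Python, so they collapse
    true_pos = sum(p in positives and r in positives
                   for p, r in zip(predictions, references))
    predicted_pos = sum(p in positives for p in predictions)
    return true_pos / predicted_pos if predicted_pos else 0.0
```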
- class plexus.analysis.Recall(positive_labels=None)
Bases: Metric
Implementation of recall metric for binary classification tasks.
Recall is calculated as the number of true positives divided by the total number of actual positive instances (true positives + false negatives). It represents the ability of a classifier to find all positive samples; a standalone sketch of this computation appears after this class entry.
For binary classification, labels must be strings like ‘yes’/’no’ or ‘true’/’false’. The first label in self.positive_labels is considered the “positive” class.
Initialize the Recall metric with specified positive labels.
- Args:
- positive_labels: List of values to consider as positive class. If None, defaults to ['yes', 'true', '1', 1, True]
- __init__(positive_labels=None)
Initialize the Recall metric with specified positive labels.
- Args:
- positive_labels: List of values to consider as positive class. If None, defaults to ['yes', 'true', '1', 1, True]
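As referenced above, the matching recall sketch under the same assumptions (illustrative names, not the class's actual API); it differs from precision only in the denominator:

```python
def recall(predictions, references, positive_labels=("yes", "true", "1", 1, True)):
    """True positives / all items that are actually positive."""
    positives = set(positive_labels)
    true_pos = sum(p in positives and r in positives
                   for p, r in zip(predictions, references))
    actual_pos = sum(r in positives for r in references)
    return true_pos / actual_pos if actual_pos else 0.0
```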
Subpackages
- plexus.analysis.metrics package
- plexus.analysis.topics package