plexus.analysis.feedback_analyzer module

Core functionality for analyzing feedback items.

This module provides reusable functions for analyzing feedback items and calculating metrics such as accuracy, Gwet’s AC1 agreement, confusion matrices, precision, and recall.
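
For background, Gwet’s AC1 for two raters over Q classes is conventionally defined as follows. This is the standard formula, given here as reference material rather than quoted from this module’s source:

    \pi_q = \frac{r_q + c_q}{2N}, \qquad
    p_e = \frac{1}{Q - 1} \sum_{q=1}^{Q} \pi_q \,(1 - \pi_q), \qquad
    \mathrm{AC1} = \frac{p_a - p_e}{1 - p_e}

where p_a is the observed proportion of agreement, r_q and c_q are the counts of class q among the reference and predicted labels, and N is the number of valid pairs.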

This code is shared between:

- FeedbackAnalysis report block
- Feedback evaluation type
- CLI feedback analysis tools

plexus.analysis.feedback_analyzer.analyze_feedback_items(feedback_items: List[FeedbackItem], score_id: str | None = None) → Dict[str, Any]

Analyze feedback items to produce summary statistics, including a confusion matrix, accuracy, Gwet’s AC1 agreement, precision, and recall.

This is the core feedback analysis function that should be used by all components that need to analyze feedback data.

Args:

feedback_items: List of FeedbackItem objects to analyze
score_id: Optional score ID for logging purposes

Returns:

Dictionary with analysis results including:

- ac1: Gwet’s AC1 agreement coefficient (float or None)
- accuracy: Accuracy percentage (float or None)
- total_items: Number of valid feedback pairs (int)
- agreements: Number of agreements (int)
- disagreements: Number of disagreements (int)
- confusion_matrix: Confusion matrix data structure
- precision: Precision percentage (float or None)
- recall: Recall percentage (float or None)
- class_distribution: List of dicts with label and count
- predicted_class_distribution: List of dicts with label and count
- warning: Warning message if applicable (str or None)
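
A minimal usage sketch. The score_id value shown is illustrative, and constructing FeedbackItem objects is left to your data layer:

    from typing import List
    from plexus.analysis.feedback_analyzer import analyze_feedback_items

    def summarize_feedback(feedback_items: List) -> None:
        # feedback_items: FeedbackItem objects from your data layer; their
        # construction is application-specific and not shown here.
        results = analyze_feedback_items(feedback_items, score_id="score-123")
        if results["warning"]:
            print(f"Warning: {results['warning']}")
        print(f"Accuracy: {results['accuracy']}%, AC1: {results['ac1']}")
        print(
            f"{results['agreements']} agreements, {results['disagreements']} "
            f"disagreements out of {results['total_items']} valid pairs"
        )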

plexus.analysis.feedback_analyzer.build_confusion_matrix(reference_values: List, predicted_values: List) → Dict[str, Any]

Build a confusion matrix from reference and predicted values.

Args:

reference_values: List of reference (ground truth) values
predicted_values: List of predicted values

Returns:

Dictionary representation of the confusion matrix with:

- labels: List of class labels
- matrix: List of row objects with actualClassLabel and predictedClassCounts
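
A sketch of the documented shape using illustrative inputs. The keys shown follow the structure above; label ordering in the actual output may differ:

    from plexus.analysis.feedback_analyzer import build_confusion_matrix

    reference = ["Yes", "Yes", "No", "No"]
    predicted = ["Yes", "No", "No", "No"]

    cm = build_confusion_matrix(reference, predicted)
    # Per the documented structure, cm should resemble:
    # {
    #     "labels": ["No", "Yes"],
    #     "matrix": [
    #         {"actualClassLabel": "No",  "predictedClassCounts": {"No": 2, "Yes": 0}},
    #         {"actualClassLabel": "Yes", "predictedClassCounts": {"No": 1, "Yes": 1}},
    #     ],
    # }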

plexus.analysis.feedback_analyzer.calculate_precision_recall(reference_values: List, predicted_values: List, classes: List[str]) → Dict[str, float | None]

Calculate precision and recall metrics.

Args:

reference_values: List of reference (ground truth) values
predicted_values: List of predicted values
classes: List of class labels

Returns:

Dictionary with precision and recall percentages (or None if they cannot be calculated)
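
An illustrative call with a hand-worked cross-check. The module’s exact averaging scheme (binary vs. macro) is not stated in this reference, so the concrete numbers assume plain binary precision/recall with "Yes" as the positive class:

    from plexus.analysis.feedback_analyzer import calculate_precision_recall

    reference = ["Yes", "Yes", "No", "No"]
    predicted = ["Yes", "No", "No", "No"]

    metrics = calculate_precision_recall(reference, predicted, classes=["Yes", "No"])
    # metrics["precision"] and metrics["recall"] are percentages, or None
    # when they cannot be computed (e.g. no positive predictions at all).

    # Binary definitions with "Yes" as positive (illustrative, see note above):
    #   precision = TP / (TP + FP) = 1 / (1 + 0) -> 100.0
    #   recall    = TP / (TP + FN) = 1 / (1 + 1) -> 50.0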

plexus.analysis.feedback_analyzer.generate_recommendation(analysis: Dict[str, Any]) → str

Generate actionable recommendations based on feedback analysis.

Args:

analysis: Dictionary containing analysis results

Returns:

String with actionable recommendations
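
This function composes naturally with analyze_feedback_items; a minimal end-to-end sketch, with feedback items again supplied by your data layer:

    from plexus.analysis.feedback_analyzer import (
        analyze_feedback_items,
        generate_recommendation,
    )

    def recommend(feedback_items: list) -> str:
        # Run the core analysis, then turn the resulting metrics dict
        # into human-readable guidance.
        analysis = analyze_feedback_items(feedback_items)
        return generate_recommendation(analysis)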