plexus.cli.prediction.predictions module
- plexus.cli.prediction.predictions.create_feedback_comparison(current_prediction: dict, feedback_item: FeedbackItem, score_name: str) → dict
Create a comparison between current prediction and historical feedback.
- Args:
current_prediction: Current prediction result dict
feedback_item: FeedbackItem object from GraphQL
score_name: Name of the score being compared
- Returns:
Dictionary with comparison data
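The docstring does not spell out the comparison dictionary's shape, so the sketch below is a hypothetical illustration: the field names (`predicted_value`, `feedback_value`, `agrees`) and the accessors on the inputs are assumptions, not the actual Plexus schema.

```python
# Hypothetical sketch of a prediction-vs-feedback comparison dict.
# Field names and input structure are illustrative assumptions.
def create_feedback_comparison_sketch(current_prediction: dict,
                                      feedback_item: dict,
                                      score_name: str) -> dict:
    predicted = current_prediction.get("value")
    historical = feedback_item.get("finalAnswerValue")
    return {
        "score_name": score_name,
        "predicted_value": predicted,
        "feedback_value": historical,
        "agrees": predicted == historical,  # simple equality check
    }

comparison = create_feedback_comparison_sketch(
    {"value": "Yes"}, {"finalAnswerValue": "No"}, "Compliance"
)
```

A downstream report could then aggregate the `agrees` flags into an agreement rate per score.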
- plexus.cli.prediction.predictions.create_score_input(sample_row, item_id, scorecard_class, score_name)
Create a Score.Input object from sample data.
- plexus.cli.prediction.predictions.get_score_instance(scorecard_identifier: str, score_name: str, no_cache=False, yaml_only=False)
Get a Score instance by loading individual score configuration.
- Args:
scorecard_identifier: A string that identifies the scorecard (ID, name, key, or external ID)
score_name: Name of the specific score to load
no_cache: If True, don't cache API data to local YAML files (always fetch from API).
yaml_only: If True, load only from local YAML files without API calls.
- Returns:
Score: An initialized Score instance
- plexus.cli.prediction.predictions.handle_exception(loop, context, scorecard_identifier=None, score_identifier=None)
Custom exception handler for the event loop.
- plexus.cli.prediction.predictions.output_excel(results, score_names, scorecard_identifier)
- plexus.cli.prediction.predictions.output_yaml_prediction_results(results: list, score_names: list, scorecard_identifier: str, score_identifier: str = None, item_identifiers: list = None, include_input: bool = False, include_trace: bool = False)
Output prediction results in token-efficient YAML format.
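"Token-efficient YAML" suggests a compact, flat serialization rather than a deeply nested dump. The sketch below hand-rolls such an emitter to illustrate the idea; the result keys (`item_id`, `scores`) are assumptions, not the real function's schema, and the real implementation may use a YAML library instead.

```python
# Illustrative sketch: emit prediction results as compact YAML by hand.
# Keys ("item_id", "scores") are hypothetical stand-ins for the real schema.
def to_compact_yaml(results: list) -> str:
    lines = []
    for result in results:
        lines.append(f"- item: {result['item_id']}")
        # One "score: value" line per score keeps the output flat and short.
        for score, value in result["scores"].items():
            lines.append(f"  {score}: {value}")
    return "\n".join(lines)

output = to_compact_yaml([{"item_id": "abc123", "scores": {"Compliance": "Yes"}}])
```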
- async plexus.cli.prediction.predictions.predict_impl(scorecard_identifier: str, score_names: list, item_identifiers: list = None, number_of_times: int = 1, excel: bool = False, use_langsmith_trace: bool = False, fresh: bool = False, no_cache: bool = False, yaml_only: bool = False, task_id: str = None, format: str = 'fixed', include_input: bool = False, include_trace: bool = False, version: str = None, latest: bool = False, compare_to_feedback: bool = False)
Implementation of the predict command.
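The `number_of_times` parameter implies the command can score the same item repeatedly (e.g., to check prediction stability). A minimal sketch of that repeat loop, assuming the scoring coroutine here is a stand-in and not the real `predict_score`:

```python
import asyncio

# Stand-in scoring coroutine; the real one calls into the scorecard.
async def fake_predict_score(score_name: str, item_id: str) -> dict:
    return {"score": score_name, "item": item_id, "value": "Yes"}

async def run_predictions(score_names: list, item_id: str,
                          number_of_times: int = 1) -> list:
    # Schedule every (score, repetition) pair concurrently.
    tasks = [
        fake_predict_score(name, item_id)
        for name in score_names
        for _ in range(number_of_times)
    ]
    return await asyncio.gather(*tasks)

results = asyncio.run(run_predictions(["Compliance"], "item-1", number_of_times=2))
```

Gathering the repetitions concurrently is a design choice; a real implementation might run them sequentially if each call mutates shared state.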
- async plexus.cli.prediction.predictions.predict_score(score_name, scorecard_class, sample_row, used_item_id)
Predict a single score.
- async plexus.cli.prediction.predictions.predict_score_impl(scorecard_class, score_name, item_id, input_data, use_langsmith_trace=False, fresh=False)
- async plexus.cli.prediction.predictions.predict_score_with_individual_loading(scorecard_identifier, score_name, sample_row, used_item_id, no_cache=False, yaml_only=False, specific_version=None)
Predict a single score using Scorecard.score_entire_text with dependency backfilling.
- plexus.cli.prediction.predictions.select_sample(scorecard_identifier, score_name, item_identifier, fresh, compare_to_feedback=False, scorecard_id=None, score_id=None)
Select an item from the Plexus API using flexible identifier resolution.
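Several entries above mention "flexible identifier resolution": a scorecard can be named by ID, name, key, or external ID. The sketch below shows one plausible shape for that fallback lookup; the in-memory table and field names are hypothetical stand-ins for the Plexus API queries.

```python
# Hypothetical in-memory stand-in for the Plexus API's scorecard records.
SCORECARDS = [
    {"id": "sc-1", "key": "qa", "name": "Quality Audit", "externalId": "EXT-9"},
]

def resolve_scorecard(identifier: str):
    # Try each identifier field in turn; first match wins.
    for field in ("id", "key", "name", "externalId"):
        for scorecard in SCORECARDS:
            if scorecard.get(field) == identifier:
                return scorecard
    return None  # nothing matched
```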