plexus.scores.nodes.Classifier module
- class plexus.scores.nodes.Classifier.Classifier(**parameters)
Bases:
BaseNode
A node that performs binary classification using a LangGraph subgraph to separate LLM calls from parsing and retry logic.
- class ClassificationOutputParser(*, name: str | None = None, valid_classes: List[str], parse_from_start: bool = False)
Bases:
BaseOutputParser
Parser that identifies one of the valid classifications.
- __init__(**data)
- find_matches_in_text(text: str) → List[Tuple[str, int, int]]
Find all matches in the text with their line and position. Returns a list of tuples: (valid_class, line_number, position)
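The behavior implied by that signature can be sketched as a standalone function (the tuple layout comes from the docstring above; the case-insensitive substring matching shown here is an assumption about the implementation):

```python
from typing import List, Tuple

# Illustrative re-implementation; the real method lives on
# ClassificationOutputParser and may use a different matching strategy.
def find_matches_in_text(text: str, valid_classes: List[str]) -> List[Tuple[str, int, int]]:
    """Return (valid_class, line_number, position) for every occurrence."""
    matches = []
    for line_number, line in enumerate(text.lower().splitlines()):
        for valid_class in valid_classes:
            start = 0
            while (position := line.find(valid_class.lower(), start)) != -1:
                matches.append((valid_class, line_number, position))
                start = position + 1
    return matches

print(find_matches_in_text("The answer is Yes.\nNo doubt about it.", ["Yes", "No"]))
# → [('Yes', 0, 14), ('No', 1, 0)]
```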
- model_config: ClassVar[ConfigDict] = {'extra': 'ignore', 'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- normalize_text(text: str) → str
Normalize text by converting to lowercase and handling special characters.
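A minimal sketch of what such normalization might look like (lowercasing is stated in the docstring above; treating non-alphanumeric characters as whitespace is an assumption):

```python
import re

# Illustrative sketch only; the real normalize_text may handle more cases.
def normalize_text(text: str) -> str:
    """Lowercase the text and collapse special characters into spaces."""
    text = text.lower()
    # Replace anything that is not a letter, digit, or whitespace with a
    # space, then collapse runs of whitespace into single spaces.
    text = re.sub(r"[^a-z0-9\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

print(normalize_text("Answer:  YES!!"))  # → "answer yes"
```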
- parse(output: str) → Dict[str, Any]
Parse a single string model output into some structure.
- Args:
output: String output of a language model.
- Returns:
Structured output.
- parse_from_start: bool
- select_match(matches: List[Tuple[str, int, int, int]], text: str) → str | None
Select the appropriate match based on parse_from_start setting.
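The effect of parse_from_start can be sketched as follows (a simplified standalone version: the real method also receives the original text and richer match tuples, so this is an assumption about the selection rule):

```python
from typing import List, Optional, Tuple

# Simplified sketch: pick the first match in the text when parse_from_start
# is True, otherwise the last one. Tuple layout (class, line, position)
# follows find_matches_in_text above.
def select_match(matches: List[Tuple[str, int, int]],
                 parse_from_start: bool = False) -> Optional[str]:
    if not matches:
        return None
    chosen = matches[0] if parse_from_start else matches[-1]
    return chosen[0]

matches = [("No", 0, 3), ("Yes", 2, 0)]
print(select_match(matches, parse_from_start=True))   # → "No"
print(select_match(matches, parse_from_start=False))  # → "Yes"
```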
- valid_classes: List[str]
- class GraphState(*, text: str, metadata: dict | None = None, results: dict | None = None, messages: List[Dict[str, Any]] | None = None, is_not_empty: bool | None = None, value: str | None = None, explanation: str | None = None, reasoning: str | None = None, chat_history: List[Any] = <factory>, completion: str | None = None, classification: str | None = None, confidence: float | None = None, retry_count: int | None = 0, at_llm_breakpoint: bool | None = False, good_call: str | None = None, good_call_explanation: str | None = None, non_qualifying_reason: str | None = None, non_qualifying_explanation: str | None = None, **extra_data: Any)
Bases:
GraphState
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow', 'validate_default': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class Parameters(*, model_provider: Literal['ChatOpenAI', 'AzureChatOpenAI', 'BedrockChat', 'ChatVertexAI'] = 'AzureChatOpenAI', model_name: str | None = None, base_model_name: str | None = None, model_region: str | None = None, temperature: float | None = 0, top_p: float | None = 0.03, max_tokens: int | None = 500, input: dict | None = None, output: dict | None = None, system_message: str | None = None, user_message: str | None = None, example_refinement_message: str | None = None, single_line_messages: bool = False, name: str | None = None, valid_classes: List[str], explanation_message: str | None = None, maximum_retry_count: int = 6, parse_from_start: bool | None = False)
Bases:
Parameters
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- explanation_message: str | None
- maximum_retry_count: int
- model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- parse_from_start: bool | None
- valid_classes: List[str]
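Taken together, the fields above suggest a configuration along these lines (the field names are from the Parameters signature; the values and the plain-dict form are illustrative assumptions, since the real class is a pydantic model validated on construction):

```python
# Field names come from the Parameters signature above; values are examples.
classifier_parameters = {
    "model_provider": "AzureChatOpenAI",  # one of the four supported providers
    "temperature": 0,
    "max_tokens": 500,
    "valid_classes": ["Yes", "No"],       # required: the classification labels
    "maximum_retry_count": 6,             # default retry budget
    "parse_from_start": False,            # prefer the last match in the output
}
```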
- __init__(**parameters)
- add_core_nodes(workflow: StateGraph) → StateGraph
Add core nodes to the workflow.
- batch: bool = False
- get_llm_call_node()
Node that handles the LLM call.
- get_llm_prompt_node()
Node that only handles the LLM request.
- get_max_retries_node()
Node that handles the case when max retries are reached.
- get_parser_node()
Node that handles parsing the completion.
- get_retry_node()
Node that prepares for retry by updating chat history.
- async handle_max_retries(state: GraphState) → GraphState
- llm_call(state)
- llm_request(state)
- should_retry(state)
Determines whether to retry, end, or proceed based on state.
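The three-way decision described above can be sketched like this (a hypothetical standalone version: the real method reads a GraphState pydantic model, and the edge names "end", "max_retries", and "retry" are assumptions about the subgraph's routing labels):

```python
from typing import Optional

# Hypothetical stand-in for the node's routing logic; the real GraphState
# carries many more fields (completion, chat_history, etc.).
def should_retry(classification: Optional[str], retry_count: int,
                 maximum_retry_count: int = 6) -> str:
    """Return the next edge: 'end' when a classification was parsed,
    'max_retries' when the retry budget is exhausted, otherwise 'retry'."""
    if classification is not None:
        return "end"
    if retry_count >= maximum_retry_count:
        return "max_retries"
    return "retry"

print(should_retry("Yes", retry_count=0))  # → "end"
print(should_retry(None, retry_count=6))   # → "max_retries"
print(should_retry(None, retry_count=2))   # → "retry"
```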