plexus.scores.AgenticExtractor module

class plexus.scores.AgenticExtractor.AgenticExtractor(scorecard_name, score_name, **kwargs)

Bases: LangGraphScore

Initialize the LangGraphScore.

This method sets up the score parameters and initializes basic attributes. The language model initialization is deferred to the async setup.

Parameters:

parameters – Configuration parameters for the score and language model.
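
A minimal usage sketch. The scorecard and score names are hypothetical, and it is assumed that extra keyword arguments (such as prompt) are forwarded to the Parameters model:

    from plexus.scores.AgenticExtractor import AgenticExtractor

    extractor = AgenticExtractor(
        scorecard_name="example-scorecard",   # hypothetical scorecard name
        score_name="example-extractor",       # hypothetical score name
        prompt="Extract the caller's name from the transcript.",  # assumed to reach Parameters.prompt
    )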

class Parameters(*, scorecard_name: str | None = None, name: str | None = None, id: str | int | None = None, key: str | None = None, dependencies: List[dict] | None = None, data: dict | None = None, number_of_classes: int | None = None, label_score_name: str | None = None, label_field: str | None = None, model_provider: Literal['ChatOpenAI', 'AzureChatOpenAI', 'BedrockChat', 'ChatVertexAI'] = 'AzureChatOpenAI', model_name: str | None = None, model_region: str | None = None, temperature: float | None = 0, max_tokens: int | None = 500, graph: list[dict] | None = None, input: dict | None = None, output: dict | None = None, depends_on: List[str] | Dict[str, str | Dict[str, Any]] | None = None, single_line_messages: bool = False, checkpoint_db_path: str | None = './.plexus/checkpoints/langgraph.db', thread_id: str | None = None, postgres_url: str | None = None, prompt: str)

Bases: Parameters

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

prompt: str
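
A hedged sketch of constructing the Parameters model directly, assuming it is accessed as the nested class AgenticExtractor.Parameters; field values are illustrative, and only prompt is required beyond the defaults shown in the signature above:

    from pydantic import ValidationError
    from plexus.scores.AgenticExtractor import AgenticExtractor

    try:
        params = AgenticExtractor.Parameters(
            scorecard_name="example-scorecard",   # hypothetical
            name="example-extractor",             # hypothetical
            prompt="Extract the caller's name from the transcript.",  # required field
        )
    except ValidationError as exc:
        # Raised when the keyword arguments cannot be validated into a model.
        print(exc)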
__init__(scorecard_name, score_name, **kwargs)

Initialize the LangGraphScore.

This method sets up the score parameters and initializes basic attributes. The language model initialization is deferred to the async setup.

Parameters:

parameters – Configuration parameters for the score and language model.

build_compiled_workflow(*, model_input: Input)

Build the LangGraph workflow.
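
A hedged sketch of building the workflow directly (normally predict() handles this). The Input type is assumed to be Score.Input with text and metadata fields as described under predict(), and the plexus.scores.Score import path is an assumption:

    from plexus.scores.AgenticExtractor import AgenticExtractor
    from plexus.scores.Score import Score  # assumed import path

    extractor = AgenticExtractor("example-scorecard", "example-extractor",
                                 prompt="Extract the caller's name.")  # hypothetical names
    # Assumed to return a compiled LangGraph graph that predict() later invokes.
    workflow = extractor.build_compiled_workflow(
        model_input=Score.Input(text="call transcript ...", metadata={}),
    )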

static clean_quote(quote: str) → str

evaluate_model()

This is a placeholder for the validation process. It does not make sense to implement it yet, because we do not yet have any ground-truth labels to validate any extractor against. #YAGNI

load_context(context)

Load the trained model and any necessary artifacts based on the MLflow context.

Parameters

context : mlflow.pyfunc.PythonModelContext

The context object containing artifacts and other information.
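
load_context() is normally invoked by MLflow rather than called directly. A minimal sketch, assuming the score has been logged as an MLflow pyfunc model (the model URI is a placeholder):

    import mlflow.pyfunc

    # MLflow builds the PythonModelContext and calls load_context() itself
    # when loading the logged model.
    loaded = mlflow.pyfunc.load_model("models:/AgenticExtractor/1")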

predict(context, model_input: Input)

Make predictions using the LangGraph workflow.

Parameters

model_input : Score.Input

The input data containing text and metadata

thread_id : Optional[str]

Thread ID for checkpointing

batch_data : Optional[Dict[str, Any]]

Additional data for batch processing

**kwargs : Any

Additional keyword arguments

Returns

Score.Result

The prediction result with value and explanation
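
A hedged end-to-end sketch of calling predict(). The Input field names (text, metadata), the plexus.scores.Score import path, passing None for the MLflow context, and a synchronous call are all assumptions; thread_id and batch_data can reportedly be supplied as well, per the parameter list above:

    from plexus.scores.AgenticExtractor import AgenticExtractor
    from plexus.scores.Score import Score  # assumed import path

    extractor = AgenticExtractor("example-scorecard", "example-extractor",
                                 prompt="Extract the caller's name.")  # hypothetical names
    result = extractor.predict(
        context=None,  # MLflow context; None outside a pyfunc wrapper is an assumption
        model_input=Score.Input(text="call transcript ...", metadata={}),
    )
    print(result.value, result.explanation)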

class plexus.scores.AgenticExtractor.ExtractorState

Bases: TypedDict

entity: str | None
messages: Annotated[list[AnyMessage], add_messages]
quote: str | None
text: str
validation_error: str | None
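
Since ExtractorState is a TypedDict, graph nodes read and return plain dicts with these keys. A minimal sketch, assuming messages follows the usual LangGraph Annotated[list, add_messages] pattern and that the optional fields start empty (the comments describe assumed roles, not documented behavior):

    from plexus.scores.AgenticExtractor import ExtractorState

    state: ExtractorState = {
        "text": "call transcript ...",
        "entity": None,              # assumed to be filled in by an extraction node
        "quote": None,               # assumed supporting quote (see clean_quote above)
        "messages": [],              # accumulated chat messages (add_messages reducer)
        "validation_error": None,    # assumed to be set when validation rejects the output
    }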