plexus.scores.nodes.BeforeAfterSlicer module

class plexus.scores.nodes.BeforeAfterSlicer.BeforeAfterSlicer(**parameters)

Bases: BaseNode, LangChainUser

A node that slices text input into ‘before’ and ‘after’ based on the provided prompt.
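For orientation, a purely illustrative example of the slicing behaviour; the transcript, quote, and split position are invented here, and the real node locates the split point with fuzzy matching rather than an exact find:

    transcript = "Agent: Is now a good time to talk? Customer: Yes. Agent: Great."
    quote = "Customer: Yes."          # the span the prompt asks the node to find

    idx = transcript.find(quote)      # exact match for illustration only
    before = transcript[:idx]         # "Agent: Is now a good time to talk? "
    after = transcript[idx:]          # "Customer: Yes. Agent: Great."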

class GraphState(*, text: str, metadata: dict | None = None, results: dict | None = None, messages: List[Dict[str, Any]] | None = None, is_not_empty: bool | None = None, value: str | None = None, explanation: str | None = None, reasoning: str | None = None, chat_history: List[Any] = <factory>, completion: str | None = None, classification: str | None = None, confidence: float | None = None, retry_count: int | None = 0, at_llm_breakpoint: bool | None = False, good_call: str | None = None, good_call_explanation: str | None = None, non_qualifying_reason: str | None = None, non_qualifying_explanation: str | None = None, before: str | None, after: str | None, **extra_data: Any)

Bases: GraphState

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

after: str | None
before: str | None
model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow', 'validate_default': True}

Configuration for the model, should be a dictionary conforming to ConfigDict.

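A minimal construction sketch, grounded only in the signature above; the field values are invented, and before/after are normally populated by the slicer node rather than by the caller:

    from plexus.scores.nodes.BeforeAfterSlicer import BeforeAfterSlicer

    state = BeforeAfterSlicer.GraphState(
        text="Agent: Is now a good time to talk? Customer: Yes.",
        metadata={"call_id": "hypothetical-123"},
        before=None,   # filled in later by the slicer node
        after=None,    # filled in later by the slicer node
    )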
class SlicingOutputParser(*args, name: str | None = None, text: str)

Bases: BaseOutputParser[dict]

class Config

Bases: object

arbitrary_types_allowed = True
underscore_attrs_are_private = True
FUZZY_MATCH_SCORE_CUTOFF: ClassVar[int] = 70
__init__(*args, **kwargs)
model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'ignore', 'protected_namespaces': (), 'underscore_attrs_are_private': True}

Configuration for the model, should be a dictionary conforming to ConfigDict.

model_post_init(context: Any, /) → None

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that’s what pydantic-core passes when calling it.

Args:

self: The BaseModel instance.
context: The context.

parse(output: str) → Dict[str, Any]

Parse a single string model output into some structure.

Args:

output: String output of a language model.

Returns:

Structured output.

text: str
tokenize(text: str) → list[str]
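The FUZZY_MATCH_SCORE_CUTOFF of 70 indicates that the parser locates the quoted span by approximate matching before splitting the text. A minimal sketch of that idea using the standard-library difflib; the helper names and the sliding-window strategy are assumptions, not the parser's actual implementation:

    from difflib import SequenceMatcher
    from typing import Any, Dict, Tuple

    CUTOFF = 70  # mirrors FUZZY_MATCH_SCORE_CUTOFF

    def best_fuzzy_window(quote: str, text: str) -> Tuple[int, int, float]:
        # Slide a quote-sized window over the text and score each window 0-100.
        n = len(quote)
        best = (0, 0, 0.0)
        for start in range(max(1, len(text) - n + 1)):
            window = text[start:start + n]
            score = SequenceMatcher(None, quote.lower(), window.lower()).ratio() * 100
            if score > best[2]:
                best = (start, start + n, score)
        return best

    def slice_on_quote(quote: str, text: str) -> Dict[str, Any]:
        start, _end, score = best_fuzzy_window(quote, text)
        if score < CUTOFF:
            # No sufficiently close match: leave the text unsliced.
            return {"before": text, "after": ""}
        return {"before": text[:start], "after": text[start:]}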
__init__(**parameters)
add_core_nodes(workflow: StateGraph) → StateGraph

Add this node's core processing nodes to the given LangGraph workflow and return it. The node name is available as self.node_name when needed.
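A sketch of the typical shape of this method, assuming the LangGraph StateGraph API; the actual wiring inside BeforeAfterSlicer may register more than a single node:

    from langgraph.graph import StateGraph

    def add_core_nodes(self, workflow: StateGraph) -> StateGraph:
        # Register this node's slicing callable under its configured name.
        workflow.add_node(self.node_name, self.get_slicer_node())
        return workflow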

get_slicer_node() → LambdaType
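Returns the callable that the workflow invokes for this node. A hypothetical sketch of such a callable; the prompt handling and LLM call are omitted, and slice_on_quote is the assumed helper from the SlicingOutputParser sketch above:

    def get_slicer_node(self):
        def slicer(state):
            # Hypothetical: take the quote to split on (e.g. an LLM completion)
            # and return the "before"/"after" state updates.
            quote = state.completion or ""
            return slice_on_quote(quote, state.text)
        return slicer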