plexus.scores.nodes.Generator module

class plexus.scores.nodes.Generator.Generator(**parameters)

Bases: BaseNode

A node that generates completions from LLM calls using a LangGraph subgraph. It is a simplified version of Classifier without the classification logic, focused solely on generating content to be aliased via the output configuration.

class GraphState(*, text: str, metadata: dict | None = None, results: dict | None = None, messages: List[Dict[str, Any]] | None = None, is_not_empty: bool | None = None, value: str | None = None, explanation: str | None = None, reasoning: str | None = None, chat_history: List[Any] = <factory>, completion: str | None = None, classification: str | None = None, confidence: float | None = None, retry_count: int = 0, at_llm_breakpoint: bool | None = False, good_call: str | None = None, good_call_explanation: str | None = None, non_qualifying_reason: str | None = None, non_qualifying_explanation: str | None = None, **extra_data: Any)

Bases: GraphState

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

completion: str | None
explanation: str | None
model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow', 'validate_default': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

retry_count: int
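For illustration, the core state fields above can be mirrored in a plain `TypedDict`. This is a stdlib-only sketch, not the real class: the actual `GraphState` is a Pydantic model that validates input and also accepts extra keys (`extra: 'allow'` in its `model_config`).

```python
from typing import Any, Dict, List, Optional, TypedDict

class GraphStateSketch(TypedDict, total=False):
    """Stdlib-only mirror of the documented GraphState fields (illustrative)."""
    text: str                          # the input text to generate from
    metadata: Optional[dict]
    messages: Optional[List[Dict[str, Any]]]
    chat_history: List[Any]
    completion: Optional[str]          # the generated completion
    explanation: Optional[str]
    retry_count: int                   # defaults to 0 in the real model
    at_llm_breakpoint: Optional[bool]

state: GraphStateSketch = {"text": "Summarize this call.", "retry_count": 0}
state["completion"] = "The caller asked about billing."
```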
class Parameters(*, model_provider: Literal['ChatOpenAI', 'AzureChatOpenAI', 'BedrockChat', 'ChatVertexAI', 'ChatOllama'] = 'AzureChatOpenAI', model_name: str | None = None, base_model_name: str | None = None, reasoning_effort: str | None = 'low', verbosity: str | None = 'medium', model_region: str | None = None, temperature: float | None = 0, top_p: float | None = 0.03, max_tokens: int | None = 500, logprobs: bool | None = False, top_logprobs: int | None = None, input: dict | None = None, output: dict | None = None, system_message: str | None = None, user_message: str | None = None, example_refinement_message: str | None = None, single_line_messages: bool = False, name: str | None = None, maximum_retry_count: int = 1)

Bases: Parameters

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

maximum_retry_count: int
model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
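The defaults in the signature above can be seen in a small stdlib-only mirror. This is an illustrative sketch, not the real Pydantic `Parameters` class, and it reproduces only a subset of the documented fields:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParametersSketch:
    """Mirrors a subset of Generator.Parameters defaults (illustrative)."""
    model_provider: str = "AzureChatOpenAI"
    model_name: Optional[str] = None
    reasoning_effort: Optional[str] = "low"
    verbosity: Optional[str] = "medium"
    temperature: Optional[float] = 0
    top_p: Optional[float] = 0.03
    max_tokens: Optional[int] = 500
    single_line_messages: bool = False
    maximum_retry_count: int = 1

# Override only what differs from the defaults, as with the real model
params = ParametersSketch(model_name="gpt-4o", maximum_retry_count=3)
```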

__init__(**parameters)
add_core_nodes(workflow: StateGraph) → StateGraph

Add core nodes to the workflow.
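As a hedged sketch of the flow these nodes imply (prompt → call → retry decision), the subgraph can be pictured as the loop below. The node behavior here is an assumption based on the method names, and a plain Python loop stands in for the LangGraph `StateGraph`:

```python
def run_generator_flow(state, llm, max_retries=1):
    """Illustrative stand-in for the Generator subgraph: build a prompt,
    call the LLM, and retry until a completion arrives or retries run out."""
    while True:
        # get_llm_prompt_node: build the request from the state's text
        prompt = f"Generate a response for: {state['text']}"
        # get_llm_call_node: invoke the model
        state["completion"] = llm(prompt)
        if state["completion"]:                  # success -> end
            return state
        if state["retry_count"] >= max_retries:  # get_max_retries_node
            state["explanation"] = "Max retries reached without a completion."
            return state
        # get_retry_node: record the failed attempt in chat history, try again
        state.setdefault("chat_history", []).append(prompt)
        state["retry_count"] += 1

# Stub LLM that fails once, then succeeds
calls = {"n": 0}
def flaky_llm(prompt):
    calls["n"] += 1
    return "" if calls["n"] == 1 else "A generated completion."

result = run_generator_flow({"text": "hello", "retry_count": 0}, flaky_llm)
```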

batch: bool = False
get_llm_call_node()

Node that handles the LLM call.

get_llm_prompt_node()

Node that only prepares the LLM request.

get_max_retries_node()

Node that handles the case when max retries are reached.

get_retry_node()

Node that prepares for retry by updating chat history.

should_retry(state)

Determines whether to retry, end, or proceed based on state.
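A hedged sketch of the decision this method makes, returning the edge to follow next. The exact conditions are assumptions based on the state fields and the `maximum_retry_count` parameter documented above:

```python
def should_retry_sketch(state, maximum_retry_count=1):
    """Return the next edge to follow: 'end' on success, 'max_retries'
    when the retry budget is spent, otherwise 'retry' (illustrative)."""
    if state.get("completion"):
        return "end"
    if state.get("retry_count", 0) >= maximum_retry_count:
        return "max_retries"
    return "retry"
```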