plexus.LangChainUser module

class plexus.LangChainUser.LangChainUser(**parameters)

Bases: object

MAX_RETRY_ATTEMPTS = 20
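MAX_RETRY_ATTEMPTS caps how many times a model call may be retried. A minimal sketch of such a capped retry loop with exponential backoff (illustrative only; the actual retry mechanism inside LangChainUser is not documented here):

```python
import time

MAX_RETRY_ATTEMPTS = 20  # mirrors the class attribute above


def call_with_retries(fn, max_attempts=MAX_RETRY_ATTEMPTS, base_delay=0.1):
    """Call fn(), retrying on any exception up to max_attempts times.

    Delay doubles each attempt and is capped at 5 seconds; the last
    failure is re-raised once the attempt budget is exhausted.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(min(base_delay * 2 ** (attempt - 1), 5.0))
```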
class Parameters(*, model_provider: Literal['ChatOpenAI', 'AzureChatOpenAI', 'BedrockChat', 'ChatVertexAI', 'ChatOllama'] = 'AzureChatOpenAI', model_name: str | None = None, base_model_name: str | None = None, reasoning_effort: str | None = 'low', verbosity: str | None = 'medium', model_region: str | None = None, temperature: float | None = 0, top_p: float | None = 0.03, max_tokens: int | None = 500, logprobs: bool | None = False, top_logprobs: int | None = None)

Bases: BaseModel

Parameters for this node. Based on the LangGraphScore.Parameters class.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

base_model_name: str | None
logprobs: bool | None
max_tokens: int | None
model_config: ClassVar[ConfigDict] = {'protected_namespaces': ()}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_name: str | None
model_provider: Literal['ChatOpenAI', 'AzureChatOpenAI', 'BedrockChat', 'ChatVertexAI', 'ChatOllama']
model_region: str | None
reasoning_effort: str | None
temperature: float | None
top_logprobs: int | None
top_p: float | None
verbosity: str | None
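For reference, the defaults documented in the signature above can be sketched as a plain dataclass (illustrative only; the real Parameters class is a pydantic BaseModel with validation and a Literal-typed model_provider):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ParametersSketch:
    # Defaults mirror the Parameters signature documented above.
    model_provider: str = "AzureChatOpenAI"
    model_name: Optional[str] = None
    base_model_name: Optional[str] = None
    reasoning_effort: Optional[str] = "low"
    verbosity: Optional[str] = "medium"
    model_region: Optional[str] = None
    temperature: Optional[float] = 0
    top_p: Optional[float] = 0.03
    max_tokens: Optional[int] = 500
    logprobs: Optional[bool] = False
    top_logprobs: Optional[int] = None


# Overriding a subset of fields, as with the real keyword-only constructor:
p = ParametersSketch(model_name="gpt-4o", temperature=0.2)
```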
__init__(**parameters)
async cleanup()

Clean up Azure credentials and any associated threads.

extract_reasoning_content(response) → str

Extract reasoning content from a thinking model's response for logging and debugging. Returns an empty string for non-thinking models or when no reasoning content is found.
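One plausible shape of this extraction, assuming thinking models return reasoning in structured content blocks on the response. The block layout used here (`{"type": "reasoning_content", ...}`) is an assumption for illustration, not the documented format:

```python
def extract_reasoning_sketch(response) -> str:
    """Pull reasoning text out of an AIMessage-like response, if present.

    Assumed (hypothetical) block shape:
    {"type": "reasoning_content", "reasoning_content": {"text": "..."}}
    """
    content = getattr(response, "content", None)
    if isinstance(content, list):
        parts = []
        for block in content:
            if isinstance(block, dict) and block.get("type") == "reasoning_content":
                parts.append(block.get("reasoning_content", {}).get("text", ""))
        return "".join(parts)
    # Non-thinking models return plain-string content: no reasoning to extract.
    return ""


class FakeResponse:
    """Stand-in for an AIMessage-like object, for demonstration only."""

    def __init__(self, content):
        self.content = content
```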

get_azure_credential()

Get Azure credential for authentication.

get_token_usage()

is_gpt_oss_model() → bool

Check if the current model is a gpt-oss model.
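A likely implementation tests the configured model name for the gpt-oss family; the exact matching rule here (case-insensitive substring) is an assumption:

```python
def is_gpt_oss_sketch(model_name) -> bool:
    """Assumed check: the model name contains 'gpt-oss' (e.g. 'gpt-oss-120b')."""
    return bool(model_name) and "gpt-oss" in model_name.lower()
```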

normalize_response_text(response) → str

Extract a plain text string from a LangChain AIMessage-like response.
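A minimal sketch of this normalization, assuming the LangChain convention that an AIMessage's content is either a plain string or a list of strings and text blocks:

```python
def normalize_response_text_sketch(response) -> str:
    """Flatten AIMessage-like content (str, or list of strs/text blocks) to one string."""
    content = getattr(response, "content", response)
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        parts = []
        for block in content:
            if isinstance(block, str):
                parts.append(block)
            elif isinstance(block, dict) and block.get("type") == "text":
                parts.append(block.get("text", ""))
        return "".join(parts)
    # Fall back to the string representation for unexpected content types.
    return str(content)
```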