plexus.cli.procedure.model_config module
Model Configuration System for SOP Agent
This module provides a model-agnostic configuration system that allows different LLM models (GPT-4, GPT-5, O3, etc.) to be configured with their specific parameters without hardcoding model-specific logic.
The configuration system:
1. Allows dynamic parameter passing through model_kwargs
2. Supports different parameter sets for different models
3. Maintains backward compatibility with existing GPT-4 configurations
4. Provides clean separation between model configuration and business logic
- class plexus.cli.procedure.model_config.ModelConfig(model: str = 'gpt-5', temperature: float | None = None, max_tokens: int | None = None, reasoning_effort: str | None = None, verbosity: str | None = None, model_kwargs: Dict[str, Any] = <factory>, openai_api_key: str | None = None, stream: bool = False)
Bases: object
Model configuration that supports any LLM model with dynamic parameters.
This approach lets configuration determine what parameters are valid rather than hardcoding model-specific parameter validation in code.
- __init__(model: str = 'gpt-5', temperature: float | None = None, max_tokens: int | None = None, reasoning_effort: str | None = None, verbosity: str | None = None, model_kwargs: Dict[str, Any] = <factory>, openai_api_key: str | None = None, stream: bool = False) → None
- create_langchain_llm() → ChatOpenAI
Create a LangChain ChatOpenAI instance with this configuration.
This method handles all the parameter passing complexity so that the calling code doesn’t need to know about model differences.
- classmethod from_dict(config_dict: Dict[str, Any]) → ModelConfig
Create ModelConfig from a dictionary (e.g., loaded from YAML/JSON).
This allows configuration to be externalized to config files.
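As a hedged illustration, a `from_dict`-style constructor for a dataclass shaped like the one documented here might look like the sketch below. The field names mirror the documented attributes, but the key-filtering logic is an assumption, not the actual implementation.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

# Minimal sketch of a ModelConfig-like dataclass; fields follow the
# documented attributes, the from_dict filtering is an assumption.
@dataclass
class ModelConfig:
    model: str = "gpt-5"
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    reasoning_effort: Optional[str] = None
    verbosity: Optional[str] = None
    model_kwargs: Dict[str, Any] = field(default_factory=dict)
    openai_api_key: Optional[str] = None
    stream: bool = False

    @classmethod
    def from_dict(cls, config_dict: Dict[str, Any]) -> "ModelConfig":
        # Only keep keys that correspond to declared fields, so extra
        # YAML/JSON keys do not break construction.
        known = set(cls.__dataclass_fields__)
        return cls(**{k: v for k, v in config_dict.items() if k in known})

config = ModelConfig.from_dict({"model": "gpt-4o", "temperature": 0.2})
print(config.model)        # gpt-4o
print(config.temperature)  # 0.2
```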
- max_tokens: int | None = None
- model: str = 'gpt-5'
- model_kwargs: Dict[str, Any]
- openai_api_key: str | None = None
- reasoning_effort: str | None = None
- stream: bool = False
- temperature: float | None = None
- to_langchain_kwargs() → Dict[str, Any]
Convert this config to kwargs suitable for LangChain ChatOpenAI.
This method builds the parameter dict dynamically, only including parameters that are actually set. This allows:
- GPT-4 configs to include temperature
- GPT-5 configs to include reasoning_effort and verbosity
- Any model to include custom parameters via model_kwargs
- Invalid parameters to cause errors at the OpenAI API level
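The "only include what is set" behavior might be sketched as follows. The parameter names come from the documented fields; the function body itself is illustrative, not the library's actual code.

```python
from types import SimpleNamespace

# Hedged sketch of dynamic kwargs assembly: unset (None) parameters are
# omitted, and model_kwargs passes through arbitrary custom parameters.
def to_langchain_kwargs(config) -> dict:
    kwargs = {"model": config.model}
    if config.temperature is not None:
        kwargs["temperature"] = config.temperature
    if config.max_tokens is not None:
        kwargs["max_tokens"] = config.max_tokens
    if config.reasoning_effort is not None:
        kwargs["reasoning_effort"] = config.reasoning_effort
    if config.verbosity is not None:
        kwargs["verbosity"] = config.verbosity
    kwargs.update(config.model_kwargs)  # custom params; invalid ones fail at the API
    return kwargs

config = SimpleNamespace(model="gpt-5", temperature=None, max_tokens=None,
                         reasoning_effort="high", verbosity=None,
                         model_kwargs={"seed": 7})
print(to_langchain_kwargs(config))
# {'model': 'gpt-5', 'reasoning_effort': 'high', 'seed': 7}
```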
- verbosity: str | None = None
- class plexus.cli.procedure.model_config.ModelConfigs
Bases: object
Predefined model configurations for common scenarios.
These serve as examples and can be customized or extended.
- static from_environment() → ModelConfig
Create configuration from environment variables.
This allows runtime configuration without code changes:
- MODEL_NAME=gpt-5
- MODEL_TEMPERATURE=0.3
- MODEL_REASONING_EFFORT=high
- MODEL_VERBOSITY=medium
- etc.
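A hedged sketch of such environment-driven configuration, using the variable names listed above; the defaults and parsing are assumptions, not the actual implementation:

```python
import os

# Illustrative sketch: read the documented environment variables into a
# plain config dict. Variable names follow the docs; the gpt-5 fallback
# and float parsing are assumptions.
def config_from_environment() -> dict:
    cfg = {"model": os.environ.get("MODEL_NAME", "gpt-5")}
    if "MODEL_TEMPERATURE" in os.environ:
        cfg["temperature"] = float(os.environ["MODEL_TEMPERATURE"])
    if "MODEL_REASONING_EFFORT" in os.environ:
        cfg["reasoning_effort"] = os.environ["MODEL_REASONING_EFFORT"]
    if "MODEL_VERBOSITY" in os.environ:
        cfg["verbosity"] = os.environ["MODEL_VERBOSITY"]
    return cfg

os.environ["MODEL_NAME"] = "gpt-5"
os.environ["MODEL_TEMPERATURE"] = "0.3"
print(config_from_environment())  # {'model': 'gpt-5', 'temperature': 0.3}
```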
- static gpt_4o_default() → ModelConfig
Standard GPT-4o configuration.
- static gpt_4o_precise() → ModelConfig
GPT-4o with low temperature for precise tasks.
- static gpt_5_default() → ModelConfig
Standard GPT-5 configuration with supported parameters.
- static gpt_5_high_reasoning() → ModelConfig
GPT-5 configured for complex reasoning tasks.
- static gpt_5_mini_fast() → ModelConfig
GPT-5 mini for fast responses.
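These presets can be thought of as simple factory functions. The concrete parameter values in the sketch below are illustrative guesses, not the library's actual defaults.

```python
# Hypothetical sketch of preset factories like those listed above, returning
# plain config dicts. The specific values (0.1, "high", "medium") are
# assumptions chosen to match each preset's stated purpose.
def gpt_4o_precise() -> dict:
    # Low temperature for precise, deterministic-leaning output.
    return {"model": "gpt-4o", "temperature": 0.1}

def gpt_5_high_reasoning() -> dict:
    # Higher reasoning effort for complex reasoning tasks.
    return {"model": "gpt-5", "reasoning_effort": "high", "verbosity": "medium"}

print(gpt_5_high_reasoning()["reasoning_effort"])  # high
```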
- plexus.cli.procedure.model_config.create_configured_llm(model_config: ModelConfig | Dict[str, Any] | str | None = None, **override_kwargs) → ChatOpenAI
Convenience function to create a configured LLM.
- Args:
model_config: Can be:
- a ModelConfig instance
- a dict to create a ModelConfig from
- a string model name (uses defaults)
- None (uses environment or defaults)
**override_kwargs: Override any parameters
- Returns:
Configured ChatOpenAI instance
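The accepted input forms can be illustrated with a small normalization sketch. `normalize_model_config` is a hypothetical name, and the dispatch logic below is an assumption based on the documented argument types, not the function's actual code.

```python
# Sketch of the input normalization described above: accept a ModelConfig-like
# object, a dict, a bare model-name string, or None, then apply overrides.
def normalize_model_config(model_config=None, **override_kwargs) -> dict:
    if model_config is None:
        cfg = {"model": "gpt-5"}            # assumed fallback to defaults
    elif isinstance(model_config, str):
        cfg = {"model": model_config}       # bare model name
    elif isinstance(model_config, dict):
        cfg = dict(model_config)            # dict form
    else:
        cfg = dict(model_config.__dict__)   # ModelConfig-like object
    cfg.update(override_kwargs)             # explicit overrides win
    return cfg

print(normalize_model_config("gpt-4o", temperature=0.1))
# {'model': 'gpt-4o', 'temperature': 0.1}
```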