plexus.scores.TactusScore module

TactusScore - Score implementation that executes Tactus DSL code.

Tactus is a Lua-based DSL for defining AI agent workflows. This score type allows embedding Tactus code directly in YAML configuration for classification.

Uses the Tactus runtime with in-process execution (no Docker containers) for high-volume Plexus scenarios with trusted code.

class plexus.scores.TactusScore.TactusScore(**parameters)

Bases: Score, LangChainUser

Score that executes embedded Tactus DSL code for classification.

Uses Tactus runtime with in-process execution (no containers) for high-volume Plexus scenarios with trusted code.

Data flow:
  1. Agent analyzes transcript and returns text response
  2. Lua code in Procedure parses/extracts what it needs
  3. Procedure returns {value, explanation, confidence}
  4. TactusScore maps Procedure output to Score.Result (sketched below)
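The mapping in step 4 can be pictured with the following Python sketch. The actual Score.Result constructor is not documented on this page, so the keyword arguments below (value, explanation, confidence) and the import path are assumptions based on the fields the Procedure returns:

    # Hypothetical sketch of step 4 only. The Score.Result keyword arguments
    # and the import path are assumptions; the real constructor may differ.
    from plexus.scores.Score import Score  # assumed import path

    def map_procedure_output(output: dict) -> Score.Result:
        return Score.Result(
            value=output["value"],                      # classification label
            explanation=output.get("explanation", ""),  # agent's response text
            confidence=output.get("confidence"),        # optional; may be None
        )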

Example YAML:

    class: TactusScore
    model_provider: ChatOpenAI
    model_name: gpt-4o-mini
    tactus_code: |
      classifier = Agent {
        system_prompt = "Classify sentiment as positive, negative, or neutral..."
      }

      Procedure {
        input = {text = field.string{required = true}},
        output = {value = field.string{required = true}},
        function(input)
          local response = classifier({message = input.text})
          -- Parse the agent's response to extract classification
          local value = "neutral"
          if response:lower():find("positive") then
            value = "positive"
          elseif response:lower():find("negative") then
            value = "negative"
          end
          return {value = value, explanation = response}
        end
      }

Initialize TactusScore with Tactus code and model configuration.

class Parameters(*, model_provider: Literal['ChatOpenAI', 'AzureChatOpenAI', 'BedrockChat', 'ChatVertexAI', 'ChatOllama'] = 'AzureChatOpenAI', model_name: str | None = None, base_model_name: str | None = None, reasoning_effort: str | None = 'low', verbosity: str | None = 'medium', model_region: str | None = None, temperature: float | None = 0, top_p: float | None = 0.03, max_tokens: int | None = 500, logprobs: bool | None = False, top_logprobs: int | None = None, scorecard_name: str | None = None, name: str | None = None, id: str | int | None = None, key: str | None = None, dependencies: List[dict] | None = None, data: dict | None = None, number_of_classes: int | None = None, label_score_name: str | None = None, label_field: str | None = None, tactus_code: str, valid_classes: List[str] | None = None, output: Dict[str, str] | None = None)

Bases: Score.Parameters, LangChainUser.Parameters

Configuration parameters for TactusScore.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

model_config = {'protected_namespaces': ()}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

output: Dict[str, str] | None
tactus_code: str
valid_classes: List[str] | None
__init__(**parameters)

Initialize TactusScore with Tactus code and model configuration.

async classmethod create(**parameters) → TactusScore

Factory method for async initialization.
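For example, a TactusScore can be built asynchronously from the same keyword arguments listed under Parameters above. This is a sketch, not a verbatim recipe: the score name is hypothetical and TACTUS_CODE stands in for the Lua source from the YAML example:

    # Sketch only: keyword names mirror the Parameters signature above.
    from plexus.scores.TactusScore import TactusScore

    TACTUS_CODE = "..."  # abbreviated; the Lua source from the YAML example

    async def build_score() -> TactusScore:
        return await TactusScore.create(
            name="sentiment",                 # hypothetical score name
            model_provider="ChatOpenAI",
            model_name="gpt-4o-mini",
            tactus_code=TACTUS_CODE,
            valid_classes=["positive", "negative", "neutral"],
        )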

async predict(model_input: Input, **_kwargs: Any) → Result | List[Result]

Execute Tactus procedure and return classification result.

Parameters

model_input : Score.Input

The input data containing text and metadata

Returns

Score.Result

The prediction result with value and explanation
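Continuing the sketch from create() above, a prediction call might look like this; the Score.Input fields (text, metadata) and the .value/.explanation attributes on the result are assumptions drawn from the descriptions on this page:

    # Sketch only: Score.Input fields and result attributes are assumed from
    # the parameter and return descriptions above.
    import asyncio

    from plexus.scores.Score import Score  # assumed import path

    async def main() -> None:
        score = await build_score()  # from the create() sketch above
        result = await score.predict(
            Score.Input(text="I love this product!", metadata={})
        )
        # predict may also return a list of results
        print(result.value, result.explanation)

    asyncio.run(main())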