15  Secretless Execution: Brokers, Tool Runners, and Trust Boundaries

Container isolation is a huge step forward: it keeps an agent from vandalizing your laptop and it makes filesystem side effects ephemeral by default.

But from an information security perspective, containers don’t automatically answer the most important question:

Where are the secrets?

If your runtime (container or process) contains OPENAI_API_KEY, SMTP credentials, database tokens, or cloud keys, then prompt injection and tool misuse can still become data exfiltration.

This chapter is about the guardrail that InfoSec teams love because it changes the game:

Make the runtime secretless.

When there’s nothing to steal, a whole class of attacks collapses.

15.1 Current Status: The Brokered Sandbox Runtime

Tactus’s “brokered sandbox runtime” changes the default local security posture:

  • the runtime container is secretless (Tactus refuses to pass common API-key env vars into the container)
  • the runtime container is networkless by default (sandbox.network: none)
  • privileged operations (currently: LLM API calls, plus a tiny allowlisted set of host tools) run in a host-side broker and stream results back to the runtime

In local development, the default uses sandbox.broker_transport: stdio, which means the runtime container does not need networking to talk to the broker.

If you run with a remote broker (for example, to explore cloud/Kubernetes-style deployment), you can use sandbox.broker_transport: tcp (or tls). In that mode the runtime needs network access (sandbox.network != none), so you rely on infrastructure controls (egress policies/security groups) to ensure the runtime can only reach the broker.
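As a concrete illustration, a sidecar configuration for the remote-broker mode might look like the following. The `sandbox.network` and `sandbox.broker_transport` keys come from the description above; the `bridge` network value is an assumption for this sketch, and the exact schema should be checked against the Tactus documentation:

```yaml
# my_procedure.tac.yml — hypothetical remote-broker setup
sandbox:
  network: bridge          # any value other than "none"; TCP/TLS needs networking
  broker_transport: tls    # or "tcp"; the local default is "stdio"
```

With `stdio` (the default), neither line is needed: the runtime stays networkless and still reaches the broker.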

15.2 Roadmap: From Brokers to Full Tool Runners

The brokered runtime is the MVP step toward a more complete execution model:

What works today:

  • Local Docker sandbox with sandbox.network: none and sandbox.broker_transport: stdio (default)
  • Remote-style broker connectivity with sandbox.broker_transport: tcp|tls (requires sandbox.network != none)
  • Brokered host tools via a deny-by-default allowlist (currently very small)

What’s intentionally not done yet:

  • A full tool runner system (sandbox/isolated/host runners for arbitrary tools)
  • Tool discovery conventions and packaging/manifest workflows

15.3 The Core Idea: Treat the Runtime as Untrusted

By design, a Tactus procedure mixes:

  • untrusted components: model behavior, untrusted inputs, and .tac orchestration code
  • privileged components: model provider keys, email credentials, databases, internal APIs

The architectural goal is to make the boundary between them explicit and enforceable.

In practical terms:

  • the runtime runs .tac code and checkpoints
  • the runtime has no secrets
  • privileged actions happen in a broker or tool runner that enforces policy

This is a trust-boundary story, not a prompt story.

15.4 A Reference Architecture (Conceptual)

Here’s a common deployment shape:

  1. Runtime (untrusted): executes .tac code, runs agent turns, stores state/checkpoints, renders HITL requests.
  2. Broker (trusted): holds credentials and performs privileged operations (model calls, credentialed tools).
  3. Tool runners (mixed): execute tool code in different trust zones depending on risk.

You can picture the trust boundary like this:

             +------------------------------+
             |          TRUSTED             |
             |  Broker + credential store   |
             |  - model provider keys       |
             |  - SMTP / DB / internal APIs |
             +---------------+--------------+
                             |
                             | narrow RPC/tool interface
                             |
 +---------------------------v---------------------------+
 |                        UNTRUSTED                      |
 | Runtime container (per execution)                     |
 | - Lua VM + procedure orchestration                    |
 | - state/checkpoints                                   |
 | - NO secrets                                          |
 | - restricted filesystem/network                       |
 +-------------------------------------------------------+

Two properties are doing most of the security work here:

  • no ambient authority in the runtime (no secrets, no broad OS/network access)
  • narrow interfaces to privileged operations (tools that can be validated, logged, and policy-checked)

This is the principle of least privilege made concrete: remove ambient authority from the untrusted runtime, and force privileged work through narrow, auditable interfaces.
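To make the “narrow interface” property concrete, here is a conceptual sketch of a deny-by-default broker entry point. This is illustrative Python, not Tactus source: the `ALLOWED_TOOLS` table and `broker_call` function are hypothetical names, and a real broker would add authentication, logging, and policy checks at the marked point.

```python
# Conceptual sketch (not Tactus source): a deny-by-default broker interface.
# Every privileged call is validated against an explicit allowlist before
# it can touch credentials the runtime never sees.

ALLOWED_TOOLS = {
    "llm.complete": {"model", "messages"},      # model calls
    "email.send": {"to", "subject", "body"},    # credentialed tool
}

def broker_call(tool: str, args: dict) -> dict:
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allowlisted: {tool}")
    unexpected = set(args) - ALLOWED_TOOLS[tool]
    if unexpected:
        raise ValueError(f"unexpected arguments: {sorted(unexpected)}")
    # ... audit log, policy checks, then dispatch using broker-held secrets ...
    return {"tool": tool, "status": "dispatched"}
```

The important design choice is that the runtime can only name a tool and pass validated arguments; it never holds the credentials the dispatch uses.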

15.5 A Key Trust Boundary: .tac Code vs .tac.yml Config

Tactus draws a sharp line between sandboxed procedure code and trusted configuration:

  • .tac files are sandboxed Lua orchestration code — safe for user contributions, AI generation, and public sharing.
  • Sidecar YAML files (for example, procedure.tac.yml) are trusted configuration — they can mount host paths, configure network access, and change runtime settings.

If you accept user-contributed procedures, accept only the .tac files. Treat any sidecar .yml as code-review territory.

15.6 Volume Mounting: Sharing Files with Procedures

By default, Tactus mounts your current directory to /workspace:rw inside the container, making it easy for procedures to read and write project files. This design choice balances convenience with safety:

Why it’s safe:

  • Container isolation: the procedure can only access files in the mounted directory, not your entire filesystem
  • Git version control: you can review all changes with git diff and easily roll back if needed
  • Project scope: only your current project is exposed, not your home directory or system files
  • Review workflow: inspect changes before committing, just like reviewing code

Accessing files from Lua code:

-- Read a file from your project
local content = File.read("/workspace/data.txt")

-- Write results back to your project
File.write("/workspace/output.json", result_json)

-- List files in your project
local files = File.list("/workspace")

Mounting external data: If your procedure needs to access files from outside the current directory (like a sibling repository or shared data directory), use a sidecar configuration file:

# my_procedure.tac.yml
sandbox:
  volumes:
    - "../other-repo:/workspace/external:ro"  # Read-only access to sibling repo
    - "/data/shared:/data:ro"                  # Read-only shared data

Disabling the default mount: For procedures that should have limited filesystem access (like output-only workflows or untrusted procedures), you can disable the automatic current directory mount:

# restricted_procedure.tac.yml
sandbox:
  mount_current_dir: false  # Disable default mount
  volumes:
    - "./output:/workspace/output:rw"  # Only mount output directory

When to disable the default mount:

  • Running untrusted procedures from unknown sources
  • Output-only workflows (reports, builds) that don’t need source access
  • Production deployments with strict permission requirements
  • Multi-tenant systems where procedures share a runtime

The volume mounting system is an example of Tactus’s trust boundary philosophy: .tac procedure files are sandboxed and safe to share, while .tac.yml sidecar files are trusted configuration that controls what the procedure can access.

15.7 Applying It to the Running Example

In Part II, our workflow can draft and send a recap email (stubbed). In production, “send email” is exactly the kind of step that wants to live behind a trusted boundary:

  • it uses credentials
  • it has external side effects
  • it needs policy (recipient restrictions, approvals, logging)

With a secretless runtime:

  • the runtime never sees email credentials
  • the runtime can’t directly talk to SMTP/email APIs
  • the only way to send is through a tool interface that the broker controls

At the .tac level, the workflow stays readable and high-level. The security posture comes from execution boundaries.
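One way to picture the broker side of the email step is a gate that enforces policy before any credentials are used. This is a hedged Python sketch, not Tactus source: `send_recap_email` and the domain allowlist are hypothetical, and a real implementation would also log the attempt and possibly require HITL approval.

```python
# Illustrative broker-side gate for an email tool (not Tactus source).
# SMTP credentials live only on this side; the runtime can only *request* a send.

ALLOWED_DOMAINS = {"example.com"}  # hypothetical recipient policy

def send_recap_email(to: str, subject: str, body: str) -> bool:
    domain = to.rsplit("@", 1)[-1].lower()
    if domain not in ALLOWED_DOMAINS:
        # Refuse before any credentialed work happens; the runtime sees a
        # policy error, never the credentials that would have been used.
        raise PermissionError(f"recipient domain not allowed: {domain}")
    # ... here the broker would use its SMTP credentials to actually send ...
    return True
```

A compromised runtime can ask for a send, but it cannot widen the recipient policy or read the credentials.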

15.8 Tool Runners as Trust Zones

Not all tools are equal. Some are pure computation (safe). Some are side-effectful (dangerous). Some require secrets (sensitive).

The direction is a runner model where tool execution happens in different trust zones. A useful mental model is to define “runner” modes:

  • In-sandbox runner: tool executes alongside the runtime (safe tools, no secrets).
  • Isolated runner: tool executes in a separate container/process with scoped secrets (email, DB).
  • Host runner: tool executes on the host (only for explicitly trusted code; often avoided in multi-tenant systems).

The point isn’t the exact taxonomy; it’s that you treat tool execution as part of your threat model and choose a trust zone intentionally.

In practice, you often want multiple boundaries at the same time. For example: you might want to run tool code in an isolated environment (to protect the host and avoid cross-run leakage), while still keeping credentials out of the agent runtime entirely. That’s what the broker boundary enables: the runtime stays secretless, and privileged operations happen behind a narrow, auditable interface.

15.8.1 A practical rule of thumb

  • If a tool has secrets, keep it out of the runtime.
  • If a tool can write files or execute code, run it in an ephemeral environment.
  • If a tool can cause irreversible side effects, gate it with HITL and policy checks.
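The rule of thumb above can be written down as a tiny dispatch function. This is a conceptual sketch: the `ToolSpec` fields and zone names are assumptions for illustration, not a Tactus API.

```python
from dataclasses import dataclass

@dataclass
class ToolSpec:
    # Illustrative tool metadata; field names are assumptions for this sketch.
    name: str
    needs_secrets: bool = False
    writes_files_or_executes: bool = False
    irreversible: bool = False

def choose_trust_zone(tool: ToolSpec) -> str:
    """Map the rule of thumb to a runner / trust zone, most restrictive first."""
    if tool.irreversible:
        return "isolated+hitl"   # gate with human approval and policy checks
    if tool.needs_secrets:
        return "broker"          # keep credentials out of the runtime entirely
    if tool.writes_files_or_executes:
        return "isolated"        # ephemeral environment, discarded after use
    return "in-sandbox"          # pure computation can run beside the runtime
```

The exact taxonomy matters less than the habit: every tool gets an intentional zone rather than defaulting to the most permissive one.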

15.9 Preserving Streaming and Developer Experience

Security boundaries often fail adoption because they make development painful.

Tactus is designed so isolation doesn’t mean “black box execution.” You still want:

  • streaming model output in the IDE/CLI
  • a visible trace of tool calls and stages
  • durable checkpoints you can inspect

The broker boundary can preserve streaming by forwarding tokens/events rather than forcing the runtime to “poll” for results. Conceptually:

  1. runtime asks broker to execute a model call
  2. broker streams tokens back over the same channel
  3. runtime records the stream in the trace and forwards it to the UI
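The three steps above can be sketched with generators. This is conceptual Python, not Tactus source: `broker_stream` stands in for whatever transport the broker actually uses, and the token list is a stand-in for a real model response.

```python
def broker_stream(prompt: str):
    # Stands in for the broker executing a model call with its own API key
    # and streaming tokens back over the RPC channel.
    for token in ["Hello", ", ", "world"]:
        yield token

def runtime_model_call(prompt: str, trace: list) -> str:
    # The runtime never holds a key; it records each token in the execution
    # trace and forwards it to the UI/CLI as it arrives, instead of polling.
    chunks = []
    for token in broker_stream(prompt):
        trace.append({"event": "token", "data": token})  # visible trace
        chunks.append(token)                             # forward to UI
    return "".join(chunks)
```

Because the runtime consumes the stream incrementally, the developer experience (live tokens in the IDE/CLI) survives the trust boundary.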

This matters for credibility: if your security model requires a totally different execution mode in production than in development, you won’t get parity—and parity is how you avoid security regressions.

15.10 Auditability and Policy Enforcement

Brokers and runners aren’t just about hiding secrets. They also give you places to enforce controls that are hard to bolt on later:

  • recipient allowlists and domain restrictions for email tools
  • rate limits and abuse controls
  • content policy checks (PII handling, redaction, blocklists)
  • structured logging of “who called what, when, with which args”
  • storing evidence for approvals/reviews

From an InfoSec perspective, this is moving from “best effort” controls to enforceable policy.
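A minimal version of “who called what, when, with which args” is just a structured record emitted at the broker boundary. A Python sketch, with field names that are illustrative rather than any fixed schema; a real broker would also redact sensitive argument values.

```python
import json
import time

def audit_record(caller: str, tool: str, args: dict) -> str:
    # Structured, append-only evidence of a privileged call. Emitted by the
    # broker *before* dispatch, so refused or failed calls are logged too.
    entry = {
        "ts": time.time(),
        "caller": caller,   # e.g. an execution or tenant id
        "tool": tool,
        "args": args,       # redact sensitive fields in a real system
    }
    return json.dumps(entry, sort_keys=True)
```

Because every privileged call funnels through one chokepoint, this log is complete by construction, which is exactly what “enforceable policy” requires.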

15.11 Multi-Tenant Isolation (Why Per-Execution Matters)

If you ever want to run agent workflows for multiple users, you need to prevent information linkage between sessions. Common failure modes include:

  • a session writes a file that a later session can read
  • shared tool runners cache data across tenants
  • logs and traces are accessible across tenants

The combination of:

  • per-execution containers
  • explicit volume mounts
  • secretless runtimes
  • per-tenant tool allowlists

…is how you build a multi-tenant story that security professionals can support.

The goal is not “agents can never do anything dangerous.” The goal is:

  • dangerous actions are explicit, gated, and auditable
  • secrets are held only in trusted components
  • tenant boundaries are enforced by isolation, not by convention

15.12 Looking Ahead

At this point, we have strong guardrails:

  • capability control and staged tool access
  • Lua sandboxing for untrusted orchestration code
  • container isolation for filesystem/code execution safety
  • secretless execution via brokers (and an expanding runner model)

Next, we switch from “don’t let it do something dangerous” to “make sure it reliably does the right thing”: behavior specifications and evaluations.