5 Your First Agent
This chapter is your hands-on starting point: define an agent, send it a message, and get a reliable, typed result back.
If you haven’t installed Tactus yet (or cloned this book repo), do that first in the Installation and Setup chapter.
By the end, you’ll have a small workflow that turns messy meeting notes into a draft recap email. It won’t be “production ready” yet—but it will be real, runnable, and it will establish the shape we’ll keep extending through Part II.
5.1 The Running Example: A Meeting Recap Draft
Most people’s first useful agent isn’t “research the web” or “plan a vacation.” It’s something boring and recurring:
- Someone pastes messy notes (a transcript, a rough outline, a set of bullets).
- You want a clean recap email.
- You want the workflow to be safe (no accidental sends) and reliable (no fragile scripts).
We’ll start with the simplest useful version: draft the email.
5.2 What an `agent { ... }` Is (and Isn’t)
In Tactus, an agent is a configured LLM “worker”:
- It has a provider and model
- It has a system prompt (and optional initial message)
- It may have tools
- You invoke it with callable syntax: `recapper()` or `recapper({message = "..."})`
An agent is not your control flow. You still write the loop, the branching, the checks, and the safety gates in deterministic Lua code. That split is the whole point: use intelligence where it helps, use code where it must be correct.
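Concretely, a minimal agent definition follows this shape. This is a sketch: the field names (`provider`, `model`, `system`) are assumptions inferred from the bullet list above, not verified Tactus syntax.

```lua
-- Hypothetical sketch of an agent definition; field names are assumed,
-- not copied from the Tactus reference.
recapper = agent {
  provider = "openai",      -- assumed provider identifier
  model = "gpt-4o-mini",    -- the conservative default suggested in 5.4
  system = [[
You draft concise meeting recap emails.
If information is missing, write "TBD" instead of inventing it.
]],
}

-- Invocation uses the callable syntax described above:
local draft = recapper({ message = "..." })
```

The deterministic Lua code around this call is where your loops, branches, and safety gates live; the agent itself stays a narrow, configured worker.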
5.3 A Minimal Drafting Agent
The example file for this chapter is `code/chapter-06/20-meeting-recap-draft.tac`. It:

- Accepts a `raw_notes` input string
- Calls a `recapper` agent once
- Returns a single `draft` string (subject + body in plain text)
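Since the file itself isn’t reproduced here, a sketch consistent with that description might look like the following. Everything beyond the callable-agent convention (the `input`, `output`, and `agent` keywords, and the field names) is an assumption, not the actual file contents.

```lua
-- Sketch of 20-meeting-recap-draft.tac; structure and keywords are
-- assumptions inferred from the chapter, not the real file.
input {
  raw_notes = "string",        -- messy notes pasted by the user
  recipient_name = "string",   -- used in the greeting
}

recapper = agent {
  provider = "openai",
  model = "gpt-4o-mini",
  system = "You turn raw meeting notes into a draft recap email.",
}

-- One call, one string back: subject + body in plain text.
local result = recapper({
  message = "Recipient: " .. recipient_name .. "\nNotes:\n" .. raw_notes,
})

output { draft = result }
```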
Run it (real model call):

```shell
tactus run code/chapter-06/20-meeting-recap-draft.tac \
  --param recipient_name="Sam" \
  --param raw_notes="Discussed Q1 launch timeline. Risks: vendor delays. Action: Sam to confirm dates by Friday."
```

You should get output shaped like:

```json
{"draft": "..."}
```
5.3.1 Mock run (fast, deterministic)
You can also test the file in mock mode:

```shell
tactus test code/chapter-06/20-meeting-recap-draft.tac --mock
```

Mock mode is how this book keeps examples runnable without requiring API keys or network calls.
5.4 Providers, Models, and “Good Defaults”
For Part II we’ll keep provider/model choices conservative:

- Use a capable, low-latency model (for example `gpt-4o-mini`)
- Keep temperature low for business writing (so drafts are stable)
You can override model settings per agent (or even per call), but don’t reach for that until you have a workflow shape you like. Prompt clarity and output validation usually buy you more reliability than tweaking temperature.
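For illustration only, a per-agent override might look like this. The `temperature` field is an assumption; check the Tactus reference for the real option name before relying on it.

```lua
-- Hypothetical: overriding model settings on a single agent.
recapper = agent {
  provider = "openai",
  model = "gpt-4o-mini",
  temperature = 0.2,  -- assumed option; low values keep business drafts stable
}
```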
5.5 Prompts That Support Orchestration
When you’re using an LLM inside a deterministic workflow, a “good prompt” is less about being poetic and more about being operational:
- State the goal and the audience.
- Constrain format (what fields/sections exist).
- Say what to do when information is missing (don’t invent; mark as “TBD” or ask later).
- Keep the agent’s job narrow. (Drafting is one job; sending is another.)
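Applied to our recapper, those four guidelines might produce a system prompt like this. The prompt text is illustrative, not taken from the chapter’s example file.

```lua
-- Illustrative system prompt embodying the four guidelines above:
-- goal + audience, constrained format, missing-info rule, narrow job.
local recap_prompt = [[
You draft recap emails for busy colleagues.
Output exactly two sections: "Subject:" and "Body:".
If a date, owner, or decision is missing, write "TBD"; do not invent details.
Your only job is drafting; never send or schedule anything.
]]
```

Note how each line maps to one guideline, which makes the prompt easy to audit when a draft comes back wrong.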
In the next chapter, we’ll tighten this further by making the agent return structured data instead of free-form text.
5.6 Looking Ahead
Right now, our output is “typed” only in the simplest sense: it’s a string. That’s enough to get momentum, but not enough to build reliably on top of it.
Next up: procedures and outputs—how to give this workflow a clear input contract and a structured output contract, so downstream steps don’t have to scrape text.