What are Agents?
There are many definitions of AI agents depending on which tool, app, or vendor you use to create your "agents", but in Turingpaper they are pretty simple.
An agent is:
- An LLM model, like gpt-4.1 or o3
- The model settings, like temperature or reasoning effort
- Instructions that contextualize the LLM model for specific tasks
- Tool access that allow the model to interact with the outside world
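The four pieces above can be pictured as a simple data structure. The sketch below is purely illustrative — the `Agent` class, its field names, and the `fetch_url` tool are our own inventions, not Turingpaper's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    # Which LLM backs this agent, e.g. "gpt-4.1" or "o3"
    model: str
    # Model settings such as temperature or reasoning effort
    settings: dict = field(default_factory=dict)
    # Generic but domain-specific instructions (system level)
    instructions: str = ""
    # Tools the model may call to interact with the outside world
    tools: list[Callable] = field(default_factory=list)

def fetch_url(url: str) -> str:
    """Hypothetical tool: fetch a page and return its text."""
    ...

researcher = Agent(
    model="gpt-4.1",
    settings={"temperature": 0.2},
    instructions="You are a careful research assistant.",
    tools=[fetch_url],
)
```

The point of the sketch is that an agent is configuration, not code: pick a model, tune its settings, give it domain instructions, and grant it tools.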
The instructions in a Turingpaper automation are not kept in one place. They should be logically spread across prompts, which are focused instructions for accomplishing a specific task in your overall automation, and across agents, which contain more generic but domain-specific instructions.
Prompts are always evaluated by agents. The work of the automation gets done by agents following the instructions of prompts, which in turn instruct the agents to use various tools to interact with the outside world.
The automations created in Turingpaper can easily use multiple agents: simply assign different agents to different prompts. Since prompts can call prompts in Turingpaper, it's trivial for one agent to hand off work to another and get back the results.
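A hand-off like this can be sketched as one prompt calling another, with each prompt assigned to a different agent. Everything below is hypothetical — `PROMPT_AGENTS`, `run_prompt`, and the agent names are stand-ins for however Turingpaper wires prompts to agents, not its real API:

```python
# Each prompt is assigned an agent; one prompt can call another,
# handing work to a different agent and getting the result back.
PROMPT_AGENTS = {
    "fetch_and_extract": "cheap_extractor",  # e.g. backed by o4-mini
    "analyze_findings": "senior_analyst",    # e.g. backed by o3
}

def run_prompt(prompt_name: str, payload: str) -> str:
    # In a real run, the assigned agent's model would evaluate the
    # prompt here; we just record which agent handled which task.
    agent = PROMPT_AGENTS[prompt_name]
    return f"[{agent}] handled {prompt_name}: {payload}"

def analyze(document: str) -> str:
    # The analysis prompt hands extraction off to another prompt,
    # which is evaluated by a different (cheaper) agent.
    extracted = run_prompt("fetch_and_extract", document)
    return run_prompt("analyze_findings", extracted)
```

Because the routing lives in the prompt-to-agent assignment, swapping one agent for another doesn't require touching the prompts themselves.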
When to Use Custom Agents?
The built in Generalist agent has pre-set access to many tools and generic instructions that are helpful when prototyping in Turingpaper. It's a great agent to start with. You can go to Settings → Agents to see its instructions, and even clone the Generalist to base your own custom agent on its instructions.
However, precisely because the Generalist has such generic instructions and access to more tools than most automations need, best practice is to eventually create your own agent and use it in your prompts.
What Model Should an Agent Use?
Every agent must use some LLM model, like gpt-4.1 or o3. We recommend starting with gpt-4.1 because it is well rounded and not too expensive per token, though not cheap either.
After choosing the model, test your automation and see which of these is happening:
- The automation works well with the model.
- The automation DOES NOT seem to work well.
If the automation does not seem to work well with gpt-4.1, it is usually one of two things:
- The instructions in the prompts or the agents are not sufficiently specific and simple, and need improvement.
- Some part of the automation needs a more powerful model, like o3.
The first is by far the more common cause: only about 1% of the time have we found that the automation, or some part of it, actually needs a more powerful model.
When to Use Multiple Agents?
Some prompts have very trivial task instructions, such as fetching content and
extracting some text verbatim. For such tasks an agent based on a model like
o4-mini might be perfect, because it's inexpensive and sufficient to get the
job done.
Other prompts might contain task instructions that are more complex, requiring good reasoning and broad real-world knowledge. In such cases an agent based on a model like o3 might be better at following the instructions in such a prompt.
How are Agents Different from Prompts?
Since prompts contain instructions, including instructions for tool use, how are agents different from prompts? Good question!
Agents are special because, unlike prompts, their instructions are treated as the "system level" or "developer" instructions. Agent instructions have the highest priority, higher than prompt instructions.
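In chat-completion terms, this priority split maps naturally onto message roles: agent instructions take the system (or developer) role, which models are trained to weight above user-level content, while prompt instructions arrive as the user message. The `build_messages` helper below is our own illustrative sketch, not a Turingpaper function:

```python
def build_messages(agent_instructions: str, prompt_instructions: str) -> list[dict]:
    # Agent instructions go in the "system" message, giving them
    # higher priority than the prompt instructions in the "user" message.
    return [
        {"role": "system", "content": agent_instructions},
        {"role": "user", "content": prompt_instructions},
    ]

messages = build_messages(
    "You are a domain expert in invoice processing.",
    "Extract the total amount from the attached invoice text.",
)
```

This is why agent instructions are the right place for generic, domain-level guidance, and prompts the right place for task-specific steps.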