- Getting started
- Prerequisites
- Building agents

Agents user guide
Best practices
This section explains how to design a robust agent that has all of the necessary control, action, and governance required to automate a business process.
Developing effective LLM agents requires a strategic approach that builds upon existing organizational workflows. Start by conducting a thorough audit of current automation processes, to identify repetitive, rule-based tasks that are prime candidates for agentic transformation. This approach minimizes risk and allows for gradual skill-building in agent design.
The initial phase of agent building involves a detailed workflow mapping, where you document each step of existing processes. Take note of decision points, input dependencies, and expected outputs. Look for workflows with clear, structured steps and well-defined success criteria. Examples might include customer support ticket routing, preliminary sales qualification, or standard compliance checking.
The agent design should be specifically focused on the set of tasks in existing automations. That way, the agent can easily be plugged into the agentic activity to run within that workflow.
Look for processes with:
- Repetitive, rule-based decision points
- Clear input and output parameters
- Predictable task sequences
Core principle: Agents are not workflows
- Consider agents as components of a workflow, not as a replacement for structured automation.
- Overloading an agent with too many responsibilities leads to reduced accuracy, increased complexity, and maintainability issues.
- The best practice is to minimize agent responsibilities by focusing on decision-making tasks rather than multi-step processing.
Criteria | Agent | Workflow |
---|---|---|
Decision-making (route, classify, summarize) | ||
Structured data processing (extracting values from a contract) | ||
Multi-step automation (UI interactions, API calls) | ||
Unstructured data reasoning (interpreting ambiguous user input) | ||
Repetitive, rule-based actions (data validation, transformation) | ||
Domain-specific knowledge application |
By starting small, you create a controlled environment for learning agent behavior, understanding prompt engineering nuances, and establishing evaluation frameworks.
By breaking complex workflows into small, specialized agent tasks, you can:
- Create plug-and-play agent configurations.
- Rapidly adapt to changing business requirements.
- Minimize implementation risks.
- Scale your intelligent automation incrementally.
When defining tools for an AI agent, use descriptive, concise names that follow these guidelines:
- Use lowercase, alphanumeric characters (a-z, 0-9)
- Do not use spaces or special characters
- The name should directly reflect the tool's function
- Example tools:
- web_search for internet queries
- code_interpreter for running code
- document_analysis for parsing documents
- data_visualization for creating charts
For details, go to the Tools section.
Bring humans into the agentic loop to help review, approve, and validate the agent's output. You achieve this through escalations. This is crucial to help institute best practices around controlled agency, and to ensure the agent is operating according to plan from the instructions provided in the prompt.
Use escalations to help inform agent memory. Agents learn from their interactions with tools and humans to help calibrate their plan and ongoing run execution. Each escalation is stored in agent memory, configurable from the escalations panel, and referred to before tools are called to help inform the function call and prevent similar escalations in the future. Agent Memory is scoped to runtime, the specific published version of the agent in a deployed process.
For details, go to the Contexts and Escalations and Agent Memory sections.
Prompt engineering is an iterative craft that demands experimentation and nuanced adjustments. Use the Agents workspace to test and iterate your prompt and prompt structure.
Agentic prompts aren't like traditional LLM interactions. They incorporate instruction sets that guide the agent through multi-step reasoning and task decomposition. Unlike basic prompts that request direct output, agentic prompts provide a comprehensive framework for problem-solving. This includes context setting, role definition, step-by-step instructions, and explicit reasoning requirements.
Clear goal and objective
Before developing an agent, you must define its purpose and desired outcomes. This means:
- Articulating specific, measurable objectives.
- Understanding the environment in which the agent will operate.
- Identifying key performance metrics.
- Establishing clear success criteria.
Prompt structure
A well-structured agentic prompt should include:
- Clear role and persona definition
- Explicit task breakdown
- Reasoning methodology instructions
- Error handling and self-correction mechanisms
- Output formatting requirements
- Contextual background information
For example, use the following do's and don'ts list to learn how to structure an effective prompt:
- Do:
- Role definition – Who is the AI acting as? ("You are a customer support assistant...")
- Goal specification – What should it do? ("Answer questions about product pricing and features...")
- Instructions and constraints – Any do’s and don’ts? ("Keep responses under 200 words, avoid technical jargon...")
- Don't:
- Format requirements – Don’t mention a specific structure for the output (for example, "Respond in a numbered list...") as this is already covered in the output.
- Examples – Don’t provide sample inputs and expected outputs as this is already covered in the input and output arguments.
Avoid vague instructions. Be explicit, use a clear tone, and follow the format and structure described in the following example:
#Role
#Goal
#Instructions
#Tools usage instructions
#Constraints
#Error handling and escalation
#Role
#Goal
#Instructions
#Tools usage instructions
#Constraints
#Error handling and escalation
User prompt
#Input variables
#Expected output variables
#Input variables
#Expected output variables
System prompts versus user prompts
-
System prompt: Instructions that guide the AI’s response—define variables (for example,
{{InputName}}
). - User prompt: Input from an end-user—natural and unstructured (but organized).
You can also implement techniques like chain-of-thought prompting, where you explicitly request the agent to articulate its reasoning process. This approach enhances transparency, allows for more precise error tracking, and enables more sophisticated task execution.
For instance, instead of simply asking "Summarize this document," an agentic prompt might specify: "You are a professional research analyst. Break down this complex technical document into key sections. For each section, provide a two-sentence summary and identify potential areas of further investigation. Explain your reasoning for section demarcation and summary approach."
Prompt iteration
Effective iteration involves systematic variation of prompt components:
- Adjust role instructions.
- Modify task decomposition strategies.
- Experiment with reasoning frameworks.
- Test different output formatting requirements.
- Introduce additional contextual details.
The goal is to discover the minimal set of instructions that consistently produce high-quality, reliable agent behaviors. Document the results of each iteration, tracking both qualitative performance and quantitative metrics like response accuracy, completeness, and adherence to specified constraints.
For details, go to the Prompts and arguments section.
Traces give a comprehensive view of the agent's run and what happened at each step of its loop. Traces provide a good way for you to review your agent's output, assess its plan, and iterate on its structure (such as prompt, tools, context used).
Trace logs are critical diagnostic tools for AI agents, offering:
- Detailed step-by-step execution breakdown
- Visibility into decision-making processes
- Identification of potential failure points or inefficiencies
Regular trace review is essential because:
- Agents evolve with changing requirements.
- Unexpected behaviors can emerge over time.
- Performance optimization requires continuous analysis.
- Tool effectiveness may degrade or become obsolete.
For details, go to the Traces section.
Create robust evaluation sets
Agent evaluation requires extensive, representative datasets that challenge the system across multiple dimensions. These datasets should simulate real-world complexity, incorporating variations in:
- Input complexity
- Contextual nuances
- Domain-specific challenges
- Potential edge cases and failure scenarios
Effective dataset development involves:
- Consulting domain experts
- Analyzing historical interaction logs
- Systematically generating synthetic test cases
- Incorporating adversarial examples
- Ensuring statistical diversity
Evaluate multiple characteristics of your agent
Agent evaluation extends beyond simple accuracy measurements. Develop holistic evaluations that consider:
- Accuracy and factual correctness
- Reasoning transparency
- Response creativity
- Contextual relevance
For details, go to the Evaluations section.
Move beyond isolated agent testing by embedding evaluation processes within broader automation contexts. This means creating an agent and including it in an automation workflow using the Run Agent activity. This approach ensures agents perform reliably when interconnected with other systems.
Testing strategies should include:
- End-to-end workflow simulations
- Integration point stress testing
- Cross-system communication validation
- Performance under variable load conditions
- Failure mode and recovery mechanism assessment
For details, see the Running agents section.
- Start from existing workflows
- Decision framework: agent versus workflow
- Build small, specialized agentic tasks
- Provide clear names and descriptions for your tools
- Involve humans to help the agent learn
- Create well-structured prompts and iterate on them
- Review traces and trace logs
- Evaluate your agent
- Test your agent