Writing effective prompts
Prompts are how you communicate with Autopilot™: the instructions you use to obtain the desired output. Keep the following guidelines in mind when writing prompts:
- Ensure that your instructions are clear and unambiguous.
- Create instructions that encourage action.
- Clearly state your expectations.
- Use active voice to enhance the clarity of your instructions.
- Define the desired format of the output.
- Incorporate relevant keywords to steer Autopilot’s response in a specific direction.
- Set boundaries and restrictions if necessary.
- Test different versions of your instructions and refine as needed.
- Pay attention to grammar and punctuation.
- Be mindful of Autopilot’s limitations.
Here are some examples of how you can write effective prompts for expressions in Studio.
- Find the next Sunday date.
- Download emails received today with the date format "dd/mm/yyyy".
- Convert from format "MM/dd/yyyy hh:mm:ss" to format "yyyy-MM-dd hh:mm:ss".
- Add a one-second delay.
- Get last 4 digits.
- Get filename from full path.
- Verify if the result is a palindrome.
- Return the first palindrome number greater than 152.
- Fix the expression by declaring and initializing the variable before calling the first method on it.
- Store the list of strings inside an array.
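To give a sense of the output, here is a minimal sketch of the kind of C# expressions such prompts might yield. The sample values and variable names are our own assumptions; the actual result depends on your project's language settings and the exact prompt wording.

```csharp
using System;
using System.Globalization;
using System.IO;
using System.Linq;

// "Find the next Sunday date." (always 1 to 7 days ahead of today)
DateTime nextSunday = DateTime.Today.AddDays(7 - (int)DateTime.Today.DayOfWeek);

// "Convert from format 'MM/dd/yyyy hh:mm:ss' to format 'yyyy-MM-dd hh:mm:ss'."
string converted = DateTime
    .ParseExact("12/31/2024 09:15:30", "MM/dd/yyyy hh:mm:ss", CultureInfo.InvariantCulture)
    .ToString("yyyy-MM-dd hh:mm:ss");

// "Get filename from full path."
string fileName = Path.GetFileName(@"C:\Reports\2024\summary.pdf"); // "summary.pdf"

// "Verify if the result is a palindrome."
string result = "level";
bool isPalindrome = result.SequenceEqual(result.Reverse()); // true

Console.WriteLine($"{nextSunday:yyyy-MM-dd} | {converted} | {fileName} | {isPalindrome}");
```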
Here are some examples of how you can write effective prompts for workflows in Studio.
- When a new PDF is created in OneDrive, split its pages into separate files.
- Combine all PDF files in a OneDrive folder into a single PDF file and upload the merged file to a specified folder.
- Every Saturday, connect to our OneDrive and back up to AWS cloud storage all the new files added in the 'Projects' folder during the week.
- Upload signed documents from DocuSign to Dropbox.
- Send the recording on Slack once it is ready on Zoom.
- Send an SMS message via Twilio when a high-priority incident is created in ServiceNow.
- When a new row is added to the vendors table, notify the team over Slack and confirm via Microsoft Outlook.
- Add a new line to an Excel spreadsheet for every unread email in a Microsoft Outlook folder, then mark the email as read.
- Create a flow that reads through the emails in a particular folder using Microsoft 365. Then download the attachments, considering only PDF files, and read the text from each PDF.
- Extract data from a new invoice file in OneDrive and store it in Excel.
- Notify me on Teams when a critical bug is created in Jira.
- Extract the latest Bitcoin data from Yahoo Finance and write it to an Excel file.
- Extract data from a new invoice file in Google Drive and store it in Google Sheets.
- Download new Zoom Recordings as video files and upload them to Google Drive.
- Trigger an automation from Gmail and store the attachment in Google Drive.
- Create a new entry in Google Sheets for a new customer support ticket from Zendesk.
- Extract the latest 100 emails from Gmail from the current month and create a Google Sheets Report with the sender and subject.
- For new invoices received in Gmail, create an expense report using Expensify.
- Summarize a new Gmail email using OpenAI and share the summary via Slack.
- For a new Salesforce lead, generate a personalized email using OpenAI and send the email via Outlook.
- When a Salesforce opportunity is won, post a kudos message to Slack.
- Send me a message on Teams when a new lead is created in Salesforce.
- Whenever a lead's status changes in Salesforce, send a notification on Slack to the sales team with the lead's details.
You can evaluate your requirements against specific quality aspects, for example:
- Security aspects like access, protection, authentication, vulnerability, and compliance.
- Performance aspects like response times, throughput, scalability, resource usage, and load handling.
You can use out-of-the-box prompts from the Prompt Library in Test Manager to help analyze your requirements, and you can also add your own custom prompts to the Prompt Library for future requirement evaluations.
Visit Quality-check requirements - Best practices for the guidelines available for evaluating requirements.
To generate relevant manual test cases, make sure your requirement includes:
- A concise, user-focused statement that highlights the purpose of the requirement.
- A comprehensive description of the application logic showing the user journey.
- Clear, measurable acceptance criteria, including both positive and negative scenarios.
You can provide supporting documents, such as process diagrams and mockups, compliance documents, and discussion transcripts, to give Autopilot™ additional context to generate more accurate and relevant test cases.
You can use out-of-the-box prompts from the Prompt Library in Test Manager to help generate manual tests, and you can also add your own custom prompts to the Prompt Library for future test generations.
Visit Generate tests for requirement - Best practices for the guidelines available for generating test cases using Autopilot™.
To convert text into code, you can give Autopilot instructions to generate any C# code, refactor existing code, or generate a UiPath automation.
For more information, visit Convert text into code - Best practices.
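As an illustration, a prompt such as "Write a method that returns the first palindrome number greater than a given value" (adapted from the expression prompts above) might yield code along these lines. This is a sketch, not Autopilot's guaranteed output, and the method and variable names are our own:

```csharp
using System;
using System.Linq;

Console.WriteLine(FirstPalindromeAfter(152)); // prints 161

// Returns the first palindrome number strictly greater than the given value.
static int FirstPalindromeAfter(int value)
{
    int candidate = value + 1;
    while (!IsPalindrome(candidate))
        candidate++;
    return candidate;
}

// A number is a palindrome if its decimal digits read the same both ways.
static bool IsPalindrome(int n)
{
    string s = n.ToString();
    return s.SequenceEqual(s.Reverse());
}
```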
To convert manual test cases into automation, you need a consistent object repository, because Autopilot uses UI Automation capabilities to reference UI elements. It's important to maintain a consistent naming convention for UI elements within manual steps to ensure that the generated automation is relevant. You should also use common activity names in manual steps so they can be easily converted into corresponding UiPath APIs in Studio Desktop.
For more information, visit Automate manual tests - Best practices.
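To illustrate why consistent naming matters, consider the hypothetical sketch below. The element descriptors and helper methods stand in for object repository names and generated UI Automation calls; they are not UiPath's actual coded-automation API.

```csharp
using System;

// Manual step: Type Into 'Login Screen' > 'Username': the user name
// Manual step: Click 'Login Screen' > 'Submit'
//
// Because the steps reuse the object repository names ("Login Screen",
// "Username", "Submit") and common activity names ("Type Into", "Click"),
// each step maps one-to-one onto a generated call:
TypeInto("Login Screen/Username", "alice.smith");
Click("Login Screen/Submit");

// Hypothetical stand-ins for the generated UI Automation calls.
static void TypeInto(string descriptor, string text) =>
    Console.WriteLine($"Type '{text}' into {descriptor}");
static void Click(string descriptor) =>
    Console.WriteLine($"Click {descriptor}");
```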
When you generate synthetic test data, Autopilot considers the existing arguments within your workflow and the additional instructions provided in the prompt to generate test data. You can also provide instructions to follow a certain combination of data, or to customize your data set.
For more information, visit Generate synthetic test data - Best practices.
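As a sketch of what this can look like, suppose the workflow under test exposes username and password arguments and you prompt for valid, negative, and boundary combinations. All names and values below are hypothetical:

```csharp
using System;

// Hypothetical synthetic data set for a prompt such as:
// "Generate three rows covering a valid login, an empty username,
// and an overly long username."
var syntheticData = new[]
{
    new LoginTestData("alice.smith",        "S3cure!Pass", true),  // valid combination
    new LoginTestData("",                   "S3cure!Pass", false), // negative: empty username
    new LoginTestData(new string('x', 300), "S3cure!Pass", false), // boundary: 300-char username
};

foreach (var row in syntheticData)
    Console.WriteLine(row);

// Mirrors the in/out arguments the workflow under test would expose.
record LoginTestData(string Username, string Password, bool ExpectSuccess);
```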
Autopilot™ in Test Manager provides insights into failed test cases and recommendations for reducing the failure rate in your test portfolio. The more test results you provide when you generate the report, especially failed test cases, the more effective it is. The goal of the test insights is to help you understand the main reasons why your tests are failing.
- Common Errors: groups similar error messages semantically to highlight the most frequent issues.
- Error Patterns: categorizes failed test cases into broader categories. These categories identify recurring themes and systemic problems, providing a clearer understanding of the underlying issues in your test execution.
- Recommendations: provides actionable recommendations for enhancements, designed to guide your next steps in optimizing the stability of your test execution.