
Test Suite Release Notes
February 2025
Reporting with Insights
- Test Manager Execution Report: Helps you analyze important execution metrics such as accumulated daily or weekly test results, automation rates for test executions, and user or robot details for each test execution.
- Test Manager Additional Examples: Helps you evaluate critical Test Manager metrics, such as top machines with errors or results grouped by status.
For more information on enabling the integration, visit Tenant level settings, and for additional details on reporting with Insights, visit Reporting with Insights.
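As an illustration of the kind of metrics these reports surface, the following sketch aggregates a hypothetical export of test execution records into daily results by status and an overall automation rate. The file name and column names are assumptions made for the example; they are not the Insights data model.

```python
# Minimal sketch (not the Insights data model): computing the kind of metrics the
# Test Manager Execution Report surfaces, from a hypothetical CSV export of test
# executions with columns: date, status, is_automated, executed_by.
import pandas as pd

# Hypothetical export of test execution records.
df = pd.read_csv("test_executions.csv", parse_dates=["date"])

# Accumulated daily results grouped by status (e.g. Passed/Failed).
daily_results = (
    df.groupby([df["date"].dt.date, "status"])
      .size()
      .unstack(fill_value=0)
)

# Automation rate: share of executions run by an automated test.
automation_rate = df["is_automated"].mean()

print(daily_results)
print(f"Automation rate: {automation_rate:.1%}")
```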
Selecting a robot account for executing test sets
To improve your experience, we have extended the configuration options for test set executions. Now, in addition to selecting test cases from a specific Orchestrator folder and choosing a particular package version, you can also designate a specific robot account to execute the test set. For more information on configuring test set runs, visit Configuring test sets for specific execution folders and robots.
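For teams that script their test set runs, the sketch below shows how this selection might look when triggered through the Orchestrator Test Automation API. The endpoint path, parameter names, and especially the robot-account field are assumptions for illustration only; refer to the linked documentation for the supported way to configure this.

```python
# Minimal sketch, not the official Orchestrator API surface: the endpoint path,
# query parameters, and the robot-account field below are assumptions used only
# to illustrate the configuration options (folder, package version, robot account).
import requests

ORCHESTRATOR_URL = "https://cloud.uipath.com/your-org/your-tenant/orchestrator_"  # assumed base URL
TOKEN = "..."  # bearer token obtained through your usual authentication flow


def start_test_set_execution(test_set_key: str, folder_id: int, robot_account_id: int) -> dict:
    """Trigger a test set run scoped to a folder, pinned to a specific robot account (hypothetical)."""
    response = requests.post(
        f"{ORCHESTRATOR_URL}/api/TestAutomation/StartTestSetExecution",  # assumed path
        params={"testSetKey": test_set_key, "triggerType": "Manual"},    # assumed parameters
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "X-UIPATH-OrganizationUnitId": str(folder_id),  # folder scoping header
        },
        json={"robotAccountId": robot_account_id},  # hypothetical field for robot selection
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```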
Use Context Grounding for generating test cases
Boost Autopilot's speed and efficiency when generating test cases by using Context Grounding for requirements and SAP transactions. Context Grounding relies on Retrieval-Augmented Generation (RAG) to enhance the accuracy of your generated test cases. The benefit of using Context Grounding and RAG when generating tests is that it gives Autopilot the context of your organization and applications, so it can generate more accurate tests in less time. If you are already using Autopilot, you are also familiar with the AI Trust Layer, of which Context Grounding is a component. To start leveraging Context Grounding and RAG, visit About Context Grounding and Best practices. For information on choosing Context Grounding when generating tests, visit Generate test cases for a specific transaction, Generate tests for impacted transactions, and Generate tests for requirements.
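For readers new to RAG, the following conceptual sketch shows the core idea behind Context Grounding: retrieve the most relevant snippets of organizational context and pass them to the model along with the generation request. It is a simplified illustration with a naive keyword ranking, not Autopilot's implementation.

```python
# Conceptual sketch of Retrieval-Augmented Generation (RAG), the technique behind
# Context Grounding: retrieve relevant organizational context and include it in the
# prompt sent to the model. Illustration only, not Autopilot's code.
from collections import Counter

# Hypothetical indexed context: requirement docs, SAP transaction notes, etc.
CONTEXT_STORE = [
    "ME21N creates purchase orders; mandatory fields: vendor, material, quantity.",
    "VA01 creates sales orders; credit checks run before order confirmation.",
    "Requirement REQ-42: invoice approval needs a four-eyes check above 10,000 EUR.",
]


def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank stored snippets by naive term overlap with the query (stand-in for a vector search)."""
    query_terms = Counter(query.lower().split())
    scored = [
        (sum(query_terms[term] for term in doc.lower().split()), doc)
        for doc in CONTEXT_STORE
    ]
    return [doc for score, doc in sorted(scored, reverse=True)[:k] if score > 0]


def build_grounded_prompt(task: str) -> str:
    """Augment the generation request with retrieved context before calling the model."""
    context = "\n".join(retrieve(task))
    return f"Context:\n{context}\n\nTask:\n{task}"


print(build_grounded_prompt("Generate test cases for purchase order creation (ME21N)"))
```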
Considering existing tests during test generation
To streamline testing and avoid duplication, Autopilot now considers existing tests when generating new ones. This means that when you generate tests from a requirement, Autopilot avoids duplicating the manual and automated tests already linked to that requirement. If you want Autopilot to disregard already linked test cases, you can explicitly instruct it, in the Provide additional guidance step, not to consider tests that are linked to the requirement. For more information, visit Generating tests for requirements.
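Conceptually, this behaves like filtering generated candidates against the tests already linked to the requirement. The sketch below illustrates the idea with a simple name-similarity check; it is an assumption-based illustration, not Autopilot's actual de-duplication logic.

```python
# Illustrative sketch of the de-duplication idea, not Autopilot's implementation:
# drop generated test candidates whose names closely match tests already linked
# to the requirement.
from difflib import SequenceMatcher


def is_duplicate(candidate: str, existing: list[str], threshold: float = 0.8) -> bool:
    """Treat a candidate as a duplicate if it is highly similar to any existing test name."""
    return any(
        SequenceMatcher(None, candidate.lower(), name.lower()).ratio() >= threshold
        for name in existing
    )


existing_tests = ["Verify login with valid credentials", "Verify login lockout after 3 failures"]
generated = ["Verify login with valid credentials", "Verify password reset via email"]

new_tests = [t for t in generated if not is_duplicate(t, existing_tests)]
print(new_tests)  # ['Verify password reset via email']
```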
Bug fixes
- Recommendations from the test insights report did not display the specific test cases they were associated with.
Deprecation timeline
We recommend that you regularly check the deprecation timeline for any updates regarding features that will be deprecated and removed.