Running tests is the shared workflow used by both release executions and standalone test runs.
During testing, users open a selected test case, review its steps, record the result, and capture notes or evidence at the step where the observation happened.
To run a test case, start by reviewing the available test case information, including preconditions, steps, expected results, and attachments, before recording any result.
At the test case level, choose one overall result. Use Failed when the product behavior does not match the expected result. Use Blocked when the tester cannot proceed because of an environment, dependency, access, data, or setup issue.
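The distinction between Failed and Blocked can be sketched as a small decision rule. This is a minimal illustration, not the tool's actual API; the `Result` names and the `choose_result` helper are assumptions.

```python
from enum import Enum

# Hypothetical result names; the actual tool may use a different set.
class Result(Enum):
    PASSED = "Passed"
    FAILED = "Failed"
    BLOCKED = "Blocked"

def choose_result(matches_expected: bool, can_proceed: bool) -> Result:
    """Pick a test-case-level result from two simple observations."""
    if not can_proceed:
        # Blocked: environment, dependency, access, data, or setup issue.
        return Result.BLOCKED
    if not matches_expected:
        # Failed: the product ran, but behavior did not match expectations.
        return Result.FAILED
    return Result.PASSED
```

Note that Blocked takes precedence: if the tester cannot proceed, no judgment about expected behavior is possible.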
Each test step can have its own result and notes. Use step-level results to show exactly where testing passed, failed, became blocked, or was skipped, and use step-level notes to explain what was observed at that step and why it received that result.
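One way to picture step-level results and notes is a small data model. The class and field names here are illustrative assumptions, not the tool's schema.

```python
from dataclasses import dataclass, field

# Illustrative model only; names are assumptions, not the tool's API.
@dataclass
class StepResult:
    number: int
    result: str        # "Passed", "Failed", "Blocked", or "Skipped"
    note: str = ""     # explains what was observed at this step

@dataclass
class CaseRun:
    case_id: str
    steps: list = field(default_factory=list)

    def record(self, number: int, result: str, note: str = "") -> None:
        """Record one step's result, with an optional explanatory note."""
        self.steps.append(StepResult(number, result, note))

    def first_problem(self):
        """Return the first step that failed or became blocked, or None."""
        return next((s for s in self.steps
                     if s.result in ("Failed", "Blocked")), None)
```

Because each step carries its own result, a reviewer can locate the exact point where testing went wrong rather than inferring it from a single case-level result.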
Attachments can be added at the step level, which keeps proof close to the step where it matters. Attach evidence for failed and blocked steps whenever possible; evidence helps reviewers understand the issue without repeating the test.
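The evidence guideline above can be expressed as a simple check: flag any failed or blocked step with no attachment. This is a sketch under an assumed in-memory model; the `Step` class and helper are hypothetical.

```python
from dataclasses import dataclass, field

# Sketch of step-level attachments; names are illustrative assumptions.
@dataclass
class Step:
    number: int
    result: str
    attachments: list = field(default_factory=list)

    def attach(self, filename: str) -> None:
        """Keep the evidence file on the step where the observation happened."""
        self.attachments.append(filename)

def steps_missing_evidence(steps) -> list:
    """List failed or blocked steps that have no supporting attachment."""
    return [s.number for s in steps
            if s.result in ("Failed", "Blocked") and not s.attachments]
```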
Assignments help teams split execution work across users.
Users can assign individual test cases or select multiple test cases and assign them together. Assignment information is used in analysis views to show remaining work and ownership.
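A rough sketch of how assignment feeds an ownership view follows. The in-memory `dict` model and both helper names are assumptions for illustration; a real tool persists this server-side.

```python
# In-memory sketch of assignment; names and shapes are illustrative.
def assign(cases: dict, selected_ids, user: str) -> None:
    """Assign each selected test case to one user (single or bulk selection)."""
    for case_id in selected_ids:
        cases[case_id]["assignee"] = user

def remaining_work_by_user(cases: dict) -> dict:
    """Analysis-style view: unexecuted case ids grouped by assignee."""
    work = {}
    for case_id, case in cases.items():
        if case.get("result") is None:  # not yet run
            work.setdefault(case.get("assignee"), []).append(case_id)
    return work
```

Grouping unexecuted cases by assignee is what lets analysis views show both remaining work and who owns it.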
Where available, bulk updates change multiple selected test cases at once. Review the selected test cases before applying a bulk update; bulk changes should reflect real testing outcomes, not just cleanup.
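The review-before-apply advice can be modeled as a dry run that previews every change before anything is written. This is a hypothetical sketch; the function and its `dry_run` flag are assumptions, not the tool's interface.

```python
# Sketch of a bulk update with a preview step before anything changes.
def bulk_update(cases: dict, selected_ids, new_result: str,
                dry_run: bool = True) -> dict:
    """Preview (dry_run=True) or apply one result to all selected cases.

    Returns {case_id: (old_result, new_result)} so the change set
    can be reviewed before it is applied.
    """
    changes = {cid: (cases[cid].get("result"), new_result)
               for cid in selected_ids}
    if not dry_run:
        for cid in selected_ids:
            cases[cid]["result"] = new_result
    return changes
```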
As testing progresses, use analysis views to review results and spot trends before the end of the execution or test run.
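The kind of summary an analysis view provides can be approximated with a simple tally. The `progress_summary` helper and its output shape are assumptions for illustration.

```python
from collections import Counter

def progress_summary(results) -> dict:
    """Tally results (None = not yet run) so trends surface early."""
    counts = Counter(r if r is not None else "Not run" for r in results)
    done = sum(n for r, n in counts.items() if r != "Not run")
    return {"counts": dict(counts), "done": done, "total": len(results)}
```

A rising Failed or Blocked count midway through a run is exactly the kind of trend worth catching before the run ends.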
Strong notes describe the observed behavior, the expected behavior, and any conditions that affected the test. Avoid vague notes such as “does not work.”
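The observed/expected/conditions structure can be captured in a small helper, sketched here as an assumption rather than a tool feature.

```python
def write_note(observed: str, expected: str, conditions: str = "") -> str:
    """Compose a note that states observed vs expected behavior explicitly."""
    note = f"Observed: {observed}. Expected: {expected}."
    if conditions:
        note += f" Conditions: {conditions}."
    return note
```

A note built this way replaces “does not work” with something a reviewer can act on, e.g. `write_note("login button stays disabled", "button enables after valid input", "Safari 17")`.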