Bugs reported from a failing test step are now saved and surfaced everywhere: on the test case detail, in the test run view, and in list views via a bug count column.
You can see the full history of bugs already reported for any test case and jump to any of them in one click.
💡 This prevents duplicate bug reports and makes it easy to know whether a known issue is already being tracked.
Each test step now includes a short explanation of how the AI identified and selected the element it interacted with.
Click the icon on any step to see the reasoning: why the AI chose a specific button, field, or link.
💡 This makes it much faster to catch cases where the AI picked the wrong element, without digging through screenshots or logs.
Tests can now extract data from downloaded files (PDFs, Excel, CSV, and more) during execution, and store the extracted value in a variable for use in subsequent steps.
This enables automation of workflows that depend on the content of a downloaded document, such as reading a confirmation number from a receipt or extracting an ID from a generated report.
❗ You must run your test with Firefox to use File Extraction.
You can now create custom labels with colors, assign them to test cases, and filter by label across the test case list and test run list.
A full label management page is available for creating, editing, and deleting labels at the project level.
💡 Use labels like "Smoke", "Regression", or "Critical" to quickly find and run the right subset of tests as your suite grows.
Copilot can now gather context from your issue tracker to generate tests.
It can fetch details directly from a linked Linear or Jira ticket, and browse a live URL to understand the page it's being asked to test.
Previously, Copilot only had the text you typed.
Now it works with real context, producing steps that are more accurate and aligned with what's actually on the page or in the ticket.
❗ Generated test cases may still require manual review and adjustments.
You can now generate a PDF report from one or more selected test runs, formatted as a formal, shareable document for stakeholders and clients.
Select the runs you want to include, customize the report with your logo, then generate it and share it externally without any additional formatting work.
❗ Available on request; talk to your CSM to activate it.
💡 Read the full article for more details about Test Run Reports.
You can now upload a CSV file and run the same test case once per row, with variable values set dynamically from each row.
Each test run shows which data row was used.
The CSV can include variables used in the test (such as username, email, or product ID) as well as execution settings, such as browser and environment.
Upload the CSV under Test Assets and reference it by name in your CI pipeline, or upload it directly to Thunder before executing your test case.
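For illustration, the sketch below builds a minimal data file with Python. The column names (username, email, browser, environment) are assumptions; use the variable names and execution settings your own test case defines.

```python
# Minimal sketch: build a data file for data-driven runs.
# Column names are hypothetical; one test run executes per data row.
import csv

rows = [
    {"username": "alice", "email": "alice@example.com", "browser": "chrome",  "environment": "staging"},
    {"username": "bob",   "email": "bob@example.com",   "browser": "firefox", "environment": "staging"},
]

with open("checkout_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()      # header row: variable and setting names
    writer.writerows(rows)    # each row becomes one execution of the test case
```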
Type / or [ in any step to trigger an auto-complete menu for variables.
The menu searches across project, environment, and step variables, and shows you the resolved value inline.
You can also create a new variable directly from the menu without leaving the step editor.
💡 No more switching screens to look up or reuse a variable.
💡 Read the full article for more details about Variables and the Variables Inline Editor.
You can now trigger automated test runs directly from your CI pipeline and retrieve structured results in a single API call.
Supports both individual test cases and test sets.
Results include step-level detail and execution progress: no more summary-only output that's hard to parse in a CI report.
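As an example, a CI step could look roughly like the sketch below. The endpoint path, payload fields, and response shape are assumptions made for illustration, not the documented API; check the API reference for the exact contract.

```python
# Hypothetical CI step: trigger a test case run and read structured results.
# The base URL, endpoint, payload, and response fields are illustrative only.
import os
import requests

resp = requests.post(
    "https://api.example.com/v1/test-runs",        # placeholder endpoint
    headers={"Authorization": f"Bearer {os.environ['API_TOKEN']}"},
    json={"testCaseId": "TC-123", "environment": "staging"},
    timeout=60,
)
resp.raise_for_status()
run = resp.json()

# Step-level detail lets the CI log show exactly which step failed.
for step in run.get("steps", []):
    print(step.get("name"), "->", step.get("status"))
```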
You can now compare two test runs side by side. The diff view shows differences across screenshots, logs, HTML, and run settings.
For screenshots, a diff overlay highlights the areas that changed between the two runs.
For logs, you can filter out timestamps to focus on meaningful differences like network calls.
Run settings are also displayed side by side, so you can spot configuration differences such as run location.
This feature is currently accessible via URL by placing both test run IDs in the path.
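💡 As a purely illustrative example, the comparison URL takes both run IDs in the path, along the lines of https://&lt;your-instance&gt;/test-runs/compare/&lt;runIdA&gt;/&lt;runIdB&gt;; the exact path segments may differ in your workspace.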
A more convenient entry point in the UI is coming soon ⌛