You can now compare two test runs side by side. The diff view shows differences across screenshots, logs, HTML, and run settings.
For screenshots, a diff overlay highlights the areas that changed between the two runs.
For logs, you can filter out timestamps to focus on meaningful differences like network calls.
Run settings are also displayed side by side, so you can spot configuration differences such as run location.
This feature is currently accessible via URL by placing both test run IDs in the path.
A more accessible entry point in the UI is coming soon ⌛
You can now upload a CSV file and run the same test case once per row, with variable values set dynamically from each row.
Each test run shows which data row was used.
The CSV can include variables used in the test case (such as first name, last name, or email) as well as execution settings like browser and environment.
Upload the CSV under Test Assets, then reference it by name in your CI pipeline.
❗This feature is currently available via CI only. UI support is coming soon.
💡 Read the full article for more details: How to batch execute variations of your Test Case
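To illustrate the expected data shape, here is a minimal sketch of the idea in plain Python (not the Thunders client — the column names and the `run_test_case` helper are assumptions for illustration): each CSV row supplies variable values plus execution settings, and produces one run.

```python
import csv
import io

# Illustrative CSV: variable columns (first_name, last_name, email)
# plus execution settings (browser, environment). Match the column
# names to the variables used in your own test case.
CSV_DATA = """first_name,last_name,email,browser,environment
Ada,Lovelace,ada@example.com,chromium,staging
Grace,Hopper,grace@example.com,firefox,production
"""

def run_test_case(variables, settings):
    """Hypothetical stand-in for one test run; in practice the runs are
    triggered from CI by referencing the uploaded CSV by name."""
    return {"variables": variables, "settings": settings}

runs = []
for row in csv.DictReader(io.StringIO(CSV_DATA)):
    # Split each row into execution settings vs. test-case variables.
    settings = {key: row.pop(key) for key in ("browser", "environment")}
    runs.append(run_test_case(variables=row, settings=settings))

print(len(runs))  # one run per data row
```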
You can now set variables and credentials at the project level.
These apply across all environments and test cases within the project.
→ Variables follow a clear priority: Project, then Environment, then Test.
A variable defined at the project level is used by default. If the same variable is defined at the environment level, it overrides the project value for that specific environment.
💡This avoids duplicating the same variables across every environment. You define them once at the project level and only override where needed.
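The resolution order described above can be sketched as a simple dictionary merge (plain Python, illustrative only — not the platform's implementation): later scopes override earlier ones, so an environment value wins over a project value, and a test value wins over both.

```python
def resolve_variables(project, environment=None, test=None):
    """Merge variable scopes in priority order:
    project < environment < test (later scopes win)."""
    resolved = dict(project)
    resolved.update(environment or {})
    resolved.update(test or {})
    return resolved

# Hypothetical variable names for illustration.
project_vars = {"BASE_URL": "https://app.example.com", "LOCALE": "en"}
env_vars = {"BASE_URL": "https://staging.example.com"}  # staging override

resolved = resolve_variables(project_vars, env_vars)
print(resolved["BASE_URL"])  # environment override wins
print(resolved["LOCALE"])    # falls back to the project value
```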
ClickUp is now available as an integration.
Once connected, bug reports from failing test runs can be sent directly to ClickUp as tasks, including repro steps and screenshots.
Setup can be done from the Knowledge base page.
Select your workspace, save the configuration, and you can start creating tickets from the bug report icon 🪲 in any failing test run.
Test case execution settings can now be saved per test case.
When you open a test case, settings are automatically loaded from the current project.
If you modify settings, you can either revert to the defaults or save them as the team's default for that test case.
You can now disable individual steps within a test case. Disabled steps are skipped during execution. Steps can be re-enabled at any time without needing to recreate them.
💡 This is useful for debugging or temporarily excluding steps from a run.
You can now upload a file and type additional instructions at the same time when creating a Test Plan with Copilot!
Previously, only one input method was available per creation.
Variable placeholders like [EMAIL] included in your prompt are now also preserved through scenario generation and into the resulting test cases, keeping them flexible and reusable.
You can now connect Linear & Jira projects as knowledge sources directly from the Copilot view.
How it works:
In the Copilot view, click the knowledge sources button to connect your Linear or Jira project. Once connected, you can prompt Copilot to reference a specific ticket, for example, by pointing it to a ticket ID and specifying which sections to focus on (such as success criteria).
Copilot then generates test cases based on the structured context in that ticket.
❗Generated test cases may still require manual review and adjustments.
You can now modify existing variables during test execution using natural language. This is super handy for cleaning, reformatting, combining, or reusing data without re-extracting or regenerating anything.
Examples:
- [PHONE]
- [USERNAME] to lowercase
- [TOKEN] into [AUTH_TOKEN]
- [URL] and store in [DOMAIN]
- [FIRST_NAME] and [LAST_NAME] into [FIRST_NAME]
- [DISCOUNT] percent from [PRICE] and store in [PRICE]
The extract web action works exactly as before: it always retrieves the full text from the page.
If you need to adapt or clean the value, add a separate transform step.
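As a rough mental model, each natural-language transform behaves like a small pure function applied to a stored variable. The plain-Python equivalents below are illustrative only (the variable values are made up, and this is not the platform's implementation):

```python
from urllib.parse import urlparse

# Hypothetical variable store after extraction steps.
variables = {
    "USERNAME": "QA_Tester",
    "URL": "https://shop.example.com/cart",
    "FIRST_NAME": "Ada",
    "LAST_NAME": "Lovelace",
    "PRICE": "200",
    "DISCOUNT": "15",
}

# "[USERNAME] to lowercase"
variables["USERNAME"] = variables["USERNAME"].lower()

# "[URL] and store in [DOMAIN]" -- keep only the host part
variables["DOMAIN"] = urlparse(variables["URL"]).netloc

# "[FIRST_NAME] and [LAST_NAME] into [FIRST_NAME]" -- combine two values
variables["FIRST_NAME"] = f"{variables['FIRST_NAME']} {variables['LAST_NAME']}"

# "[DISCOUNT] percent from [PRICE] and store in [PRICE]"
price = float(variables["PRICE"]) * (1 - float(variables["DISCOUNT"]) / 100)
variables["PRICE"] = f"{price:.2f}"

print(variables["USERNAME"])  # qa_tester
print(variables["DOMAIN"])    # shop.example.com
print(variables["PRICE"])     # 170.00
```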
💡 Read the full article for more details on Variable usage.
The Thunders MCP server allows your preferred AI tools to interact directly with the Thunders testing platform using natural language.
Available as a remote MCP integration, it connects seamlessly with Claude, ChatGPT, Cursor, Windsurf, and any MCP-compatible client.
The server exposes powerful tools to create, manage, and execute automated test cases, while giving access to projects, environments, and personas.
This unlocks high-impact workflows.
Impact
MCP integration makes your existing workflow faster and sharper. You move from ticket to test in seconds, cut QA friction, and keep your test suite continuously aligned with what your teams are actually building.
💡 Read the full article for more details on the Thunders MCP server.