# AgentBrains Synthetic QA

Run repeatable synthetic conversations against n8n workflows and score the outcome inside AgentBrains.
Use this node to validate workflow quality with repeatable, scored synthetic conversations instead of manual spot checks.
It connects n8n to the AgentBrains synthetic user engine so you can test behavior, monitor regressions, and compare outcomes after changes.
## What it does
For each execution, the node:
- Loads available synthetic users from AgentBrains.
- Starts a test run with the selected persona, scoring tests, and conversation goals.
- Polls AgentBrains until the run completes or times out.
- Returns structured scoring results and a parsed report.
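The start-then-poll lifecycle above can be sketched as a small helper. This is a minimal illustration, not the node's actual implementation: `fetchStatus` stands in for the AgentBrains status call, and the status field names are assumptions.

```javascript
// Poll a run until it completes, fails, or times out.
// Defaults mirror the documented behavior: 5 s interval, 15 min cap.
// `fetchStatus` is a stand-in for the real AgentBrains API call (assumed shape).
async function pollUntilComplete(fetchStatus, { intervalMs = 5000, timeoutMs = 15 * 60 * 1000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const status = await fetchStatus();
    if (status.state === 'completed') return status.result; // structured scoring results
    if (status.state === 'failed') throw new Error(status.error ?? 'Synthetic QA run failed');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for the synthetic QA run to complete');
}
```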
## Parameters
| Parameter | Description |
|---|---|
| Synthetic QA Name or ID | Select the synthetic user persona for the run |
| Number of Test Conversations | How many conversations should be generated in the batch |
| Conversation Quality Tests | Which scoring dimensions to evaluate |
| Options → Conversation Goals | Optional manual goals shared across all conversations |
| Options → Promotion Details | Optional negotiation or promotion prompt applied during the run |
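To show how the parameters fit together, here is a hypothetical request body the node might assemble from them. The field names and values are illustrative assumptions, not the actual AgentBrains API schema.

```javascript
// Illustrative mapping of node parameters to a run request (assumed field names).
const runRequest = {
  syntheticUserId: 'persona-123',                       // Synthetic QA Name or ID
  conversationCount: 5,                                 // Number of Test Conversations
  scoringTests: ['On Task', 'Objection Handling'],      // Conversation Quality Tests
  conversationGoals: 'Ask about the refund policy',     // Options → Conversation Goals (optional)
  promotionDetails: null,                               // Options → Promotion Details (optional)
};
```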
## Available scoring tests
The node currently exposes these quality dimensions:
- Customers Mood Change
- Human-Free Issue Handling
- Information Completeness
- Making a Sale
- Objection Handling
- On Task
- Problem Solving
## Goals and persona behavior
If you provide Conversation Goals, the node uses those goals directly.
If you leave the field empty, AgentBrains generates primary and secondary objectives automatically for the selected synthetic user. The node also sends persona details from the selected synthetic profile, including industries, personalities, and employee role information, so the run matches the intended behavior.
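The goal-selection rule above reduces to a simple precedence check. A minimal sketch, assuming a `generateObjectives` helper that stands in for AgentBrains' automatic objective generation:

```javascript
// Manual goals win when provided; otherwise objectives are generated
// for the persona. `generateObjectives` is a hypothetical stand-in.
function resolveGoals(manualGoals, persona, generateObjectives) {
  if (manualGoals && manualGoals.trim() !== '') {
    return { source: 'manual', goals: manualGoals };
  }
  return { source: 'generated', goals: generateObjectives(persona) };
}
```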
## Output shape
The node returns a single object with:
| Field | Description |
|---|---|
| scoringTests | A list of score entries with name and score |
| report | A parsed QA report with a title and structured sections |
This makes it easy to send the result into Slack, dashboards, storage, or a release gate workflow.
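A concrete item matching the fields in the table might look like the following. The score values, section names, and report title are made up for the example; only the top-level `scoringTests` and `report` fields come from the table above.

```javascript
// Illustrative output item (values are fabricated for the example).
const output = {
  scoringTests: [
    { name: 'On Task', score: 92 },
    { name: 'Objection Handling', score: 78 },
  ],
  report: {
    title: 'Synthetic QA Report',
    sections: [{ heading: 'Summary', content: '...' }],
  },
};
// In a downstream n8n node you could then reference, for example:
// {{ $json.scoringTests[0].score }}
```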
## Best use cases
### Regression testing
Run this node after prompt, knowledge-base, or workflow logic changes to detect broken behavior before it reaches production.
### Scheduled monitoring
Trigger the node daily or hourly to measure whether a live workflow still passes core scenarios.
### Scenario-based validation
Use different synthetic users to simulate angry customers, curious leads, or other personas without rebuilding your workflow for each test case.
## Example validation flow
- Start with a manual trigger, schedule trigger, or deployment webhook.
- Run AgentBrains Synthetic QA with the target synthetic user and scoring dimensions.
- Branch on returned scores.
- Send failures to Slack, ClickUp, or a dashboard.
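The "branch on returned scores" step can be written as a plain function and adapted into an n8n Code or IF node. The pass threshold of 80 is an assumption; pick whatever gate fits your release process.

```javascript
// Evaluate a run's scoring tests against a pass threshold (80 is an assumption).
// Returns the failures so they can be routed to Slack, ClickUp, or a dashboard.
function evaluateRun(scoringTests, threshold = 80) {
  const failures = scoringTests.filter((t) => t.score < threshold);
  return { passed: failures.length === 0, failures };
}
```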
## Operational details
| Detail | Behavior |
|---|---|
| Polling interval | The node polls every 5 seconds |
| Maximum wait time | The node waits up to 15 minutes for completion |
| Failure handling | If AgentBrains returns an error or an invalid synthetic user is selected, the execution fails with a node error unless continue-on-fail is enabled |
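The failure-handling row describes a standard n8n pattern: with continue-on-fail enabled, the error is emitted as item data instead of aborting the execution. A minimal sketch of that decision (the returned shape is an assumption, not the node's exact output):

```javascript
// With continue-on-fail, surface the error as item data so downstream
// nodes (e.g. a Slack alert) can handle it; otherwise abort the execution.
function handleRunError(error, continueOnFail) {
  if (continueOnFail) {
    return { json: { error: error.message } };
  }
  throw error;
}
```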
## Why this is better than manual testing
Manual validation in n8n usually means triggering workflows and reading raw JSON by hand. This node turns testing into a measurable process with reusable personas, repeatable conversation goals, and direct scoring.
## Pairing recommendations
- Use AgentBrains Integration Trigger when you want the same workflow to serve both production traffic and test traffic.
- Use AgentBrains Knowledge Base or AgentBrains RAG in the tested flow so QA covers your real retrieval behavior, not a simplified mock.