
AgentBrains Synthetic QA

Run repeatable synthetic conversations against n8n workflows and score the outcome inside AgentBrains.


Use this node to validate workflow quality with repeatable, scored synthetic conversations instead of manual spot checks.

It connects n8n to the AgentBrains synthetic user engine so you can test behavior, monitor regressions, and compare outcomes after changes.

What it does

For each execution, the node:

  1. Loads available synthetic users from AgentBrains.
  2. Starts a test run with the selected persona, scoring tests, and conversation goals.
  3. Polls AgentBrains until the run completes or times out.
  4. Returns structured scoring results and a parsed report.
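The lifecycle above can be sketched as follows. This is an illustration only: the endpoint paths, field names, and client shape here are assumptions, not the node's real API surface.

```javascript
// Illustrative sketch of the run lifecycle the node manages internally.
// Endpoint paths and payload fields are hypothetical.
async function runSyntheticQa(client, { personaId, tests, goals, conversations }) {
  // Start a test run with the selected persona, scoring tests, and goals.
  const { runId } = await client.post('/runs', { personaId, tests, goals, conversations });

  // Poll until the run completes or the 15-minute cap elapses.
  const deadline = Date.now() + 15 * 60 * 1000;
  while (Date.now() < deadline) {
    const run = await client.get(`/runs/${runId}`);
    if (run.status === 'completed') {
      // Return structured scoring results and the parsed report.
      return { scoringTests: run.scoringTests, report: run.report };
    }
    if (run.status === 'failed') throw new Error('Synthetic QA run failed');
    await new Promise((resolve) => setTimeout(resolve, 5000)); // 5-second poll interval
  }
  throw new Error('Timed out waiting for synthetic QA run to complete');
}
```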

Parameters

  • Synthetic QA Name or ID: the synthetic user persona to use for the run
  • Number of Test Conversations: how many conversations to generate in the batch
  • Conversation Quality Tests: which scoring dimensions to evaluate
  • Options → Conversation Goals: optional manual goals shared across all conversations
  • Options → Promotion Details: optional negotiation or promotion prompt applied during the run
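For example, a regression run might be configured like this. The values are illustrative only; the persona name and goal text are hypothetical, while the parameter names match the list above.

```json
{
  "Synthetic QA Name or ID": "Angry Returning Customer",
  "Number of Test Conversations": 5,
  "Conversation Quality Tests": ["On Task", "Objection Handling", "Problem Solving"],
  "Options": {
    "Conversation Goals": "Request a refund for a delayed order, escalate if refused"
  }
}
```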

Available scoring tests

The node currently exposes these quality dimensions:

  • Customers Mood Change
  • Human-Free Issue Handling
  • Information Completeness
  • Making a Sale
  • Objection Handling
  • On Task
  • Problem Solving

Goals and persona behavior

If you provide Conversation Goals, the node uses those goals directly.

If you leave the field empty, AgentBrains generates primary and secondary objectives automatically for the selected synthetic user. The node also sends persona details from the selected synthetic profile, including industries, personalities, and employee role information, so the run matches the intended behavior.

Output shape

The node returns a single object with:

  • scoringTests: a list of score entries, each with a name and a score
  • report: a parsed QA report with a title and structured sections
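A returned object might look like the following. The top-level scoringTests and report fields come from the description above; the score scale, test values, and section layout are illustrative assumptions.

```json
{
  "scoringTests": [
    { "name": "On Task", "score": 92 },
    { "name": "Problem Solving", "score": 78 }
  ],
  "report": {
    "title": "Synthetic QA Report",
    "sections": [
      { "heading": "Summary", "content": "..." }
    ]
  }
}
```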

This makes it easy to send the result into Slack, dashboards, storage, or a release gate workflow.

Best use cases

Regression testing

Run this node after prompt, knowledge-base, or workflow logic changes to detect broken behavior before it reaches production.

Scheduled monitoring

Trigger the node daily or hourly to measure whether a live workflow still passes core scenarios.

Scenario-based validation

Use different synthetic users to simulate angry customers, curious leads, or other personas without rebuilding your workflow for each test case.

Example validation flow

  1. Start with a manual trigger, schedule trigger, or deployment webhook.
  2. Run AgentBrains Synthetic QA with the target synthetic user and scoring dimensions.
  3. Branch on returned scores.
  4. Send failures to Slack, ClickUp, or a dashboard.
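Step 3 above, branching on returned scores, could be implemented in an n8n Code node along these lines. This is a minimal sketch: the 0–100 score scale and the threshold of 80 are assumptions you should adjust to your own rubric.

```javascript
// Gate a release on the scores returned by AgentBrains Synthetic QA.
// Assumes scores are on a 0-100 scale; the threshold is a hypothetical default.
const THRESHOLD = 80;

function findFailures(result, threshold = THRESHOLD) {
  // Collect every scoring test that falls below the passing threshold.
  return result.scoringTests
    .filter((test) => test.score < threshold)
    .map((test) => `${test.name}: ${test.score}`);
}

// Example: route non-empty failure lists to Slack, ClickUp, or a dashboard.
const failures = findFailures({
  scoringTests: [
    { name: 'On Task', score: 92 },
    { name: 'Objection Handling', score: 64 },
  ],
});
```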

Operational details

  • Polling interval: the node polls every 5 seconds
  • Maximum wait time: the node waits up to 15 minutes for completion
  • Failure handling: if AgentBrains returns an error or an invalid synthetic user is selected, the execution fails with a node error unless continue-on-fail is enabled

Why this is better than manual testing

Manual validation in n8n usually means triggering workflows and reading raw JSON by hand. This node turns testing into a measurable process with reusable personas, repeatable conversation goals, and direct scoring.

Pairing recommendations
