# Welcome

## Welcome to Agent Brains [#welcome-to-agent-brains]

Agent Brains is a platform for building, deploying, and managing AI agents with enterprise-grade reliability.

## Getting Started [#getting-started]

Explore the sidebar to navigate through the documentation:

* **Introduction** — Learn about Agent Brains and get up to speed quickly
* **Integrations** — Connect with API, MCP, and n8n workflows
* **Guides** — Deep dives into scoring tests and policy structuring

## Quick Links [#quick-links]

| Resource | Description |
| --- | --- |
| [Quick Start](/introduction/quick-start) | Get started in minutes |
| [API Reference](/integrations/api) | REST API documentation |
| [MCP Integration](/integrations/mcp) | Model Context Protocol setup |

# AI & LLMs

## AI & LLMs [#ai--llms]

Our documentation is designed to be easily consumed by Large Language Models (LLMs) and AI agents. We provide dedicated endpoints that return the documentation in a format optimized for AI consumption.

## Available Endpoints [#available-endpoints]

### `llms.txt` [#llmstxt]

A concise index of the documentation site, providing an overview of the available content and structure.

[View llms.txt](/llms.txt)

### `llms-full.txt` [#llms-fulltxt]

The complete documentation content concatenated into a single, AI-friendly text file. This is ideal for providing full context to an AI agent in a single request.

[View llms-full.txt](/llms-full.txt)

## Usage [#usage]

You can provide these URLs directly to AI agents (like ChatGPT, Claude, or custom tools) to give them instant context about the Agent Brains platform and API. For example, you can prompt an AI with:

> "Read the documentation at [https://docs.agent-brains.com/llms-full.txt](https://docs.agent-brains.com/llms-full.txt) and help me write a script to create a new entity."

# Access API Key

## Access API Key [#access-api-key]

Before any external tool can talk to AgentBrains, you need an AgentBrains access token.
In practice, this is the API key used to authenticate requests to the AgentBrains integration layer.

## Why the API key matters [#why-the-api-key-matters]

AgentBrains is designed to be framework-agnostic. Whether you connect through n8n, direct API calls, or MCP, the API key is the credential that gives your workflow access to the same production backbone:

* structured Knowledge Base access
* semantic retrieval
* image retrieval
* conversation logging
* automated scoring and QA services

This is what lets you keep your workflow logic in your preferred builder while AgentBrains stays responsible for knowledge, retrieval, analytics, and operational tooling.

## Where you use it [#where-you-use-it]

The same key pattern is used across AgentBrains integrations:

| Integration path | How the key is used |
| --- | --- |
| n8n | Stored in the **AgentBrains Integration API** credential and reused across nodes |
| Direct API usage | Sent in request authentication headers |
| MCP | Used to authenticate tool access to the same underlying services |

In the n8n node pack, the credential automatically sends the same secret as both:

* a bearer token
* an access-key header

That means you only enter the value once in n8n, and all AgentBrains nodes can reuse it.

## How to create it in AgentBrains [#how-to-create-it-in-agentbrains]

Create the key from the **System Integration → API Keys** page in your AgentBrains Control Panel.

### Recommended flow [#recommended-flow]

1. Sign in to AgentBrains.
2. Open **API Keys**.
3. Enter a token name.
4. Choose an expiration date if you need one.
5. Select the scopes needed for the integration.
6. Create the personal access token for the integration you want to connect.
7. Copy the generated key immediately and store it in a secure credential manager.
8. Add it to your target integration, such as n8n credentials, API clients, or MCP configuration.

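For direct API usage, the same "enter it once, reuse it everywhere" idea applies. Below is a minimal sketch of building auth headers the way the n8n credential does, sending the same secret as both a bearer token and an access-key header. The environment variable name `AGENTBRAINS_API_KEY` is an assumption for illustration, not an official name.

```python
import os

def agentbrains_headers() -> dict:
    """Build auth headers that mirror the n8n credential: the same
    secret is sent both as a bearer token and as an access-key header.
    AGENTBRAINS_API_KEY is an assumed environment variable name."""
    key = os.environ["AGENTBRAINS_API_KEY"]
    return {
        "Authorization": f"Bearer {key}",
        "x-access-key": key,
    }
```

Reading the key from the environment (or a secrets manager) keeps it out of workflow JSON, prompt text, and code, in line with the best practices below.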
The generated key is shown only once at creation time. After you reload or leave the page, the same key value cannot be viewed again.

If you manage more than one environment, create separate keys for each environment instead of reusing one token everywhere.

## What the key unlocks [#what-the-key-unlocks]

Once the credential is added, your external tools can authenticate to AgentBrains services such as:

| Capability | Typical usage |
| --- | --- |
| Knowledge Base data access | Exact document, category, and attachment retrieval |
| Semantic retrieval and index-backed search | RAG and AI tool workflows |
| Webhook registration for live integrations | Production workflow entry points |
| Synthetic user execution and scoring | QA, regression testing, and monitoring |

## Best practices [#best-practices]

| Practice | Why it helps |
| --- | --- |
| Store the token only in a secure credential manager | Keeps it out of workflow JSON, prompt text, and code |
| Use separate keys for sandbox and production | Reduces accidental cross-environment access |
| Rotate keys when ownership or environment changes | Limits long-term exposure |
| Use a custom domain only when needed | Prevents misrouting requests to the wrong environment |

## Common setup pattern [#common-setup-pattern]

1. Create the API key in AgentBrains.
2. Add it to your integration layer of choice.
3. Reuse that credential consistently instead of copying raw secrets into multiple places.
4. Test the connection before activating production workflows.

## Troubleshooting [#troubleshooting]

### Invalid access key or scope [#invalid-access-key-or-scope]

If a node or client returns an authentication error, the most common causes are:

* the token was copied incorrectly
* the key belongs to another environment
* the custom domain points to the wrong AgentBrains environment
* the key does not match the scope expected by the requested service

### Credential test fails [#credential-test-fails]

Re-open the credential or integration settings, paste the token again, and confirm the environment or domain value is correct.

## Related pages [#related-pages]

* [n8n Introduction](/integrations/n8n)
* [API Reference](/integrations/api)
* [MCP Integration](/integrations/mcp)

# MCP

## Model Context Protocol (MCP) [#model-context-protocol-mcp]

This gateway exposes a small, implementation-driven MCP surface for health checks and knowledge base access. There are currently **4 MCP tools** in the public gateway. There is **no active** `auth-whoami` or `auth-verify-key` MCP tool in the gateway.

## Overview [#overview]

The gateway currently exposes exactly four MCP tools, focused on gateway utility and authenticated knowledge-base access. MCP sessions authenticate using `x-api-key`, `x-access-key`, or `Authorization: Bearer <key>`. Knowledge-base tools resolve the tenant from auth headers, then proxy requests into the Knowledge Base service using that tenant id.

Successful tool responses return MCP `content` with a single `text` item. Structured payloads are pretty-printed JSON strings.

## Authentication [#authentication]

MCP sessions are authenticated on connection by gateway middleware.

### Accepted auth headers [#accepted-auth-headers]

Authenticate using one of the following headers:

* `x-api-key: <key>`
* `x-access-key: <key>`
* `Authorization: Bearer <key>`

Header priority is:

1. `x-api-key`
2. `x-access-key`
3. `Authorization: Bearer <key>`

If multiple values are present, the gateway uses the first non-empty header in that order.
## Session behavior [#session-behavior]

* Initial MCP connection auth verifies the key through the Access Keys Manager.
* Requests with an existing session id reuse that session and do **not** re-verify the token on every request.
* Invalid or missing session ids return `401 Invalid or missing sessionId`.

### Authentication errors [#authentication-errors]

Knowledge-base tool auth uses these exact error messages:

* Missing token: `Missing API key`
* Invalid token: `Invalid API key`

## Response And Error Format [#response-and-error-format]

Successful tool responses are returned as MCP `content` with a single text item. For structured results, the gateway serializes the upstream payload as pretty-printed JSON:

```json
{
  "content": [
    {
      "type": "text",
      "text": "{\n  \"...\": \"...\"\n}"
    }
  ]
}
```

Validation failures return MCP tool errors with:

* `isError: true`
* a single text content item
* serialized `zod` `flatten()` output as JSON

Upstream Knowledge Base service failures are surfaced by the gateway as HTTP/MCP errors.

## Tool Summary [#tool-summary]

| Tool | Auth Required | Purpose |
| --- | --- | --- |
| `search-knowledge-base` | Yes | Semantic vector search using a natural language query |
| `get-entity` | Yes | Retrieve a full knowledge base entity by id or exact name |
| `get-knowledge-base-structure` | Yes | Browse the category tree or drill into one category |

### `search-knowledge-base` [#search-knowledge-base]

**Purpose**

Semantic vector search over the knowledge base using natural language queries.

**Auth**

Required.
**Input schema**

| Field | Type | Required | Validation | Default |
| --- | --- | --- | --- | --- |
| `namespace` | `string` | Yes | non-empty | none |
| `query` | `string` | Yes | non-empty | none |
| `metadata` | `object` | No | arbitrary object | omitted |
| `topK` | `integer` | No | positive, max `50` | `10` |

**Behavior**

* Verifies tenant from MCP auth headers
* Validates input with `zod`
* Forwards the validated search request to the Knowledge Base retrieval API
* Sends this body upstream:
  * `namespace`
  * `query`
  * `topK`
  * `metadata` when provided

This tool uses the RAG retrieval model to search the vector database and return the most relevant matches.

**Example request**

```json
{
  "name": "search-knowledge-base",
  "arguments": {
    "namespace": "products",
    "query": "Find pressure washer models under 2000 PSI",
    "topK": 5,
    "metadata": { "category": "washers" }
  }
}
```

**Example response**

```json
{
  "content": [
    {
      "type": "text",
      "text": "[\n  {\n    \"id\": \"...\",\n    \"score\": 0.91,\n    \"metadata\": {}\n  }\n]"
    }
  ]
}
```

**Possible errors**

* `Missing API key`
* `Invalid API key`
* Validation errors returned as `isError: true` with serialized flattened Zod output
* Upstream Knowledge Base service errors

### `get-entity` [#get-entity]

**Purpose**

Retrieve a full knowledge base entity by id or exact name.

**Auth**

Required.
**Input schema**

| Field | Type | Required | Validation |
| --- | --- | --- | --- |
| `id` | `string` | No | non-empty |
| `name` | `string` | No | non-empty |

Validation rule:

* At least one of `id` or `name` must be provided

**Behavior**

* Verifies tenant from MCP auth headers
* Validates input with `zod`
* Uses `id` if present, otherwise `name`
* URL-encodes the selected key
* Uses the public Knowledge Base entity endpoint: `GET /knowledge-base/entities/{idOrAlias}`

**Example request**

```json
{
  "name": "get-entity",
  "arguments": {
    "id": "68dfc8fd5e40fe1ae99c6b5b"
  }
}
```

**Example response**

```json
{
  "content": [
    {
      "type": "text",
      "text": "{\n  \"_id\": \"68dfc8fd5e40fe1ae99c6b5b\",\n  \"name\": \"Example Entity\"\n}"
    }
  ]
}
```

**Possible errors**

* `Missing API key`
* `Invalid API key`
* Validation error if both `id` and `name` are missing
* Upstream Knowledge Base service errors

### `get-knowledge-base-structure` [#get-knowledge-base-structure]

**Purpose**

Browse the knowledge base category tree (optionally drill into a category).

**Auth**

Required.
**Input schema**

| Field | Type | Required | Validation |
| --- | --- | --- | --- |
| `categoryId` | `string` | No | non-empty |

**Behavior**

* Verifies tenant from MCP auth headers
* Validates input with `zod`
* If `categoryId` is provided, it maps to: `GET /knowledge-base/categories/{key}`
* Otherwise it maps to: `GET /knowledge-base/categories`

**Example request**

```json
{
  "name": "get-knowledge-base-structure",
  "arguments": {
    "categoryId": "68dfc8fd5e40fe1ae99c6b5b"
  }
}
```

**Example response**

```json
{
  "content": [
    {
      "type": "text",
      "text": "{\n  \"_id\": \"68dfc8fd5e40fe1ae99c6b5b\",\n  \"name\": \"Support Policies\",\n  \"children\": []\n}"
    }
  ]
}
```

**Possible errors**

* `Missing API key`
* `Invalid API key`
* Validation errors returned as `isError: true` with serialized flattened Zod output
* Upstream Knowledge Base service errors

## Notes [#notes]

* Knowledge-base tools rely on tenant resolution from auth headers and upstream Knowledge Base proxying.
* Successful responses are text-wrapped JSON, not native structured MCP objects.
* Validation errors are returned as MCP errors with serialized Zod flatten output.
* The currently exposed MCP surface is intentionally small and includes only the tools documented on this page.

# Quick Start

## Quick Start [#quick-start]

Welcome to AgentBrains! Follow these steps to build, run, and manage your production-level AI agents.

## Structure Your Knowledge Base [#structure-your-knowledge-base]

Start by uploading your raw PDFs, URLs, and documents. AgentBrains will automatically parse and embed them into clean, LLM-ready Vector Databases. You can easily see what your agent knows and edit it on the fly.

## Build Your AI Team [#build-your-ai-team]

Create your AI agents. With AgentBrains, it's not about pushing a button—it's about crafting AI employees that truly represent your brand and values.

## Validate with Digital Twins [#validate-with-digital-twins]

Agent testing is not an option—it's a requirement.
Create custom conversations between your Agents and Synthetic Users. Customize Synthetic QAs, attach them to your specific agent, and set up custom scoring tests to ensure performance, accuracy, and compliance.

## Integrate Workflows [#integrate-workflows]

Connect your agents to your existing processes using our n8n nodes. Pull specific data files or grab groups of similar data with a few clicks. RAG is automatic—your vectorized index is accessed with a single node.

## Monitor and Manage [#monitor-and-manage]

Use the Owner's Control Panel and Inbox to manage your AI agents just like real employees. Gain instant access to conversation tracking, automated QC scoring, performance dashboards, and analysis reports for batches of conversations.

## Next Steps [#next-steps]

Now that you understand the basics, dive deeper into our documentation to learn about [Structuring Policies](/guides/structuring-policies/general-document-policy) and [Scoring Tests](/guides/scoring-tests/on-task).

# What is Agent Brains

## What is Agent Brains? [#what-is-agent-brains]

**AgentBrains** is the first full-stack platform to **Build, Run, and Manage** production-level AI agents. We equip you with everything required for enterprise-grade deployments, from structuring your Knowledge Base to providing the operational layer. Owners gain instant access to conversation tracking, automated scoring, performance dashboards, and the controls to manage AI agents just like real employees.

## Core Capabilities [#core-capabilities]

Structure your files into clean, LLM-ready documents. We auto-generate vector indexes that update instantly when you edit your documents, and ensure images are properly labeled and indexed.

A conversation-managing Inbox with powerful filters. Securely stream, log, and manage conversations while generating analysis reports for batches of interactions.

Agent testing is not an option—it's a requirement. Validate your agents with Digital Twins and Synthetic QAs.
AI-driven testing ensures every conversation meets performance, accuracy, and compliance standards.

Keep building great workflows with our n8n nodes. Pull specific data files, grab groups of similar data, and access your vectorized index with a single node—RAG is automatic.

An easy-to-understand Admin Control Panel featuring a Tree of Knowledge and simple numerical performance scores for each conversation.

## Why AgentBrains? [#why-agentbrains]

With AgentBrains, it's not about pushing a button—it's about crafting AI employees that truly represent your brand and values. Whether you are powering sales, support, or internal operations, AgentBrains provides the scaffolding to ensure your AI team is reliable, tested, and continuously improving.

# Customers Mood Change

## What it measures [#what-it-measures]

This score tracks whether the customer's mood improves or worsens during the conversation. It looks at the emotional "trajectory" (neutral → positive, positive → negative, etc.) and whether the agent successfully recovers after frustration.

## What "good" looks like [#what-good-looks-like]

* Customer stays neutral-to-positive.
* If frustration appears, the agent recovers it quickly.
* Conversation ends with satisfaction or calm clarity.

## Common reasons for lower scores [#common-reasons-for-lower-scores]

* The agent ignores confusion or frustration.
* The customer repeats themselves or becomes annoyed.
* The conversation ends negatively.

## Examples [#examples]

**High (9–10):** "Customer starts neutral, ends happy ('Perfect, thanks!'). Any frustration is resolved quickly."

**Mid (6–7):** "Customer mood is mixed; they don't get upset, but also don't end clearly happy."

**Low (1–3):** "Customer becomes frustrated/angry and stays that way, or leaves the conversation upset."

## How to read the scale [#how-to-read-the-scale]

| Score | Description |
| --- | --- |
| 10 | Ends clearly positive; no unrecovered negative moments. |
| 9 | One small negative moment, fully recovered; ends positive. |
| 8 | Overall improves; ends neutral-positive. |
| 7 | Mostly steady; ends neutral or mildly positive. |
| 6 | Mixed; some frustration; recovery partial. |
| 5 | Mixed leaning negative or unresolved tension. |
| 4 | Ends noticeably negative. |
| 3 | Multiple negative moments; little recovery. |
| 2 | Strong negative tone; frustration dominates. |
| 1 | Severe negative escalation; customer clearly unhappy/angry. |

# Human-free Issue Handling

## What it measures [#what-it-measures]

This score measures whether the agent can handle the conversation without unnecessary handoffs to a human. It rewards autonomy when the agent should be able to solve the request, and only "escalates" when it's truly needed.

## What "good" looks like [#what-good-looks-like]

* The agent can handle common issues end-to-end.
* It only escalates when truly needed (and does it smoothly).
* The customer gets usable next steps without extra friction.

## Common reasons for lower scores [#common-reasons-for-lower-scores]

* The agent hands off too early or too often.
* The user asks for a human because the agent isn't helping.
* The bot hits avoidable dead ends.

## Examples [#examples]

**High (9–10):** "Agent resolves the issue fully on its own, or hands off only at the final step (e.g., 'connecting you to finalize purchase')."

**Mid (6–7):** "One unnecessary handoff happens, but the agent still helps and the customer gets useful resolution."

**Low (1–3):** "The agent quickly gives up or repeatedly pushes the user to a human due to avoidable failures."

## How to read the scale [#how-to-read-the-scale]

| Score | Description |
| --- | --- |
| 10 | Fully autonomous; no unnecessary handoffs; user satisfied. |
| 9 | Almost fully autonomous; tiny stumble recovered. |
| 8 | High autonomy; one minor limitation. |
| 7 | Mostly autonomous; minor reliance on handoff. |
| 6 | One clear unnecessary handoff or limitation. |
| 5 | Mixed; handoff used as a common escape hatch. |
| 4 | Frequent handoffs; weak autonomy. |
| 3 | Major autonomy failures; user repeatedly blocked. |
| 2 | Nearly always requires human help. |
| 1 | Immediate failure/instant handoff with no progress. |

# Information Completeness

## What it measures [#what-it-measures]

This score checks whether the agent fully answers everything the customer asked. It rewards complete, accurate answers (not partial replies, not dodging, not "I don't know" where the bot should know).

## What "good" looks like [#what-good-looks-like]

* Every question the customer asks is addressed.
* Answers include enough detail to act (not just general statements).
* If the agent doesn't know something, it explains what it can do next (without guessing).

## Common reasons for lower scores [#common-reasons-for-lower-scores]

* The agent answers only part of the question.
* It skips important details (compatibility, pricing, steps, requirements).
* It gives "I don't know" too often for basic questions.

## Examples [#examples]

**High (9–10):** "Customer asks about shipping + compatibility + warranty; agent answers all three correctly and clearly."

**Mid (6–7):** "Agent answers most questions, but misses one part (e.g., provides price but not availability or compatibility)."

**Low (1–3):** "Agent leaves key questions unanswered, gives vague responses, or provides incorrect info."

## How to read the scale [#how-to-read-the-scale]

| Score | Description |
| --- | --- |
| 10 | All questions fully and correctly answered with actionable detail. |
| 9 | Everything answered; one small detail could be clearer. |
| 8 | Very complete; a small non-critical gap. |
| 7 | Mostly complete; a few noticeable gaps. |
| 6 | Several missing details; customer may need follow-up. |
| 5 | Mixed; some questions answered, others incomplete. |
| 4 | Many gaps; customer likely still unsure. |
| 3 | Most questions not fully answered; lots of vagueness. |
| 2 | Very little is answered; customer stays blocked. |
| 1 | Almost nothing is answered or answers are mostly wrong. |

# Making a Sale

## What it measures [#what-it-measures]

This score measures how well the agent moves the customer toward buying, from early interest (questions) to commitment (checkout/demo/payment). It rewards clear sales progress: answering buying questions, handling objections, and setting a next step.

## What "good" looks like [#what-good-looks-like]

* The agent answers buying questions clearly (price, availability, options).
* It guides the customer to the next step (checkout, payment method, booking, quote).
* It confirms the customer's intent and reduces uncertainty.

## Common reasons for lower scores [#common-reasons-for-lower-scores]

* No clear next step ("let me know if you need anything" and nothing else).
* The agent avoids pricing or key purchase details.
* The conversation ends without progress toward purchase.

## Examples [#examples]

**High (9–10):** "Customer says 'I'll take it', agent confirms price, collects payment method, and guides them to checkout."

**Mid (6–7):** "Agent gives correct price and key details, but never asks for the next step or confirms purchase."

**Low (1–3):** "Agent avoids pricing, gives confusing answers, or the customer leaves without any buying progress."

## How to read the scale [#how-to-read-the-scale]

| Score | Description |
| --- | --- |
| 10 | Sale is completed or the customer clearly commits and next step is set. |
| 9 | Very close; intent is clear and next step is almost fully locked in. |
| 8 | Strong progress; customer is likely to buy, minor missing piece. |
| 7 | Good progress; info delivered, but no firm next step. |
| 6 | Some progress; still missing important buying info or confidence. |
| 5 | Mixed; conversation doesn't reliably move toward purchase. |
| 4 | Weak; little sales movement. |
| 3 | Poor; stalled or generic responses. |
| 2 | Very poor; errors or confusion push customer away. |
| 1 | Sale actively harmed (misleading info or strong frustration). |

# Objection Handling

## What it measures [#what-it-measures]

This score measures how well the agent responds to customer concerns about buying (e.g., price, value, fit/compatibility, timing, risk). It rewards acknowledging the concern, giving a strong answer, and keeping the conversation moving forward.

## What "good" looks like [#what-good-looks-like]

* Acknowledges the concern respectfully.
* Addresses it with clear reasoning, facts, options, or alternatives.
* Suggests a next step after resolving the concern.

## Common reasons for lower scores [#common-reasons-for-lower-scores]

* Ignoring objections or responding defensively.
* Giving generic "we're the best" claims without substance.
* Not offering alternatives or next steps.

## Examples [#examples]

**High (9–10):** "Customer says 'Too expensive'; agent explains value, offers an alternative, and customer continues toward purchase."

**Mid (6–7):** "Agent responds but the answer is generic or only partially addresses the concern."

**Low (1–3):** "Agent ignores the objection, argues, or the customer disengages."

## How to read the scale [#how-to-read-the-scale]

| Score | Description |
| --- | --- |
| 10 | Objection fully resolved; customer clearly moves forward. |
| 9 | Strong resolution; tiny gap but forward progress remains. |
| 8 | Good handling; customer stays engaged. |
| 7 | Addresses objection; progress is slower but positive. |
| 6 | Partially addressed; some doubt remains. |
| 5 | Mixed; responses feel generic. |
| 4 | Weak; objection mostly remains. |
| 3 | Poor; deflects or misses the point. |
| 2 | Very poor; worsens trust or frustration. |
| 1 | Completely mishandled; customer drops or rejects. |

# On Task

## What it measures [#what-it-measures]

This score measures whether the agent stays focused on the customer's topic and gives clear, specific answers (instead of generic "fluff" or drifting off-topic). It rewards relevance and specificity.

## What "good" looks like [#what-good-looks-like]

* Directly answers the question asked.
* Avoids long generic text.
* Keeps the conversation moving step-by-step.

## Common reasons for lower scores [#common-reasons-for-lower-scores]

* Generic "I'm here to help" responses without substance.
* Off-topic explanations or irrelevant questions.
* Repeated clarification loops without progress.

## Examples [#examples]

**High (9–10):** "Customer asks how to register a product; agent gives the exact steps and link, without unnecessary filler."

**Mid (6–7):** "Agent is mostly helpful but sometimes rambles or asks too many unrelated questions."

**Low (1–3):** "Agent repeatedly gives generic replies ('I'm here to help!') without actionable details."

## How to read the scale [#how-to-read-the-scale]

| Score | Description |
| --- | --- |
| 10 | Always focused and specific; every response moves things forward. |
| 9 | Nearly perfect focus; tiny bit of extra filler. |
| 8 | Strong focus; minor drift quickly corrected. |
| 7 | Good; a few vague moments but still helpful. |
| 6 | Some drift/vagueness slows progress. |
| 5 | Mixed; several responses feel generic. |
| 4 | Frequently vague/off-topic. |
| 3 | Mostly generic; user has to push for specifics. |
| 2 | Almost entirely fluff or irrelevant. |
| 1 | Off-task to the point of being unusable. |

# Problem Solving

## What it measures [#what-it-measures]

This score measures whether the agent actually resolves the customer's problem, and how confidently we can say the issue is fixed. It also considers whether the solution is practical and safe.
## What "good" looks like [#what-good-looks-like]

* The agent identifies the real issue quickly.
* The steps are clear and safe to follow.
* The customer confirms the fix worked (or the conversation ends naturally with confidence).

## Common reasons for lower scores [#common-reasons-for-lower-scores]

* Steps are incomplete, too vague, or skip critical details.
* The agent suggests actions that don't address the real problem.
* The customer stays stuck or repeats the same issue.

## Examples [#examples]

**High (9–10):** "Customer confirms the fix works ('That solved it'), and the steps are correct and safe."

**Mid (6–7):** "Agent helps partly, but the customer still has steps to try or unresolved questions."

**Low (1–3):** "Agent's advice doesn't work, is unclear, or the customer leaves still stuck."

## How to read the scale [#how-to-read-the-scale]

| Score | Description |
| --- | --- |
| 10 | Fully solved and customer clearly confirms success; no loose ends. |
| 9 | Solved with very minor uncertainty (e.g., one small extra check suggested). |
| 8 | Solved and very likely correct, but customer confirmation is indirect. |
| 7 | Mostly solved; small missing step or minor confusion. |
| 6 | Partial fix; progress made but customer still has a key blocker. |
| 5 | Mixed; some helpful guidance but unclear if it will work. |
| 4 | Weak attempt; likely still unresolved or missing key steps. |
| 3 | Poor; minimal progress or repeated confusion. |
| 2 | Very poor; customer remains blocked and frustrated. |
| 1 | No progress or misleading/unsafe guidance. |

# Customer Support Policy

## When to use it [#when-to-use-it]

Select this policy for any document containing FAQs, return and refund policies, warranty terms, shipping guidelines, or known issue resolutions that are normally handled by customer support.

## How the Customer Support Policy Protects Your Data [#how-the-customer-support-policy-protects-your-data]
### 1. The Pure-Support Filter [#1-the-pure-support-filter]

Marketing fluff and general business history are useless to a customer asking for a refund. This policy aggressively filters the source document, extracting only information directly related to customer service, support procedures, and issue resolution. Anything unrelated to support is deleted, keeping the AI's context window razor-sharp.

### 2. The Financial Firewall [#2-the-financial-firewall]

Helpdesk articles often contain outdated references to things like "$50 restocking fees" or "replacement part costs." This policy enforces a strict redaction of all financial data—including service charges, part prices, and shipping fees.

**Why?** Relational Integrity: By stripping costs out of support articles, we force the LLM to query your master Price List or Billing policies for fees. This ensures a support bot never promises a customer an outdated repair fee found in a legacy PDF.

### 3. Correct-Match Policy Enforcement (No Paraphrasing) [#3-correct-match-policy-enforcement-no-paraphrasing]

When dealing with legal or binding support terms (like warranties), the AI cannot be allowed to get "creative." This policy is strictly forbidden from rewriting, interpreting, or summarizing. It copies the exact wording, day-counts (e.g., "within 30 days"), and conditions from the source document to ensure your agent's responses are legally and operationally compliant.

### 4. Dynamic Categorization & Token Optimization [#4-dynamic-categorization--token-optimization]

Support documents vary wildly in structure. This engine intelligently sorts extracted information into required sections (e.g., "Returns," "Exchanges," "Damaged Goods").

* **Zero Empty Space:** If a document doesn't contain info for a specific section, that section is entirely omitted rather than populated with "N/A," saving valuable tokens.
* **Dynamic Headers:** The engine is authorized to generate new, specific headers if necessary to accurately organize the business logic, ensuring the final output is flawlessly categorized for chunking.

## What the Output Looks Like [#what-the-output-looks-like]

When your n8n workflow queries the Knowledge Base to resolve a user issue, it receives a clean, definitive policy document rather than a messy web scrape.

```markdown
# Support Article: Return and Exchange Policy

## Return Eligibility
Items may be returned within 30 days of the original delivery date. To be eligible for a return, the item must be unused, in its original packaging, and include all original accessories and manuals.

## Non-Returnable Items
- Custom-configured or personalized devices.
- Items marked as "Final Sale" or "Clearance."
- Opened software or downloadable digital assets.

## Warranty Claims (Damaged Upon Arrival)
If a product arrives damaged, the customer must report the issue to the support team within 48 hours of delivery. A photographic record of the damaged shipping box and the damaged unit is required before a Return Merchandise Authorization (RMA) number can be issued.
```

## Why This Matters for Automation Developers [#why-this-matters-for-automation-developers]

By applying the Customer Support Policy during ingestion, you build a significantly safer and more accurate support agent:

* **Reduced Hallucinations:** Because the policy enforces exact-match wording and strips out non-support noise, the LLM stops "guessing" how a return works and quotes the actual policy.
* **Separation of Concerns:** By stripping out financial data and inventory levels, you ensure your support bot doesn't accidentally offer a replacement for an item that is out of stock, or quote an incorrect repair fee.
* **Plug-and-Play Context:** You don't need to write complex prompts telling the LLM "Ignore the marketing text and just find the refund rule." The AgentBrains policy has already done the filtering for you.
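In practice, "querying the Knowledge Base" from an automation step often means building a `search-knowledge-base` tool call like the one documented on the MCP page. The sketch below composes such a call for a support question; the `support` namespace is a hypothetical example name, and the non-empty/`topK <= 50` checks mirror the validation rules that page describes.

```python
def build_support_search(question: str, top_k: int = 5) -> dict:
    """Compose a search-knowledge-base MCP tool call for a support
    question. The 'support' namespace is an assumed example value."""
    if not question.strip():
        # mirrors the gateway's non-empty query rule
        raise ValueError("query must be non-empty")
    return {
        "name": "search-knowledge-base",
        "arguments": {
            "namespace": "support",
            "query": question,
            "topK": min(top_k, 50),  # the documented cap on topK is 50
        },
    }
```

Because ingestion already filtered the articles, the prompt side of the workflow stays simple: the retrieved chunks are policy text, not marketing copy.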
# General Business & Doc Review Policy When to use it [#when-to-use-it] Select this policy for any document containing "About Us" pages, corporate policies, broad service overviews, company histories, mission statements, or market positioning documents. How the General Business Policy Protects Your Data [#how-the-general-business-policy-protects-your-data] 1\. Holistic Synthesis & Categorization [#1-holistic-synthesis--categorization] Instead of blindly splitting text every 500 tokens, this policy reads the entire document to form a holistic understanding of the company. It then restructures the unstructured narrative into logical, distinct categories (e.g., Company Profile, Core Services, Corporate Policies), ensuring the LLM can instantly locate specific business facts. 2\. Key Differentiator Extraction [#2-key-differentiator-extraction] Sales and Support agents are constantly asked, "Why should I choose your company over a competitor?" This policy actively hunts through the document to identify and separate the company's "Key Differentiators." By structuring these points clearly, your agent is always armed with the correct value propositions to close a sale or defend the brand. 3\. Exact-Match Wording [#3-exact-match-wording] When AI summarizes business text, it tends to embellish or invent corporate jargon. This policy is strictly forbidden from writing "creative" summaries. It copies the exact wording, terminology, and brand language from the source document. If a service guarantee or legal claim is stated, the policy preserves the exact phrasing, numbers, and dates. 4\. Title Generation & Noise Redaction [#4-title-generation--noise-redaction] To make the document highly searchable in the Vector Database, the policy generates a highly accurate, specific title for the output. Simultaneously, it redacts irrelevant data—such as product inventory levels or web navigation noise—keeping the context window focused purely on the business intelligence. 
What the Output Looks Like [#what-the-output-looks-like] When your n8n workflow queries the Knowledge Base for general company information, it receives a categorized, factual Markdown document rather than a rambling PR article. ```markdown # Company Profile: Bugsy's HVAC Services & Maintenance ## Corporate Overview Bugsy's HVAC Services is a family-owned commercial heating and cooling provider operating in the Greater Chicago area since 1998. ## Core Services - **Commercial Installation:** Full-building HVAC system design and installation. - **Preventative Maintenance:** Bi-annual inspection and filter replacement contracts. - **Emergency Repair:** 24/7 on-call repair services for commercial accounts. ## Key Differentiators - 24/7 guaranteed emergency response time under 2 hours. - All technicians are NATE-certified and fully insured. - Upfront, flat-rate pricing with no hidden weekend emergency fees. ``` Why This Matters for Automation Developers [#why-this-matters-for-automation-developers] By applying the General Business & Doc Review Policy during ingestion, you drastically improve the baseline intelligence of your agents: * **Structured Narrative:** You turn messy web scrapes and PDF brochures into clean, categorized data. If a user asks "What services do you offer?", the LLM finds a direct bulleted list. * **Brand Voice Preservation:** Because the policy preserves exact terminology and limits paraphrasing, your agent speaks using the client's approved brand language, not generic AI speak. * **Simple Extraction:** You don't need to build complex LLM extraction loops to figure out what the company does. # General Document Policy When to use it [#when-to-use-it] Select this policy for any miscellaneous files, internal memos, press releases, custom forms, or general knowledge documents that do not fit the specialized categories but must be preserved exactly as written. 
How the General Document Policy Protects Your Data [#how-the-general-document-policy-protects-your-data] 1\. The "Literal Transcription" Rule [#1-the-literal-transcription-rule] LLMs naturally want to summarize, paraphrase, and embellish text. In a RAG pipeline, summarization is dangerous because it destroys nuance. This policy enforces an absolute, hard constraint: DO NOT EVER rewrite, paraphrase, embellish, or interpret. It copies the exact wording, preserving the original meaning, terminology, dates, and number formats exactly as found in the source file. 2\. Auto-Titling for Semantic Search [#2-auto-titling-for-semantic-search] To make miscellaneous documents highly retrievable, the policy analyzes the raw text and generates an accurate, specific title. This title acts as a heavy semantic weight in the Vector Database, ensuring that when an n8n workflow searches for a specific concept, this document ranks correctly. 3\. Structural Cleanup (No Duplication) [#3-structural-cleanup-no-duplication] Miscellaneous documents often contain repetitive headers, messy OCR scans, or duplicated clauses. The engine acts as an information organizer, ensuring that each data point appears only once in the most appropriate section. It converts the chaos into clean, LLM-readable Markdown. 4\. The Universal Inventory Firewall [#4-the-universal-inventory-firewall] As a standard AgentBrains safety measure, this policy actively hunts for and redacts any mention of product inventory or stock levels. Even if a press release mentions "we have 5,000 units in our warehouse," it is stripped out. This ensures your AI agent never promises a customer stock based on an outdated memo. What the Output Looks Like [#what-the-output-looks-like] When your n8n workflow retrieves a General Document, it receives a perfectly literal, structurally clean Markdown file, complete with a semantic title. 
```markdown # Press Release: 2025 Warehouse Expansion Initiative ## Overview Bugsy's Company officially broke ground on the new 50,000-square-foot logistics center located in the West Loop industrial sector on March 15, 2025. ## Timeline and Investment The project represents a $2.4 million capital investment. Phase 1 of construction is scheduled for completion on August 30, 2025. The facility will exclusively handle outbound commercial freight and will not be open for retail customer pickups. ## Environmental Compliance The facility conforms strictly to the 2025 EPA Green Building Code (Title 24, Part 6), utilizing high-efficiency R-410A commercial HVAC systems. ``` Why This Matters for Automation Developers [#why-this-matters-for-automation-developers] By applying the General Document Policy as your fallback ingestor, you guarantee a baseline of quality across your entire RAG architecture: * **No Data Loss:** Because the engine is forbidden from summarizing, you never have to worry that the AI "compressed" a crucial legal clause out of a document during ingestion. * **Universal Formatting:** Whether the client gives you a messy .txt file, a scraped URL, or a poorly formatted Word doc, your workflows will always receive clean, predictable Markdown in return. * **Semantic Anchoring:** The auto-generated titles ensure that even vaguely named source files (like doc\_v4\_final.pdf) are transformed into highly searchable vector assets (like 2025 Warehouse Expansion Initiative). # The Instruction Manual Policy When to use it [#when-to-use-it] Select this policy for any document containing assembly instructions, user manuals, maintenance procedures, setup guides, or troubleshooting documents. How the Manual Policy Protects Your Data [#how-the-manual-policy-protects-your-data] 1\. Actionable Sequencing (Preserving Order) [#1-actionable-sequencing-preserving-order] The most critical failure of a Support Agent is giving a customer Step 3 before Step 1. 
This policy actively looks for setup, operation, and maintenance instructions and forces them into strict numbered lists. It preserves the chronological sequence of the original document, ensuring the AI agent guides the user step-by-step without hallucinating the order of operations. 2\. Troubleshooting Extraction [#2-troubleshooting-extraction] Manuals often bury the "FAQ" or "Troubleshooting" sections at the very end. The policy actively hunts for these sections—the "If X happens, do Y" logic—and structures them into clear problem/solution pairings, making them highly retrievable for customer support queries. 3\. Noise Reduction [#3-noise-reduction] To optimize the context window and lower token costs, the policy aggressively strips out metadata that is useless to an LLM: * **Table of Contents:** TOCs and page numbers confuse LLMs and pollute vector searches. The policy deletes them entirely. * **Financials & Inventory:** Even though manuals shouldn't contain pricing or stock levels, the policy acts as a firewall. If it finds any, it redacts them to maintain separation of concerns. 4\. Smart Identifier Mapping (The "SKU" Anchor) [#4-smart-identifier-mapping-the-sku-anchor] Just like our other policies, this engine anchors the manual to a specific product ID. It scans the document to find the primary identifier, allowing your workflow to link this manual directly to a specific product in your database. * **Priority 1:** Searches for 'Item' or 'Item Number'. * **Priority 2:** If absent, searches for 'Model' or 'Item ID'. * **Fallback:** Defaults to the most prominent product name on the manual's cover or title. 5\. Pure Transcription [#5-pure-transcription] Technical instructions must be exact. The policy is strictly forbidden from rewriting, paraphrasing, or interpreting the instructions. It copies the exact terminology, warnings, and formatting of the source data so the agent's advice is technically sound and legally compliant. 
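The identifier fallback in the "SKU" Anchor section above can be sketched as a tiny function. This is a minimal illustration of the documented priority order, not part of the AgentBrains API; `resolveProductId`, `fields`, and `coverTitle` are hypothetical names.

```javascript
// Hypothetical sketch of the documented identifier fallback order.
// `fields` maps identifier labels found in the manual to their values;
// `coverTitle` stands in for the most prominent product name on the
// manual's cover, used as the last resort.
function resolveProductId(fields, coverTitle) {
  return (
    fields['Item'] ?? fields['Item Number'] ?? // Priority 1
    fields['Model'] ?? fields['Item ID'] ??    // Priority 2
    coverTitle                                 // Fallback
  );
}
```

The nullish coalescing chain mirrors the priority list: each candidate is tried in order, and the cover title wins only when no identifier column is present.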
What the Output Looks Like [#what-the-output-looks-like] When your n8n workflow retrieves instructions, it receives a perfectly sequenced Markdown document, completely free of PDF formatting artifacts. ```markdown # Model: X500-Pro ## Setup Instructions 1. Unbox the device and locate the primary power cable (Part A). 2. Connect the power cable to the rear terminal before turning on the device. 3. Press and hold the power button for 5 seconds until the LED indicator flashes blue. 4. Download the companion app to complete the Wi-Fi pairing process. ## Troubleshooting **Issue: The LED indicator is flashing red.** - Ensure the battery is fully inserted. - Check the rear terminal for dust or debris. - If the issue persists, perform a hard reset by holding the power button for 15 seconds. **Issue: Device will not connect to Wi-Fi.** - Verify your router is broadcasting a 2.4GHz network. The device does not support 5GHz networks. ``` Why This Matters for Automation Developers [#why-this-matters-for-automation-developers] By applying the Instruction Manual Policy during ingestion, you drastically improve the performance of Technical Support agents: * **Improved Chunking:** Because we strip the Table of Contents and convert the layout into linear Markdown headings, your Vector DB chunks the data logically by section, rather than randomly splitting a sentence across two columns. * **Step-by-Step Reliability:** By forcing numbered lists, the LLM is naturally prompted to deliver step-by-step instructions to the user in the exact order intended by the manufacturer. * **Ease of Use:** You do not need to build complex OCR pipelines or Python scripts to clean up messy PDF manual layouts. We hand you clean data. # The Price List Policy When to use it [#when-to-use-it] Select this policy for any data source containing pricing catalogs, SKU lists, discount tables, or promotional codes, ideally consolidated into a single document that contains all of the above information. 
Our structuring policy organizes all of your price lists into a single document, with reference excerpts that explain particular categories (for example, a military discount). If an agent hallucinates a greeting, it's annoying. If it hallucinates a price, it costs the business money. To prevent this, the Price List Policy applies a highly restrictive, JSON-based structuring pipeline to your raw data. It forces the LLM to read a rigid database rather than guessing across paragraphs of text. How the Price List Policy Protects Your Data [#how-the-price-list-policy-protects-your-data] 1\. Deterministic Extraction [#1-deterministic-extraction] This policy is strictly constrained to extract only what is explicitly written in your uploaded document. It is forbidden from paraphrasing, embellishing, or inferring missing values. If a promotional discount is not explicitly listed next to a product, the engine will not generate one. 2\. Smart Identifier Mapping [#2-smart-identifier-mapping] To ensure the AI never mixes up the price of one product with another, the engine forces a hierarchy to identify products. It scans the raw data and anchors every row to a primary ID using strict fallback logic: * **Priority 1:** It searches for specific columns like 'Item' or 'Item Number'. * **Priority 2:** If absent, it searches for 'Model' or 'Item ID'. * **Fallback:** If headers are messy or missing, it anchors to the very first column of the file. 3\. Noise Reduction & Redaction [#3-noise-reduction--redaction] Raw exports from CRMs or ERPs often contain internal data that public-facing agents shouldn't see. The Price List Policy acts as a filter: * **Curated Columns Only:** It explicitly ignores irrelevant data columns, extracting only Product Names, Pricing, and authorized Discounts (e.g., military discounts, promo codes). 
* **Inventory Stripping:** The policy is hard-coded to omit and redact inventory levels or stock counts, keeping the AI's context window focused strictly on pricing. * **Formatting Preservation:** Currency symbols, decimal places, and date formats are preserved exactly as found in the source document to prevent conversion errors. 4\. Pure JSON Output [#4-pure-json-output] The final output is stripped of all conversational text, introductory sentences, or summaries. The engine outputs a clean, machine-readable JSON array. What the Output Looks Like [#what-the-output-looks-like] When your n8n workflow queries the Knowledge Base for a price, it receives a perfectly structured JSON object rather than a messy paragraph. ```json [ { "item": "SKU-1002", "product_name": "Thermal Imaging Camera Model X", "price": "$1,299.00", "military_discount_%": "10%", "promo_code": "FALL2025" } ] ``` Why This Matters for Automation Developers [#why-this-matters-for-automation-developers] By applying the Price List Policy during ingestion, we solve the hardest part of RAG for you: * **Zero Formatting Logic:** You don't need to write regex or JavaScript functions in n8n to parse messy CSVs or Excel exports. * **Lower Token Usage:** By stripping out empty rows and irrelevant columns (like inventory counts), you consume fewer tokens per retrieval. * **Predictable Prompts:** Because the data is returned in standard JSON, you can pass it directly into your agent's context window with confidence that the LLM will map the right price to the right product every single time. # The Product Spec Sheet Policy When to use it [#when-to-use-it] Select this policy for any data source containing product landing pages, technical data sheets, service descriptions, brochures, or feature comparisons. How the Spec Sheet Policy Protects Your Data [#how-the-spec-sheet-policy-protects-your-data] 1\. 
The "No Financials" Firewall [#1-the-no-financials-firewall] The most important feature of this policy is what it removes. Marketing PDFs and scraped web pages often contain outdated MSRPs, old promo banners, or shipping estimates. This policy applies a strict redaction of all financial data—including prices, discounts, and costs. **Why?** Relational Integrity: By stripping prices from spec sheets, we force your AI agent to retrieve financial data only from your master Price List. This guarantees the AI never quotes a 2023 brochure price to a 2025 customer. 2\. Smart Identifier Mapping (The "SKU" Anchor) [#2-smart-identifier-mapping-the-sku-anchor] Just like our Price List policy, this engine forces a strict hierarchy to identify products. It scans the text and anchors the document to a primary ID so it can be cross-referenced with your other databases (like pricing or inventory APIs). * **Priority 1:** It searches for 'Item' or 'Item Number'. * **Priority 2:** If absent, it searches for 'Model' or 'Item ID'. * **Fallback:** It defaults to the most prominent H1 heading or Page Title. 3\. Availability & URL Routing [#3-availability--url-routing] The policy enforces two mandatory metadata fields for every product it processes, ensuring the LLM always has operational context: * **Availability Status:** It analyzes the text to determine if a product is "Available" or "Not Available." (Note: It is smart enough to differentiate between a product being temporarily out of stock versus being permanently archived/discontinued). * **Product URL:** It actively hunts for the direct link to the specific product page so the agent can provide direct routing to the customer. 4\. Technical Data Consolidation (No Hallucinations) [#4-technical-data-consolidation-no-hallucinations] The engine reads the entire document, consolidating paragraphs, bulleted lists, and messy tables into clean, structured Markdown text. 
It is strictly forbidden from paraphrasing, embellishing, or inferring details. If a specification isn't explicitly written in the source, it will not be generated. Inventory levels are also strictly redacted. What the Output Looks Like [#what-the-output-looks-like] When your n8n workflow queries the Knowledge Base for product information, it receives a perfectly structured Markdown document optimized for the LLM's context window. ```markdown # Model: X500-Pro **Availability:** Available **Product Url:** `https://bugsys-company.com/products/x500-pro` ## Product Description The X500-Pro is a heavy-duty thermal imaging camera designed for industrial inspections and HVAC diagnostics. ## Technical Specifications - **Resolution:** 320 x 240 pixels - **Thermal Sensitivity:** <0.04°C - **Battery Life:** 8 hours continuous use - **Drop Test Rating:** 2 meters ## Included Accessories - Hard transport case - 2x Lithium-ion batteries - Charging dock ``` Why This Matters for Automation Developers [#why-this-matters-for-automation-developers] By applying the Product Spec Sheet Policy during ingestion, you build a much more resilient RAG architecture: * **Separation of Concerns:** By keeping technical data (Spec Sheets) entirely separate from financial data (Price Lists), your agents become immune to version-conflict hallucinations. * **Ready for Tool Use:** Because every spec sheet forces a "Product URL" field, you can easily instruct your agent to output that URL as a clickable link to the user in the chat interface. * **Clean Markdown:** You don't have to write scripts to clean up HTML tags, messy PDF line breaks, or navigation bars. You get pure, factual text. # Introduction API Introduction [#api-introduction] Welcome to the Agent Brains REST API documentation. The base URL for requests is: `https://api.agent-brains.com` Use the sections below to navigate by domain. 
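For orientation, a minimal authenticated request against the base URL can be sketched as follows. The `/entities` path and the environment-variable name are hypothetical placeholders, and the `x-api-key` header is one of the accepted authentication options described in the next section.

```javascript
// Minimal sketch of an authenticated call to the AgentBrains REST API.
// The path '/entities' and the env-var name are illustrative only;
// substitute the real endpoint and your own secret storage.
const BASE_URL = 'https://api.agent-brains.com';

function buildRequest(path, apiKey) {
  return {
    url: `${BASE_URL}${path}`,
    headers: { 'x-api-key': apiKey }, // one of the accepted auth headers
  };
}

// Usage (uncomment to perform a real call):
// const { url, headers } = buildRequest('/entities', process.env.AGENTBRAINS_API_KEY);
// const data = await fetch(url, { headers }).then((r) => r.json());
```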
Authenticate every protected request using one of these headers: * `x-api-key: <token>` * `x-access-key: <token>` * `Authorization: Bearer <token>` Header priority is `x-api-key` → `x-access-key` → `Authorization: Bearer`. After extraction, the token is verified against the Access Keys Manager for the required scope. *** Knowledge Base [#-knowledge-base] Browse and fetch knowledge-base entities, categories, attachments, and indexes. Entities [#entities] Categories [#categories] Attachments [#attachments] Indexes [#indexes] Employees [#-employees] Access employee configuration. Company [#-company] Fetch tenant-level company information. # Introduction Introduction to AgentBrains n8n Nodes [#introduction-to-agentbrains-n8n-nodes] The AgentBrains node pack connects n8n workflows to the parts of AgentBrains that matter in production: deterministic knowledge access, semantic retrieval, inbound workflow triggering, and synthetic conversation testing. Why use the node pack [#why-use-the-node-pack] You can call the AgentBrains APIs with raw HTTP nodes, but the dedicated nodes remove repetitive setup and make workflows easier to maintain. | Benefit | What you get in n8n | | ----------------------- | --------------------------------------------------------------------------------------------------------------------- | | Credential handling | Access tokens live in the n8n credential store instead of being repeated in every HTTP node. | | Better AI ergonomics | Knowledge and retrieval responses are already shaped for workflow logic and LLM tooling. | | Visual selection | Categories, indexes, and synthetic users can be selected from dropdowns instead of hardcoded IDs. | | Faster production setup | You can connect a live workflow, search your knowledge base, and run QA without building custom request chains first. | Install the package [#install-the-package] AgentBrains ships as an n8n community node package: `n8n-nodes-agent-brains`. 
n8n supports three installation paths for community nodes: | Path | Best for | Notes | | -------------------------- | ------------------------------------------- | ------------------------------------------------------------ | | Nodes panel | Verified community nodes | Available from the canvas search panel for verified packages | | Settings → Community Nodes | Self-hosted npm installation through the UI | Requires an Owner or Admin account | | Manual npm install | Queue mode or private-package setups | Self-hosted only | Recommended path for self-hosted n8n [#recommended-path-for-self-hosted-n8n] 1. Open **Settings**. 2. Go to **Community Nodes**. 3. Select **Install**. 4. Enter `n8n-nodes-agent-brains`. 5. Confirm the community-node risk prompt. Manual installation [#manual-installation] If your n8n deployment runs in queue mode or you install packages manually inside the runtime container, install it from npm in `~/.n8n/nodes`, then restart n8n: ```bash mkdir -p ~/.n8n/nodes cd ~/.n8n/nodes npm install n8n-nodes-agent-brains ``` If you run n8n in Docker, open the container shell first and then run the same commands there. Important limitations [#important-limitations] * Manual npm installation is available on self-hosted n8n instances. * Unverified community nodes are not available on n8n Cloud. * Verified community nodes can also be installed directly from the nodes panel by instance owners or admins. Configure credentials [#configure-credentials] Create one **AgentBrains Integration API** credential and reuse it across all nodes. | Field | Description | | ------------- | ------------------------------------------------------------------------------------------------- | | Access Token | Generate it from the AgentBrains system integration page. | | Custom Domain | Optional override for sandbox or a custom environment. Leave it empty to use the package default. 
| The credential sends the same token both as a bearer token and as an access-key header, so you do not need to manage request headers manually. If you need help creating the token first, open [Access API Key](/integrations/access-api-key). Choose the right node [#choose-the-right-node] | Node | Best for | Typical outcome | | ------------------------------------------------------------------------ | -------------------------------------------------- | ---------------------------------------------------------------------------------------- | | [AgentBrains Integration Trigger](/integrations/n8n/integration-trigger) | Starting an n8n workflow from AgentBrains | A live AgentBrains conversation or external integration invokes your workflow webhook. | | [AgentBrains Knowledge Base](/integrations/n8n/knowledge-base) | Deterministic document, image, and category access | Your flow reads exact records, categories, attachments, or company data. | | [AgentBrains RAG](/integrations/n8n/rag) | Semantic retrieval across indexes | Your AI agent searches large knowledge collections without managing embeddings yourself. | | [AgentBrains Synthetic QA](/integrations/n8n/synthetic-qa) | Automated validation of agent behavior | You get scored test conversations and a structured QA report. | Recommended workflow patterns [#recommended-workflow-patterns] Deterministic answer flow [#deterministic-answer-flow] Use **Integration Trigger** to receive the user request, then call **Knowledge Base** when the answer must come from a specific policy, price list, instruction manual, or product sheet. Hybrid agent flow [#hybrid-agent-flow] Use **RAG** as an AI tool for broad semantic search, then call **Knowledge Base** when the model needs an exact source document or image. Quality gate flow [#quality-gate-flow] Use **Synthetic QA** in a validation workflow after prompt, policy, or automation changes. This gives you repeatable scoring instead of manually reading JSON outputs. 
What the nodes map to [#what-the-nodes-map-to] The node pack is built on top of the AgentBrains integration APIs and helper routes: * The Knowledge Base node maps to entity, category, attachment, and company helper routes. * The RAG node uses index discovery and retrieval endpoints. * The Synthetic QA node uses AgentBrains external QA services to start and poll test runs. * The Integration Trigger registers your n8n webhook with AgentBrains when the workflow is activated. Node Interface [#node-interface] Here is how the **AgentBrains** nodes look in n8n: *(screenshot: AgentBrains n8n nodes example)* Next steps [#next-steps] * Open [Access API Key](/integrations/access-api-key) if you still need to create or organize your integration credentials. * Open [AgentBrains Integration Trigger](/integrations/n8n/integration-trigger) if you need to connect a production workflow. * Open [AgentBrains Knowledge Base](/integrations/n8n/knowledge-base) if you need exact documents or image metadata. * Open [AgentBrains RAG](/integrations/n8n/rag) if you want semantic search as an AI tool. * Open [AgentBrains Synthetic QA](/integrations/n8n/synthetic-qa) if you want repeatable testing and scoring. # AgentBrains Integration Trigger AgentBrains Integration Trigger [#agentbrains-integration-trigger] Use this node when AgentBrains should initiate the workflow. It is the bridge between a live AgentBrains integration and your n8n automation. What it does [#what-it-does] When the workflow is activated, the node registers the production webhook URL with AgentBrains. When AgentBrains sends a `POST` request to that webhook, the workflow starts immediately. 
This is the right node for: * AI employees that should hand execution to n8n * Production support or sales flows managed in AgentBrains but executed in n8n * Synthetic user tests that must hit the same production-style workflow entry point How activation works [#how-activation-works] The node handles webhook lifecycle tasks for you: | Stage | Behavior | | --------------- | ------------------------------------------------------------------------------------- | | Activation | Registers the workflow ID, workflow name, and production webhook URL with AgentBrains | | Active workflow | Accepts `POST` requests on the node webhook path | | Reconfiguration | Re-activate the workflow if credentials or webhook URLs change | The node converts n8n test webhook URLs into production webhook URLs before registration, so AgentBrains always stores the production endpoint. Parameters [#parameters] | Parameter | Description | | ------------------ | ---------------------------------------------------------------------------------------------------------------------------- | | Respond | Choose whether the webhook response comes from a **Respond to Webhook** node or from the **Last Node** in the execution path | | Additional Headers | Optional custom headers for webhook requests | Input and output [#input-and-output] This is a trigger node, so it has no incoming connection. When a request arrives, the node emits one item with a `data` object that contains: * The incoming request body from AgentBrains * n8n workflow metadata merged into the same object That makes it easy to route the payload into downstream branching, AI, or formatting nodes. Request and Response Format [#request-and-response-format] Data Sent to Your Webhook [#data-sent-to-your-webhook] When AgentBrains triggers the workflow, it sends an HTTP `POST` request. You should validate the header before processing the body. **Headers** Every request contains: * `X-Api-Key`: Your Webhook Secret. 
**Body** A JSON object with conversation data. The most important fields are: * `message`: The newest customer message. * `history`: The complete chat history for context. * `conversation_id`: The unique conversation ID. * `employeeName`, `role`, `tonality`, etc.: Configuration parameters of the AI employee. ```json { "employeeName": "ATN sales expert", "role": "You are a helpful and knowledgeable Information Specialist...", "tonality": "You are a Crazy crazy sales person who always answers in jokes", "message": "What does he look like?", "conversation_id": "68debab6d8a1a1163e976969", "history": "Agent: Hey chat with me - here to help. Sales\nCustomer: I want to buy thor 5 320 3-12x\n..." } ``` How to Send a Response Back [#how-to-send-a-response-back] After processing the request in your workflow, you must respond with an HTTP `200` status and a JSON object containing a single `message` key. ```json { "message": "This is the generated reply from your custom AI that will be sent to the customer." } ``` Recommended setup [#recommended-setup] 1. Add **AgentBrains Integration Trigger** as the first node in the workflow. 2. Choose the response strategy. 3. Build the rest of the flow exactly as you want the live integration to run. 4. Activate the workflow so AgentBrains can register the production webhook. If your flow needs to return a specific payload to AgentBrains, use **Respond to Webhook**. If you want the final downstream item to become the response automatically, use **Last Node**. Typical pattern [#typical-pattern] 1. **Integration Trigger** receives the inbound request. 2. Your workflow enriches the request with CRM, ERP, or internal business logic. 3. Optional AgentBrains nodes retrieve policies, product data, images, or RAG context. 4. The workflow returns a final response to AgentBrains. 
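The request/response contract above can be sketched as a plain function: validate the `X-Api-Key` header, process the body, and answer with an HTTP `200` and a JSON object containing a single `message` key. `handleWebhook` and `webhookSecret` are illustrative names, not part of the node pack.

```javascript
// Hypothetical sketch of the documented webhook contract.
// `headers` and `body` stand in for the parsed incoming request.
function handleWebhook(headers, body, webhookSecret) {
  // Validate the X-Api-Key header before processing the body.
  if (headers['x-api-key'] !== webhookSecret) {
    return { status: 401, body: { message: 'Invalid webhook secret' } };
  }
  const { message, conversation_id } = body;
  // Replace this line with your own workflow logic (CRM lookup, LLM call, etc.).
  const reply = `Received "${message}" for conversation ${conversation_id}`;
  // The response body must contain a single `message` key.
  return { status: 200, body: { message: reply } };
}
```

In a real n8n workflow, the status and body would be produced by a **Respond to Webhook** node (or the last node), but the contract is the same.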
Operational notes [#operational-notes] | Note | Why it matters | | -------------------------------- | --------------------------------------------------------------------------------- | | Workflow must be active | Registration happens during activation, not while editing the workflow | | Credentials are required | The node checks and registers against the AgentBrains admin integration endpoints | | Best for production entry points | This node is for inbound execution, not for querying data or acting as an AI tool | When to pair it with other nodes [#when-to-pair-it-with-other-nodes] * Pair it with [AgentBrains Knowledge Base](/integrations/n8n/knowledge-base) for deterministic policy or product answers. * Pair it with [AgentBrains RAG](/integrations/n8n/rag) when the workflow needs semantic search over the full knowledge base. * Pair it with [AgentBrains Synthetic QA](/integrations/n8n/synthetic-qa) when you want to validate the same workflow behavior repeatedly. Node Interface [#node-interface] Here is how the **AgentBrains Integration Trigger** node looks in n8n: *(screenshot: Integration Trigger example)* # AgentBrains Knowledge Base AgentBrains Knowledge Base [#agentbrains-knowledge-base] Use this node when the workflow must read the exact record you curated in AgentBrains instead of relying on semantic search alone. This node is designed for deterministic retrieval: * price lists * product specification sheets * instruction manuals * customer support policies * image assets attached to knowledge base entities Why this node matters [#why-this-node-matters] If the workflow already knows the category, document family, or exact record it needs, deterministic retrieval is usually better than broad search. It reduces hallucinations and gives downstream LLM steps cleaner, more controlled context. Resources and operations [#resources-and-operations] The node exposes several resource types through a single interface. 
| Resource | What it returns | Available operations | | ------------ | ---------------------------------------------------------------- | ---------------------------------------------------------------------------- | | Documents | Knowledge base entities such as documents, products, or services | Get, Get Many, Get Related Entities, Get by Category Type, Get All Documents | | Category | Folder-like groupings used to organize entities | Get, Get Many, Get by Type | | Images | Attachments and media linked to entities | Get Many, Get | | Company Data | High-level company profile fields | Direct fetch without an operation selector | Relationship type and category type values still appear where they are used as filters or operation inputs, but they are not standalone resources in the node. Core parameters [#core-parameters] Documents [#documents] | Parameter | Used in | Description | | ------------------------ | ------------------------------ | ----------------------------------------------------------------------- | | ID | Get, Get Related Entities | Fetch a specific entity or use it as the source for relationship lookup | | Category Names or IDs | Get Many | Filter documents by one or more categories selected from the dropdown | | Search | Get Many | Case-insensitive search across `name`, `description`, and `details` | | Recursive | Get Many | Include documents from nested categories under the selected category | | Category Type Name or ID | Get by Category Type | Query documents by category alias, such as a document family | | Additional Fields | Get Many, Get by Category Type | Filter with `fields`, `sku`, `source`, or `tags` | | Merge Documents | Get All Documents | Combine many document records into one `mergedContent` output | Categories [#categories] | Parameter | Used in | Description | | ------------------------ | ----------- | ----------------------------------------------------------------------------------- | | ID | Get | Fetch a single category | | 
Category Type Name or ID | Get by Type | List categories belonging to one category alias | | Additional Fields | Get Many | Filter with `categoryAlias`, `policy`, `parent`, `search`, `fields`, and `extended` | Images and metadata [#images-and-metadata] | Parameter | Used in | Description | | ----------------- | -------- | ---------------------------------------------- | | ID | Get | Fetch a single attachment by ID | | Additional Fields | Get Many | Restrict the response fields for image records | Output behavior [#output-behavior] The node returns slightly different shapes depending on the operation: | Operation type | Output shape | | ---------------------------------------------- | ---------------------------------------------------------------- | | Single record operations | A normal object for the entity, category, image, or company data | | List operations | An object with an `items` array | | Get All Documents with Merge Documents enabled | A single object with `mergedContent` and `documentCount` | That output shape is useful in n8n because list responses stay explicit and easy to pass into later branching or formatting steps. Best use cases [#best-use-cases] Exact policy retrieval [#exact-policy-retrieval] Use **Documents** with category filters when the workflow must quote a known policy, instruction manual, or price list exactly as it exists in AgentBrains. Structured product lookup [#structured-product-lookup] Use **Documents** with `sku`, `search`, or `tags` when the workflow already has a product identifier and needs the cleanest possible source record. Folder-driven browsing [#folder-driven-browsing] Use **Category** to build UI-like selection logic inside n8n, especially if you want users or agents to choose from known document families. Image access [#image-access] Use **Images** when the workflow needs attachment metadata or an asset already linked to a document entity. 
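Downstream steps often need one consistent shape regardless of which operation ran. A minimal sketch of normalizing the three output shapes from the table above into a list of records; the `items`, `mergedContent`, and `documentCount` keys come from the node's documented output, while the wrapping of merged content into a `content`/`count` record is an illustrative choice:

```python
def normalize_kb_output(output: dict) -> list[dict]:
    """Normalize the three Knowledge Base output shapes into a list of records."""
    if "items" in output:
        # List operations return an object with an "items" array.
        return list(output["items"])
    if "mergedContent" in output:
        # Get All Documents with Merge Documents enabled returns one merged object.
        return [{"content": output["mergedContent"],
                 "count": output.get("documentCount", 0)}]
    # Single-record operations return a plain object for the entity.
    return [output]
```

A Code node applying this kind of normalization keeps later branching and formatting steps agnostic to which operation produced the data.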
Company context [#company-context] Use **Company Data** to fetch tenant-level profile data such as business context for personalization or routing. Node Interface [#node-interface] Here is how the **AgentBrains Knowledge Base** node looks in n8n: Knowledge Base Example Practical examples [#practical-examples] Retrieve a strict source document [#retrieve-a-strict-source-document] Select: * **Resource:** Documents * **Operation:** Get Many * **Category Names or IDs:** a folder like instruction manuals or support policies * **Search:** an exact product or topic phrase This is the best setup when the answer should come from a known document family and not from broad semantic search. Build one prompt from many documents [#build-one-prompt-from-many-documents] Select: * **Resource:** Documents * **Operation:** Get All Documents * **Merge Documents:** enabled This produces one combined body of content that can be sent into an LLM or stored as a downstream artifact. Find all related items [#find-all-related-items] Select: * **Resource:** Documents * **Operation:** Get Related Entities * **ID:** source entity ID * **Additional Fields → Type:** optional relationship type like `is-accessory-for` This is useful for recommendation, upsell, troubleshooting, or related-document flows. API concepts behind the node [#api-concepts-behind-the-node] The node is backed by the AgentBrains integration layer for knowledge-base scope. It maps to: * entity listing and entity-by-ID retrieval * category listing and category-alias discovery * attachment retrieval * helper company-info retrieval The OpenAPI specification for the underlying data APIs confirms the same filtering model used in the node, including `search`, `fields`, `tags`, `sku`, `source`, and recursive category traversal. Knowledge Base vs RAG [#knowledge-base-vs-rag] Use this node when you know what should be fetched. 
Use [AgentBrains RAG](/integrations/n8n/rag) when the agent only knows the user intent and must semantically search across an index first. # AgentBrains RAG AgentBrains RAG [#agentbrains-rag] Use this node when the workflow needs semantic search instead of exact document lookup. The node is built for AI-agent usage in n8n. It can be attached directly as a tool, which makes it a good fit for LangChain-based agents and n8n AI Agent flows. What it does [#what-it-does] The node supports two retrieval modes: | Operation | Purpose | Typical result | | -------------- | ------------------------------------------------ | --------------------------------------------------------------------------- | | Retrieve Text | Search a text index for the most relevant chunks | Context passages for product, policy, troubleshooting, or support questions | | Retrieve Image | Search the image index by semantic intent | Image metadata or image URLs that match labels generated during ingestion | Why teams use it [#why-teams-use-it] The RAG node gives you semantic retrieval without setting up a separate vector database workflow in n8n. You do not need to manage embeddings, a Pinecone client, or custom retrieval scripts in the workflow itself. Parameters [#parameters] | Parameter | Description | | ------------------ | -------------------------------------------------------------------------------- | | Operation | Choose between text retrieval and image retrieval | | Index Name or ID | Select the text index to search when using **Retrieve Text** | | Query | The semantic search prompt or user question | | Extended Response | For image retrieval, choose between the full response payload or only image URLs | | Tool Description | Explains to an AI agent when it should use this tool | | Options → Top K | Number of matches to return | | Options → Metadata | Optional JSON filter passed to retrieval | Index behavior [#index-behavior] For text retrieval, the node loads available indexes dynamically. 
It always includes a built-in global option: * **Core Text Index (All Documents)** for broad semantic search across the full document corpus The underlying API exposes index collections and metadata such as index names, notes, category membership, and vectorisation status. That makes this node a practical handoff point between AgentBrains knowledge organization and n8n execution. Image retrieval [#image-retrieval] Image mode uses the dedicated `images` namespace automatically. This is especially useful when the user asks to see something rather than only describe it, for example: * wiring diagrams * product comparison images * labeled reference photos * marketing or catalog visuals If **Extended Response** is disabled, the node returns only image URLs. If it stays enabled, the workflow receives the full retrieval payload. Output shape [#output-shape] | Mode | Output | | ---------------------------------------------- | ----------------------------------------------------------------- | | Retrieve Text | The raw retrieval response from the AgentBrains retriever service | | Retrieve Image with Extended Response enabled | The full image retrieval payload | | Retrieve Image with Extended Response disabled | A simplified array of image URLs extracted from result metadata | Best workflow patterns [#best-workflow-patterns] AI tool for broad questions [#ai-tool-for-broad-questions] Use this node as a tool when the user asks something broad like: * "How do I troubleshoot the thermal sensor?" * "What should I know before installing this unit?" * "Show me images for the blue and red variants." The AI agent can call the node only when needed instead of loading a large static prompt every time. Hybrid retrieval [#hybrid-retrieval] Use **RAG** first to find the most relevant area of the knowledge base, then call [AgentBrains Knowledge Base](/integrations/n8n/knowledge-base) for exact document fetches when the answer must reference a specific record. 
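The hybrid pattern above can be sketched as a two-step function. Both `rag_search` and `kb_get` are hypothetical stand-ins for the RAG and Knowledge Base node calls, and the `entity_id` field on a hit is an assumption for illustration, not a documented schema:

```python
def answer_with_hybrid_retrieval(question, rag_search, kb_get):
    """Hybrid pattern sketch: semantic search first, deterministic fetch second."""
    # Step 1: broad semantic search to find the most relevant area.
    hits = rag_search(query=question, top_k=3)
    # Step 2: exact fetches for any hit that names a specific record.
    records = [kb_get(hit["entity_id"]) for hit in hits if "entity_id" in hit]
    return {"context": hits, "records": records}
```

In n8n the same flow is two nodes in sequence rather than one function, but the division of labor is identical: RAG narrows, Knowledge Base quotes.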
Visual answer flows [#visual-answer-flows] Use **Retrieve Image** when the chat or workflow should return a hosted image URL directly into the response. Metadata filtering [#metadata-filtering] The **Metadata** option accepts JSON and is passed through to the retriever. This lets you narrow results by structured properties if your ingestion pipeline stores those fields in the index metadata. API concepts behind the node [#api-concepts-behind-the-node] The node uses: * the index listing API to populate selectable indexes * the retriever API to perform semantic lookup with `namespace`, `query`, `metadata`, and `topK` The OpenAPI spec documents the index-management side of this model, including index status and category assignment. The retriever route in the integration service confirms the request shape used by the node. Node Interface [#node-interface] Here is how the **AgentBrains RAG** node looks in n8n: RAG Example RAG vs Knowledge Base [#rag-vs-knowledge-base] Use [AgentBrains Knowledge Base](/integrations/n8n/knowledge-base) when you already know which record to fetch. Use this node when you want the workflow or AI agent to discover the best matching context semantically. # AgentBrains Synthetic QA AgentBrains Synthetic QA [#agentbrains-synthetic-qa] Use this node to validate workflow quality with repeatable, scored synthetic conversations instead of manual spot checks. It connects n8n to the AgentBrains synthetic user engine so you can test behavior, monitor regressions, and compare outcomes after changes. What it does [#what-it-does] For each execution, the node: 1. Loads available synthetic users from AgentBrains. 2. Starts a test run with the selected persona, scoring tests, and conversation goals. 3. Polls AgentBrains until the run completes or times out. 4. Returns structured scoring results and a parsed report. 
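The four steps above amount to a start-then-poll loop. A minimal sketch under stated assumptions: `start_run` and `get_status` are hypothetical stand-ins for the AgentBrains calls, the `state` field is illustrative, and the 5-second interval and 15-minute cap mirror the node's operational defaults described below:

```python
import time

def run_synthetic_qa(start_run, get_status, interval=5.0, timeout=900.0, sleep=time.sleep):
    """Sketch of the start-then-poll loop the node performs internally."""
    run_id = start_run()
    waited = 0.0
    while waited < timeout:
        status = get_status(run_id)
        if status.get("state") == "completed":
            return status  # carries the scoring results and parsed report
        sleep(interval)
        waited += interval
    raise TimeoutError(f"Synthetic QA run {run_id} did not finish in {timeout}s")
```

The node handles this loop for you; the sketch is only meant to make the timeout and polling behavior concrete.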
Parameters [#parameters] | Parameter | Description | | ---------------------------- | --------------------------------------------------------------- | | Synthetic QA Name or ID | Select the synthetic user persona for the run | | Number of Test Conversations | How many conversations should be generated in the batch | | Conversation Quality Tests | Which scoring dimensions to evaluate | | Options → Conversation Goals | Optional manual goals shared across all conversations | | Options → Promotion Details | Optional negotiation or promotion prompt applied during the run | Available scoring tests [#available-scoring-tests] The node currently exposes these quality dimensions: * Customers Mood Change * Human-Free Issue Handling * Information Completeness * Making a Sale * Objection Handling * On Task * Problem Solving Goals and persona behavior [#goals-and-persona-behavior] If you provide **Conversation Goals**, the node uses those goals directly. If you leave the field empty, AgentBrains generates primary and secondary objectives automatically for the selected synthetic user. The node also sends persona details, industries, personalities, and employee role information from the selected synthetic profile so the run matches the intended behavior. Output shape [#output-shape] The node returns a single object with: | Field | Description | | -------------- | ------------------------------------------------------- | | `scoringTests` | A list of score entries with `name` and `score` | | `report` | A parsed QA report with a title and structured sections | This makes it easy to send the result into Slack, dashboards, storage, or a release gate workflow. Best use cases [#best-use-cases] Regression testing [#regression-testing] Run this node after prompt, knowledge-base, or workflow logic changes to detect broken behavior before it reaches production. 
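Because the output carries `scoringTests` entries with `name` and `score`, a release gate reduces to a threshold check. A minimal sketch: the field names come from the output table above, while the 0.7 threshold and a 0-to-1 score scale are assumptions for illustration:

```python
def gate_on_scores(result: dict, threshold: float = 0.7) -> tuple[bool, list[str]]:
    """Release-gate sketch over the Synthetic QA output shape.

    Returns (passed, failing_test_names) so a workflow can branch on it.
    """
    failures = [t["name"]
                for t in result.get("scoringTests", [])
                if t["score"] < threshold]
    return (len(failures) == 0, failures)
```

In an n8n flow this check maps onto an IF node after the Synthetic QA node, with the failing branch feeding Slack, ClickUp, or a dashboard.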
Scheduled monitoring [#scheduled-monitoring] Trigger the node daily or hourly to measure whether a live workflow still passes core scenarios. Scenario-based validation [#scenario-based-validation] Use different synthetic users to simulate angry customers, curious leads, or other personas without rebuilding your workflow for each test case. Example validation flow [#example-validation-flow] 1. Start with a manual trigger, schedule trigger, or deployment webhook. 2. Run **AgentBrains Synthetic QA** with the target synthetic user and scoring dimensions. 3. Branch on returned scores. 4. Send failures to Slack, ClickUp, or a dashboard. Operational details [#operational-details] | Detail | Behavior | | ----------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | | Polling interval | The node polls every 5 seconds | | Maximum wait time | The node waits up to 15 minutes for completion | | Failure handling | If AgentBrains returns an error or an invalid synthetic user is selected, the execution fails with a node error unless continue-on-fail is enabled | Why this is better than manual testing [#why-this-is-better-than-manual-testing] Manual validation in n8n usually means triggering workflows and reading raw JSON by hand. This node turns testing into a measurable process with reusable personas, repeatable conversation goals, and direct scoring. Pairing recommendations [#pairing-recommendations] * Use [AgentBrains Integration Trigger](/integrations/n8n/integration-trigger) when you want the same workflow to serve both production traffic and test traffic. * Use [AgentBrains Knowledge Base](/integrations/n8n/knowledge-base) or [AgentBrains RAG](/integrations/n8n/rag) in the tested flow so QA covers your real retrieval behavior, not a simplified mock. # Get employee # List employees # Get entity attachments {/* This file was generated by Fumadocs. 
Do not edit this file directly. Any changes should be made by running the generation command again. */} # Get an entity by ID # Get entity relationships # List and filter entities # List entities by category # List indexes # Retrieve matches # Get company info # Get an attachment by ID # Get attachments for a tenant # Link or unlink an attachment # Get a category by ID # List all categories # List category aliases