Building a Multi-Agent Pipeline with the n8n MCP Client Tool
Practical Guide to Standard Protocols for AI Agent Automation
The era of AI agents using only a single tool is passing. Solving real-world business problems requires agents that can work across CRMs, ticketing systems, and knowledge bases simultaneously, along with a pipeline in which those agents collaborate. Implementing each system's API specification by hand, however, leads to the paradox of integration code outgrowing the business logic.
The combination of the Model Context Protocol (MCP), announced by Anthropic in late 2024, and the n8n MCP Client Tool node directly resolves this problem. Where the earlier OpenAI function-calling approach required a separate tool schema per model provider, MCP provides a common standard: a server implemented once can be connected to any MCP client. This article covers everything from the core principles of MCP to the Orchestrator-Worker pattern, context optimization strategies, and operational precautions. Since the MCP Client Tool and MCP Server Trigger shipped as built-in nodes in n8n in April 2025, an n8n instance of that version or later is a prerequisite.
If you are new to n8n, we recommend that you first check out how to create a basic workflow in the n8n Official Quickstart. If you have experience using n8n, you may proceed directly to the practical application section.
Key Concepts
What is MCP — Understanding via the USB-C Analogy
Just as USB-C connects various devices through a single connector, MCP serves as a common interface between the LLM and external systems. Previously, each tool had to implement a different integration method, but with MCP, tools can be exposed on the server side in a standard manner and called on the client side in a consistent way.
MCP (Model Context Protocol): An open standard protocol that unifies how AI models interact with external tools and data sources. It standardizes the entire process in which a client (agent) discovers what tools are available on a server (external system), selects one, and calls it.
Concretely, MCP tool discovery works by the client sending a {"method": "tools/list"} request to the server's POST /mcp endpoint; the server returns the list of available tools along with a JSON Schema for each. Backend developers can think of this as analogous to automatic discovery against a REST API's OpenAPI specification.
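As a concrete sketch of that discovery call, using only the standard library (the server URL is a placeholder, and a real client would also run the MCP initialize handshake before listing tools):

```python
import json
import urllib.request

def build_tools_list_request(request_id: int = 1) -> dict:
    """JSON-RPC 2.0 envelope for the MCP tools/list method."""
    return {"jsonrpc": "2.0", "id": request_id, "method": "tools/list"}

def discover_tools(server_url: str) -> list:
    """POST the request to the server's /mcp endpoint and return its tool list."""
    payload = json.dumps(build_tools_list_request()).encode()
    req = urllib.request.Request(
        server_url,  # e.g. "https://your-crm-mcp-server.com/mcp" (placeholder)
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Streamable HTTP clients accept both possible response styles.
            "Accept": "application/json, text/event-stream",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Each entry carries "name", "description", and an "inputSchema".
    return body["result"]["tools"]
```

This is exactly the request n8n issues on your behalf when a MCP Client Tool node connects to a server.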
MCP is broadly divided into two roles.
| Role | n8n Node | Relationship with AI Agent | Description |
|---|---|---|---|
| MCP Client | MCP Client Tool Node | Connected to the Subnode of the AI Agent Node | Channel through which the agent calls the tool on the external MCP server |
| MCP Server | MCP Server Trigger Node | Independent Entry Point (Separate Workflow) | Expose the n8n workflow itself externally as an MCP Server |
With the combination of these two nodes, n8n becomes a bidirectional MCP hub. It can consume external tools while simultaneously serving itself as a tool to external AI clients (Claude Desktop, Cursor, etc.).
Multi-Agent Orchestration Architecture
The core pattern for configuring multi-agent pipelines in n8n is the Orchestrator-Worker structure. The MCP Client Tool is connected as a sub-node below the AI Agent node, and the structure allows the orchestrator to directly call multiple external systems.
```
[Pipeline internal flow]

[User input]
      │
      ▼
[Orchestrator AI Agent]
      ├──[MCP Client Tool] → MCP Server A (CRM)
      ├──[MCP Client Tool] → MCP Server B (Ticketing)
      └──[MCP Client Tool] → MCP Server C (Knowledge Base)
      │
      ▼
[Final result returned]

[External exposure: separate standalone workflow]

[MCP Server Trigger] ← external AI clients (Claude Desktop, Cursor, etc.)
      │
      └──[Runs the pipeline workflow above]
```
The MCP Server Trigger is not the output of the pipeline but an independent entry point. To expose a completed pipeline workflow externally as a single tool, place the MCP Server Trigger in a separate workflow and have it call that pipeline.
- Orchestrator Agent: Receives user requests, breaks down the entire task, and delegates it to the appropriate worker.
- Worker Agent: An agent specialized in a specific domain (search, database lookup, email sending, etc.) that the orchestrator calls like a tool.
- MCP Client Tool Node: Connected as a subnode of the AI Agent node, it automatically discovers and calls the list of tools on the MCP server at runtime.
Transport Methods — HTTP Streamable vs SSE
The MCP specification update of March 26, 2025 (version 2025-03-26) made significant changes to the transport layer.
| Method | Status | Remarks |
|---|---|---|
| HTTP Streamable | Officially Recommended | Support added via n8n PR #15454 |
| SSE (Server-Sent Events) | Deprecated | Use only for legacy-environment compatibility |
| STDIO | Not supported built-in | Requires community node (n8n-nodes-mcp) |
HTTP Streamable (Streamable HTTP): This is a transport method that supports bidirectional streaming over standard HTTP, improving upon the limitations of existing SSE, which relied on unidirectional server-to-client streaming. It operates more stably than SSE in proxy and load balancer environments and supports stateless requests when stateful connections are unnecessary.
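One practical consequence of the spec: over Streamable HTTP, a server may answer a POST either with a single JSON body or by upgrading the response to an SSE stream, and the client branches on the Content-Type header. A minimal sketch of that branching logic:

```python
def response_mode(content_type: str) -> str:
    """Decide how to parse a Streamable HTTP response from an MCP server.

    Per the 2025-03-26 MCP transport spec, a POST may be answered with a
    plain JSON body or with an SSE stream of messages.
    """
    ct = content_type.split(";")[0].strip().lower()
    if ct == "application/json":
        return "single-json"   # parse the body as one JSON-RPC response
    if ct == "text/event-stream":
        return "sse-stream"    # read "data:" events until the stream ends
    raise ValueError(f"unexpected Content-Type: {content_type}")
```

The stateless case mentioned above is simply the first branch: the server answers each POST with one JSON body and keeps no open connection.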
Practical Application
Example 1: Customer Support Automation Pipeline
The core of this example: the basic pattern of a single AI Agent + 3 MCP Client Tools handling multiple systems through natural language.
A single AI Agent with three MCP Client Tool nodes attached as sub-nodes processes CRM, ticketing, and knowledge-base systems simultaneously through natural language. This is the fastest way to experience the benefits of MCP integration.
```
[Customer inquiry webhook]
      │
      ▼
[AI Agent (GPT-4o)] ──── sub-node connections
      ├──[MCP Client Tool: CRM] → CRM MCP server (look up customer info)
      ├──[MCP Client Tool: Ticketing] → ticketing MCP server (check open issues)
      └──[MCP Client Tool: KB] → knowledge-base MCP server (search for solutions)
      │
      ▼
[Generate draft reply → send to Slack]
```
How to configure the MCP Client Tool node: open the MCP Client Tool node on the n8n canvas and fill in the following items in the Parameters tab.
| Field Path | Input Value | Description |
|---|---|---|
| Transport Type | Streamable HTTP | HTTP Streamable is recommended (SSE is legacy) |
| URL | https://your-crm-mcp-server.com/mcp | MCP server endpoint |
| Authentication | Select from the Credential dropdown | Choose a pre-created credential: Bearer Token, Header Auth, or OAuth2 |
Authentication information (API key, token) is stored separately in the n8n Credentials management screen and then linked via the node's Authentication dropdown. Directly writing the token value to JSON is not supported.
Once the setup is complete, n8n automatically sends a tools/list request to the MCP server, retrieves the tool list, and exposes it to the AI Agent.
If you expose the entire workflow configured in this way as an MCP Server Trigger (a separate workflow), external AI clients such as Claude Desktop or Cursor can invoke this entire pipeline with a single tool called "Customer Inquiry Resolution."
A single-agent structure is suitable when there are 3 to 5 tools. If the number of domains increases or the processing logic of each domain becomes complex, the following pattern is more effective.
Example 2: Multi-Agent Research Pipeline
Key to this example: an orchestrator + specialized worker agent hierarchy, in which each worker carries an independent context to spread the orchestrator's load.
This pattern automates large-scale research tasks with a hierarchical agent structure. Instead of managing every tool directly, the orchestrator delegates to specialized worker agents.
```
[Research topic input]
      │
      ▼
[Research Leader Agent (orchestrator)]
      │
      ├──[AI Agent Tool] ──▶ Research Assistant 1 (web search specialist)
      │        └──[MCP Client Tool] → Brave Search MCP
      │
      ├──[AI Agent Tool] ──▶ Research Assistant 2 (technical analysis specialist)
      │        └──[MCP Client Tool] → Elasticsearch MCP
      │
      └──[AI Agent Tool] ──▶ Editor Agent
               └──[MCP Client Tool] → email-sending MCP
```
AI Agent Tool node: a node that connects another AI Agent as if it were a tool. It lets the orchestrator invoke worker agents the same way it invokes regular tools. Inter-agent communication can be implemented with the MCP Trigger and Client combination alone, without webhooks.
The core configuration structure of the Orchestrator AI Agent node is represented as follows in the n8n workflow JSON.
```json
{
  "nodes": [
    {
      "name": "Research Leader Agent",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "parameters": {
        "options": {
          "systemMessage": "You are the research leader. Analyze the topic and delegate tasks to the appropriate worker agents."
        }
      }
    },
    {
      "name": "Research Assistant 1",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "parameters": {
        "options": {
          "systemMessage": "You are a web-search specialist agent. Use Brave Search to gather the latest information."
        }
      }
    }
  ],
  "connections": {
    "Research Assistant 1": {
      "ai_tool": [[{"node": "Research Leader Agent", "type": "ai_tool", "index": 0}]]
    }
  }
}
```
Note how the ai_tool connection type registers the worker agent as a sub-node (tool) of the orchestrator.
The key to this pattern is that each worker agent has an independent context. The orchestrator does not need to directly manage the contexts of all tools. However, what happens if the MCP server that the worker agents connect to contains dozens to hundreds of tools? The following pattern solves that problem.
Example 3: Context Reducer Pattern
Key Point of This Example: An Intermediate Layer Pattern to Prevent Context Explosion When Connecting to a Large MCP Server with Over 100 Tools
An LLM receives the entire list of available tools in its prompt context. The description and JSON Schema for a single tool average 100–300 tokens, so connecting an Elasticsearch MCP server with 100 tools directly to the orchestrator consumes 10,000–30,000 tokens for the tool list alone. That is roughly 10–25% of GPT-4o's 128K context window, and in practice symptoms such as reduced tool-selection accuracy and increased response-generation cost appear.
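The arithmetic behind those figures is easy to reproduce (the 100–300 tokens-per-tool range is this section's working assumption; actual counts vary with schema size):

```python
def tool_list_tokens(n_tools: int, tokens_per_tool: int) -> int:
    """Tokens the tool list alone adds to every single prompt."""
    return n_tools * tokens_per_tool

CONTEXT_WINDOW = 128_000  # GPT-4o context window

low = tool_list_tokens(100, 100)    # 10,000 tokens
high = tool_list_tokens(100, 300)   # 30,000 tokens

print(f"{low / CONTEXT_WINDOW:.1%} to {high / CONTEXT_WINDOW:.1%} of context")
# 7.8% to 23.4% of context
```

Note this cost is paid on every agent turn, which is why it compounds into both accuracy and billing problems.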
This can be mitigated using the Context Reducer Pattern introduced in n8n Workflow Template #4475.
```
[Orchestrator Agent]
      │ (delegates in natural language: "I need this information")
      └──[AI Agent Tool] ──▶ sub-agent (context reducer)
               │ (interacts directly with the large MCP server)
               └──[MCP Client Tool] → large MCP server (100+ tools)
                        │
                        ▼
[Only the needed results, summarized, are returned to the orchestrator]
```
The sub-agent acts as an intermediate layer: it interacts directly with the large MCP server and hands only compressed results to the orchestrator. The orchestrator does not need to know that 100 tools exist; it only has to recognize a single tool, the worker agent.
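The shape of the pattern can be sketched in a few lines. All names here are illustrative: the orchestrator's prompt carries exactly one tool (the worker), while the worker privately holds the large catalog it discovered from the MCP server:

```python
# Stand-in for the 100+ tools discovered via tools/list; in reality these
# are MCP tool calls, not local lambdas.
PRIVATE_CATALOG = {
    "search_customer_issues": lambda q: f"3 open issues matching '{q}'",
}

def worker_agent(request: str) -> str:
    """Worker: selects a tool from its private catalog, runs it, and returns
    only a compressed summary. The hard-coded selection stands in for the
    worker LLM's own tool choice."""
    raw = PRIVATE_CATALOG["search_customer_issues"](request)
    return f"summary: {raw}"

def orchestrator(request: str) -> str:
    """Orchestrator: sees a single tool, the worker agent."""
    return worker_agent(request)
```

The design choice to note is where the catalog lives: only the worker's context pays the tool-list token cost, while the orchestrator's prompt stays small.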
When exposing tools with MCP Server Triggers, writing concise descriptions is also important for saving context. When creating tool schemas, you should maintain a concise structure as follows.
```json
{
  "name": "search_customer_issues",
  "description": "Searches open issues by customer ID. Enter a natural-language query instead of complex filters.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "Natural-language search query (e.g., 'unresolved payment errors within the last 30 days')"
      }
    },
    "required": ["query"]
  }
}
```
Pros and Cons Analysis
Advantages
| Item | Content |
|---|---|
| Standardization | Connect all external tools in the same way with the MCP protocol — no custom integration code is required |
| Modularity | Sub-agents can be added and removed like tools, allowing for feature expansion without redesigning the entire workflow. |
| Bidirectionality | n8n operates as both a consumer (Client) and a provider (Server) simultaneously |
| Visual Editing | Configure multi-agent pipelines with GUI without code |
| 600+ Templates | Rapid prototyping is possible with community templates |
| Flexible Authentication | Supports Bearer Token, Generic Header, and OAuth2 |
Disadvantages and Precautions
| Item | Content | Response Plan |
|---|---|---|
| STDIO Not Supported | Built-in MCP Client Tool supports HTTP/SSE only. Cannot connect to local command-line MCP server | Use Community Node n8n-nodes-mcp |
| Environment Variable Required | Without N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true, the MCP Client Tool does not appear in the AI Agent tool list, because n8n's security policy blocks execution of tool-type nodes by default | In self-hosted environments, add the entry to the environment section of docker-compose.yml in advance |
| SSE→HTTP Streamable Transition Bug | Reported bug where the connection still uses SSE even when HTTP Streamable is selected in the UI (GitHub Issues #18938, #24967) | As a temporary workaround, set both server and client to fall back to SSE; the proper fix is to update to the version in which the issue was closed and re-verify |
| Context Cost | Tool lists can consume over 10,000 LLM context tokens when connecting to large MCP servers | Utilize Context Reducer Pattern (Template #4475) |
| Ecosystem Maturity | Variations exist in the quality and stability of third-party MCP servers | Prioritize the use of officially supported servers (GitHub, Gmail, Google Calendar, etc.) |
STDIO (Standard Input/Output): An inter-process communication method primarily used in MCP servers running locally, such as command-line tools. Unlike HTTP-based MCP servers, settings are transmitted via environment variables and command-line arguments.
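The environment-variable fix from the table above is a one-line addition in self-hosted deployments. A minimal docker-compose.yml fragment (service name and image are illustrative; adapt to your existing compose file):

```yaml
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    environment:
      # Allow tool-type nodes to be used as AI Agent tools
      # (blocked by n8n's security policy by default).
      - N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
```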
The Most Common Mistakes in Practice
- Missing environment variable: The most frequent issue is that the MCP Client Tool does not appear in the AI Agent's tool list because `N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true` is not set on self-hosted n8n. Cloud users do not need this setting.
- Transport method mismatch: This occurs when the MCP server runs on SSE while the client attempts to connect via HTTP Streamable, or conversely when a new server runs HTTP Streamable while the client is still configured for legacy SSE. Server and client transport methods must match.
- Connecting too many tools directly to the orchestrator: Connecting all MCP client tools directly to a single orchestrator leads to an explosive increase in context and a decrease in tool selection accuracy. A hierarchical structure is more stable where worker agents are placed by domain and the orchestrator calls the workers as tools.
In Conclusion
The n8n MCP Client Tool node is not just a simple integration connector, but a core component of an orchestration platform where AI agents collaborate using standard protocols. Since its launch as an embedded node in April 2025, the barrier to entry has been significantly lowered, and ecosystem stability is rapidly improving as the HTTP Streamable standard takes hold.
Here are three steps you can take right now.
- Add the environment variable. In a self-hosted environment, add `N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true` to `docker-compose.yml` and restart the container. n8n Cloud users can skip this step.
- Load n8n Workflow Template #4475 (MCP Server with AI Agent as a Tool — Context Reducer) and examine its structure. This is the fastest way to see the Orchestrator-Worker pattern and the Context Reducer structure at once.
- Connect one of the officially supported MCP servers (Gmail, Google Calendar, GitHub) to the MCP Client Tool node to verify that tool discovery works. Once you have personally verified that the agent automatically retrieves the list of tools at runtime, you can naturally extend to connecting a custom MCP server.
If you have completed Step 3, we recommend that you replace the CRM MCP server URL from Example 1 with your internal system. By proceeding in the order of Defining the Tool Schema → Testing the Connection → Adjusting the Orchestrator Prompt, you can create the first version of the actual production pipeline.
Next Post: How to Create Your Own Custom MCP Server with n8n MCP Server Trigger and Connect It to Claude Desktop — A step-by-step guide to the entire process of exposing n8n workflows to external AI client tools.
Reference Materials
If you are a beginner, we recommend starting with the materials marked with ★, and if you want to understand the internal implementation, we recommend referring to the materials marked with ▲.
- ★ MCP Client Tool Node Official Documentation | n8n Docs — Reference material for node parameters and authentication methods
- ★ MCP Server Trigger Node Official Documentation | n8n Docs — Reference material for server exposure methods
- ★ MCP Server with AI Agent Workflow Template #4475 | n8n.io — Starting point for Context Reducer Pattern practice
- n8n MCP Server Configuration Guide | n8n Docs — Includes Self-hosted Environment Variable Settings
- n8n as Agentic MCP Hub | Infralovers — Bidirectional Hub Architecture Overview
- Implementing the Multi-Agent Pattern with MCP Trigger and Client | n8n Community — Example of implementing inter-agent communication without webhooks
- Multi-agent system tutorial | n8n Official Blog — Orchestrator-Worker Pattern Official Tutorial
- Orchestrating Agentic AI with n8n and MCP | Medium, Data Reply IT
- Supercharge AI Agents with n8n and MCP | Medium, Leandro Calado
- ▲ Background of MCP Streamable HTTP Introduction | fka.dev — Reasons for SSE Deprecated and HTTP Streamable Design Philosophy
- ▲ MCP Official Transport Spec | modelcontextprotocol.io — Original Transport Layer Spec
- ▲ n8n + Elasticsearch MCP Agent Case Study | Elastic — Practical Case Study of Large-Scale MCP Server Integration
- n8n-mcp GitHub (czlonkowski) — An MCP server that builds and runs n8n workflows directly in Claude Desktop