---
layout: post
title: "Tool Isolation with MCP: Secure Boundaries for AI Workflows"
date: 2026-02-18
lastmod: 2026-02-18
published: false
image: "https://daehnhardt.com/images/ai_art/flux/mcp-tool-boundary.jpg"
image_title: "Editorial illustration of a workflow graph separated from external tools by a secure boundary wall with labeled tool endpoints, modern clean style, box format"
thumb_image: "https://daehnhardt.com/images/thumbnails/mcp-tool-boundary.jpg"
tags:
- AI
- Python
- Automation
- Infrastructure
- Security
- Series
keywords: "MCP Python example, FastMCP tutorial, AI tool isolation, secure AI workflow architecture"
---
# Tool Isolation with MCP: Secure Boundaries for AI Workflows
So far, our orchestrator:
- drafts content
- supervises quality
- pauses for Slack approval
- resumes safely
- writes files
That is structured.
But it is not yet isolated.
Right now, the orchestrator:
- calls Slack directly
- writes files directly
- could theoretically send emails directly
In production systems, this is dangerous.
We want a single rule:

> The orchestrator decides. Tools execute.
This separation is called tool isolation.
## Why Tool Isolation Matters
Without isolation:
- Prompt injection can trigger unintended actions.
- The model can request arbitrary file writes.
- Business logic leaks into LLM prompts.
With isolation:
- Tools have explicit contracts.
- Inputs are validated.
- External actions are controlled.
- The orchestrator cannot “accidentally” do more than allowed.
MCP gives us this clean boundary.
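As a concrete taste of "inputs are validated": a tool boundary can enforce a filename contract before anything touches disk. This is a minimal sketch under assumptions of my own — `safe_filename` is a hypothetical helper, and the "plain `.md` names only" policy is illustrative, not part of MCP:

```python
from pathlib import PurePosixPath


def safe_filename(filename: str) -> str:
    """Allow only plain Markdown filenames; reject paths and hidden files."""
    # .name strips any directory components, so "../x.md" no longer matches.
    name = PurePosixPath(filename).name
    if name != filename or not name.endswith(".md") or name.startswith("."):
        raise ValueError(f"Rejected unsafe filename: {filename!r}")
    return name
```

A tool that runs every request through a contract like this cannot be tricked into a path-traversal write, no matter what the model asks for.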
## Architecture After MCP

Before:

```
LangGraph → Slack API
LangGraph → File writes
```

After:

```
LangGraph → MCP Server → Slack Tool
                       → File Tool
                       → (Future) Email Tool
```

LangGraph no longer touches external systems directly.

That is the boundary.
## Step 1 — Add a Separate MCP Tool Server

Create a new folder:

```
ai-editorial-orchestrator/
  mcp_server/
    server.py
    requirements.txt
```
## Step 2 — MCP Server Dependencies

`mcp_server/requirements.txt`:

```
fastmcp>=0.1.0
requests>=2.31
```
## Step 3 — Minimal FastMCP Server

`mcp_server/server.py`:

```python
from fastmcp import FastMCP
import os
import requests
from pathlib import Path

mcp = FastMCP("editorial-tools")


@mcp.tool()
def write_markdown_file(filename: str, content: str) -> str:
    """Writes a Markdown file to the out/ directory."""
    # Keep only the base name so the tool cannot write outside out/.
    safe_name = Path(filename).name
    out_dir = Path("../out")
    out_dir.mkdir(exist_ok=True)
    file_path = out_dir / safe_name
    file_path.write_text(content, encoding="utf-8")
    return f"Wrote {file_path}"


@mcp.tool()
def post_slack_message(text: str) -> str:
    """Posts a simple Slack message via webhook."""
    webhook = os.getenv("SLACK_WEBHOOK_URL")
    if not webhook:
        return "Slack webhook not configured."
    payload = {"text": text}
    response = requests.post(webhook, json=payload, timeout=10)
    response.raise_for_status()
    return "Slack message sent."


if __name__ == "__main__":
    # Serve the tools over HTTP so other containers can reach them.
    mcp.run(transport="http", host="0.0.0.0", port=9000)
```
Run the MCP server:

```shell
cd mcp_server
python server.py
```

Note that FastMCP defaults to the stdio transport; for container-to-container calls you want the HTTP transport so the server listens on a local port.
## Step 4 — Update Docker Compose

Add the MCP service:

```yaml
services:
  orchestrator:
    build: .
    env_file:
      - .env
    depends_on:
      - mcp
    volumes:
      - ./out:/app/out
    ports:
      - "8000:8000"
    command: ["uvicorn", "app.server:app", "--host", "0.0.0.0", "--port", "8000"]

  mcp:
    build: ./mcp_server
    env_file:
      - .env
    ports:
      - "9000:9000"
```

Now the tools run separately.
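Inside Compose, the orchestrator reaches the tool server by its service name (`mcp`). Making the base URL configurable keeps local runs outside Docker working too — a small sketch, where `MCP_BASE_URL` is a hypothetical environment variable, not something Compose sets for you:

```python
import os


def mcp_url(path: str) -> str:
    """Build a tool-server URL, defaulting to the Compose service name."""
    base = os.getenv("MCP_BASE_URL", "http://mcp:9000")
    return f"{base}/{path.lstrip('/')}"
```

Running locally, you would export `MCP_BASE_URL=http://localhost:9000` and leave the rest of the code unchanged.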
## Step 5 — Call MCP Tools from LangGraph

Inside `app/graph.py`, replace the direct Slack and file calls.

Instead of:

```python
post_slack_message(...)
```

use an HTTP call to the MCP server:

```python
import requests


def call_mcp(tool_name: str, args: dict):
    response = requests.post(
        "http://mcp:9000/tools/invoke",
        json={"tool": tool_name, "arguments": args},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```

Example usage:

```python
call_mcp(
    "write_markdown_file",
    {"filename": "newsletter.md", "content": state["newsletter_md"]},
)
```

And:

```python
call_mcp(
    "post_slack_message",
    {"text": "Newsletter draft approved."},
)
```
## What Changed Conceptually
The orchestrator now:
- cannot access Slack directly
- cannot write arbitrary files
- cannot extend itself silently
It must:
- Explicitly call a tool
- Pass structured arguments
- Respect the tool contract
That is security by design.
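Stripped of FastMCP, the server-side contract is essentially a name-to-function registry invoked with structured arguments. This toy dispatcher is for intuition only — it is not FastMCP's actual internals, and the `tool`/`invoke` names are made up here:

```python
TOOLS = {}


def tool(fn):
    """Register a function as an invocable tool under its own name."""
    TOOLS[fn.__name__] = fn
    return fn


@tool
def post_slack_message(text: str) -> str:
    return f"Would post to Slack: {text}"


def invoke(tool_name: str, arguments: dict):
    """Only registered tools can run, and only with structured arguments."""
    if tool_name not in TOOLS:
        raise KeyError(f"Unknown tool: {tool_name}")
    return TOOLS[tool_name](**arguments)
```

The orchestrator can ask for `post_slack_message` with a `text` argument; it cannot ask for anything that was never registered.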
## Why This Is Powerful
You now have:
- Clear execution boundary
- Isolated side effects
- Replaceable tool layer
- Extendable system (email, Medium, analytics)
And if one day you want:
- Cursor
- Claude Desktop
- Another orchestrator
They can all reuse the same MCP tool server.
## What Comes Next
We have completed the architecture layer.
Now the final posts can focus on:
- Production hardening (idempotency + per-run isolation)
- Observability and structured logging
- Final series overview and recap
You’ve moved from “let’s try agents” to “let’s design an orchestration system.”

That’s a big shift.