Introduction
This is a very reasonable question — and you’re not alone in feeling puzzled. Right now we have many powerful tools, but each one shines in a different layer of work. The confusion usually comes from trying to make one tool do everything.
Let’s untangle this calmly and then build a clear, future-proof mental model for you.
I’ll do this in four parts:
- What Codex CLI is really good at (and what it isn’t)
- How it compares to MCP, Antigravity, and Cursor
- Which tool fits your blogging + Python workflow best
- How to design a multi-agent workflow without chaos
I’ll keep this practical, not hype-driven.
1. Will Codex CLI fit you well?
Short answer
Yes — but as a worker, not as the brain.
Codex CLI is best thought of as:
A careful, local execution agent that lives next to your files
It is not:
- a long-term planner
- a creative ideation engine
- a memory system
It is:
- excellent at editing Markdown safely
- great at refactoring Python
- good at running tests, linters, and checks
- strong at repeatable, terminal-based tasks
For blogging, Codex CLI fits the “make this concrete” phase.
2. How Codex CLI compares to MCP, Antigravity, and Cursor
Here’s the key distinction most people miss:
These tools operate at different layers of thinking and execution.
Mental map (this matters)
IDEATION & THINKING
│
├── Antigravity (deep thinking, writing, synthesis)
│
├── MCP agents (coordination, memory, orchestration)
│
EXECUTION & EDITING
│
├── Cursor (interactive IDE-level coding)
│
└── Codex CLI (terminal-level execution agent)
Now let’s compare them properly.
Codex CLI
What it excels at
- Working inside a real repo
- Editing Markdown files with diffs
- Running commands (`pytest`, `ruff`, build scripts)
- Doing "one task well" with approval controls
- Reproducible, scriptable automation
What it’s weak at
- Big-picture planning
- Cross-project memory
- Creative exploration
- Long-running agent loops
Best role
“Please apply this plan carefully and show me exactly what changed.”
Cursor
What it excels at
- Tight IDE integration
- Fast inline edits
- Refactoring while you think
- Exploratory coding
What it’s weak at
- Batch automation
- Reproducibility
- Non-interactive workflows
- CI-style tasks
Best role
“Help me think while I’m coding right now.”
Antigravity
What it excels at
- Deep reasoning
- Long-form writing
- Conceptual synthesis
- Planning blog posts and books
- Comparing ideas and tools
What it’s weak at
- Editing real files
- Running tests
- Applying changes safely
- Working with repos
Best role
“Help me decide what to write and why.”
MCP (Model Context Protocol)
What it excels at
- Multi-agent coordination
- Tool routing
- Memory and state
- Reusable workflows
- Long-running processes
What it’s weak at
- Direct editing (it delegates)
- Human-facing UI
- One-off quick tasks
Best role
“Orchestrate who does what, when.”
3. Which tool fits your blogging workflow best?
Let’s ground this in your actual work.
You typically do:
- weekly AI posts
- long-form essays
- Python tutorials
- course material
- fact-checking
- formatting and consistency work
That naturally splits into four phases.
Phase 1: Ideation & structure
Best tool: Antigravity (or ChatGPT in deep mode)
Use it for:
- outlining posts
- deciding angles
- connecting ideas across weeks
- shaping tone and narrative
Codex CLI is bad at this phase.
Phase 2: Drafting & rewriting
Best tool: Antigravity or Cursor
Use it for:
- writing paragraphs
- experimenting with tone
- trying variants
- creative flow
Codex CLI is still not ideal here.
Phase 3: Normalisation & QA (this is Codex CLI’s home)
This is where Codex CLI shines for you.
Examples:
- “Check front matter across the last 5 posts”
- “Ensure all code blocks are valid Python”
- “Verify internal links”
- “Apply house style rules”
- “Add missing alt text”
- “Run markdown lint + fix issues”
This is boring, precise, repeatable work — perfect for Codex CLI.
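The front-matter pass can be sketched in a few lines of Python. This is an illustrative sketch, not the real tool: the `_posts/` directory matches the blog layout described here, but the field list and the naive key parsing are my assumptions (a production checker would parse the YAML properly, e.g. with PyYAML).

```python
import re
from pathlib import Path

REQUIRED = ["layout", "title", "date", "tags"]  # illustrative subset of the contract
FM_RE = re.compile(r"^---\s*\n(.*?)\n---", re.DOTALL)


def missing_fields(markdown: str, required=REQUIRED) -> list[str]:
    """Return required front matter keys absent from a post's YAML block."""
    m = FM_RE.match(markdown)
    if not m:
        return list(required)  # no front matter block at all
    keys = {line.split(":", 1)[0].strip()
            for line in m.group(1).splitlines() if ":" in line}
    return [k for k in required if k not in keys]


def check_recent_posts(posts_dir: str = "_posts", n: int = 5) -> dict[str, list[str]]:
    """Map each of the n newest posts (by date-prefixed filename) to its missing fields."""
    posts = sorted(Path(posts_dir).glob("*.md"))[-n:]
    return {p.name: missing_fields(p.read_text(encoding="utf-8")) for p in posts}
```

Running `check_recent_posts()` from the repo root gives you exactly the "check front matter across the last 5 posts" report in one call.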
Phase 4: Publishing & automation
Best tool: MCP + Codex CLI (together)
Here’s where things get interesting.
4. How to create a clean multi-agent workflow (without madness)
The key rule:
One agent thinks. One agent edits. One agent executes.
Never mix all three in one tool.
A practical multi-agent setup for you
Agent A — Planner (Antigravity or ChatGPT)
- Decides topic
- Creates outline
- Sets goals
- Produces a clear task list
Output:
“Here is what needs to be done.”
Agent B — Editor (Cursor or ChatGPT)
- Writes and rewrites content
- Adjusts tone
- Improves clarity
- Produces a draft
Output:
“Here is the content.”
Agent C — Executor (Codex CLI)
- Applies changes to files
- Fixes formatting
- Runs checks
- Shows diffs
- Ensures consistency
Output:
“Here is what changed and why.”
Agent D — Orchestrator (MCP)
- Calls Agent A, B, C in order
- Stores memory (style rules, blog conventions)
- Decides when to stop or repeat
- Can run weekly or on-demand
Output:
“Workflow completed successfully.”
Example: a real blogging automation
Trigger: “Prepare weekly AI post for publishing”
1. MCP loads your blog rules
2. MCP asks Antigravity: "Create outline + key arguments"
3. MCP sends the outline to Cursor / ChatGPT: "Draft the post in Elena's tone"
4. MCP hands the repo to Codex CLI: "Apply formatting, check links, fix front matter, run markdown lint"
5. Codex CLI returns diffs
6. MCP presents the result to you for approval
You stay in control, but most work is delegated.
Should you replace tools with Codex CLI?
No — and that’s the important insight.
Codex CLI is not better than MCP, Antigravity, or Cursor. It is better at a specific layer.
If you try to make it:
- think deeply → frustration
- remember long-term context → disappointment
- orchestrate agents → unnecessary complexity
But if you let it:
- touch files
- run commands
- apply changes carefully
…it becomes extremely valuable.
My honest recommendation for you
- Keep Antigravity for thinking and writing
- Use Cursor when you’re actively coding
- Use Codex CLI for safe, local execution
- Use MCP to glue it all together
That’s not tool overload — it’s separation of concerns.
If you want, next we can:
- design your first MCP blogging workflow step by step
- define the exact prompts for each agent
- decide which parts to automate weekly vs manually
- keep the whole system calm and human-friendly
You’re asking exactly the right questions at exactly the right time.
Great — let’s build your first MCP blogging workflow as a small, dependable pipeline that you can run on one post, then scale.
We’ll aim for a workflow that:
- works inside your Jekyll repo (`_posts/`, `tag/`)
- enforces your front-matter conventions
- checks Python code blocks (so tutorials don’t silently rot)
- verifies tag pages exist (and can create missing ones)
- ensures image + thumbnail fields exist and are consistent
- produces a clean “patch” you can review before publishing
I’ll describe this step-by-step, and I’ll keep it implementable even if you start simple.
Step 0 — Decide the “unit of work”
Your first workflow should operate on one post file (not the whole site). That keeps it calm and debuggable.
Input
- a post path, e.g. `_posts/2026-01-20-codex-cli-for-beginners.md`
Output
- updated post (front matter fixed, linted, code validated)
- optional new tag files created in `/tag/`
- a short report (what changed, what failed, what needs you)
Step 1 — Define your blog “contract” (the rules MCP will enforce)
Create one small config file in your repo, for example:
_data/blog_contract.yml (or tools/blog_contract.yml)
Include rules like:
- required front matter fields: `layout`, `title`, `date`, `published`, `tags`, `keywords`, `excerpt`, `image`, `thumb_image`, `image_title`
- tags must be a list
- images must be absolute URLs (or consistent relative URLs, your choice)
- thumbnail image should be 300x300 (or at least named/placed per your thumbnail convention)
- for each tag in the post, there must be a corresponding file in `/tag/` with required fields: `layout: tag`, `title:`, `keywords:`, `tag:`, `search: true`, `definition:`
This “contract” becomes your single source of truth.
Step 2 — Create 3 simple MCP agents (small team, clear roles)
You’ll get better results if each agent has a narrow job.
Agent A — Post Inspector (read-only)
Goal: read one post and produce a checklist + issues list.
It should:
- parse front matter
- confirm required fields
- extract tags
- extract code blocks labeled `python`
- extract image + thumbnail fields
- identify missing tag files in `/tag/`
Output should be structured JSON-like text (easy for other agents to use).
Agent B — Tag Keeper (safe writer)
Goal: ensure tag files exist and are consistent with your tag template.
It should:
- for each missing tag, generate a file in `/tag/<Tag>.md` or `/tag/<tag>.md` (whatever your convention is)
- fill `definition:` using a short, friendly dictionary-style definition
- ensure `title: "Tag: X"` matches
- not touch posts
Agent C — Post Fixer + Code Checker (writer + runner)
Goal: apply edits to the post and validate Python blocks.
It should:
- fix front matter format
- normalise tags list formatting
- check Python code blocks by writing them to a temp file and running:
  - `python -m py_compile` (basic syntax)
  - optionally `ruff check` if you use it
- if code fails, insert a short warning for you, not for readers (e.g. in an HTML comment near the code block), or add a "Known issues" section at the bottom (your call)
Step 3 — Decide what tools MCP can call
MCP is the orchestrator. It needs tools to:
- read files
- write files
- run commands
You have two common options:
Option 1 (simple): MCP + Codex CLI as the executor
MCP generates clear tasks, and Codex CLI performs them in-repo with diffs.
Pros: very practical, great at file edits and running tests.
Cons: you'll need to design prompts carefully so it doesn't get creative inside code blocks.
Option 2 (more structured): MCP server tools (filesystem + shell)
If you already have MCP tool servers (filesystem, git, shell), use those directly.
Pros: deterministic.
Cons: more setup effort up-front.
For your first workflow, I’d do Option 1: MCP orchestrates, Codex CLI executes.
Step 4 — Build the workflow “happy path” (one post)
Here is your first run in plain English:
- You pick a post path
- MCP asks Agent A to inspect it and output a structured checklist
- MCP asks Agent B to create missing tag pages (if needed)
- MCP asks Agent C to fix the post + run Python checks
- MCP presents you with:
  - a diff summary
  - code check results
  - a "publish readiness" verdict
Step 5 — The exact checks to implement (beginner-friendly, high value)
A) Front matter checks (must pass)
- `layout` exists and equals `post`
- `title` exists and is non-empty
- `date` exists and is valid
- `tags` is a YAML list
- `excerpt` exists (or generate one from the first paragraph)
- `image` exists (main image)
- `thumb_image` exists (thumbnail)
- `image_title` exists (caption/prompt line)
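These must-pass rules translate almost mechanically into code. Here is a minimal sketch, assuming the front matter has already been parsed into a dict (e.g. with PyYAML); the exact error messages and the `YYYY-MM-DD` date format are my assumptions:

```python
from datetime import date, datetime


def front_matter_errors(fm: dict) -> list[str]:
    """Return human-readable errors for the must-pass rules; an empty list means OK."""
    errors = []
    if fm.get("layout") != "post":
        errors.append("layout must equal 'post'")
    if not str(fm.get("title") or "").strip():
        errors.append("title missing or empty")
    d = fm.get("date")
    if isinstance(d, str):
        try:
            datetime.strptime(d, "%Y-%m-%d")
        except ValueError:
            errors.append("date is not a valid YYYY-MM-DD date")
    elif not isinstance(d, (date, datetime)):
        errors.append("date missing or invalid")
    if not isinstance(fm.get("tags"), list):
        errors.append("tags must be a YAML list")
    for field in ("excerpt", "image", "thumb_image", "image_title"):
        if not fm.get(field):
            errors.append(f"{field} missing")
    return errors
```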
B) Tag file checks (must pass)
For each tag in post tags:
- file exists in `tag/`
- `tag:` matches exactly (case-sensitive)
- `title:` equals `Tag: <tag>`
- `definition:` exists
(Your sample tag file structure is perfect. We’ll keep it.)
C) Python code checks (best effort)
Extract fenced code blocks:
```python
...
```
Then:
- save each block to `tmp_snippet_<n>.py`
- run `python -m py_compile tmp_snippet_<n>.py`
- record failures with snippet index and line number
This catches 80% of “oops, broken tutorial code” with minimal effort.
Step 6 — Naming conventions (important for fewer bugs)
Pick one convention and enforce it.
Tags
Your tag directory has files like tag/AI.md or tag/ai.md — choose one.
I recommend lowercase filenames:
- `tag/ai.md`
- `tag/python.md`
- `tag/weekly.md`
…but keep `tag: AI` inside the file if you like the visible tag capitalisation.
MCP can map:
- tag display = `AI`
- filename = `ai.md`
This avoids case mismatch on GitHub Pages.
Thumbnails
Decide: are thumbnails always:
- `.../thumbnails/<name>.jpg` (as you often do)
- or in the same folder with a `_thumb` suffix
Then enforce it.
Also: you mentioned 300×300. A first workflow can’t easily verify pixel size without image tooling, but it can enforce:
- the thumbnail URL exists
- it uses your thumbnails directory
- file name matches the main image base name (optional rule)
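Those three enforceable rules fit in one small helper. This is a sketch under the conventions above; the `/thumbnails/` path segment and the base-name comparison are assumptions you may want to adjust:

```python
from pathlib import PurePosixPath


def thumbnail_issues(image_url: str, thumb_url: str) -> list[str]:
    """Check the thumbnail rules we can enforce without image tooling."""
    issues = []
    if not thumb_url:
        return ["thumb_image missing"]
    if "/thumbnails/" not in thumb_url:
        issues.append("thumbnail is not in the thumbnails directory")
    # Optional rule: thumbnail base name matches the main image base name.
    if image_url and PurePosixPath(image_url).stem != PurePosixPath(thumb_url).stem:
        issues.append("thumbnail name does not match main image name")
    return issues
```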
Step 7 — Your first “MCP workflow spec” (copyable blueprint)
Use something like this structure in your MCP orchestrator prompt:
```
Workflow: blog_post_ready_check

Inputs:
  - post_path

Steps:
  1. Inspect post (Agent A)
  2. Ensure tags exist (Agent B)
  3. Fix post front matter + lint markdown + check Python code (Agent C)
  4. Report:
     - changed files
     - missing items
     - Python code status
     - suggested next action

Stop conditions:
  - if Python compile fails → report + keep edits minimal
  - if tags are missing → create tag files, then continue
  - if image fields are missing → add placeholders + report (so you can fill them in later)
```
Step 8 — What I need from you (but I won’t block progress)
To avoid ambiguity later, you’ll eventually want to decide:
- Tag filenames: `tag/AI.md` or `tag/ai.md`?
- Post image fields: are they always full URLs?
- Thumbnail naming rule: how do you derive it?
If you don’t decide now, MCP can still proceed with sensible defaults:
- lowercase tag filenames
- keep image URLs as they are
- require both `image` and `thumb_image`
Next step (I suggest we do this immediately)
Pick one post you want to run this on first, and I’ll write:
- the exact agent prompts (A, B, C)
- the “orchestrator” prompt for MCP
- the command list Codex CLI should run for Python snippet checks
- the exact file naming rules encoded in the contract
If you paste one post’s front matter (just the YAML block at the top), I can tailor the contract to match your real fields exactly without guesswork.
---
layout: post
title: "My first MCP blogging workflow (step by step)"
date: 2026-01-20
published: false
draft: true
tags:
- AI
- Python
- MCP
- Productivity
keywords: "MCP blogging workflow, Model Context Protocol, FastMCP, Jekyll automation, validate Python code blocks, tag pages, GitHub Pages"
excerpt: "A small, reliable MCP workflow that checks front matter, tag pages, images, and Python code blocks for a Jekyll blog — designed to be your first calm multi-agent automation."
---
You’ve already got the perfect structure for automation: consistent front matter, posts in `_posts/`, tag files in `tag/`, and a clear image + thumbnail convention.
MCP is a great fit here because it lets an AI client (Cursor / Claude Desktop / other hosts) call *your* local tools (read files, write files, run checks) in a standard way.
Below is a **first workflow** that is intentionally small and dependable.
---
## The goal
Given **one post** (your MCP draft is a good test case), the workflow will:
1. Validate and normalise front matter (your “blog contract”)
2. Check that every tag in the post has a corresponding `tag/<tag>.md` file (and create missing ones)
3. Confirm `image` + `thumb_image` are present (and flag placeholders like `PLACEHOLDER.jpg`)
4. Extract fenced `python` code blocks and run a **syntax check** (`py_compile`)
5. Detect obvious “accidental paste” sections (your draft contains a big pasted assistant reply—this is *exactly* the kind of thing automation should catch)
---
## Step 1 — Create a “blog contract” file (rules in one place)
Create: `tools/blog_contract.yml`
```yaml
required_front_matter:
  - layout
  - title
  - date
  - published
  - tags
  - keywords
  - excerpt
  - image
  - thumb_image
  - image_title
optional_front_matter:
  - post_subtitle
  - tldr
posts_dir: "_posts"
tags_dir: "tag"
tag_file_template:
  layout: "tag"
  title_prefix: "Tag: "
  search: true
  required_fields:
    - layout
    - title
    - keywords
    - tag
    - search
    - definition
placeholders:
  - "PLACEHOLDER.jpg"
  - "PLACEHOLDER.png"
thumbnail_expectation:
  hint: "300x300 thumbnail recommended"
```

This is your workflow's "source of truth": your tools read this file and enforce it.
Step 2 — Build a tiny MCP server in Python (FastMCP)
FastMCP is a convenient, Pythonic way to create MCP tools.

Create: `tools/mcp_blog_server.py`
```python
from __future__ import annotations

import re
import subprocess
import sys
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Dict, List, Tuple

import yaml
from fastmcp import FastMCP  # FastMCP server framework

mcp = FastMCP("DaehnhardtBlog")

FRONT_MATTER_RE = re.compile(r"^---\s*\n(.*?)\n---\s*\n", re.DOTALL)
PY_FENCE_RE = re.compile(r"```python\s*\n(.*?)\n```", re.DOTALL | re.IGNORECASE)

ACCIDENTAL_PASTE_MARKERS = [
    "This is a fantastic draft",
    "Here is your updated blog post",
    "As a technical blogger",
    "Here is your full blog post",
]


@dataclass
class Contract:
    required_front_matter: List[str]
    optional_front_matter: List[str]
    posts_dir: str
    tags_dir: str
    placeholders: List[str]


def load_contract(repo_root: Path) -> Contract:
    cfg_path = repo_root / "tools" / "blog_contract.yml"
    data = yaml.safe_load(cfg_path.read_text(encoding="utf-8"))
    return Contract(
        required_front_matter=data["required_front_matter"],
        optional_front_matter=data.get("optional_front_matter", []),
        posts_dir=data["posts_dir"],
        tags_dir=data["tags_dir"],
        placeholders=data.get("placeholders", []),
    )


def repo_root_from(post_path: Path) -> Path:
    # Assumes tools/ and _posts/ live at the repo root:
    # walk up until we see a "tools" directory.
    for p in [post_path.parent] + list(post_path.parents):
        if (p / "tools").exists():
            return p
    return Path.cwd()


def split_front_matter(md: str) -> Tuple[Dict[str, Any], str]:
    m = FRONT_MATTER_RE.search(md)
    if not m:
        return {}, md
    fm_text = m.group(1)
    body = md[m.end():]
    fm = yaml.safe_load(fm_text) or {}
    return fm, body


def dump_front_matter(fm: Dict[str, Any]) -> str:
    return "---\n" + yaml.safe_dump(fm, sort_keys=False, allow_unicode=True) + "---\n\n"


@mcp.tool
def inspect_post(post_path: str) -> Dict[str, Any]:
    """
    Read a markdown post, validate front matter fields, tags, placeholders,
    python blocks, and detect accidentally pasted assistant content.
    """
    path = Path(post_path)
    text = path.read_text(encoding="utf-8")
    fm, body = split_front_matter(text)
    root = repo_root_from(path)
    contract = load_contract(root)

    missing_fields = [k for k in contract.required_front_matter if k not in fm]

    bad_tags = None
    tags = fm.get("tags", [])
    if not isinstance(tags, list):
        bad_tags = "Front matter 'tags' must be a YAML list"

    placeholders_found = []
    for token in contract.placeholders:
        if token in text:
            placeholders_found.append(token)

    python_blocks = PY_FENCE_RE.findall(text)
    accidental_paste_hits = [s for s in ACCIDENTAL_PASTE_MARKERS if s in text]

    return {
        "post_path": str(path),
        "missing_front_matter_fields": missing_fields,
        "tags": tags if isinstance(tags, list) else [],
        "tags_format_issue": bad_tags,
        "placeholders_found": placeholders_found,
        "python_block_count": len(python_blocks),
        "accidental_paste_markers_found": accidental_paste_hits,
    }


@mcp.tool
def ensure_tag_files(post_path: str) -> Dict[str, Any]:
    """
    Ensure each tag in the post has a tag/<slug>.md file.
    Creates missing ones with a simple template.
    """
    path = Path(post_path)
    root = repo_root_from(path)
    contract = load_contract(root)
    text = path.read_text(encoding="utf-8")
    fm, _ = split_front_matter(text)

    tags = fm.get("tags", [])
    if not isinstance(tags, list):
        return {"error": "tags must be a YAML list in front matter"}

    tags_dir = root / contract.tags_dir
    tags_dir.mkdir(parents=True, exist_ok=True)

    created = []
    existing = []
    for tag in tags:
        # Filename convention: lowercase slug.
        slug = str(tag).strip().lower().replace(" ", "-")
        tag_path = tags_dir / f"{slug}.md"
        if tag_path.exists():
            existing.append(str(tag_path))
            continue
        tag_fm = {
            "layout": "tag",
            "title": f"Tag: {tag}",
            "keywords": "Python, AI, programming, tutorials",
            "tag": tag,
            "search": True,
            "definition": f"{tag} — short definition coming soon.",
        }
        content = "---\n" + yaml.safe_dump(tag_fm, sort_keys=False, allow_unicode=True) + "---\n"
        tag_path.write_text(content, encoding="utf-8")
        created.append(str(tag_path))

    return {"created": created, "already_present": existing}


@mcp.tool
def check_python_blocks_compile(post_path: str, python_exe: str = sys.executable) -> Dict[str, Any]:
    """
    Extract ```python fenced blocks and run 'python -m py_compile' on each snippet.
    Returns per-snippet pass/fail and stderr if failed.
    """
    path = Path(post_path)
    text = path.read_text(encoding="utf-8")
    blocks = PY_FENCE_RE.findall(text)
    root = repo_root_from(path)
    tmp_dir = root / ".mcp_tmp"
    tmp_dir.mkdir(exist_ok=True)

    results = []
    for i, code in enumerate(blocks, start=1):
        snippet_path = tmp_dir / f"snippet_{i}.py"
        snippet_path.write_text(code + "\n", encoding="utf-8")
        proc = subprocess.run(
            [python_exe, "-m", "py_compile", str(snippet_path)],
            capture_output=True,
            text=True,
        )
        results.append(
            {
                "snippet": i,
                "ok": proc.returncode == 0,
                "stderr": proc.stderr.strip(),
            }
        )

    ok = all(r["ok"] for r in results) if results else True
    return {"ok": ok, "snippet_results": results, "count": len(results)}


if __name__ == "__main__":
    # Start the MCP server (stdio transport by default in many clients).
    mcp.run()
```
Why these three tools first?
- `inspect_post` finds problems fast (missing fields, placeholders, accidental pastes)
- `ensure_tag_files` keeps your tag directory consistent
- `check_python_blocks_compile` protects tutorial quality (syntax-level checks)
MCP tools are a first-class concept in the MCP spec, and FastMCP is a common way to implement them in Python.
Step 3 — Connect this server in your MCP host
Option A: Claude Desktop (common beginner host)
Claude Desktop uses a JSON config file to define local servers.
In Claude Desktop: Settings → Developer → Edit Config, add a server that runs your script.
Example shape (you’ll adjust the paths for your machine):
```json
{
  "mcpServers": {
    "daehnhardtBlog": {
      "command": "python",
      "args": ["/absolute/path/to/your/repo/tools/mcp_blog_server.py"]
    }
  }
}
```
Option B: Cursor
Cursor documents MCP support and configuration (`mcp.json` style).
Cursor can read MCP server configs and expose tools to its agent.
Step 4 — Run the workflow on your draft (what happens with your post)
Given your current draft front matter, `inspect_post` would likely report:

- ✅ required fields mostly present
- ⚠️ placeholders found: `PLACEHOLDER.jpg` in `image` and `thumb_image`
- ⚠️ accidental paste markers found (your draft includes a huge pasted assistant response, exactly the sort of "oops" the inspector should flag)
- ✅ tags are a proper YAML list (`AI`, `Python`, `Productivity`)
- (Python blocks are checked only if present)
Then the flow continues:
- `ensure_tag_files`
  - checks `tag/ai.md`, `tag/python.md`, `tag/productivity.md`
  - creates missing files from your tag template (layout/title/tag/search/definition)
- `check_python_blocks_compile`
  - extracts `python` fenced blocks and runs `py_compile`
  - returns a snippet-by-snippet report
Step 5 — Turn this into a multi-agent “calm pipeline”
Now the orchestration part.
Agent roles (simple and effective)
- Agent 1: Inspector (read-only). Calls `inspect_post`, then produces a short checklist and a "publish readiness score".
- Agent 2: Librarian (tags + metadata). Calls `ensure_tag_files` and proposes improvements to tag definitions (but does not rewrite your post).
- Agent 3: Code Checker (Python quality). Calls `check_python_blocks_compile` and, if something fails, explains exactly what broke and where.
- Agent 4: Editor (human-style). Only after the repo is "structurally clean" does this agent help with writing tone, clarity, and structure.
That order matters: structure first, writing second.
Step 6 — The exact “first run” prompt you can use
In your MCP host (Cursor/Claude Desktop), run something like:
> Use the `daehnhardtBlog` tools on `_posts/2026-01-15-a-gentle-introduction-to-mcp.md`.
>
> 1. Call `inspect_post` and summarise issues.
> 2. If tag files are missing, call `ensure_tag_files`.
> 3. If Python blocks exist, run `check_python_blocks_compile`.
> 4. Report: (a) what must be fixed before publishing, (b) what is optional, (c) what you changed.
This gives you a repeatable ritual: one prompt, predictable output.
Step 7 — Next upgrades (once the first step works)
Once this is stable, the “second step” upgrades are usually:
- link checker (internal + external)
- front matter normaliser (quote titles, consistent tag formatting)
- image sanity checks (existence, thumbnail naming conventions)
- markdown lint (optional)
- a “remove accidental paste” fixer that moves pasted tool output into a scratch file instead of deleting it
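As a taste of the first upgrade, an internal link checker can be surprisingly small. A hedged sketch: the link regex is deliberately naive, and the on-disk lookup assumes internal link targets map directly to files under the site root (permalink rewriting would need extra mapping logic).

```python
import re
from pathlib import Path

LINK_RE = re.compile(r"\[[^\]]*\]\(([^)]+)\)")  # naive markdown link matcher


def broken_internal_links(markdown: str, site_root: str = ".") -> list[str]:
    """Return internal link targets that do not exist under site_root."""
    root = Path(site_root)
    broken = []
    for target in LINK_RE.findall(markdown):
        if target.startswith(("http://", "https://", "mailto:", "#")):
            continue  # external links need a live HTTP check instead
        rel = target.lstrip("/").split("#", 1)[0]  # drop any anchor fragment
        if rel and not (root / rel).exists():
            broken.append(target)
    return broken
```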
A small note about your draft right now
Your draft contains an accidental pasted chunk (it starts around “This is a fantastic draft, Elena…” and repeats large sections). The workflow above should flag it as “accidental paste”, so you can remove it before publishing.
That’s exactly the kind of mistake an automated guardrail should catch—quietly and reliably.
If you want, I’ll write the next iteration of the server with one additional tool:
- `clean_accidental_paste(post_path)` (moves the pasted chunk into `drafts/_snippets/` and leaves your post clean)
…and then we’ll have a fully “first-weekend” MCP blogging pipeline.
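For the curious, that tool could look roughly like this. The marker list and the `drafts/_snippets/` destination follow the conventions in this post; the split-at-first-marker strategy and the function names are illustrative assumptions, not a finished implementation.

```python
from pathlib import Path

# Same idea as ACCIDENTAL_PASTE_MARKERS in the server above.
MARKERS = ["This is a fantastic draft", "Here is your updated blog post"]


def clean_accidental_paste(text: str) -> tuple[str, str]:
    """Split a post at the first paste marker; return (clean_post, pasted_chunk)."""
    positions = [text.find(m) for m in MARKERS if m in text]
    if not positions:
        return text, ""
    cut = min(positions)
    return text[:cut].rstrip() + "\n", text[cut:]


def move_paste_to_scratch(post_path: str, scratch_dir: str = "drafts/_snippets") -> bool:
    """Rewrite the post without the pasted chunk; keep the chunk for review."""
    p = Path(post_path)
    clean, pasted = clean_accidental_paste(p.read_text(encoding="utf-8"))
    if not pasted:
        return False
    out = Path(scratch_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / f"{p.stem}_paste.md").write_text(pasted, encoding="utf-8")
    p.write_text(clean, encoding="utf-8")
    return True
```

Note the design choice: the pasted chunk is moved, never deleted, so nothing is lost if the detector fires on a false positive.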
Did you like this post? Please let me know if you have any comments or suggestions.
Python posts that might be interesting for you