Elena Daehnhardt
Image credit: Illustration created with Midjourney, prompt by the author.
Image prompt: “An illustration representing cloud computing”
Here’s a DB-free, minimal-effort project specification for a blog publishing agency, with Telegram control, scheduled Tuesday/Friday runs, and an agency-run.yml you can drop into the repo. GitHub Actions supports both scheduled runs and manual workflow_dispatch runs, and scheduled workflows can use an IANA timezone string. Scheduled runs execute from the default branch, and GitHub recommends avoiding the top of the hour because schedules can be delayed during heavy load. ([GitHub Docs][1])
Project specification: AI publishing agency for a Jekyll blog
1. Goal
Build a lightweight cloud workflow for your Jekyll blog where:
- blog posts live in `_posts/`
- a post is published by GitHub Pages when `published: true`
- agents create or improve drafts in branches
- you control the system from Telegram
- you review changes in GitHub Mobile
- nothing merges to `main` without your approval
2. Core design
This version keeps the moving parts small:
- GitHub Pages: publishing layer only for Jekyll static output (not a bot runtime)
- GitHub repo: source of truth for content and logs
- GitHub Actions: cloud execution
- OpenAI Agents SDK or plain OpenAI API calls in Python: agent behavior
- Telegram bot: control and notifications
- GitHub Issues: on-demand task queue fed by Telegram `/ask` messages
- GitHub PRs: human approval gate
- CSV + JSON + Markdown files in the repo: logs and task state
No database in v1.
2A. Final simple architecture
Telegram bot
↓
Scheduled GitHub Action polls Telegram with getUpdates
↓
Creates GitHub Issue from each new /ask request
↓
Issue-triggered GitHub Action classifies and runs the task
↓
Edits files on a task branch
↓
Runs validation
↓
Pushes branch and opens/updates draft PR
↓
Telegram notification sent back
↓
You review in GitHub Mobile
↓
You approve and merge
↓
GitHub Pages publishes when a post in _posts has published: true on main
This is the simplest GitHub-native MVP:
- GitHub Pages publishes
- GitHub Issues queues tasks
- GitHub Actions executes automation
- Telegram is request + notification channel
- GitHub Mobile is the approval surface
2B. What each GitHub piece does
GitHub Pages
- publishes the Jekyll site from repository content on `main`
- may run the Pages/Jekyll build pipeline, but only for static publishing
- does not run bots or long-lived automation services
GitHub Issues
- acts as the task inbox/queue for Telegram `/ask` requests
- stores request text, labels, status, and links to PRs/reports
- provides mobile-native operations visibility
GitHub Actions
Runs four job families:
- Tuesday scheduled improvement flow
- Friday scheduled Weekly AI Signals flow
- Telegram polling flow
- issue-triggered worker flow
3. Weekly operating rhythm
Tuesday run
Improve existing unpublished posts.
Rules:
- scan `_posts/`
- find posts with `published: false`
- exclude posts tagged `Weekly`
- pick the best candidates
- improve one or two posts
- open or update a draft PR
- send Telegram summary
Friday run
Create the new AI Signals newsletter draft.
Rules:
- run your Weekly Signals process
- collect and validate sources
- create the new `_posts/YYYY-MM-DD-<slug>.md`
- save source tracking
- open a draft PR
- send Telegram summary
4. Human-in-the-loop model
Agents may:
- create or edit files
- commit to a task branch
- push that branch
- open or update a draft PR
Agents may not:
- push directly to `main`
- merge PRs without approval
- auto-publish content
GitHub Actions supports workflow_dispatch for manual runs through the UI, CLI, or REST API, which is useful for Telegram-triggered runs. ([GitHub Docs][1])
5. Telegram control model
Telegram is the command surface on your phone.
Commands
- `/status`: show last runs and active PRs
- `/run_tuesday`: manually trigger the Tuesday workflow
- `/run_friday`: manually trigger the Friday workflow
- `/ask ...`: submit on-demand editorial/validation requests (Phase 2)
- `/pause`: disable scheduled runs (Phase 2 via the poller/issue route)
- `/resume`: enable scheduled runs (Phase 2 via the poller/issue route)
- `/summary <task_id>`: show a compact report
- `/rerun <task_id>`: rerun a task
- `/help`: show commands
MVP Telegram mode is outbound notifications only.
Phase 2 adds Telegram Bot API getUpdates polling:
- GitHub Actions runs every 10–15 minutes
- poller reads new bot updates
- `/ask` requests become GitHub issues
- issue labels route work to the correct worker
Telegram documents getUpdates as an alternative to webhooks, with updates retained for up to 24 hours. ([Telegram][2])
5A. Telegram no-extra-service requirement (Phase 2)
Required flow:
- Scheduled GitHub Action polls Telegram via `getUpdates`.
- Poller reads new commands and stores the last processed `update_id`.
- Poller opens one GitHub issue per valid request.
- Issue-triggered workflow routes by labels and runs agent work.
- Agent creates task branch + draft PR and sends Telegram summary.
This is intentionally poll-based, not real-time. For blog operations this is acceptable and avoids a separate always-on webhook service.
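The poll-and-dedupe loop above can be sketched with only the Python standard library. The helper name `select_new` is illustrative, and the state-file path follows the repository layout in this spec; only the Telegram `getUpdates` endpoint and its `offset`/`timeout` parameters are real API:

```python
import json
import pathlib
import urllib.parse
import urllib.request

STATE_PATH = pathlib.Path("automation/state/telegram_poller.json")
API = "https://api.telegram.org/bot{token}/getUpdates"

def select_new(updates: list[dict], last_update_id: int) -> tuple[list[dict], int]:
    """Keep only updates newer than last_update_id; return them plus the new high-water mark."""
    fresh = [u for u in updates if u["update_id"] > last_update_id]
    new_last = max((u["update_id"] for u in fresh), default=last_update_id)
    return fresh, new_last

def poll(token: str) -> list[dict]:
    """One poller pass: fetch updates, deduplicate, persist the new last_update_id."""
    state = json.loads(STATE_PATH.read_text()) if STATE_PATH.exists() else {"last_update_id": 0}
    query = urllib.parse.urlencode({"offset": state["last_update_id"] + 1, "timeout": 0})
    with urllib.request.urlopen(API.format(token=token) + "?" + query, timeout=30) as resp:
        payload = json.load(resp)
    fresh, state["last_update_id"] = select_new(payload.get("result", []), state["last_update_id"])
    STATE_PATH.parent.mkdir(parents=True, exist_ok=True)
    STATE_PATH.write_text(json.dumps(state, indent=2))
    return fresh
```

Because the new `last_update_id` is committed back to the repo by the workflow, a rerun of the poller cannot create duplicate issues for already-seen updates.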
5B. Recommended issue labels
Use labels as routing and status metadata:
- `task:on-demand`
- `type:improve_post`
- `type:validate_post`
- `type:factcheck_post`
- `type:rank_drafts`
- `type:create_newsletter_blurb`
- `source:telegram`
- `status:queued`
- `status:running`
- `status:done`
- `status:blocked`
5C. How Telegram messages become issues
Example Telegram command:
/ask improve the post about MCP
Created issue title:
Improve post: MCP
Created issue labels:
task:on-demand
type:improve_post
source:telegram
status:queued
Created issue body:
Requested via Telegram
Original message:
`/ask improve the post about MCP`
Telegram update ID: 123456789
Received at: 2026-03-27T10:15:00Z
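The mapping from a Telegram command to the issue fields above can be sketched as a pure function. The naive title rule and the function name `ask_to_issue` are my assumptions; the labels and body format come straight from this section:

```python
def ask_to_issue(text: str, update_id: int, received_at: str) -> dict:
    """Map a Telegram /ask message to a GitHub issue payload (hypothetical naming rules)."""
    request = text.removeprefix("/ask").strip()
    # Naive title rule: "improve the post about X" -> "Improve post: X"
    if "about" in request:
        title = f"Improve post: {request.rsplit('about', 1)[-1].strip()}"
    else:
        title = request.capitalize()
    return {
        "title": title,
        "labels": ["task:on-demand", "type:improve_post", "source:telegram", "status:queued"],
        "body": (
            "Requested via Telegram\n\n"
            f"Original message:\n`{text}`\n\n"
            f"Telegram update ID: {update_id}\n"
            f"Received at: {received_at}"
        ),
    }
```

In Phase 2 the poller would pass this payload to the GitHub REST API (or `gh issue create`) and let the label set drive routing.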
6. Repository layout
Use this layout inside your blog repo:
```
.github/
  workflows/
    agency-run.yml
    telegram-poller.yml
    issue-worker.yml
automation/
  logs/
    tasks.csv
    agent_runs.csv
    approvals.csv
    sources.csv
  reports/
    2026-03-27-task-001-summary.md
    2026-03-27-task-001-validation.md
  state/
    scheduler.json
    telegram_poller.json
  prompts/
    global_policy.md
    technical_post.md
    weekly_signals.md
    validator.md
agents/
  common.py
  intake.py
  tuesday_improve_drafts.py
  friday_ai_signals.py
  validator.py
  git_ops.py
  telegram_notify.py
  telegram_poller.py
  on_demand_router.py
scripts/
  update_logs.py
_posts/
  ...
```
7. State and log files
`automation/state/scheduler.json`:

```json
{
  "paused": false,
  "tuesday_enabled": true,
  "friday_enabled": true
}
```
`automation/state/telegram_poller.json`:

```json
{
  "last_update_id": 0,
  "last_polled_at": ""
}
```
automation/logs/tasks.csv
Suggested columns:
task_id,created_at,task_type,status,branch_name,pr_number,slug,notes
automation/logs/agent_runs.csv
run_id,task_id,agent_name,started_at,finished_at,status,model_name,input_tokens,output_tokens,estimated_cost
automation/logs/approvals.csv
approval_id,task_id,created_at,status,actor,channel,notes
automation/logs/sources.csv
task_id,url,title,publisher,published_date,is_open_access,used_in_post,notes
automation/reports/*.md
These are the human-readable summaries you’ll actually read from your phone.
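Appending to these CSV logs needs no database library. A minimal sketch for `tasks.csv`, assuming the column list above (the helper name `log_task` is illustrative):

```python
import csv
import pathlib

TASKS_CSV = pathlib.Path("automation/logs/tasks.csv")
COLUMNS = ["task_id", "created_at", "task_type", "status",
           "branch_name", "pr_number", "slug", "notes"]

def log_task(row: dict) -> None:
    """Append one task row, writing the header on first use; missing columns stay empty."""
    TASKS_CSV.parent.mkdir(parents=True, exist_ok=True)
    new_file = not TASKS_CSV.exists()
    with TASKS_CSV.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        if new_file:
            writer.writeheader()
        writer.writerow({**{c: "" for c in COLUMNS}, **row})
```

Each workflow calls this at status transitions (queued, running, done, blocked), and the committed CSV becomes the audit trail you can open from GitHub Mobile.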
8. Agent responsibilities
Intake agent
Decides whether the run is:
- Tuesday improvement
- Friday AI Signals
- manual rerun
Tuesday improvement agent
- scans `_posts/`
- finds `published: false`
- excludes Weekly-tagged posts
- selects candidates
- improves readability, metadata, tags, code blocks if relevant
- writes changes to a task branch
Friday AI Signals agent
- runs the Weekly Signals process
- validates freshness and duplicates
- drafts the newsletter post
- writes source tracking
Validator agent
- checks front matter
- checks tags
- checks source freshness for Weekly
- checks Python code blocks if present
- writes validation report
Git agent
- creates branch
- commits changes
- pushes branch
- opens or updates draft PR
Telegram notifier
- sends compact summaries
- sends PR link
- alerts on failure
Telegram poller
- calls Telegram `getUpdates`
- filters allowed commands/chats
- deduplicates by `update_id`
- creates GitHub issues with routing labels
- updates `automation/state/telegram_poller.json`
Issue router worker
- reacts to `issues.opened` (and optionally `issues.labeled`)
- parses labels/body into a task payload
- invokes `on_demand_router.py`
- writes reports/logs and comments back to the issue
9. Secrets and repo settings
Add these repository secrets:
- `OPENAI_API_KEY`
- `TELEGRAM_BOT_TOKEN`
- `TELEGRAM_CHAT_ID`
- `GH_PAT` (only if the default `GITHUB_TOKEN` is not enough for your PR/issues flow)
Permissions note:
- issue creation requires the `issues: write` token permission
- PR creation additionally requires `contents: write` and `pull-requests: write` access
Also configure:
- branch protection on `main`
- require PR review before merge
- require workflow checks before merge
10. Minimal deployment steps
Step 1: prepare the repo
Add:
- `.github/workflows/agency-run.yml`
- `.github/workflows/telegram-poller.yml`
- `.github/workflows/issue-worker.yml`
- `automation/`
- `agents/`
- `scripts/`
Commit that to the default branch.
Scheduled workflows and manual workflow dispatch both require the workflow file to exist on the default branch. ([GitHub Docs][1])
Step 2: create the Telegram bot
In Telegram:
- open BotFather
- create a bot
- copy the bot token
- get your chat ID
- store both as GitHub secrets
For this architecture, do not deploy a webhook service. Use scheduled polling with getUpdates from GitHub Actions. ([Telegram][2])
Step 3: add OpenAI access
Set OPENAI_API_KEY as a repo secret.
OpenAI’s Agents SDK is available for Python and TypeScript and is designed for tool use, handoffs, streaming, and tracing. ([OpenAI Developers][3])
Step 4: protect main
Enable:
- required pull request reviews
- required status checks
This is your human approval gate.
Step 5: add state files
Create:
automation/state/scheduler.json
with the default enabled flags shown above.
Also create:
automation/state/telegram_poller.json
with last_update_id tracking to prevent duplicate issue creation.
Step 6: add a minimal Python runner
You can start with stub scripts that:
- log task start
- print what they would do
- commit a report file
- open a PR
Then replace the stubs with real agent logic.
Step 7: test manual execution and polling
Use workflow_dispatch from the Actions tab and run:
- `job_type=tuesday`
- `job_type=friday`
- a manual run of `telegram-poller.yml` to confirm issue creation from new `/ask` updates
GitHub documents that manual runs can be started from the UI, CLI, or REST API when workflow_dispatch is configured. ([GitHub Docs][4])
11. agency-run.yml
Here’s the starter workflow.
```yaml
name: Agency Run

on:
  workflow_dispatch:
    inputs:
      job_type:
        description: "Which flow to run"
        required: true
        default: "tuesday"
        type: choice
        options:
          - tuesday
          - friday
      dry_run:
        description: "Skip push/PR creation"
        required: true
        default: true
        type: boolean
  schedule:
    - cron: "17 9 * * 2"
      timezone: "Europe/Amsterdam"
    - cron: "23 9 * * 5"
      timezone: "Europe/Amsterdam"

permissions:
  contents: write
  pull-requests: write

concurrency:
  group: agency-run-${{ github.ref }}
  cancel-in-progress: false

jobs:
  agency-run:
    runs-on: ubuntu-latest
    env:
      PYTHONUNBUFFERED: "1"
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      TELEGRAM_BOT_TOKEN: ${{ secrets.TELEGRAM_BOT_TOKEN }}
      TELEGRAM_CHAT_ID: ${{ secrets.TELEGRAM_CHAT_ID }}
      DRY_RUN: ${{ inputs.dry_run }}
    steps:
      - name: Check out repo
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi

      - name: Determine job type
        id: jobtype
        shell: bash
        run: |
          if [ "${{ github.event_name }}" = "schedule" ]; then
            if [ "${{ github.event.schedule }}" = "17 9 * * 2" ]; then
              echo "job_type=tuesday" >> "$GITHUB_OUTPUT"
            elif [ "${{ github.event.schedule }}" = "23 9 * * 5" ]; then
              echo "job_type=friday" >> "$GITHUB_OUTPUT"
            else
              echo "Unknown schedule"
              exit 1
            fi
          else
            echo "job_type=${{ inputs.job_type }}" >> "$GITHUB_OUTPUT"
          fi

      - name: Load scheduler state
        id: scheduler
        shell: bash
        run: |
          python - <<'PY'
          import json, pathlib
          path = pathlib.Path("automation/state/scheduler.json")
          if not path.exists():
              path.parent.mkdir(parents=True, exist_ok=True)
              path.write_text(json.dumps({
                  "paused": False,
                  "tuesday_enabled": True,
                  "friday_enabled": True
              }, indent=2))
          data = json.loads(path.read_text())
          print(json.dumps(data))
          with open("scheduler_flags.txt", "w") as f:
              for k, v in data.items():
                  f.write(f"{k}={str(v).lower()}\n")
          PY
          cat scheduler_flags.txt >> "$GITHUB_ENV"

      - name: Stop if paused
        if: env.paused == 'true'
        run: |
          echo "Scheduler is paused. Exiting."
          exit 0

      - name: Stop if Tuesday disabled
        if: steps.jobtype.outputs.job_type == 'tuesday' && env.tuesday_enabled != 'true'
        run: |
          echo "Tuesday flow disabled. Exiting."
          exit 0

      - name: Stop if Friday disabled
        if: steps.jobtype.outputs.job_type == 'friday' && env.friday_enabled != 'true'
        run: |
          echo "Friday flow disabled. Exiting."
          exit 0

      - name: Create task branch name
        id: branch
        shell: bash
        run: |
          TS=$(date -u +'%Y%m%d-%H%M%S')
          JOB="${{ steps.jobtype.outputs.job_type }}"
          echo "branch_name=task/${JOB}-${TS}" >> "$GITHUB_OUTPUT"

      - name: Create branch
        run: |
          git checkout -b "${{ steps.branch.outputs.branch_name }}"

      - name: Run Tuesday improvement flow
        if: steps.jobtype.outputs.job_type == 'tuesday'
        run: |
          python agents/tuesday_improve_drafts.py

      - name: Run Friday AI Signals flow
        if: steps.jobtype.outputs.job_type == 'friday'
        run: |
          python agents/friday_ai_signals.py

      - name: Run validation
        run: |
          python agents/validator.py

      - name: Show changed files
        run: |
          git status --short
          git diff --stat

      - name: Commit changes
        if: env.DRY_RUN != 'true'
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add .
          git diff --cached --quiet && echo "No changes to commit" && exit 0
          git commit -m "agency: ${{ steps.jobtype.outputs.job_type }} run"

      - name: Push branch
        if: env.DRY_RUN != 'true'
        run: |
          git push --set-upstream origin "${{ steps.branch.outputs.branch_name }}"

      - name: Open draft PR
        if: env.DRY_RUN != 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          TITLE="Agency: ${{ steps.jobtype.outputs.job_type }} run"
          BODY=$(cat <<'EOF'
          Automated agency run completed.

          Review checklist:
          - Check changed files
          - Read validation report
          - Confirm sources and dates
          - Confirm front matter and tags
          - Approve only after editorial review
          EOF
          )
          gh pr create \
            --draft \
            --title "$TITLE" \
            --body "$BODY" \
            --base main \
            --head "${{ steps.branch.outputs.branch_name }}" || true

      - name: Send Telegram notification
        if: always()
        run: |
          python agents/telegram_notify.py \
            --job-type "${{ steps.jobtype.outputs.job_type }}" \
            --branch "${{ steps.branch.outputs.branch_name }}" \
            --status "${{ job.status }}"
```
12. What this workflow does
It gives you:
- manual runs through `workflow_dispatch`
- scheduled Tuesday and Friday runs
- timezone-aware scheduling
- a task branch per run
- separate Tuesday and Friday entrypoints
- validation before PR
- optional dry run mode
- Telegram notifications
GitHub explicitly supports multiple schedules in one workflow and exposes the triggering cron entry via github.event.schedule, which is why this branching logic works. ([GitHub Docs][1])
13. Minimal Python files to create next
You do not need to build everything at once.
Create these first:
agents/tuesday_improve_drafts.py
Responsibilities:
- scan `_posts/`
- parse front matter
- find `published: false`
- skip `Weekly`
- choose 1–2 candidate posts
- improve them
- write a report
agents/friday_ai_signals.py
Responsibilities:
- run your Weekly AI Signals routine
- validate sources
- create the draft post
- write sources CSV/YAML
- write summary report
agents/validator.py
Responsibilities:
- verify front matter
- verify required image fields
- verify tags
- compile Python blocks where relevant
- check source freshness for Weekly
agents/telegram_notify.py
Responsibilities:
- send a compact summary message to Telegram
- include job type, branch name, status
- optionally include PR URL if found
14. Telegram integration choices
You have two practical options.
Option A: notifications only first (recommended MVP)
GitHub Actions sends Telegram summaries, but inbound Telegram commands are not processed yet.
This is the fastest and lowest-risk first deploy.
Option B: full Telegram control without extra services (Phase 2)
Add a scheduled poller using Telegram getUpdates, convert /ask messages into GitHub issues, then process them in an issue worker workflow.
Option C: full Telegram control via webhook service
Only choose this if near-real-time command handling is required and you are willing to operate an always-on service.
14A. MVP workflow set (first deploy)
1. agency-run.yml
- Tuesday: improve unpublished non-Weekly drafts
- Friday: create Weekly AI Signals draft
- run validator
- create/update draft PR
- send Telegram notification
2. agents/telegram_notify.py
- outbound notifications only
- no inbound Telegram polling in MVP
14B. Phase 2 additions
1. telegram-poller.yml + agents/telegram_poller.py
- runs every 10–15 minutes
- reads Telegram `getUpdates`
- deduplicates by `last_update_id`
- creates issues for `/ask`, `/pause`, `/resume`
2. issue-worker.yml + agents/on_demand_router.py
- triggers on issue events
- routes by labels and issue payload
- runs task logic in branch + PR flow
14C. What to deploy first
- `agency-run.yml`
- `automation/state/scheduler.json`
- validator + Tuesday + Friday + git ops + Telegram notify
- branch protection on `main`
- required repository secrets
- dry-run and rerun tests
Phase 2 deployment:
- `telegram-poller.yml`
- `issue-worker.yml`
- `automation/state/telegram_poller.json`
- issue labels and the issue body contract
15. Recommended first-week deployment order
Day 1
- add `agency-run.yml` and scheduler state
- implement shared runtime + policy loading from `AGENTS.md`
- harden the validator before content-writing automation
- add repo secrets and protect `main`
Day 2
- implement `tuesday_improve_drafts.py`
- implement `friday_ai_signals.py`
- implement `git_ops.py`
- implement `telegram_notify.py`
- test `workflow_dispatch` in dry-run and live branch/PR mode
Day 3 (Phase 2 start)
- implement `telegram_poller.py` and `on_demand_router.py`
- add `telegram-poller.yml` and `issue-worker.yml`
- add poller state and labels
- test issue-worker routing from labeled issues
16. Minimum viable rules for your blog
Since your blog publishes from `_posts` when `published: true`, the system rules should be:
- all newly created or edited drafts stay `published: false`
- only human-approved PRs may change a post toward publication
- no automated merge to `main`
17. Best minimal-effort MVP
If you want the fastest reliable path:
- Tuesday + Friday + validator + PR flow
- Telegram notifications only
- file-based logs/state/reports
- no DB
- no auto-merge
18. Bottom line
Ship the small version first. Add Telegram polling, issue queue, and /ask routing in Phase 2.
19. AGENTS.md governance
- `AGENTS.md` is the editorial source of truth.
- `automation/policies/` files are derived runtime extracts.
- Validators enforce objective rules from `AGENTS.md`.
- If `AGENTS.md` conflicts with a derived file, `AGENTS.md` wins.
- Editorial rule changes are made in `AGENTS.md` first, then propagated.
20. Idempotency rules
- One Telegram `update_id` may produce at most one issue.
- One issue may produce at most one active task branch unless explicitly rerun.
- Worker checks for existing PR before creating a new PR.
- Tuesday/Friday reruns must not duplicate same-day draft outputs.
- Friday checks for an existing Weekly draft for today before creating one.
21. Concurrency and write safety
- Use workflow concurrency groups for Tuesday/Friday runs.
- Use one global concurrency group for Telegram poller.
- Use per-issue concurrency groups for issue workers.
- Scheduled workflows must not overlap with themselves.
- Mutating jobs must reload and re-check state before commit.
22. Target resolution order
- exact filename match
- exact slug match
- exact title match
- case-insensitive title substring
- latest matching unpublished draft
- if ambiguous, mark `awaiting_human_input`
23. Safe write scope
- Tuesday may edit selected drafts and related logs/reports/state only.
- Friday may edit one Weekly draft, source files, and related logs/reports/state only.
- On-demand tasks may edit task-related files only.
- No workflow may mass-edit unrelated historical posts.
- Out-of-scope edits fail validation.
24. MVP image policy
- MVP may use default/safe image metadata for draft PRs.
- Full Flux generation is Phase 2 unless CI reliability is proven.
- Validator treats draft image gaps as warnings for draft PR readiness.
25. Task outcome levels
- `draft_created`
- `validation_failed`
- `ready_for_review`
- `needs_human_input`
- `publishable`
The Tuesday and Friday default outcome is `ready_for_review`.
26. Friday stop conditions
Friday must stop and mark `needs_human_input` if:
- fewer than 4 valid fresh sources remain
- sources are mostly paywalled
- major claims lack open-source support
- source freshness cannot be confirmed
- title ranking succeeds but source support is weak
27. Issue body contract
Automation-created issues must include:
- request source
- original Telegram command
- normalized task type
- target selector
- update ID (if Telegram-driven)
- creation timestamp
- current status
28. Failure recovery
- If worker fails after branch creation, keep branch and post failure report.
- If PR creation fails, report branch name in issue comment and Telegram summary.
- If validation fails, keep PR as draft and attach validation report.
- Telegram notification failures must not fail editorial execution.
- State/log write failures mark run blocked and stop further mutation.
29. Path assumptions
- local absolute path is informational only
- CI resolves the repository from `GITHUB_WORKSPACE` when available
- production logic must not depend on user-specific absolute paths
30. Model routing and budget controls
- Use cheapest acceptable model for classification/ranking/metadata.
- Reserve stronger models for difficult rewrites and nuanced fact-check/polish.
- Log tokens and estimated cost per run.
- Support `LOW_COST_MODE=true`.
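A minimal routing helper for this rule might look like the following. The model names and the task-type set are placeholders for whatever you actually use; only the `LOW_COST_MODE` environment flag comes from this spec:

```python
import os

# Placeholder model names -- substitute the models you actually use.
CHEAP_MODEL = "gpt-4o-mini"
STRONG_MODEL = "gpt-4o"

# Task types cheap enough for the small model (illustrative set).
CHEAP_TASKS = {"classify", "rank_drafts", "metadata", "tags"}

def pick_model(task_type: str) -> str:
    """Route cheap tasks to the small model; honor LOW_COST_MODE globally."""
    if os.environ.get("LOW_COST_MODE", "").lower() == "true":
        return CHEAP_MODEL
    return CHEAP_MODEL if task_type in CHEAP_TASKS else STRONG_MODEL
```

Logging the chosen model alongside token counts in `agent_runs.csv` then gives you per-run cost attribution for free.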
31. Dry-run behavior
In dry-run mode, workflows may:
- scan files
- generate reports
- write temporary runner-local artifacts
In dry-run mode, workflows must not:
- push branches
- open PRs
- open issues
- mutate scheduler state
- send Telegram notifications unless explicitly test-enabled
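Since `agency-run.yml` exposes `DRY_RUN` through the environment, the agents can share one guard for every mutating action. The helper names here are illustrative:

```python
import os

def is_dry_run() -> bool:
    """Workflows pass DRY_RUN through the environment (see agency-run.yml)."""
    return os.environ.get("DRY_RUN", "true").lower() == "true"

def guard(action: str) -> bool:
    """Return True when a mutating action may proceed; log the skip otherwise."""
    if is_dry_run():
        print(f"[dry-run] would {action}, skipping")
        return False
    return True
```

Usage: wrap every push, PR, issue, and notification call in `if guard("push branch"): ...` so dry runs still produce complete logs of intended actions.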
Starter Python files
The starting files are located in _starter_agency_files. These are the MVP starter files, not the full production logic yet. They are designed to be further extended.
Using AGENTS.md
Here is the cleanest way to reuse AGENTS.md without making the workflow brittle.
How to reuse AGENTS.md in this agency
Use it in three layers:
1. Master policy
Keep AGENTS.md in the repo root as the human-readable source of truth.
It should remain the place that defines:
- safety rules
- repository boundaries
- post types
- canonical front matter expectations
- tag rules
- image rules
- Weekly Signals rules
- definition of done
This is your editorial constitution.
2. Runtime policy files
Create smaller policy files that are derived from AGENTS.md and used by specific agents.
Suggested structure:
automation/
policies/
global_policy.md
technical_post_policy.md
weekly_signals_policy.md
series_policy.md
media_policy.md
validation_policy.md
3. Code-enforced checks
Turn the objective rules from AGENTS.md into Python validators.
These should check things like:
- post path format
- required front matter fields
- valid tags
- Python block compilation
- Weekly source freshness
- duplicate links
- `published: false` default
- required image fields
That way, the workflow does not rely only on prompts.
Recommended split of your current AGENTS.md
Here is the mapping I recommend.
automation/policies/global_policy.md
Put these parts here:
- operate only inside the repository
- never access secrets
- never push directly to protected branches
- prefer minimal, explainable diffs
- preserve author tone, structure, and intent
- stop if required information is missing
- always report changed files and diff summary
- never fabricate sources or citations
Suggested content:
```markdown
# Global Policy

- Operate only inside this repository.
- Never read, print, or modify secrets, tokens, or credentials.
- Never push directly to protected branches.
- Agents may push only to task branches for review.
- Never merge or publish without human approval.
- Prefer minimal, explainable diffs over large rewrites.
- Preserve the author's tone, structure, and intent.
- If required information is missing, stop and mark the task as awaiting human input.
- Always report changed files, validation results, and a diff summary.
- Never fabricate facts, sources, citations, dates, or quotes.
```
automation/policies/technical_post_policy.md
Put these parts here:
- Post Type A rules
- canonical layout reference
- new post creation rules
- slug rules
- default front matter rules
- Python code expectations
- Workflow A rules
Suggested content:
```markdown
# Technical Post Policy

Use this policy for technical posts such as Python, AI tooling, MCP, workflows, and infrastructure.

Requirements:

- Follow the canonical front matter structure and ordering from the reference post.
- Create files in `_posts/YYYY-MM-DD-<slug>.md`.
- Use lowercase ASCII hyphen-separated slugs.
- Default to `published: false` unless explicitly instructed otherwise.
- Require `image`, `thumb_image`, and `image_title`.
- Do not invent fields not present in the canonical reference.
- Preserve the author's tone and intent.
- If Python code blocks are included, they must compile with `python -m py_compile`.
```
automation/policies/weekly_signals_policy.md
Put these parts here:
- Post Type B rules
- MCP link builder rules
- link validation rules
- Weekly structure requirements
- source rules
- source tracking
- image choice rules
- freshness rules
- newsletter follow-up flow if you want to automate it later
Suggested content:
# Weekly Signals Policy
Use this policy for Weekly AI Signals posts.
Requirements:
- Collect fresh AI news links from the last seven days.
- Remove duplicates across source lists.
- Stop if fewer than 4 valid links remain.
- Prefer open-access English-language sources.
- If a source is paywalled, replace it with an open alternative where possible.
- Include 4–6 numbered signals.
- Include concise summaries and a "Why This Matters" section for each signal.
- Add an optional "Apps & Tool Updates" section if supported by sources.
- Save machine-readable source tracking in `drafts/sources/<slug>.yml`.
- All included sources must be from the last seven days.
- If outdated or unsupported claims remain, remove them and rewrite the post.
- Default to `published: false`.
automation/policies/media_policy.md
Put these parts here:
- Flux generation rules
- thumbnail generation rules
- image front matter update rules
- Weekly default image option
automation/policies/validation_policy.md
Put these parts here:
- tag validation logic
- Python block validation expectations
- “definition of done”
- final recency checks
- changed files and diff reporting
How the Python agents should use these files
Each agent should load:
- `AGENTS.md` for full repo context
- one narrow policy file for its role
Example:
Tuesday improvement agent
Reads:
- `AGENTS.md`
- `automation/policies/global_policy.md`
- `automation/policies/technical_post_policy.md`
- `automation/policies/validation_policy.md`
Friday Weekly agent
Reads:
- `AGENTS.md`
- `automation/policies/global_policy.md`
- `automation/policies/weekly_signals_policy.md`
- `automation/policies/media_policy.md`
- `automation/policies/validation_policy.md`
Validator
Reads:
- `AGENTS.md`
- `automation/policies/validation_policy.md`
This is much better than pasting the whole AGENTS.md into every prompt.
How to update the starter Python files
Your starter files do not yet read AGENTS.md. The easiest upgrade is to add a helper in common.py.
Add this:
```python
from pathlib import Path

def read_text_file(path: str) -> str:
    p = Path(path)
    if not p.exists():
        return ""
    return p.read_text(encoding="utf-8")

def load_policy_bundle(*paths: str) -> str:
    chunks = []
    for path in paths:
        text = read_text_file(path)
        if text:
            chunks.append(f"# Source: {path}\n\n{text}")
    return "\n\n".join(chunks)
```
Then in your agents:
Example for tuesday_improve_drafts.py
Add:
```python
from common import load_policy_bundle
```
Then inside main():
```python
policy_text = load_policy_bundle(
    "AGENTS.md",
    "automation/policies/global_policy.md",
    "automation/policies/technical_post_policy.md",
    "automation/policies/validation_policy.md",
)
```
For now, even if the script is not yet calling an LLM, this is still useful because:
- it keeps the agent architecture ready
- you can write the loaded policy into the summary report
- later you can pass it into OpenAI Agents SDK prompts
Example for friday_ai_signals.py
```python
policy_text = load_policy_bundle(
    "AGENTS.md",
    "automation/policies/global_policy.md",
    "automation/policies/weekly_signals_policy.md",
    "automation/policies/media_policy.md",
    "automation/policies/validation_policy.md",
)
```
What to enforce in code, not just in prompts
These rules from your AGENTS.md should become Python checks.
In validator.py
Add checks for:
- `_posts/YYYY-MM-DD-slug.md` filename pattern
- `published` exists
- `image`, `thumb_image`, `image_title` exist
- tags exist and correspond to `tag/*.md`
- Weekly posts include `Weekly`
- source dates are within 7 days for Weekly posts
- Python code blocks compile
- no duplicate source URLs in Weekly sources file
These are ideal validator checks because they are objective.
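A few of these checks as standalone sketches. The required-field set and filename pattern come from this spec; the function names are illustrative, and a real validator would first parse front matter out of each file with a YAML library:

```python
import re

def check_filename(path: str) -> bool:
    """Objective check: _posts/YYYY-MM-DD-slug.md with a lowercase hyphenated slug."""
    pattern = r"_posts/\d{4}-\d{2}-\d{2}-[a-z0-9]+(-[a-z0-9]+)*\.md"
    return re.fullmatch(pattern, path) is not None

REQUIRED_FIELDS = {"published", "image", "thumb_image", "image_title"}

def check_front_matter(front_matter: dict) -> list[str]:
    """Return the list of missing required fields (an empty list means pass)."""
    return sorted(REQUIRED_FIELDS - front_matter.keys())

def check_python_block(code: str) -> bool:
    """True if an embedded Python code block at least compiles."""
    try:
        compile(code, "<post>", "exec")
        return True
    except SyntaxError:
        return False
```

Each failed check should land in the validation report with the file and rule name, so the mobile review shows exactly what to fix.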
How to handle the “ask instead of guessing” rule
In a cloud workflow, “ask” should not mean a blocking chat window.
It should mean:
- write a report saying what is missing
- send a Telegram message
- stop the run with a clear status
Recommended statuses:
- `awaiting_human_input`
- `validation_failed`
- `ready_for_review`
- `approved`
- `published`
So one good addition to the reports is:
```markdown
## Missing information

- Weekly image option not chosen
- Canonical reference post not found
- Fewer than 4 valid fresh sources
```
That turns your editorial rule into a workflow behavior.
Important policy adjustment for mobile review
Your original AGENTS.md says:
> Never publish (`git push`) unless explicitly instructed.
For the GitHub Mobile workflow, change that slightly to:
Agents may push only to task-specific review branches.
Agents may never push directly to protected branches.
Merging or publishing requires explicit human approval.
That keeps the spirit of the rule but makes review from mobile possible.
What I would add to the project spec
Add this section:
Reusing AGENTS.md
The workflow uses AGENTS.md as the master editorial policy.
Implementation rules:
- `AGENTS.md` remains the single source of truth.
- Smaller policy files in `automation/policies/` are derived from it for role-specific use.
- Objective rules from `AGENTS.md` are enforced in Python validators.
- If `AGENTS.md` and a derived policy file disagree, `AGENTS.md` wins.
- Any update to the editorial process should be made in `AGENTS.md` first, then propagated to the derived files.
That gives you a governance model that stays understandable.
Minimal folder additions to make now
Create these files:
- `automation/policies/global_policy.md`
- `automation/policies/technical_post_policy.md`
- `automation/policies/weekly_signals_policy.md`
- `automation/policies/media_policy.md`
- `automation/policies/validation_policy.md`
And add this note to each one:
> Derived from AGENTS.md.
> If this file conflicts with AGENTS.md, AGENTS.md is the source of truth.
My recommendation
For minimal effort, do this in order:
- keep `AGENTS.md` unchanged except for the push rule refinement
- create the five policy files above
- add `load_policy_bundle()` to `common.py`
- make the Tuesday and Friday agents load the policy bundle
- extend `validator.py` with checks derived from `AGENTS.md`
That gets you the benefits of reuse without overengineering the first version.
Next, create those five policy files as ready-to-paste Markdown content and adapt them for their consumers: `common.py`, `tuesday_improve_drafts.py`, `friday_ai_signals.py`, and `validator.py`.
General blog operations assistant
This is one of the nicest parts of the Telegram setup: it does not have to be limited to Tuesday drafts and Friday AI Signals. You can also use it as a general blog operations chat interface on any day.
The simple idea
You send a message or command in Telegram, and the bot turns that into a task for the agent workflow.
So besides scheduled runs, you can also do things like:
- “improve the MCP post”
- “find unpublished posts about agents”
- “check which drafts are closest to publishable”
- “generate a cover image for this post”
- “fix tags for the latest three drafts”
- “fact-check this Weekly post”
- “prepare a newsletter blurb for the last post”
- “show me what changed this week”
- “list drafts missing images”
- “which unpublished posts mention Python?”
That makes the system much more useful than a simple cron job.
Best model for this
I’d split Telegram interactions into two categories:
1. Scheduled routines
These are the recurring jobs:
- Tuesday draft improvement
- Friday AI Signals creation
2. On-demand tasks
These are chat-triggered jobs you ask for manually.
That means the agent can work in two modes:
- automatic mode on certain weekdays
- assistant mode whenever you message it
Good tasks for Telegram on other weekdays
Here are the kinds of tasks that fit very well.
Editorial maintenance
- improve readability of a draft
- tighten intros and conclusions
- rewrite excerpts
- improve titles
- generate title variants
- draft newsletter blurbs
- check internal consistency
Repo hygiene
- list unpublished posts
- find posts missing tags
- find posts missing images
- find posts with broken front matter
- find posts with invalid tag names
- list posts with outdated references
Technical validation
- compile Python code blocks
- check code snippets for errors
- verify front matter fields
- detect missing `published`
- find posts without `keywords` or `excerpt`
Weekly/news workflows
- refresh sources
- replace paywalled links
- check source freshness
- remove outdated items from a Weekly draft
- regenerate source YAML
Media tasks
- generate Flux prompt suggestions
- create image prompt variants
- prepare thumbnail jobs
- describe current image usage gaps
Planning tasks
- suggest which draft to finish next
- rank drafts by completion level
- suggest a content calendar
- group drafts into future series
The right way to structure this
Instead of hardcoding every possible Telegram command, I would use:
A small set of fixed commands
For control:
- `/status`
- `/pause`
- `/resume`
- `/run_tuesday`
- `/run_friday`
- `/rerun <task_id>`
Plus a general request mode
For example:
- `/ask improve the post about MCP`
- `/ask show unpublished Python posts`
- `/ask prepare a newsletter blurb for the latest Weekly draft`
That way you do not need a new Telegram command for every single maintenance action.
The bot can classify the request and dispatch it to the right Python task.
Recommended task families
I’d define a few internal task types like this:
- `improve_post`
- `review_post`
- `factcheck_post`
- `fix_tags`
- `generate_image_prompt`
- `create_newsletter_blurb`
- `list_drafts`
- `rank_unpublished_posts`
- `weekly_refresh_sources`
- `validate_post`
Then the Telegram message gets mapped to one of those.
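A first version of that mapping can be a small keyword classifier. This is a sketch; the patterns, their priority order, and the fallback value are assumptions you would tune, and a model-based classifier could replace it later.

```python
import re

# Ordered keyword patterns: more specific intents ("fact-check") must come
# before broader ones ("check"), because the first match wins.
INTENT_KEYWORDS = [
    (r"\bfact[- ]?check\b", "factcheck_post"),
    (r"\b(improve|rewrite|polish|refine)\b", "improve_post"),
    (r"\b(validate|check)\b", "validate_post"),
    (r"\brank\b", "rank_drafts"),
    (r"\b(newsletter|blurb)\b", "create_newsletter_blurb"),
    (r"\b(list|show|find)\b", "list_drafts"),
]

def classify_request(text: str) -> str:
    """Map a free-form /ask request onto an internal task type."""
    lowered = text.lower()
    for pattern, task_type in INTENT_KEYWORDS:
        if re.search(pattern, lowered):
            return task_type
    return "unknown"
```

Requests that classify as `unknown` can be answered with a help message instead of being silently dropped.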
Example Telegram conversations
Example 1
You:
/ask improve the unpublished post about Codex CLI
Bot:
- finds matching post
- creates task branch
- improves draft
- opens PR
- replies with summary
Example 2
You:
/ask which unpublished posts are closest to publishable?
Bot:
- scans `_posts`
- ranks drafts
- replies with top 3 and reasons
Example 3
You:
/ask fact-check the latest Weekly draft and replace any paywalled links
Bot:
- runs validation
- updates sources if needed
- writes report
- opens/updates PR
Example 4
You:
/ask create a newsletter intro for the latest two unpublished posts
Bot:
- generates draft copy
- saves it to a file or PR comment
- sends summary back to Telegram
What I would allow in v1
To keep this safe and easy, I would allow Telegram to trigger:
- scans
- reports
- draft improvements
- metadata fixes
- validation
- PR creation
- report generation
But I would not let Telegram directly:
- merge PRs
- publish posts
- delete historical content
- change protected branch rules
- overwrite many unrelated files at once
So Telegram can request work, but GitHub PR approval still stays the final gate.
Best architecture for this
The clean version is:
- Telegram receives message
- small parser classifies intent
- GitHub workflow is triggered with:
  - `job_type`
  - optional `post_slug`
  - optional `task_text`
- Python agent performs only that narrow task
- results go into:
  - repo files
  - report markdown
  - PR
  - Telegram summary
That keeps the system understandable.
My recommendation
Yes, you should absolutely support weekday ad-hoc requests from Telegram.
In fact, I’d design the system so that:
- scheduled jobs handle recurring editorial routines
- Telegram chat requests handle everything else
That gives you a nice hybrid: predictable automation plus on-demand help.
Best next feature set
After the Tuesday/Friday MVP, I’d add these Telegram request types first:
- list and rank unpublished drafts
- improve a chosen post
- validate a chosen post
- fact-check a Weekly draft
- generate newsletter copy
- fix tags and front matter
Those will probably give you the most value fastest.
Bottom line
Yes — the bot can become your general blog operations assistant on any weekday, not just a scheduler.
The safest pattern is:
- Telegram for requests
- GitHub Actions for execution
- GitHub PRs for review
- GitHub Mobile for approval
And that is a very solid setup for maintaining the blog from your phone.
Absolutely — here’s a practical Telegram command and intent-routing spec for your blog agency.
The goal is to keep it:
- easy to use from a phone
- safe
- simple to implement
- flexible enough for future tasks
Telegram command and routing spec
1. Design principle
Use two layers:
Layer A: fixed commands
These are reliable control commands.
Examples:
- `/status`
- `/pause`
- `/resume`
- `/run_tuesday`
- `/run_friday`
Layer B: general task requests
These let you ask for blog work in plain language.
Examples:
- `/ask improve the MCP post`
- `/ask fact-check the latest Weekly draft`
- `/ask list unpublished Python posts`
- `/ask create a newsletter blurb for the last two drafts`
This gives you both precision and flexibility.
2. Command list
Core control commands
/help
Shows all supported commands and examples.
/status
Returns:
- scheduler state
- last Tuesday run result
- last Friday run result
- open agency PRs
- last failed task if any
/pause
Pauses all scheduled runs.
This should update:
`automation/state/scheduler.json`

to:

```json
{
  "paused": true,
  "tuesday_enabled": true,
  "friday_enabled": true
}
```
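The `/pause` and `/resume` handlers then reduce to flipping that flag on disk. A minimal sketch, assuming the state file layout above:

```python
import json
from pathlib import Path

STATE_FILE = Path("automation/state/scheduler.json")

DEFAULT_STATE = {"paused": False, "tuesday_enabled": True, "friday_enabled": True}

def load_state(path: Path = STATE_FILE) -> dict:
    """Read scheduler state, filling in defaults for missing keys."""
    if path.exists():
        return {**DEFAULT_STATE, **json.loads(path.read_text())}
    return dict(DEFAULT_STATE)

def set_paused(paused: bool, path: Path = STATE_FILE) -> dict:
    """Implement /pause and /resume by flipping the 'paused' flag on disk."""
    state = load_state(path)
    state["paused"] = paused
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(state, indent=2) + "\n")
    return state
```

The scheduled workflows check `load_state()["paused"]` as their first step and exit early when it is true.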
/resume
Resumes scheduled runs.
/enable_tuesday
Enables only Tuesday scheduled improvements.
/disable_tuesday
Disables Tuesday scheduled improvements.
/enable_friday
Enables Friday AI Signals creation.
/disable_friday
Disables Friday AI Signals creation.
/run_tuesday
Triggers the Tuesday improvement workflow immediately.
/run_friday
Triggers the Friday AI Signals workflow immediately.
/rerun <task_id>
Reruns a previous task.
Example:
/rerun tuesday-20260327-091700
/summary <task_id>
Returns a short summary of a previous task.
/cancel <task_id>
Marks a queued task as cancelled.
For the MVP, this can just cancel tasks that have not started yet.
3. General request command
/ask <request>
This is the most powerful command.
Examples:
- `/ask improve the unpublished post about MCP`
- `/ask validate the latest Weekly draft`
- `/ask show drafts missing cover images`
- `/ask rank unpublished posts by publish-readiness`
- `/ask prepare a newsletter email for the latest two drafts`
This command routes the request into an internal task type.
4. Supported task types
These are the internal task families I recommend.
Editorial tasks
- `improve_post`
- `rewrite_intro`
- `rewrite_conclusion`
- `improve_readability`
- `generate_title_options`
- `create_newsletter_blurb`
Validation tasks
- `validate_post`
- `factcheck_post`
- `check_frontmatter`
- `check_tags`
- `check_python_blocks`
- `check_sources`
Weekly tasks
- `weekly_refresh_sources`
- `weekly_create_draft`
- `weekly_remove_outdated_items`
- `weekly_replace_paywalled_links`
Media tasks
- `generate_cover_prompt`
- `generate_cover_image`
- `generate_thumbnail`
- `check_missing_images`
Discovery tasks
- `list_unpublished_posts`
- `rank_drafts`
- `find_posts_by_topic`
- `find_posts_missing_metadata`
- `find_related_posts`
Repo maintenance tasks
- `show_recent_changes`
- `show_open_agency_prs`
- `show_last_failed_run`
5. Intent routing rules
When a message arrives through /ask, the system should classify it into one of the task types above.
Routing logic priority
First: check for exact command triggers
Examples:
- “status” → `/status`
- “pause” → `/pause`
Second: detect explicit actions
Examples:
- “improve” → `improve_post`
- “fact-check” → `factcheck_post`
- “validate” → `validate_post`
- “rank drafts” → `rank_drafts`
- “newsletter” → `create_newsletter_blurb`
Third: detect target object
Examples:
- “latest Weekly draft”
- “post about MCP”
- “unpublished Python posts”
- “last two drafts”
Fourth: detect constraints
Examples:
- “only unpublished”
- “exclude Weekly”
- “from this week”
- “missing image”
- “latest two”
Fifth: decide execution mode
The bot should decide whether the request is:
- read-only
- draft-modifying
- PR-creating
- approval-required
6. Execution classes
This part is important for safety.
Class 1: read-only tasks
Safe to run immediately.
Examples:
- list drafts
- show status
- rank unpublished posts
- find missing tags
- show open PRs
Output:
- Telegram reply only
- optional markdown report in repo if long
Class 2: draft-analysis tasks
Safe to run automatically.
Examples:
- validate post
- compile Python code blocks
- check source freshness
- find missing front matter
Output:
- Telegram summary
- validation report file
Class 3: draft-modifying tasks
Should create or update a task branch and PR.
Examples:
- improve post
- rewrite intro
- fix tags
- add metadata
- create newsletter blurb file
Output:
- task branch
- commit
- draft PR
- Telegram summary
Class 4: publish-sensitive tasks
Must never complete automatically.
Examples:
- merge PR
- set `published: true`
- publish live content
- rewrite published historical posts
Output:
- create recommendation only
- request explicit review in GitHub
7. Response behavior in Telegram
Keep the bot replies short and useful.
For read-only tasks
Reply directly in chat.
Example:
Top unpublished drafts closest to publishable:
1. the-mcp-post.md — 82%
2. agents-for-blogging.md — 76%
3. codex-cli-workflows.md — 71%
Use /ask improve <slug> to continue.
For modifying tasks
Reply with:
- task ID
- target post
- action taken
- PR link if created
- short summary
Example:
Task started: improve_post
Target: 2026-03-10-mcp-workflows.md
Branch: task/improve-mcp-20260327-101200
PR: <url>
Changes:
- added excerpt
- improved intro
- added Next Steps section
For blocked tasks
Reply clearly.
Example:
I stopped this run because the Weekly draft has fewer than 4 fresh source links.
Please provide more links or rerun source collection.
8. Target resolution rules
The hardest part of chat requests is figuring out what post you mean.
Use this order:
1. Exact filename match
Example:
2026-03-10-mcp-workflows.md
2. Exact slug match
Example:
mcp-workflows
3. Title substring match
Example:
- “the MCP post”
4. Latest matching draft
Example:
- “latest Weekly draft”
5. Ask for clarification only if multiple strong matches remain
For example:
I found two likely matches:
1. 2026-03-10-mcp-workflows.md
2. 2026-03-14-gentle-introduction-to-mcp.md
Reply with:
1
or
2
Since you want minimal friction, the system should only ask when ambiguity is real.
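The resolution order above can be sketched as a single function over `_posts`. Returning all strong matches (rather than the first one) is what lets the bot ask for clarification only when ambiguity is real. The slug convention assumed here is Jekyll's `YYYY-MM-DD-slug.md` filename format.

```python
from pathlib import Path

def resolve_target(query: str, posts_dir="_posts") -> list[str]:
    """Resolve a chat reference to candidate post filenames, trying:
    exact filename, then exact slug, then slug substring."""
    query = query.strip().lower()
    files = sorted(Path(posts_dir).glob("*.md"))
    # 1. exact filename match
    exact = [f.name for f in files if f.name.lower() == query]
    if exact:
        return exact
    # 2. exact slug match (strip the YYYY-MM-DD- prefix and extension)
    slugs = {f.name: f.stem.split("-", 3)[-1] for f in files}
    exact_slug = [name for name, slug in slugs.items() if slug.lower() == query]
    if exact_slug:
        return exact_slug
    # 3. substring match on the slug
    return [name for name, slug in slugs.items() if query in slug.lower()]
```

A result list of length one runs immediately; a longer list triggers the clarification prompt shown above.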
9. Suggested natural-language patterns
These should map to task types.
Improve a draft
Patterns:
- improve
- rewrite
- polish
- refine
- enhance
Maps to:
improve_post
Validate a draft
Patterns:
- validate
- check
- review
- inspect
Maps to:
validate_post
Fact-check
Patterns:
- fact-check
- verify claims
- check sources
- replace paywalled links
Maps to:
factcheck_post
Rank or list
Patterns:
- show
- list
- rank
- find
Maps to:
- `list_unpublished_posts`
- `rank_drafts`
- `find_posts_by_topic`
Newsletter tasks
Patterns:
- newsletter
- blurb
- subject line
Maps to:
create_newsletter_blurb
Media tasks
Patterns:
- image
- cover
- thumbnail
- prompt
Maps to:
- `generate_cover_prompt`
- `generate_cover_image`
10. Suggested issue payload (from Telegram poller)
When the poller receives /ask ..., create an issue with normalized routing metadata.
Example title:
Improve post: MCP workflows
Example labels:
task:on-demand
type:improve_post
source:telegram
status:queued
Example issue body:
Requested via Telegram poller
Original message:
`/ask improve the unpublished post about MCP`
Task payload:
```json
{
"task_type": "improve_post",
"target": "mcp-workflows",
"request_text": "improve the unpublished post about MCP",
"source": "telegram",
"chat_id": "<redacted-or-hashed>",
"update_id": 123456789
}
```
This makes GitHub Issues the durable on-demand task queue.
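The poller can assemble that issue as a plain dict and send it to GitHub's create-issue endpoint (`POST /repos/{owner}/{repo}/issues`, which accepts `title`, `body`, and `labels`). The title format below is illustrative, not prescribed by the spec.

```python
import json

FENCE = "`" * 3  # literal ``` for the markdown code block in the issue body

def build_issue_payload(task_type: str, target: str, request_text: str,
                        update_id: int) -> dict:
    """Build the request body for GitHub's create-issue REST endpoint,
    embedding the normalized task payload as a fenced JSON block."""
    task_json = json.dumps(
        {
            "task_type": task_type,
            "target": target,
            "request_text": request_text,
            "source": "telegram",
            "update_id": update_id,
        },
        indent=2,
    )
    body = (
        "Requested via Telegram poller\n\n"
        f"Original message:\n`/ask {request_text}`\n\n"
        f"Task payload:\n{FENCE}json\n{task_json}\n{FENCE}\n"
    )
    return {
        "title": f"{task_type}: {target or request_text}",
        "body": body,
        "labels": ["task:on-demand", f"type:{task_type}",
                   "source:telegram", "status:queued"],
    }
```

Storing `update_id` lets the poller deduplicate Telegram updates it has already turned into issues.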
11. Recommended issue-worker workflow trigger
Use an issue-triggered worker workflow for on-demand tasks:
```yaml
on:
  issues:
    types: [opened, labeled]

permissions:
  contents: write
  pull-requests: write
  issues: write
```
Routing rule:
- only process issues labeled `task:on-demand`
- read the `type:*` label as the task type
- parse the target/request text from the issue body JSON block
- run `agents/on_demand_router.py`
- comment status back to the same issue
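Extracting the payload from the issue body is a small parsing step. This sketch assumes the fenced-JSON convention shown in the example payload above.

```python
import json
import re

FENCE = "`" * 3  # literal ``` delimiting the JSON block in the issue body

def parse_task_payload(issue_body: str):
    """Extract and decode the fenced JSON payload from an issue body.
    Returns None if no valid payload block is found."""
    pattern = FENCE + r"json\s*(\{.*?\})\s*" + FENCE
    match = re.search(pattern, issue_body, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(1))
    except json.JSONDecodeError:
        return None
```

Returning `None` instead of raising lets the worker comment a readable "malformed payload" status back onto the issue.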
12. Suggested Python routing entrypoint
Create:
agents/on_demand_router.py
Its job:
- read `task_type`
- resolve the target post if needed
- call the correct handler
Example structure:
```python
TASK_HANDLERS = {
    "improve_post": run_improve_post,
    "validate_post": run_validate_post,
    "factcheck_post": run_factcheck_post,
    "rank_drafts": run_rank_drafts,
    "create_newsletter_blurb": run_newsletter_blurb,
}
```
This will scale much better than stuffing all logic into one script.
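The dispatch around that table can stay tiny. Stub handlers are included below so the sketch runs standalone; in the real router the entries are the handler functions named above.

```python
import sys

# Stub handlers for illustration only — the real table maps to
# run_improve_post, run_validate_post, and friends.
TASK_HANDLERS = {
    "improve_post": lambda payload: print("improving", payload.get("target")),
    "validate_post": lambda payload: print("validating", payload.get("target")),
}

def run_task(task_type: str, payload: dict) -> int:
    """Dispatch one on-demand task; return non-zero for unknown types so
    the workflow step fails visibly instead of silently doing nothing."""
    handler = TASK_HANDLERS.get(task_type)
    if handler is None:
        print(f"Unknown task type: {task_type}", file=sys.stderr)
        return 1
    handler(payload)
    return 0
```

The workflow step can use the return value as its exit code, which makes unknown task types show up as a red run rather than a no-op.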
13. Recommended first set of supported /ask requests
For the MVP, I would support only these first:
Safe and valuable first
- improve a post
- validate a post
- rank unpublished drafts
- list drafts missing metadata
- fact-check latest Weekly draft
- create newsletter blurb
- show drafts missing images
These give high value without making the system too complex.
14. Example user flows
Flow 1: quick draft triage
You:
/ask rank unpublished posts by publish-readiness
Bot:
- scans `_posts`
- scores drafts
- replies with top 5
Flow 2: improve one draft
You:
/ask improve the post about Codex CLI
Bot:
- resolves target
- creates task branch
- edits draft
- opens PR
- sends link
Flow 3: Weekly repair
You:
/ask fact-check the latest Weekly draft and replace paywalled links
Bot:
- validates links
- writes report
- updates sources if possible
- pushes task branch
- opens PR
Flow 4: metadata cleanup
You:
/ask show unpublished posts missing excerpt or keywords
Bot:
- scans front matter
- replies directly with a list
15. Safety rules for Telegram-triggered tasks
Telegram-triggered tasks must follow these rules:
- never merge automatically
- never set `published: true` automatically
- never push directly to `main`
- never delete historical posts
- never rewrite many unrelated files in one task
- always create a report for modifying tasks
- always send a Telegram summary
That keeps chat control powerful but safe.
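Several of those rules can be enforced mechanically before the worker commits anything. This is a sketch of such a guard; the file limit and the diff-based `published: true` check are illustrative assumptions about how your task branches produce changes.

```python
# Pre-commit guard for Telegram-triggered tasks: return a list of
# violations, where an empty list means the change set may proceed.
MAX_FILES_PER_TASK = 5  # illustrative limit, tune to taste

def check_task_safety(branch: str, changed_files: list[str],
                      diffs: dict[str, str]) -> list[str]:
    violations = []
    if branch == "main":
        violations.append("direct push to main is not allowed")
    if len(changed_files) > MAX_FILES_PER_TASK:
        violations.append(f"too many files changed ({len(changed_files)})")
    for path, diff in diffs.items():
        # A '+published: true' line in the diff means the task tried to
        # publish — that decision belongs to PR approval, not the bot.
        if "+published: true" in diff:
            violations.append(f"{path}: sets published: true (requires PR approval)")
    return violations
```

When the list is non-empty, the worker aborts, reports the violations in the issue comment and Telegram summary, and leaves the branch unpushed.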
16. My recommended MVP command set
If you want the leanest good version, start with:
Fixed commands
- `/help`
- `/status`
- `/pause`
- `/resume`
- `/run_tuesday`
- `/run_friday`
On-demand requests
- `/ask improve <post>`
- `/ask validate <post>`
- `/ask rank unpublished posts`
- `/ask fact-check latest Weekly draft`
- `/ask create newsletter blurb for <post>`
That is enough to make the system feel genuinely useful.
17. Best implementation order
Build in this order:
1. `/status`
2. `/run_tuesday` and `/run_friday`
3. `/ask rank unpublished posts`
4. `/ask improve <post>`
5. `/ask validate <post>`
6. `/ask fact-check latest Weekly draft`
7. `/pause` and `/resume`
That gets the biggest value fastest.
Bottom line
Yes — your Telegram bot can become a general-purpose blog operations assistant, not just a scheduler.
The best design is:
- fixed commands for control
- `/ask` requests for flexible work
- GitHub Actions for execution
- GitHub PRs for review
- GitHub Mobile for approval
If you want, I can now turn this into a copy-paste implementation spec with:
- Telegram command table
- GitHub workflow input schema
- `on_demand_router.py` skeleton
- example message parsing rules
log files as memory
Yes — in the MVP design, the bot can use log/state files as a lightweight form of memory.
But it helps to separate memory types, because not all “memory” is the same.
What counts as memory here
In your file-based setup, the bot remembers things through files in the repo, such as:
- `automation/logs/tasks.csv`
- `automation/logs/agent_runs.csv`
- `automation/logs/approvals.csv`
- `automation/logs/sources.csv`
- `automation/state/scheduler.json`
- `automation/reports/*.md`
So if you ask:
- “what happened last Friday?”
- “which drafts were improved recently?”
- “is the scheduler paused?”
- “what sources were used for the last Weekly draft?”
…the bot can answer by reading those files.
What this memory is good for
This kind of file-based memory works well for:
- run history
- task tracking
- approval history
- source tracking
- scheduler flags
- simple operational state
- draft progress reports
That is enough for a solid MVP.
What it is not great for
Log files are not ideal as rich long-term memory for things like:
- nuanced client preferences
- evolving editorial style notes
- cross-task semantic retrieval
- many concurrent updates
- complex querying
So it is fine for operational memory, but weaker for knowledge memory.
Best way to think about it
I’d split memory into three layers:
1. Operational memory
Yes — use logs and state files.
Examples:
- task status
- last run result
- approval state
- scheduler settings
2. Editorial memory
Use versioned files in the repo.
Examples:
- `AGENTS.md`
- policy files
- prompt files
- style guide files
- source YAML files
3. Long-term structured memory
Only add later if needed.
Examples:
- per-topic history
- reusable research notes
- client-specific tone fingerprints
- semantic search over old drafts
That third layer is where a DB or vector store might eventually help, but you do not need it first.
My recommendation
For your first version:
- yes, use log/state files as memory
- keep that memory explicit and inspectable
- do not try to make it “smart memory” yet
A good pattern is:
```
automation/
  logs/
  state/
  reports/
  memory/
    style_guide.md
    blog_preferences.md
    recurring_topics.md
```
So:
- `logs/` = what happened
- `state/` = what is currently true
- `reports/` = human-readable summaries
- `memory/` = stable reusable instructions and preferences
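Reading that operational memory back is just a CSV scan. A sketch of how the bot might answer "what happened recently?"; the column names here are assumptions to be matched against your actual log schema.

```python
import csv
from pathlib import Path

def last_runs(log_path="automation/logs/agent_runs.csv", n: int = 3) -> list[dict]:
    """Return the last n rows of the run log as dicts, oldest first.
    An empty list means no runs have been logged yet."""
    path = Path(log_path)
    if not path.exists():
        return []
    with path.open(newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    return rows[-n:]
```

The `/status` handler can format those rows into the short Telegram reply, which is exactly the kind of question file-based memory answers well.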
Bottom line
Yes — the bot can use log files as memory, and that is perfectly okay for the MVP. They work best as operational memory, while your stable editorial rules should live in dedicated policy/style files rather than in raw logs.
I can sketch a clean file-based memory layout next, so you know exactly what should go into logs/, state/, reports/, and memory/.