Photo by Bernd Dittrich on Unsplash
Updated April 2026
You write in Cursor. You ship code from Cursor. And then, for some reason, you copy your blog draft into a CMS, fight with WordPress blocks for twenty minutes, paste it back, fix the headings, upload an image, retype the meta description, and pray it doesn't time out.
That handoff is the bottleneck. It's also unnecessary.
In 2026, the same Model Context Protocol (MCP) that lets your editor talk to Postgres and GitHub can talk to your blog. You write the post in Cursor. You ask the agent to score it for SEO. You ask it to publish to yourdomain.com/blog. You never leave the editor.
This guide shows you exactly how to do that — the config, the four-move publishing loop, the prompts that work, and the mistakes to skip.
Publish blogs from Cursor: the short answer
To publish blogs from Cursor to your own domain, install a blog-publishing MCP server in your .cursor/mcp.json config, draft the post in markdown inside Cursor, then ask the agent to score it against SEO criteria and publish it directly to your domain. The post lives at yourdomain.com/blog/... (a subdirectory, not a subdomain), with sitemap and RSS auto-updated. Total time: under five minutes from prompt to live URL.
That's the loop. The rest of this post is the detail — what each phase does, what to put in your config, and how to keep the agent from shipping a 60-score draft you'll regret next quarter.
Why Cursor became a blog publishing tool in 2026
Cursor isn't just a fork of VS Code anymore. In its first full year of tracking, Cursor adoption hit 17.9% of developers, climbing into the top five IDEs alongside VS Code (75.9%) and IntelliJ (Second Talent IDE Statistics 2026). For a tool that didn't exist five years ago, that's the steepest IDE adoption curve since Sublime Text.
Two things flipped Cursor from "the AI VS Code" into a real publishing surface.
First, MCP support. Cursor added native MCP support in late 2025. The January 2026 update added dynamic context management across multiple servers, cutting token usage by 47% when running several MCP servers at once (Toolradar Best MCP Servers for Cursor 2026). That makes it cheap to keep a blog-publishing server installed alongside your usual GitHub, Postgres, and Linear servers — they don't compete for context.
Second, the agent loop. Cursor's agent doesn't just answer one prompt. It calls tools, reads the result, calls another tool, and keeps going until the task is done. That's the exact shape of "draft → score → patch → publish." You give it a topic; it gives you a live URL.
There's a strategic angle, too. Kevin Indig — who's spent the last year studying how LLMs source citations — puts it bluntly: "Trust is the most important ingredient for success in organic and AI Search" (Coalition Technologies, AI SEO Thought Leaders 2026). And 73% of B2B buyers say thought leadership is more trustworthy than marketing materials (Edelman-LinkedIn 2024 B2B Thought Leadership Report).
Trust compounds when you ship consistently from your own domain. The fastest way to ship consistently is to delete the publishing handoff. Cursor + MCP deletes it.
The Cursor publishing loop: a 4-move framework
Every blog post you ship from Cursor follows the same four moves. Name them, and the agent can run them in sequence.
| Move | What the agent does | Tool it calls |
|---|---|---|
| 1. Draft | Writes the post in markdown using your prompt + style rules | `create_blog` |
| 2. Score | Runs 14+ SEO checks and returns a grade | `check_blog_seo` |
| 3. Patch | Applies surgical find-and-replace fixes for each issue | `get_blog_seo_patches` + `update_blog` |
| 4. Publish | Pushes live to your domain, regenerates sitemap, pings Google | `publish_blog` |
Call this the Cursor publishing loop. The whole point is that no human has to manually move data between steps. The agent reads the score, sees what's broken, applies the patches, re-scores, and only publishes when the post crosses your quality threshold.
That's it. The rest of the work is in your prompt and your config — and once those are dialed in, you'll publish the next 50 posts the same way.
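Named as code, the loop is just one piece of control flow. Here's a minimal Python sketch, where `call_tool` is a stand-in for however your agent invokes an MCP tool — the tool names match the ones used in this post, but this is an illustration, not a real client library:

```python
# Sketch of the four-move publishing loop as plain control flow.
# call_tool(name, **args) is a placeholder for a real MCP tool invocation.
def publishing_loop(call_tool, prompt, website_id,
                    score_floor=85, max_passes=2):
    # Move 1: draft
    draft = call_tool("create_blog", prompt=prompt,
                      website_id=website_id, status="draft")
    report = {"score": 0}
    for _ in range(max_passes):
        # Move 2: score
        report = call_tool("check_blog_seo", blog_id=draft["id"])
        if report["score"] >= score_floor:
            break
        # Move 3: patch, then loop back and re-score
        patches = call_tool("get_blog_seo_patches", blog_id=draft["id"])
        call_tool("update_blog", blog_id=draft["id"], patches=patches)
    if report["score"] >= score_floor:
        # Move 4: publish only once the post crosses the floor
        return call_tool("publish_blog", blog_id=draft["id"])
    return draft  # below the floor: stop and surface the report instead
```

The key design choice is that publishing is gated by the score, not by the number of drafts. The agent runs exactly this shape when you prompt it with the workflow later in this post.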
Phase 1: Draft the post in Cursor's agent
Open the Cursor agent panel. Drop a markdown file in your repo called posts/draft.md (or skip the file and prompt directly). Then write a real prompt — not "write a blog about X."
A working prompt has four parts: topic, audience, evidence sources, and style. Here's the template that ships ranking posts:
```
Topic: How to ship a feature flag without breaking prod
Primary keyword: feature flag rollout
Audience: senior backend engineers at Series A startups
Word count: 2,800-3,400
Use these sources:
- LaunchDarkly's 2026 trunk-based dev report
- Anthropic's experiment notes on staged rollouts
- Two real GitHub PRs from our repo (link below)
Style: builder-to-builder. No "in today's fast-paced world."
Short paragraphs. Code blocks for every claim about how
to actually do the thing. One named framework readers can
quote ("the canary checklist" or similar).
Draft to a Quillly draft on website_id <id>. Folder: Engineering.
Tags: feature-flags, deployment, devops.
```

Notice what's there: a primary keyword, a target reader, real sources for evidence, and a style boundary. Notice what's not there: any guidance on length padding, any "remember to mention our product." If the post needs to mention the product, the agent figures that out from the topic.
Cursor's agent reads that prompt, plans the section structure, calls create_blog with status: "draft", and you get back a draft URL. The first draft will not be perfect. That's fine — it doesn't have to be. The next phase fixes it.
If you want a reusable spec, drop the prompt template into .cursor/rules/blog-style.md. Cursor injects rules files into every agent run for that workspace, so you don't retype the style rules every post.
Phase 2: Score the draft against real SEO criteria
The fastest way to publish bad SEO is to skip this phase. The fastest way to publish good SEO is to score before you ship.
Quillly's MCP exposes a tool called check_blog_seo. It runs 14+ SEO criteria against the saved draft and returns a 0-100 score, a grade, and a per-criterion breakdown. The criteria cover the things that actually move rankings in 2026: meta tags, heading structure, keyword placement, internal linking, image alt text, readability grade, content length, schema readiness, and more.
Why bother scoring before publish? Two stats explain it.
- 44.2% of all ChatGPT citations come from the first 30% of a page (ALMCorp citation study). If your intro doesn't lead with a direct answer, the rest of the post barely matters for AI search.
- Articles over 2,900 words are 59% more likely to be cited than those under 800 words. Length matters, but density matters more — padded posts get caught by Google's Helpful Content System.
Scoring catches both before they're public. The agent prompt looks like this:
```
Run check_blog_seo on the draft. If score < 85, run
get_blog_seo_patches and fix every issue with update_blog
in a single call. Re-score. Loop until 85+ or two passes,
whichever comes first. Then stop and show me the report.
```

That's the whole control flow. The agent decides when to stop. You decide the threshold.
Phase 3: Patch the issues without rewriting the post
This is where most "AI publishing" workflows quietly fall over. The agent gets the score, then tries to fix issues by regenerating the entire post — which usually breaks something else.
Patches solve that. A patch is a surgical find-and-replace operation. The MCP server returns patches like "replace this exact meta description string with this one" or "insert this H2 here". The agent applies them in a single update_blog call. Nothing else changes.
A typical patch response from get_blog_seo_patches looks like this:
```json
[
  {
    "find": "## Why this matters",
    "replace": "## Why feature flag rollouts matter in 2026",
    "impact": "+4 (H2 keyword coverage)"
  },
  {
    "find": "Learn how to roll out features safely.",
    "replace": "Learn how to roll out features safely without breaking prod.",
    "impact": "+3 (meta description CTA)"
  }
]
```

Each patch has a projected score impact. The agent stacks all of them into one update_blog call, re-runs the check, and reports the new score. If something can't be patched mechanically — like an entire missing FAQ section — the agent writes that section and appends it.
Patches matter because content updated within the last 30 days receives 3.2x more citations than content older than 90 days (SE Ranking 2026 AI Stats). Once a post is live, the same patch tool keeps it fresh on a schedule. You don't rewrite — you patch.
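Mechanically, applying a batch of find-and-replace patches is simple. Here's a minimal sketch of the idea in Python (an illustration, not Quillly's implementation):

```python
def apply_patches(markdown, patches):
    """Apply exact-string find-and-replace patches to a markdown post.

    Each patch swaps one exact string; nothing else in the post
    changes. Patches whose "find" string isn't present are returned
    so the agent can handle them another way (e.g. by writing a
    missing section instead).
    """
    unmatched = []
    for patch in patches:
        if patch["find"] in markdown:
            # Replace only the first occurrence: a patch targets one
            # exact spot, not every repetition of the string.
            markdown = markdown.replace(patch["find"], patch["replace"], 1)
        else:
            unmatched.append(patch)
    return markdown, unmatched
```

This is why patching is safer than regenerating: the blast radius of each change is exactly the string being replaced.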
Phase 4: Publish to your own domain (not a subdomain)
Most "AI blog" tools publish to something.subdomain.theirsite.com. That's a trap. Subdomains don't share link equity with your root domain in the way subdirectories do, and Google has spent the last decade quietly favoring subdirectory blogs for E-E-A-T signals.
Quillly publishes blogs at yourdomain.com/blog/your-post-slug. Same domain, same authority, same Search Console property. When the agent calls publish_blog:
- The post goes live at your subdirectory URL.
- Your sitemap (`/sitemap.xml`) regenerates automatically.
- Your RSS feed updates.
- Google's Indexing API gets pinged.
- The blog appears in `list_blogs` with `indexing.status: "submitted"`.
You can watch indexing progress without leaving Cursor. A follow-up list_blogs call shows when Google moves the post from submitted to indexed, plus search position, clicks, and impressions once GSC data flows.
That last point is the one most workflows miss. Publishing isn't the end of the loop — it's the start of the data feedback loop. We've covered the GSC half of that loop in detail in the 2026 MCP workflow for Search Console rankings. The short version: GSC data feeds back into the next prompt, the next prompt produces a sharper draft, and the loop tightens.
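If you'd rather script that status check than prompt for it each time, here's a minimal sketch. `call_tool` stands in for a real MCP tool invocation, and the `indexing.status` field follows the shape described above:

```python
def indexing_status(call_tool, blog_id):
    """Read one post's indexing status from a list_blogs call.

    call_tool is a placeholder for a real MCP tool invocation;
    statuses include "submitted" and "indexed".
    """
    for post in call_tool("list_blogs"):
        if post["id"] == blog_id:
            return post["indexing"]["status"]
    return None  # post not found on this website
```

Indexing takes days, not seconds, so a one-shot check like this beats a polling loop: run it when you care, not on a timer.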
The Cursor MCP config you actually need
Cursor reads MCP server config from two locations (Cursor MCP docs):
- `.cursor/mcp.json` in your project root — scopes the server to that project only.
- `~/.cursor/mcp.json` in your home directory — makes the server available globally.
If both exist, project-level config wins. For a blog publishing server, global is usually right — you want the same agent able to publish from any repo.
Here's a working ~/.cursor/mcp.json with a Quillly server installed alongside the usual GitHub server:
```json
{
  "mcpServers": {
    "quillly": {
      "url": "https://quillly.com/mcp",
      "headers": {
        "Authorization": "Bearer ${env:QUILLLY_API_KEY}"
      }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${env:GITHUB_TOKEN}"
      }
    }
  }
}
```

Notes on the syntax that trip people up:
- HTTP transport vs stdio. Quillly runs as a remote HTTP server, so the config uses `url` and `headers`. Servers that run locally (like the GitHub one above) use `command` and `args`. Cursor supports both transports out of the box.
- Environment interpolation. `${env:QUILLLY_API_KEY}` pulls from your shell env. Don't paste real keys into the config file — they'll end up in dotfiles repos. Set the var in `.zshrc`/`.bashrc` and let interpolation do the work.
- The 40-tool ceiling. Cursor caps active MCP tools at roughly 40 across all servers. Exceed it and the agent silently drops some tools without warning. Don't install every server you've ever heard of — install the four or five you'll actually use.
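To see what that interpolation actually does, here's a minimal sketch of resolving `${env:NAME}` placeholders. This illustrates the mechanism, not Cursor's actual code:

```python
import os
import re

# Matches ${env:NAME} where NAME is a valid shell variable name.
_ENV_PATTERN = re.compile(r"\$\{env:([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve_env(value):
    """Replace ${env:NAME} placeholders with values from the shell env.

    Raises KeyError for unset variables, so a missing key fails loudly
    at startup instead of sending an empty Authorization header.
    """
    def lookup(match):
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"config references unset env var: {name}")
        return os.environ[name]
    return _ENV_PATTERN.sub(lookup, value)
```

The point of the pattern: the literal key never touches disk, only the placeholder does.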
Verify the install: open Cursor settings, go to Tools & MCP, look for a green dot next to quillly. If it's yellow, click Connect to finish the OAuth handshake.
Once it's green, type @quillly in the agent panel — Cursor will autocomplete the available tools (create_blog, check_blog_seo, publish_blog, and friends). That's your confirmation the wire is up.
If you've configured Claude Desktop before, the file format is identical because both editors speak the same MCP spec. We walked through the Claude Desktop equivalent in the 2026 MCP workflow for Claude Desktop — the same mcp.json block works there with a one-line path change.
Old workflow vs Cursor + MCP: the honest comparison
The reason the Cursor publishing loop matters isn't theoretical. It's a flat-out time delete on the part of blog work nobody wants to do. Here's the comparison.
| Step | Old workflow (ChatGPT → WordPress) | Cursor + MCP |
|---|---|---|
| Draft post | Write in ChatGPT, copy markdown | Prompt agent in Cursor |
| Format for CMS | Paste into WordPress, fix blocks | Already markdown — no transform |
| Add featured image | Upload to media library, copy URL, paste | Agent attaches it automatically |
| Set meta tags | Manually type into Yoast/Rank Math | Agent generates, validates length |
| Internal links | Search old posts, copy URLs back | Agent calls `suggest_internal_links` |
| SEO check | Switch to Surfer/Clearscope, score, fix | Agent calls `check_blog_seo` |
| Publish | Click Publish, hope nothing 504s | Agent calls `publish_blog` |
| Submit to Google | Manually copy URL into GSC | Auto-pinged via Indexing API |
| Time per post | 90-180 minutes | 5-15 minutes |
The hidden cost in the left column isn't just minutes — it's context switches. Every tab you open is a chance to lose the thread of what you were trying to say. The right column keeps you in one window, in one conversation, in one mental mode.
That's the contrarian take buried in this whole approach: stop moving your blog into your stack. Pull publishing into your editor. The CMS as a separate destination was a 2010s assumption. In a 2026 agent stack, the editor is the publishing surface.
Five mistakes Cursor users make on their first publish
Most people running this loop the first time hit the same five potholes. Skip them.
1. Letting the agent publish at score 70. The default temptation is to ship the first draft. Don't. Set a hard threshold (85 is a reasonable floor) and let the agent loop until it crosses. The patch step is cheap; ranking from a low-quality post is not. Pages with strong structural signals get cited at materially higher rates — 72.4% of cited blog posts include an identifiable answer capsule, per the ALMCorp study.
2. Skipping the FAQ section. PAA-style FAQs are the single most-cited block by ChatGPT and Google AI Overviews — see the 2026 AEO playbook for getting cited for the structural patterns LLMs reward. If your post doesn't have one, you're leaving citations on the table. The agent can generate FAQ content from the People Also Ask data — ask for it explicitly in the prompt.
3. Publishing to a subdomain because it's faster. It's not faster. It's a permanent SEO tax. Take the extra hour to wire up yourdomain.com/blog once. You'll thank yourself when you've shipped 50 posts and they're all consolidating link equity to a single domain.
4. Not committing the prompt. Your blog prompts are the most valuable artifact in this whole workflow. Drop them into .cursor/rules/ or a prompts/ folder in the repo. Version-control them. Iterate on them like code.
5. Treating MCP as a single tool, not a stack. The real power isn't quillly alone — it's quillly + github (for publishing changelogs as posts) + a search MCP (for fresh stats) + your own data MCP. Aleyda Solis frames it well: AI search is both a performance channel and a visibility channel. Composability of MCP servers is what lets one agent serve both jobs at once. We covered the broader stack in the 2026 MCP servers for SEO guide.
A real prompt that works on the first try
Theory is fine. Here's a prompt you can paste straight into Cursor's agent panel and ship a post tonight. Edit the bracketed parts.
```
You are my blog publishing agent. Use the quillly MCP server.

Topic: [your topic]
Primary keyword: [keyword]
Target audience: [who reads this]
Word count: 3,000-3,600
Voice: builder-to-builder, contractions, no marketing fluff,
no "in today's fast-paced world."

Workflow:
1. Search the web for 5+ recent stats with sources
2. Draft the post in markdown with: H1 under 60 chars,
   40-60 word direct-answer paragraph after the H1,
   8-10 H2 sections, FAQ with 6 questions, conclusion
3. Call create_blog (status: draft) on website_id [your_id]
4. Call check_blog_seo. If score < 85, call
   get_blog_seo_patches and apply all fixes via update_blog
   in one call. Re-score. Loop max 2 times.
5. Call suggest_internal_links and apply the relevant ones
6. Call publish_blog when score is 85+ and meta description
   is 160 chars or less
7. Report the live URL and final score
```

That's the whole thing. The agent runs to completion. You get a live URL with no copy-paste anywhere in the loop.
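The hard constraints in that prompt — H1 length, meta description length, score floor — are easy to gate mechanically before the publish step. A minimal sketch (the dict keys here are illustrative, not a real API shape):

```python
def ready_to_publish(post, score_floor=85):
    """Check the prompt's hard publish constraints.

    post is a dict with illustrative keys: title, meta_description,
    seo_score. Returns a list of violations; empty means ship it.
    """
    problems = []
    if len(post["title"]) >= 60:
        problems.append("H1 must be under 60 characters")
    if len(post["meta_description"]) > 160:
        problems.append("meta description must be 160 characters or less")
    if post["seo_score"] < score_floor:
        problems.append(f"SEO score is below the {score_floor} floor")
    return problems
```

A check like this is worth keeping alongside the prompt: the agent enforces the rules probabilistically, a gate enforces them always.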
If you want to schedule posts instead of shipping immediately, swap step 6 for update_blog with a published_at future timestamp — Quillly's scheduled publishing handles the rest. We walked through programmatic batch publishing in the 2026 programmatic SEO guide for MCP, if you're shipping more than a handful of posts a week.
Frequently asked questions
Do I need to host my own MCP server to publish from Cursor?
No. You can use a hosted MCP server like Quillly that exposes blog publishing as a remote tool. Add the server URL and your API key to ~/.cursor/mcp.json and Cursor's agent can call its tools directly. Self-hosting is an option if you have specific compliance requirements, but it's not required to start. Most indie hackers and small teams should use the hosted option to skip server maintenance entirely.
Can Cursor publish to a self-hosted WordPress site instead?
Yes, several MCP servers expose WordPress as a tool surface. The trade-off is the WordPress block editor's quirks: markdown-to-Gutenberg conversion is lossy, image uploads can be flaky, and the SEO scoring layer is missing unless you bolt on Yoast or Rank Math separately. If you want a markdown-native pipeline that publishes to your own domain without WordPress's overhead, an MCP-first platform like Quillly removes those moving parts.
What's the difference between a Cursor MCP server and a Cursor extension?
A Cursor extension adds UI features inside the editor — sidebars, syntax highlighters, custom commands. An MCP server adds tools the AI agent can call autonomously, like create_blog or query_database. Extensions help the human; MCP servers help the agent. Blog publishing is firmly in MCP-server territory because the agent does the work end-to-end without manual UI clicks.
How do I keep my API key out of the mcp.json file?
Use environment variable interpolation. Set QUILLLY_API_KEY in your shell config (.zshrc, .bashrc, or your secrets manager) and reference it in mcp.json as ${env:QUILLLY_API_KEY}. Cursor resolves the variable at startup and never writes the literal key to disk. This works for any header or env field in the config — including OAuth client secrets and database passwords for other servers.
Will Google penalize blogs published by an AI agent?
Not if the content is genuinely useful. Human-written content still ranks #1 about 80% of the time vs 9% for purely AI-generated pages in a 42,000-post Semrush analysis (Search Engine Land, 2026). The fix isn't to avoid AI — it's to add human judgment in the loop. Use the agent for drafting and patching; review the final draft yourself before approving publish. Hybrid workflows consistently outperform pure-AI workflows in 2026.
How do I track which posts are actually getting indexed?
Cursor can call list_blogs and read each post's indexing status. Statuses include submitted, indexed, not_indexed, and error. After publishing, Google's Indexing API gets notified automatically, and a background job polls Search Console for status changes. You'll see search position, clicks, and impressions populate in list_blogs once GSC data flows — usually within a few days for a new post.
Three takeaways before you ship
The whole point of this guide is one move: delete the publishing handoff. Three specifics:
1. Install the MCP config once. Drop one block into `~/.cursor/mcp.json`, set the env var, restart Cursor. Five minutes of setup pays back on every future post.
2. Run the four-move loop every time. Draft → Score → Patch → Publish. Don't skip the score step. A score floor of 85 cuts your "I should have edited that" regret rate to near zero.
3. Keep your domain on the post. Subdomains leak link equity. Subdirectories compound it. Pick the subdirectory.
Want your AI to actually publish the post it just wrote? Connect Quillly to Cursor in 30 seconds.
