Publish Blogs from Gemini to Your Own Domain (2026 Workflow)

Photo by Bernd 📷 Dittrich on Unsplash

You let Gemini read a 200-page PDF, summarize it, and rewrite the bullet points in your voice. You let it grep your codebase, scaffold a CLI, and run the test suite. You trust it with a 1M-token context window.

Then you write a blog post. Gemini drafts a clean 3,000 words. You copy the markdown out of the chat, paste it into WordPress, fight the block editor for twenty minutes, retype the meta description into Yoast, upload an image, and pray the publish button doesn't 504.

The drafting half got 100x better. The publishing half got worse. That gap is the bottleneck. And in 2026, it doesn't have to exist.

In December 2025, Google announced fully-managed, remote MCP servers for Google services, with the Gemini CLI as the reference client. Anthropic's Model Context Protocol — the "USB-C for AI" — is now Google's protocol too. That means a single config file lets Gemini publish a blog to your own domain, score it, patch it, and ping Google's Indexing API without you ever leaving the terminal.

This guide is the working setup. The config block, the four-move publishing loop, the prompts that ship, and the five mistakes that quietly tank scores. Updated May 2026.

Publish blogs from Gemini: the short answer

To publish blogs from Gemini to your own domain, add a blog-publishing MCP server to ~/.gemini/settings.json, prompt Gemini to draft a post in markdown, then ask it to score the draft against SEO criteria and publish to your subdirectory. The post lives at yourdomain.com/blog/your-slug with sitemap and RSS auto-updated and Google's Indexing API pinged. Total time: under five minutes from prompt to live URL.

That is the loop. The rest of this post is the detail. Config. Prompts. The guardrails that keep Gemini from publishing a 65-score draft you'll regret next quarter.

Why Gemini is a publishing surface in 2026

Gemini stopped being just a chat interface a year ago. The Gemini CLI is open source on GitHub, runs natively in your terminal, and reads its tools from a settings.json file that supports stdio, SSE, and Streamable HTTP MCP transports. Three forces converged to make this the year you publish from it.

First, MCP became infrastructure. MCP hit 97 million monthly SDK downloads by March 2026, with 78% of enterprise AI teams running at least one MCP-backed agent in production (Digital Applied MCP Adoption Statistics 2026). It is no longer a side experiment. It is the default plumbing for any agent that touches the outside world.

Second, Google made MCP first-class. Google's December 2025 announcement rolled out remote MCP servers for Cloud Run, Cloud Storage, AlloyDB, Spanner, Pub/Sub, and a dozen more services. The Gemini CLI is the reference client. Add a third-party MCP server to your settings.json and Gemini treats it the same as a Google-native tool.

Third, AI search forced the publishing volume question. AI Overviews now appear in roughly 48% of all Google queries, a 58% year-over-year jump (Digital Applied AI Search Statistics 2026). For queries where AI Overviews appear, organic CTR crashed 61% (Search Engine Land coverage of Seer Interactive's study). Winning visibility now requires both more posts and sharper AEO structure on every post. The fastest way to ship that volume is to delete the publishing handoff entirely.

There is a strategic angle, too. Aleyda Solis, the SEO consultant on every AI-search panel that matters in 2026, frames it like this: "Treat AI search as both a performance channel and a visibility channel" (Humans of Martech podcast, January 2026). One stack. Two outcomes. Gemini plus MCP delivers both.

The Gemini publishing loop: a 4-move framework

Every blog you ship from Gemini follows the same four moves. Name them, and Gemini runs them autonomously.

| Move | What Gemini does | Tool it calls |
| --- | --- | --- |
| 1. Draft | Writes the post in markdown using your prompt and style rules | create_blog |
| 2. Score | Runs 14+ SEO checks and returns a 0–100 grade | check_blog_seo |
| 3. Patch | Applies surgical find-and-replace fixes for each issue | update_blog (patches) |
| 4. Publish | Pushes live to your domain, regenerates sitemap, pings Google | publish_blog |

Call this the Gemini Publishing Loop. The point: no human moves data between steps. Gemini reads the score, reads the patches, applies them, re-scores, and only publishes when the post crosses your quality threshold.

That is the whole framework. The remaining work lives in your prompt, your config, and your style file. Get those dialed in once and the next 50 posts ship the same way.

Phase 1: Prompt Gemini like a content brief

Open the Gemini CLI. You can drop a posts/draft.md skeleton into your repo or prompt Gemini directly. Either way, the prompt matters more than the surface.

A working prompt has four parts: topic, audience, evidence sources, and style. Here is the template that ships ranking posts.

```
Topic: How to roll out feature flags without breaking prod
Primary keyword: feature flag rollout
Audience: senior backend engineers at Series A startups
Word count: 2,800-3,400

Use these sources:
- LaunchDarkly's 2026 trunk-based dev report
- Anthropic's experiment notes on staged rollouts
- Two real GitHub PRs from our repo (links below)

Style: builder-to-builder. Contractions. Short paragraphs.
Code blocks for every claim about how to actually do the
thing. One named framework readers can quote (e.g.
"the canary checklist"). No "in today's fast-paced world."

Draft to a Quillly draft on website_id <id>. Folder: Engineering.
Tags: feature-flags, deployment, devops.
```

Notice the structure. A primary keyword. A target reader. Real sources for evidence. A style boundary. Notice what is missing — no length-padding instructions, no "remember to mention our product." If the post needs to mention the product, Gemini infers it from the topic.

Gemini reads that prompt, plans the section structure, calls create_blog with status: "draft", and returns a draft ID. The first draft is rarely perfect. That is fine. The next phase fixes it.

For reuse, drop the prompt template into a project-local .gemini/GEMINI.md file or your settings.json customInstructions field. Gemini pulls those rules into every session in that workspace, so you never retype the style guide. Pages with strong structural signals get cited at materially higher rates: content scoring 8.5/10+ on semantic completeness is 4.2x more likely to be cited by AI search engines (Digital Applied AI Search Statistics 2026).
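An illustrative .gemini/GEMINI.md — the file location comes from the Gemini CLI docs, but every rule inside is a placeholder you would swap for your own style guide:

```
# Style rules for every post in this workspace

- Voice: builder-to-builder. Contractions. Short paragraphs.
- Always include: one named framework, an FAQ with six questions.
- Never write: "in today's fast-paced world", unsourced stats.
- Default word count: 2,800-3,400 unless the prompt overrides it.
```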

Phase 2: Score the draft against 14 SEO criteria

The fastest way to ship bad SEO is to skip this phase. The fastest way to ship good SEO is to score before you publish.

Quillly's MCP exposes a tool called check_blog_seo. It runs 14+ SEO criteria against the saved draft and returns a 0–100 score, a grade, and a per-criterion breakdown. The criteria cover what actually moves rankings in 2026: meta tags, heading structure, keyword placement, internal linking, image alt text, readability grade, content length, schema readiness, and a handful more.

Why score before publish? Two stats explain it.

  • 44.2% of all LLM citations come from the first 30% of a page, with 31.1% from the middle and 24.7% from the last third (Digital Applied citation pattern study). If your intro does not lead with a direct answer, the rest of the post barely matters for AI Overviews and ChatGPT.

  • Pages with section lengths of 120 to 180 words between headings averaged 4.6 citations versus 2.7 for sections under 50 words (Search Engine Land citation study). Density beats both bloat and brevity.
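The second stat is easy to sanity-check locally before scoring. A rough sketch, assuming the draft is plain markdown with H2 headings (the draft string below is synthetic test data):

```python
def words_per_section(markdown: str) -> dict[str, int]:
    """Count words between consecutive H2 headings in a markdown draft."""
    sections: dict[str, int] = {}
    current, buf = None, []
    for line in markdown.splitlines():
        if line.startswith("## "):
            if current is not None:
                sections[current] = len(" ".join(buf).split())
            current, buf = line[3:].strip(), []
        elif current is not None:
            buf.append(line)
    if current is not None:
        sections[current] = len(" ".join(buf).split())
    return sections

# Synthetic draft: one section in the 120-180 word band, one far under it
draft = "## Intro\n" + "word " * 150 + "\n## Outro\n" + "word " * 30
counts = words_per_section(draft)
flagged = [h for h, n in counts.items() if not 120 <= n <= 180]
```

Sections outside the 120–180 band end up in `flagged`, which is the kind of issue check_blog_seo surfaces for you.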

The Gemini prompt for this phase:

```
Run check_blog_seo on the draft. If score < 85, run
get_blog_seo_patches and fix every issue with update_blog
in a single call. Re-score. Loop until 85+ or two passes,
whichever comes first. Then stop and show me the report.
```

That is the entire control flow. Gemini decides when to stop. You decide the threshold. A floor of 85 is reasonable for a long-form post. A floor of 90 if the post sits in a competitive cluster.
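The same control flow, written out as a Python sketch so the stopping conditions are unambiguous. The four callables are stand-ins for the MCP tools (check_blog_seo, get_blog_seo_patches, update_blog, publish_blog) — this is an illustration of the loop, not a client implementation:

```python
def publish_when_ready(draft_id, score_fn, patch_fn, apply_fn, publish_fn,
                       floor=85, max_passes=2):
    """Score, patch, re-score, and publish only past the quality floor."""
    score = score_fn(draft_id)
    passes = 0
    while score < floor and passes < max_passes:
        apply_fn(draft_id, patch_fn(draft_id))  # all patches in one call
        score = score_fn(draft_id)
        passes += 1
    if score >= floor:
        publish_fn(draft_id)
    return score, passes
```

Raise `floor` to 90 for competitive clusters; the loop shape does not change.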

Phase 3: Patch the draft without rewriting it

This is where most "AI publishing" workflows quietly fall apart. The agent reads the score, then tries to fix issues by regenerating the entire post. Regeneration breaks the things that were already working.

Patches solve this. A patch is a surgical find-and-replace operation. The MCP server returns patches like "replace this exact meta description string with this one" or "insert this H2 here." Gemini applies them in a single update_blog call. Nothing else changes.

A typical response from get_blog_seo_patches:

```json
[
  {
    "find": "## Why this matters",
    "replace": "## Why feature flag rollouts matter in 2026",
    "impact": "+4 (H2 keyword coverage)"
  },
  {
    "find": "Learn how to roll out features safely.",
    "replace": "Learn how to roll out features safely without breaking prod.",
    "impact": "+3 (meta description CTA)"
  }
]
```

Each patch comes with a projected score impact. Gemini stacks all of them into one update_blog call, re-runs check_blog_seo, and reports the new score. If a fix cannot be patched mechanically — like an entire missing FAQ section — Gemini writes the section and appends it.

Patches matter because freshness compounds. Content updated within the last 30 days receives roughly 3.2x more citations from AI search engines than content older than 90 days (Digital Applied AI Search Statistics 2026). Once a post is live, the same patch tool keeps it fresh on a schedule. You do not rewrite. You patch.
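Applying a patch list amounts to exact-string find-and-replace. A minimal sketch with a guard so a patch whose find string is missing or ambiguous fails loudly instead of silently — a hypothetical helper, not Quillly's implementation:

```python
def apply_patches(markdown: str, patches: list[dict]) -> str:
    """Apply exact find-and-replace patches; refuse missing or ambiguous targets."""
    for patch in patches:
        if markdown.count(patch["find"]) != 1:
            raise ValueError(f"patch target not found exactly once: {patch['find']!r}")
        markdown = markdown.replace(patch["find"], patch["replace"], 1)
    return markdown

draft = "## Why this matters\n\nLearn how to roll out features safely."
patched = apply_patches(draft, [
    {"find": "## Why this matters",
     "replace": "## Why feature flag rollouts matter in 2026"},
])
```

The exact-once guard is the point: it is what makes patching safer than regeneration, because a patch can only change the span it names.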

Phase 4: Publish to your subdirectory, not a subdomain

Most "AI blog" tools publish to something.subdomain.theirsite.com. That is a permanent SEO tax. Subdirectories share link equity automatically with the root domain. Subdomains divide it. Real-world case studies show subdirectories rank faster and more effectively on the first SERP page (Backlinko: Subdomain vs Subdirectory).

Quillly publishes blogs at yourdomain.com/blog/your-post-slug — same domain, same authority, same Search Console property. When Gemini calls publish_blog:

  1. The post goes live at your subdirectory URL.

  2. Your sitemap (/sitemap.xml) regenerates automatically.

  3. Your RSS feed updates.

  4. Google's Indexing API gets pinged for fast discovery.

  5. The blog appears in list_blogs with indexing.status: "submitted".

You can watch indexing progress without leaving the Gemini CLI. A follow-up list_blogs call shows when Google moves the post from submitted to indexed, plus search position, clicks, and impressions once Search Console data flows.

That last point is the one most workflows miss. Publishing is not the end of the loop. It is the start of the data feedback loop. The GSC half of that loop is covered in the 2026 MCP workflow for Google Search Console. The short version: GSC data feeds back into the next prompt, the next prompt produces a sharper draft, and the loop tightens with every cycle.

The Gemini settings.json config you actually need

Gemini reads MCP server config from ~/.gemini/settings.json for user-wide servers, or .gemini/settings.json inside a project for workspace-only servers (official Gemini CLI MCP docs). The shape is identical to Claude Desktop's and Cursor's, with one or two extra knobs.

Here is a working config with a Quillly server installed alongside the GitHub MCP server.

```json
{
  "mcpServers": {
    "quillly": {
      "httpUrl": "https://quillly.com/mcp",
      "headers": {
        "Authorization": "Bearer $QUILLLY_API_KEY"
      },
      "timeout": 30000
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "$GITHUB_TOKEN"
      },
      "trust": false
    }
  }
}
```

Five things that trip people up on the first install.

  1. Three transport types, one config shape. Use httpUrl for remote streaming HTTP servers (Quillly), url for SSE servers, and command plus args for stdio servers that run locally. Pick whichever the server publishes and the rest of the block stays the same.

  2. Environment variable interpolation. Use $QUILLLY_API_KEY (or ${QUILLLY_API_KEY}) instead of pasting the literal key. Set the variable in your shell config (.zshrc, .bashrc) so it does not end up in a dotfiles repo. Gemini resolves the variable at startup and never writes the literal key to disk.

  3. The **trust** flag. Defaults to false, which means Gemini asks before every tool call. Set it to true for servers you fully own and audit — like your own publishing stack — to skip the confirmation prompt. Leave it false for anything that touches external write APIs you do not control.

  4. Tool filters. includeTools and excludeTools let you scope a server to the exact tools you want. Quillly exposes up to 23 tools (12 on Free, all 23 on Pro), and Gemini will pull every one of them by default. If you only want the publishing flow, scope it: "includeTools": ["create_blog", "check_blog_seo", "get_blog_seo_patches", "update_blog", "publish_blog"]. Less surface area, less chance Gemini reaches for the wrong tool.

  5. The 600-second timeout default. Gemini's MCP timeout is 600,000 ms by default. For a fast hosted server like Quillly, drop it to 30,000 to surface failures sooner. For long-running tools — say, a search MCP that crawls a few thousand URLs — leave the default.
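The interpolation pattern from point 2 is worth understanding because you can reuse it in your own tooling. A sketch of how a $VAR or ${VAR} reference might be resolved at load time — an illustration of the pattern, not Gemini's actual resolver:

```python
import json
import os
import re

def resolve_env(config_text: str) -> dict:
    """Expand $VAR and ${VAR} references in a settings.json-style string."""
    def sub(match):
        name = match.group(1) or match.group(2)
        value = os.environ.get(name)
        if value is None:
            raise KeyError(f"environment variable not set: {name}")
        return value
    expanded = re.sub(r"\$\{(\w+)\}|\$(\w+)", sub, config_text)
    return json.loads(expanded)

os.environ["QUILLLY_API_KEY"] = "qk_test_123"  # normally set in .zshrc
config = resolve_env('{"headers": {"Authorization": "Bearer $QUILLLY_API_KEY"}}')
```

Failing hard on an unset variable is deliberate: a config that silently ships a literal "$QUILLLY_API_KEY" header is a much worse failure mode.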

Verify the install: restart the CLI and run gemini mcp list. Each configured server should show as connected. Type @quillly in a Gemini session and the available tools autocomplete (create_blog, check_blog_seo, publish_blog, and friends). That is your green light.

If you have configured Cursor, Claude Desktop, ChatGPT, or Windsurf, the JSON shape is almost identical because all five clients speak the same MCP spec. The Cursor walkthrough lives in the 2026 Cursor publishing guide, the Claude Desktop walkthrough is in the 2026 Claude Desktop publishing workflow, and the ChatGPT publishing workflow covers the OpenAI side. Same mcpServers block, slightly different file paths.

Gemini CLI vs Gemini app vs Gemini Code Assist: which one publishes?

Google ships Gemini in three surfaces and they do not all speak MCP the same way. Here is the breakdown.

| Surface | MCP support | Best for |
| --- | --- | --- |
| Gemini CLI | Full — stdio, SSE, HTTP transports | Terminal-native devs, agentic workflows, scripted publishing |
| Gemini Code Assist (agent mode) | Yes, via the same settings.json schema | IDE-bound devs (VS Code, JetBrains) who want MCP inside their editor |
| Gemini app (web/mobile) | Limited — official Google services only via remote MCP | Conversational drafting; pair with the CLI for the publish step |

For a publishing workflow, the Gemini CLI is the right surface. It runs in your terminal, reads the standard settings.json schema, and treats third-party MCP servers as first-class. Gemini Code Assist works too if you are an IDE-bound developer — see Google's agent mode docs for the setup. The web app is great for drafting prose but does not yet talk to arbitrary third-party MCP servers, so use it for ideation and hand the structured publish step to the CLI.

If you are coming from Android Studio's Gemini integration or Firebase Studio, both also support MCP via similar JSON config. The publishing pattern in this guide ports directly.

Old workflow vs Gemini + MCP: the comparison

The reason this loop matters is not theoretical. It is a flat-out time delete on the part of blog work nobody enjoys. Here is the comparison.

| Step | Old workflow (chat → WordPress) | Gemini + MCP |
| --- | --- | --- |
| Draft post | Write in Gemini app, copy markdown | Prompt the Gemini CLI |
| Format for CMS | Paste into WordPress, fix blocks | Already markdown — no transform |
| Add featured image | Upload to media library, copy URL, paste | Gemini calls search_images, picks one |
| Set meta tags | Manually type into Yoast or Rank Math | Gemini generates, validates length |
| Internal links | Search old posts, copy URLs back | Gemini calls suggest_internal_links |
| SEO check | Switch to Surfer or Clearscope, score, fix | Gemini calls check_blog_seo, patches |
| Publish | Click Publish, hope nothing 504s | Gemini calls publish_blog, returns URL |
| Submit to Google | Manually copy URL into GSC | Auto-pinged via Indexing API |
| Time per post | 90–180 minutes | 5–15 minutes |

The hidden cost in the left column is not just minutes. It is context switches. Every tab you open is a chance to lose the thread of what you were trying to say. The right column keeps you in one window, in one conversation, in one mental mode.

The contrarian take buried in this approach: stop moving your drafts out to a CMS. Pull publishing into your terminal. The CMS as a separate destination was a 2010s assumption. In a 2026 agent stack, the terminal is the publishing surface. Your CMS is a Gemini session.

Five mistakes Gemini users make on their first publish

Most people running this loop the first time hit the same five potholes. Skip them.

1. Setting **trust: true** on every server. It is tempting to flip every MCP server to trust: true so Gemini stops asking permission. Do not. Trust the servers you own and audit. Leave external servers — search, scraping, anything that hits arbitrary URLs — at trust: false so Gemini surfaces destructive calls before executing them. The five-second confirmation prompt is the cheapest insurance you will ever buy.

2. Letting Gemini publish at score 70. The default temptation is to ship the first draft. Do not. Set a hard threshold (85 floor) and let Gemini loop until it crosses. The patch step is cheap. Ranking from a low-quality post is not. Sites publishing 50–100 quality AI articles with human editing saw traffic increases of 30–80%, while sites publishing 1,000+ unedited AI articles saw traffic drops of 40–90% (Pravin Kumar's analysis of Ahrefs data, 2026). The difference is quality control, not AI usage.

3. Skipping the FAQ section. People-Also-Ask-style FAQs are the single most-cited block by ChatGPT and Google AI Overviews. The structural patterns LLMs reward live in the 2026 AEO playbook for getting cited. If your post does not have an FAQ, you are leaving citations on the table. Ask Gemini for one explicitly in the prompt: six questions, 50–90 word answers, direct answer in the first sentence.

4. Publishing to a subdomain because the platform makes it easy. Subdomains divide link equity. Subdirectories consolidate it. Take the extra hour to wire up yourdomain.com/blog once. You will thank yourself when you have shipped 50 posts and they are all stacking authority on a single domain instead of fragmenting it across four.

5. Treating MCP as a single tool, not a stack. The real power is not quillly alone. It is quillly plus github (publish changelogs as posts), plus a search MCP (fresh stats on demand), plus your own data MCP (real customer language and product metrics). Composability is the wedge. The broader publishing stack is mapped in the 2026 MCP servers for SEO guide.

A real Gemini prompt that ships on the first try

Theory is fine. Here is a prompt you can paste straight into the Gemini CLI and ship a post tonight. Edit the bracketed parts.

```
You are my blog publishing agent. Use the quillly MCP server.

Topic: [your topic]
Primary keyword: [keyword]
Target audience: [who reads this]
Word count: 3,000-3,600
Voice: builder-to-builder, contractions, no marketing fluff,
no "in today's fast-paced world."

Workflow:
1. Search the web for 5+ recent stats with sources
2. Draft the post in markdown with: H1 under 60 chars,
   40-60 word direct-answer paragraph after the H1,
   8-10 H2 sections, FAQ with 6 questions, conclusion
3. Call create_blog (status: draft) on website_id [your_id]
4. Call check_blog_seo. If score < 85, call
   get_blog_seo_patches and apply all fixes via update_blog
   in one call. Re-score. Loop max 2 times.
5. Call suggest_internal_links and apply the relevant ones
6. Call publish_blog when score is 85+ and meta description
   is 160 chars or less
7. Report the live URL and final score
```
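Step 6's gate — score 85+, meta description at 160 chars or less — plus the H1 rule from step 2 are mechanical checks you can also run yourself before approving a publish. A hypothetical pre-publish guard (the sample H1 and description below are illustrative, not generated output):

```python
def publish_gate(score: int, h1: str, meta_description: str,
                 floor: int = 85) -> list[str]:
    """Return the list of reasons a draft is not ready to publish."""
    problems = []
    if score < floor:
        problems.append(f"score {score} below floor {floor}")
    if len(h1) > 60:
        problems.append(f"H1 is {len(h1)} chars (max 60)")
    if len(meta_description) > 160:
        problems.append(f"meta description is {len(meta_description)} chars (max 160)")
    return problems

issues = publish_gate(
    score=88,
    h1="Publish Blogs from Gemini to Your Own Domain",
    meta_description="Wire the Gemini CLI to an MCP publishing server and ship "
                     "scored, patched posts to your own domain in minutes.",
)
# An empty list means the draft clears the gate
```

An empty list is the green light; anything else goes back through the patch loop.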

That is the whole thing. Gemini runs to completion. You get a live URL with no copy-paste anywhere in the loop.

If you want to schedule posts instead of shipping immediately, swap step 6 for update_blog with a future published_at timestamp — Quillly's scheduled publishing handles the rest. For batch publishing across a content calendar, the 2026 programmatic SEO with MCP guide walks through shipping 100 posts from one conversation.

Frequently asked questions

Do I need to host my own MCP server to publish from Gemini?

No. Use a hosted MCP server like Quillly that exposes blog publishing as remote tools. Add the server URL and your API key to ~/.gemini/settings.json and Gemini calls the tools directly. Self-hosting an MCP server is an option if you have specific compliance requirements, but it is not required to start. Most indie hackers and small teams should pick the hosted option to skip server maintenance entirely and stay focused on shipping content.

Can the Gemini app publish blogs, or only the Gemini CLI?

The Gemini CLI is the right surface for end-to-end publishing because it talks to arbitrary third-party MCP servers. The Gemini web and mobile app currently only speaks MCP to official Google services like Cloud Run and AlloyDB, not to third-party publishing servers. The clean pattern is to use the app for ideation and drafting, then hand off the final brief to the CLI for the structured publish step. Gemini Code Assist agent mode also works inside VS Code and JetBrains.

How does this compare to publishing from Claude Desktop or Cursor?

Functionally identical. All MCP-compatible AIs read the same kind of settings.json, call the same tools on the same server, and produce the same live URL. The differences are surface-level: config file locations (~/.gemini/settings.json vs Claude Desktop's claude_desktop_config.json vs ~/.cursor/mcp.json), tooling around the chat UI, and which model is doing the writing. Pick the AI you already live in. The publishing layer does not care.

How do I keep my API key out of settings.json?

Use environment variable interpolation. Set QUILLLY_API_KEY in your shell config (.zshrc, .bashrc, or a secrets manager) and reference it in the config as $QUILLLY_API_KEY or ${QUILLLY_API_KEY}. Gemini resolves the variable at startup and never writes the literal key to disk. The same pattern works for any header or env field across all your MCP servers, including OAuth client secrets and database passwords.

Will Google penalize blogs published by Gemini?

Not if the content is genuinely useful. A 600,000-page Ahrefs study found a near-zero correlation (0.011) between AI assistance and ranking penalties, with 86.5% of top-ranking content using some form of AI assistance (Pravin Kumar analysis, 2026). Google penalizes unhelpful content, not the production method. The fix is not to avoid AI. It is to add human judgment in the loop. Use Gemini for drafting and patching. Review the final draft yourself before approving publish.

How do I track which posts are actually getting indexed?

Gemini can call list_blogs and read each post's indexing status. Statuses include submitted, indexed, not_indexed, and error. After publishing, Google's Indexing API gets notified automatically, and a background job polls Search Console for status changes. Search position, clicks, and impressions populate in list_blogs once GSC data flows, usually within a few days for a new post. Ask Gemini for a weekly indexing report and get a one-glance view of every blog's coverage state without opening Search Console.

What if Gemini calls the wrong tool or hallucinates a parameter?

Two guardrails. First, leave trust: false for any server that writes to the outside world so Gemini surfaces every call before executing it. Second, scope each server with includeTools so the model can only see the tools you want it to use. With those two flags set, the worst case is a confirmation prompt for a misnamed tool, which you can deny. Reading parameters before approving is the difference between a five-minute publish and a five-hour rollback.

Does this work with Gemini 2.5 Pro and Gemini 3 the same way?

Yes. The MCP transport layer is model-agnostic, so the publishing loop in this guide works against whichever Gemini model is current in your CLI. Newer models tend to plan multi-tool workflows more reliably, which means they hit the score threshold in fewer patch passes. The config and the prompt template do not change.

Three takeaways before you ship

The whole point of this guide is one move: delete the publishing handoff. Three specifics to walk away with.

  1. Set up the config once. Drop one mcpServers block into ~/.gemini/settings.json, set the env var, restart the CLI. Five minutes of setup pays back on every future post and stays out of your way forever.

  2. Run the four-move loop every time. Draft → Score → Patch → Publish. Do not skip the score step. A score floor of 85 cuts your "I should have edited that" regret rate to near zero and keeps you out of the 40–90% traffic-drop bucket for sites that ship raw AI output.

  3. Keep your domain on the post. Subdomains leak link equity. Subdirectories compound it. Pick the subdirectory once and stop fragmenting your authority across hosts.

Want Gemini to actually publish the post it just wrote? Connect Quillly to Gemini in 30 seconds.