How I used Claude to auto-generate structured blog content at scale across multiple tenant sites.
March 31, 2026

Most content pipelines are single-tenant by design. One site, one database, one set of posts. That works until you need to run the same CMS infrastructure for multiple brands, clients, or verticals — and suddenly you're copy-pasting codebases and hoping the configs stay in sync.
This post walks through the architecture behind a multi-tenant blog that uses the Claude API to generate and publish structured content at scale, with a single Payload CMS instance serving every tenant.
The goal is one backend, many front-ends. A single Payload CMS deployment handles all tenants. Each tenant has its own slug, theme colors, and site metadata stored in a Tenants collection. Posts are scoped to a tenant via a required relationship field.
The front-end is a Next.js deployment that reads NEXT_PUBLIC_TENANT_SLUG from its environment. Want a new tenant? Create the record in Payload, spin up a new Vercel deployment with the right env var, and you're done — no code changes.
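Since a front-end deployment with a missing or misspelled env var silently renders nothing, it can help to validate the slug once at startup. A minimal sketch (the helper name and location are assumptions, not part of the original codebase):

```typescript
// apps/web/src/lib/tenant.ts (hypothetical)
// Read the tenant slug once and fail loudly if the deployment is misconfigured.
function getTenantSlug(env: Record<string, string | undefined> = process.env): string {
  const slug = env.NEXT_PUBLIC_TENANT_SLUG
  if (!slug) {
    throw new Error('NEXT_PUBLIC_TENANT_SLUG is not set: every front-end deployment must pin a tenant')
  }
  return slug
}
```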
// apps/web/src/lib/payload-client.ts

export async function getTenant(slug: string) {
  const res = await fetch(
    `${process.env.NEXT_PUBLIC_PAYLOAD_URL}/api/tenants?where[slug][equals]=${slug}&limit=1`,
    { next: { revalidate: 300 } } // 5-minute cache
  )
  const data = await res.json()
  return data.docs?.[0] ?? null
}

export async function getPosts(tenantSlug: string) {
  const tenant = await getTenant(tenantSlug)
  if (!tenant) return []
  const res = await fetch(
    `${process.env.NEXT_PUBLIC_PAYLOAD_URL}/api/posts?where[tenant][equals]=${tenant.id}&where[status][equals]=published&sort=-publishedAt&limit=50`,
    { next: { revalidate: 60 } }
  )
  const data = await res.json()
  return data.docs ?? []
}
The tenant relationship on every post is the critical piece. Without it, you have no isolation — posts bleed across tenants and your per-tenant front-ends show the wrong content.
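Since a forgotten tenant filter is the main failure mode, one option (a sketch, not from the original codebase) is to centralize query construction so no call site can omit the constraint:

```typescript
// Hypothetical helper: build the Payload REST query string for posts,
// always injecting the tenant filter so it cannot be left out by accident.
function postsQuery(tenantId: number, opts: { status?: string; limit?: number } = {}): string {
  const params = new URLSearchParams()
  params.set('where[tenant][equals]', String(tenantId))
  params.set('where[status][equals]', opts.status ?? 'published')
  params.set('sort', '-publishedAt')
  params.set('limit', String(opts.limit ?? 50))
  return `/api/posts?${params.toString()}`
}
```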
Two collections do the heavy lifting: Tenants and Posts.
// apps/cms/src/collections/Tenants.ts
import type { CollectionConfig } from 'payload'

export const Tenants: CollectionConfig = {
  slug: 'tenants',
  fields: [
    { name: 'name', type: 'text', required: true },
    { name: 'slug', type: 'text', required: true, unique: true, index: true },
    { name: 'description', type: 'textarea' },
    {
      name: 'theme',
      type: 'group',
      fields: [
        { name: 'primary', type: 'text' },
        { name: 'secondary', type: 'text' },
        { name: 'accent', type: 'text' },
      ],
    },
    {
      name: 'status',
      type: 'select',
      options: ['active', 'inactive', 'maintenance'],
      defaultValue: 'active',
    },
  ],
}
// apps/cms/src/collections/Posts.ts (tenant field only)
{
  name: 'tenant',
  type: 'relationship',
  relationTo: 'tenants',
  required: true,
  index: true, // critical for query performance
  hasMany: false,
  admin: { position: 'sidebar' },
}
The index: true on the tenant field matters. Without it, fetching all posts for a tenant triggers a full-table scan. With it, Payload generates a PostgreSQL index and the query is fast even with thousands of posts.
The content pipeline follows a simple loop: define a brief, call Claude, parse the response, publish.
import Anthropic from '@anthropic-ai/sdk'

interface ContentBrief {
  topic: string
  audience: string
  keywords: string[]
  tone: 'technical' | 'casual'
  wordCount: number
}

async function generatePost(brief: ContentBrief, client: Anthropic) {
  const response = await client.messages.create({
    model: 'claude-sonnet-4-6',
    max_tokens: 4000,
    messages: [{
      role: 'user',
      content: `Write a blog post for ${brief.audience}.
Topic: ${brief.topic}
Keywords: ${brief.keywords.join(', ')}
Tone: ${brief.tone}
Length: ~${brief.wordCount} words
Return ONLY valid JSON:
{
"title": "SEO title (50-60 chars)",
"description": "Meta description (150-160 chars)",
"content": "Full MDX content",
"tags": ["tag1", "tag2"],
"seoKeywords": ["keyword1", "keyword2"]
}`
    }],
  })

  const text = response.content[0].type === 'text' ? response.content[0].text : ''
  const match = text.match(/\{[\s\S]*\}/)
  if (!match) throw new Error('No JSON in response')
  return JSON.parse(match[0])
}
The key constraint is the output format. Asking Claude to return raw Markdown means you get a Markdown document. Asking for JSON with a content field means you get a structured object you can immediately pass to your Payload REST endpoint. The schema instruction in the prompt is the difference between a pipeline that works reliably and one that requires manual cleanup.
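Even with the schema instruction, it's worth validating the parsed object before anything reaches the publish step. A minimal hand-rolled guard (an assumption; the original pipeline may validate differently, e.g. with a schema library):

```typescript
interface GeneratedPost {
  title: string
  description: string
  content: string
  tags: string[]
  seoKeywords: string[]
}

// Narrow an unknown parsed value to GeneratedPost, throwing on any
// missing or mistyped field so a bad generation never gets published.
function assertGeneratedPost(value: unknown): GeneratedPost {
  const v = value as Record<string, unknown> | null
  const isStr = (x: unknown): x is string => typeof x === 'string' && x.length > 0
  const isStrArray = (x: unknown): x is string[] => Array.isArray(x) && x.every(isStr)
  if (!v || !isStr(v.title) || !isStr(v.description) || !isStr(v.content)) {
    throw new Error('Generated post is missing title, description, or content')
  }
  if (!isStrArray(v.tags) || !isStrArray(v.seoKeywords)) {
    throw new Error('Generated post has invalid tags or seoKeywords')
  }
  return v as unknown as GeneratedPost
}
```

Calling `assertGeneratedPost(JSON.parse(match[0]))` turns a malformed generation into a loud failure instead of a half-broken post.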
Once you have the generated content, publishing is a POST to the /api/posts endpoint:
async function publishPost(opts: {
  title: string
  slug: string
  description: string
  mdxContent: string
  tags: string[]
  seoKeywords: string[]
  tenantId: number
  categoryId?: number
  mediaId?: number
}) {
  const res = await fetch(`${CMS_URL}/api/posts`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `API-Key ${process.env.PAYLOAD_API_KEY}`,
    },
    body: JSON.stringify({
      title: opts.title,
      slug: opts.slug,
      description: opts.description,
      mdxContent: opts.mdxContent,
      content: {
        root: {
          type: 'root',
          children: [{
            type: 'paragraph',
            children: [{ type: 'text', text: 'See mdxContent field.' }],
          }],
        },
      },
      tenant: opts.tenantId,
      category: opts.categoryId,
      featuredImage: opts.mediaId,
      status: 'published',
      publishedAt: new Date().toISOString(),
      tags: opts.tags.map(t => ({ tag: t })),
      seoKeywords: opts.seoKeywords.map(k => ({ keyword: k })),
    }),
  })

  if (!res.ok) throw new Error(await res.text())
  return res.json()
}
Notice the dual content / mdxContent pattern. Payload's Lexical editor stores rich text as JSON. Scripts that bypass the editor need to provide a valid Lexical document — even if it's a stub. The actual readable content lives in mdxContent as a plain MDX string, which the Next.js front-end renders with next-mdx-remote.
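One input publishPost needs that the generation step doesn't return is the slug. A small helper that derives it from the generated title is one way to bridge the gap (a hypothetical sketch; the original pipeline may derive slugs differently):

```typescript
// Hypothetical helper: derive a URL-safe slug from a generated title.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .normalize('NFKD')                 // split accented characters
    .replace(/[\u0300-\u036f]/g, '')   // drop combining marks
    .replace(/[^a-z0-9\s-]/g, '')      // strip remaining punctuation
    .trim()
    .replace(/[\s-]+/g, '-')           // collapse whitespace and hyphens
}
```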
Any pipeline that can fail mid-run needs idempotency. Before creating a post, check whether the slug already exists:
async function postExists(slug: string): Promise<boolean> {
  const res = await fetch(
    `${CMS_URL}/api/posts?where[slug][equals]=${slug}&limit=1`,
    { headers: { Authorization: `API-Key ${process.env.PAYLOAD_API_KEY}` } }
  )
  const data = await res.json()
  return (data.docs?.length ?? 0) > 0
}
Re-running the script after a partial failure will skip already-published posts and continue from where it left off. No duplicate content, no wasted API calls.
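Putting the loop together, one way to structure the run (a sketch, with the generate and publish effects passed in as parameters so the skip logic is easy to test in isolation):

```typescript
interface PipelineDeps {
  exists: (slug: string) => Promise<boolean>
  generate: (brief: { topic: string; slug: string }) => Promise<unknown>
  publish: (slug: string, post: unknown) => Promise<void>
}

// Process each brief in order, skipping slugs that already exist so a
// re-run after a partial failure resumes instead of duplicating posts.
async function runPipeline(
  briefs: { topic: string; slug: string }[],
  deps: PipelineDeps,
): Promise<{ published: string[]; skipped: string[] }> {
  const published: string[] = []
  const skipped: string[] = []
  for (const brief of briefs) {
    if (await deps.exists(brief.slug)) {
      skipped.push(brief.slug)
      continue
    }
    const post = await deps.generate(brief)
    await deps.publish(brief.slug, post)
    published.push(brief.slug)
  }
  return { published, skipped }
}
```

In production the deps would be `postExists`, `generatePost`, and `publishPost`; in a test they can be in-memory stubs.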
Each tenant front-end is an independent Vercel deployment pointing at the same CMS:
cms.arunabh.me → Payload CMS (Azure Container Apps)
blog.arunabh.me → Next.js (NEXT_PUBLIC_TENANT_SLUG=arunabh-blog)
tech.yourdomain.com → Next.js (NEXT_PUBLIC_TENANT_SLUG=tech)
design.yourdomain.com → Next.js (NEXT_PUBLIC_TENANT_SLUG=design)
The CMS never needs to know about the front-ends. The front-ends are fully stateless — all state lives in Payload. Adding a new tenant is a database operation, not a deployment.