
The AI Content Factory: How One Person Publishes Daily Across Three Platforms

solo business · content creation · AI · multi-platform · indie entrepreneurship

In mid-2025, someone asked me: how do you get content out every single day?

At that point I was simultaneously running three businesses, managing a 700-person community at the Solo Unicorn Club, and iterating on JewelFlow's product. I had no content team, no operations assistant. Writing, formatting, publishing, reviewing analytics — on the surface, I was doing all of it.

In reality, 75% of the execution work was handled by an AI pipeline. I only did the remaining 25%.

This article breaks down exactly how that content factory is built: what triggered it, how it's designed, what tools it uses, and what went wrong along the way.


Background: Why Content Production Became a Bottleneck

I create content with clear business objectives: driving trial signups for JewelFlow, continuously bringing new members into the Solo Unicorn Club, and building recognition for the Jessie Qin brand in the AI entrepreneurship space.

In early 2025, I was publishing about 3–4 pieces per week, mostly on LinkedIn. Traffic was decent, but coverage was too narrow.

The problem: hitting daily multi-platform publishing by simply "writing more" doesn't work. A 1,500-word LinkedIn long-form post takes me two to three hours to write. Reformatting it for Twitter takes another 40 minutes, WeChat formatting another 30, plus research time — a single piece easily exceeds 4 hours of total work.

4 hours × every day = impossible.

So the question was never "how do I write more." It was: which parts of this process don't need me to do them at all?

The answer: research, first drafts, format conversion, and scheduling. Not one of these requires my judgment — they just need someone to execute.


System Design: Four Stages of the Content Pipeline

I broke the journey from idea to publication into four stages, each backed by one or more AI nodes.

Topic bank ──► Research Agent ──► Drafting Agent ──► Format Conversion Agent ──► Scheduled Publishing
                                            │
                                       QA Agent (score < 75 → sent back for revision)

The entire pipeline runs on structured JSON, so each node can be swapped independently without disrupting upstream or downstream.
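
The article doesn't show the schema, but a minimal version of such a payload, with purely illustrative field names, might look like this:

```json
{
  "topic_id": "notion-ai-vs-obsidian",
  "series": "ai-tool-comparison",
  "core_argument": "Integrated AI beats plugin ecosystems for non-technical users",
  "stage": "research_complete",
  "research_brief": { "word_count": 650, "sources": ["..."] },
  "drafts": { "linkedin": null, "twitter": null, "wechat": null },
  "qa": { "score": null, "revision_round": 0 }
}
```

Because every node reads and writes this one shape, swapping the Drafting Agent's model, say, doesn't require touching the Research Agent at all.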

Stage 1: Topic Bank Management

This is the only part I maintain entirely by hand — and it sets the quality ceiling for the whole system.

I maintain a topic bank in Notion, organized into six series: AI tool comparisons, company teardowns, Agent framework deep-dives, daily insights, solo entrepreneurship + AI, and hot takes. Each series maps to different platform audiences and content tones.

Every Sunday evening, I spend 30–40 minutes planning the coming week's topics, writing down the core argument and 1–2 specific examples I want to use. This is my real "thinking" time. Everything else is execution.

The topic bank currently holds 300+ queued topics, covering roughly the next four months of content. This buffer means the system keeps running even on days when I have zero new ideas.

Stage 2: Research Agent

Trigger: manual or scheduled batch processing, 5–10 topics at a time.

Task: for each topic, run targeted web searches to pull the latest data points, relevant tool pricing, competitive comparisons, and recent industry events. Output is a structured research brief, 500–800 words, JSON format.

Tools: Perplexity API for web search + Claude (claude-sonnet-4-6) for information synthesis and deduplication.

A concrete example: the topic "Notion AI vs Obsidian + AI Plugins Comparison." The Research Agent automatically pulls both tools' latest pricing, major feature update logs, user review summaries from Reddit and Product Hunt, and use case definitions. If I did this manually, it would take at least an hour. The Agent finishes in about 3 minutes.
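
As a rough sketch of what one Research Agent call could look like in Python (the prompt wording, the `sonar` model name, and the brief's JSON keys are all my assumptions, not the author's actual implementation):

```python
import json
import os
import urllib.request

PPLX_URL = "https://api.perplexity.ai/chat/completions"

def build_research_prompt(topic: str, core_argument: str) -> str:
    """Assemble the research instruction for one topic (structure is illustrative)."""
    return (
        f"Topic: {topic}\n"
        f"Core argument to support or challenge: {core_argument}\n"
        "Find: latest pricing, major feature updates, user sentiment "
        "(Reddit, Product Hunt), and 2-3 recent data points.\n"
        "Return a 500-800 word brief as JSON with keys: "
        "summary, data_points, pricing, sources."
    )

def run_research(topic: str, core_argument: str) -> dict:
    """Send one topic to Perplexity's chat-completions endpoint and parse the brief."""
    payload = {
        "model": "sonar",  # assumed model name; check current Perplexity docs
        "messages": [
            {"role": "user", "content": build_research_prompt(topic, core_argument)}
        ],
    }
    req = urllib.request.Request(
        PPLX_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['PPLX_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The brief itself is the assistant message content, requested as JSON above.
    return json.loads(body["choices"][0]["message"]["content"])
```

Batch processing 5–10 topics is then just a loop over `run_research`, writing each brief back into the pipeline payload.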

Stage 3: Drafting Agent + QA Agent

Once the research brief enters the Drafting Agent, it generates three versions:

  1. LinkedIn long-form (800–1,200 words in Chinese, with English version)
  2. Twitter main post (under 280 characters, with a 5–8 tweet expansion thread)
  3. WeChat article (1,500–2,000 words, with section subheadings and engagement questions)

The three versions aren't translations of each other — they're rewritten for each platform's audience. LinkedIn readers skew toward working professionals who value frameworks and data. Twitter demands high density — the first sentence has to hook. WeChat readers come from the Solo Unicorn Club and want more concrete operational detail.

After drafting, the QA Agent scores each piece across dimensions including: information density, clarity of argument, alignment with the Jessie brand voice, and whether specific data supports the claims. Total score out of 100 — anything below 75 gets sent back for revision, with a two-round limit. After two rounds, it's flagged for manual review.
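
The score-and-revise loop described here is straightforward to sketch; `score_fn` and `revise_fn` below stand in for the actual Claude calls, and the structure (not the code) is what the article describes:

```python
def qa_gate(draft: str, score_fn, revise_fn, threshold: int = 75, max_rounds: int = 2):
    """Score a draft; below threshold, send it back for revision up to max_rounds,
    then flag it for manual review (mirrors the two-round limit described above)."""
    for round_no in range(max_rounds + 1):
        score = score_fn(draft)
        if score >= threshold:
            return {"draft": draft, "score": score, "status": "approved"}
        if round_no == max_rounds:
            break  # two revision rounds exhausted
        draft = revise_fn(draft, score)
    return {"draft": draft, "score": score, "status": "manual_review"}
```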

Tools: Claude (claude-sonnet-4-6) for both drafting and QA.

A note: tuning the Drafting Agent's prompt took a long time. The main challenges were avoiding "AI-speak" (things like "it goes without saying" or "needless to say") and preventing template fatigue (every piece structured identically). The current version was calibrated over roughly 60 pieces of content. First-pass approval rate is around 80%; the remaining 20% I'll scan and tweak a couple of sentences.

Stage 4: Format Conversion and Scheduled Publishing

Content that passes QA enters the final stage: formatting + auto-scheduling.

LinkedIn versions get YAML frontmatter with topic, date, platform, and word count tags. Twitter versions are split into copy-paste-ready format. WeChat versions get formatting markup (bold, dividers).
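
A frontmatter block for a LinkedIn version might look like this (field names and values are illustrative, not the author's exact schema):

```yaml
---
topic: notion-ai-vs-obsidian
series: ai-tool-comparison
platform: linkedin
date: 2025-09-15
word_count: 1040
---
```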

Scheduling logic: ensure each platform has at least one piece of content per day. Twitter gets the highest volume — 1–2 main posts daily. LinkedIn gets one post per day, primarily on weekdays. WeChat gets 3–4 articles per week.
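
That cadence can be sketched as a simple per-weekday quota function. The counts come from the article; the specific WeChat days are my assumption, since it only says 3–4 per week:

```python
def slots_for_day(weekday: int) -> list[str]:
    """Return the platform slots to fill for one day (0 = Monday ... 6 = Sunday)."""
    slots = ["twitter"] * 2          # 1-2 main posts daily; this sketch uses 2
    if weekday < 5:                  # weekdays only
        slots.append("linkedin")     # one post per weekday
    if weekday in (0, 2, 4):         # WeChat 3-4x/week; Mon/Wed/Fri assumed
        slots.append("wechat")
    return slots
```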

Tools: n8n (workflow orchestration) + Buffer (Twitter/LinkedIn scheduling) + manual push for WeChat (the WeChat API has too many restrictions for individual accounts — I haven't found a fully automated solution yet).


Tool Stack and Costs

| Use Case | Tool | Monthly Cost | Notes |
|---|---|---|---|
| Web research | Perplexity API | ~$8 | Usage-based, ~500 queries/month |
| Drafting + QA | Claude API (claude-sonnet-4-6) | ~$22 | Covers all drafting, QA, and revision calls |
| Workflow orchestration | n8n (self-hosted) | $8 (server) | Running on Railway |
| Scheduled publishing | Buffer | $15 | Twitter + LinkedIn |
| Topic bank + content archive | Notion | $0 (personal plan) | |
| **Total** | | **~$53/month** | |

This $53/month supports three platforms, 2–4 pieces of content per day, and roughly 80–100 pieces per month. If I hired someone instead — even the most junior content assistant — Beijing market rates alone start at 5,000–8,000 RMB/month.


Real Numbers

The system has been running steadily for about five months. Key metrics:

Output: monthly average of 92 pieces (Twitter posts + LinkedIn articles + WeChat articles combined), with a peak month hitting 117. When I was writing everything manually in early 2025, the monthly average was 18.

My actual time investment: about 3.5 hours per week on content. Breakdown: Sunday topic planning 35 minutes, daily QA output scan about 15 minutes, weekly manual WeChat push about 40 minutes.

Performance data: LinkedIn monthly views grew from roughly 12,000 in early 2025 to 41,000 now, primarily driven by increased publishing frequency. Twitter was even more dramatic — going from inconsistent updates (sometimes a week without posting) to daily publishing, the account gained a net 1,200 followers within 30 days.

Quality retention rate: the Drafting Agent's first-pass approval rate is about 80%, with a combined approval rate of 93% after second-round revisions. The remaining 7% I review and lightly edit. No content has ever been scrapped entirely due to quality issues.


Lessons From Mistakes

Mistake #1: The Drafting Agent was "over-optimized" early on

When I first wrote the prompt, I loaded it with style requirements: must include specific numbers, must have comparisons, must include personal experience, must have actionable steps... The result was that every piece followed the same template, and readers could feel the formulaic structure almost immediately.

The fix: replace mandatory requirements with "preferred" and "optional" items. Let the Agent decide which techniques to use based on the topic's nature. Once I added that flexibility, the formulaic feel dropped, and content differentiation actually improved.

Mistake #2: The three platform versions were "borrowing" sentences from each other

I once discovered that the LinkedIn version and the WeChat version shared entire paragraphs nearly verbatim. Readers who followed both platforms would feel like they were being fed repeats.

The cause: the drafting prompt didn't explicitly prohibit content reuse, so the Agent would sometimes copy paragraphs from one version to another with minor tweaks.

The fix: I added an explicit constraint to the prompt — the three versions must approach the topic from different angles, with at least 60% of sentences being unique. Blunt, but effective.
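
A constraint like this can also be verified mechanically after drafting. Here is a naive sentence-level overlap check, which I'm adding as an illustration; it does exact matching on normalized sentences, which is stricter than the semantic judgment a model would apply:

```python
import re

def sentence_overlap(a: str, b: str) -> float:
    """Fraction of a's sentences that also appear (normalized) in b."""
    def sents(t: str) -> set[str]:
        return {s.strip().lower() for s in re.split(r"[.!?]+", t) if s.strip()}
    sa, sb = sents(a), sents(b)
    if not sa:
        return 0.0
    return len(sa & sb) / len(sa)

def passes_uniqueness(a: str, b: str, min_unique: float = 0.6) -> bool:
    """Enforce the 'at least 60% unique sentences' constraint between two versions."""
    return (1.0 - sentence_overlap(a, b)) >= min_unique
```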

Mistake #3: No contingency plan for when the topic bank ran dry

One week I'd done minimal topic planning, and the remaining topics in the bank were ones I hadn't fully formed opinions on. The Agent's output quality dropped noticeably — because the research briefs had data and examples, but lacked a clear argument as an anchor point. The drafts came out reading like information compilations rather than opinionated content.

My current approach: I get a notification when the topic bank drops below 30 entries, prompting me to replenish. I also added a mandatory field to every topic: "What is my core argument?" This forces me to think that through before a topic enters the bank.
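
Both safeguards, the low-inventory alert and the mandatory core-argument field, amount to a single check over the topic bank. A sketch, with illustrative field names rather than the actual Notion schema:

```python
def topic_bank_alerts(topics: list[dict], threshold: int = 30) -> list[str]:
    """Return warnings for a low topic bank and for entries missing a core argument."""
    alerts = []
    if len(topics) < threshold:
        alerts.append(f"Topic bank low: {len(topics)} entries (< {threshold})")
    for t in topics:
        if not t.get("core_argument", "").strip():
            alerts.append(f"Missing core argument: {t.get('title', '?')}")
    return alerts
```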

Mistake #4: Fully delegating WeChat scheduling to tools

I tried using a third-party tool to manage WeChat publishing. One day a push failed with zero notifications, and that day's content was wasted. WeChat's algorithm is time-sensitive — miss the window and you've missed it.

Now I insist on pushing WeChat manually, three to four times a week, about 10 minutes each. Automation isn't the goal — reliable delivery is.


The System's Boundaries: What AI Cannot Do For Me

I don't hand topic selection to AI. Which topics are worth writing about, what angle to take on an argument, which example best illustrates the point — these judgments directly determine content quality and represent my core value to readers.

I don't hand reader interactions to AI. LinkedIn comments, Twitter replies, WeChat messages — I respond to all of these personally. Content is my voice; interaction is proof that I actually exist behind it. If I used AI to reply to comments, readers would eventually notice, and once trust is lost, it's nearly impossible to rebuild.

I don't chase "zero human involvement." The 3.5 hours per week isn't a number I've squeezed down to the absolute minimum — it's a human touchpoint I deliberately preserve. The purpose of automation is to free my time for higher-value work, not to remove me from the content entirely.


For Those Who Want to Build Something Similar

You don't need to start this complex. My suggestions:

Step one: get the full pipeline working on a single platform first. Pick your most active platform and run the complete flow from research to publishing with AI assistance — even if it's just a simple ChatGPT conversation. Feel out which steps consume the most time. That's your first automation target.

Step two: codify your voice into a prompt. This is the most critical and most difficult step, and there are no shortcuts. Take ten pieces of your own writing that you're proud of, and seriously analyze them: how do you typically open, what's your sentence rhythm, how do you use data, how do you use examples — then write those traits into your prompt. This process takes time, but once codified, every new piece of content has a baseline.

Step three: set a cost ceiling for yourself. My content pipeline runs at $53/month as a result of deliberate cost control. AI tools are easy to pile on. Before the ROI is clear, set a budget cap and force yourself to keep only the tools that genuinely deliver.


Many members in the Solo Unicorn Club create content, and the most common complaint is "I don't have time to write." In most cases, the real bottleneck isn't time — it's starting from scratch every single time.

What a system can solve is that friction of starting from zero. Everything else still comes down to your own judgment.

What's the biggest bottleneck in your current content production workflow?