
My AI Learning System — Keeping Up with the AI Industry in 30 Minutes a Day

Tags: Solo Company · Solopreneur · AI Learning · Knowledge Management · NotebookLM · Perplexity
How much happened in the AI industry over the course of 2025?

A rough tally: Anthropic released the Claude 4 series, OpenAI shipped GPT-5 and o3, Google launched Gemini 2.5, and Meta open-sourced Llama 4. Agent frameworks went from concept to production: AutoGen, CrewAI, and LangGraph all entered real-world deployments. Multimodality became table stakes, and reasoning models emerged as the new frontier.

If you work in AI or run a business powered by AI, falling behind on these developments means you're using yesterday's tools to solve today's problems. But the traditional way of keeping up — spending 2–3 hours a day reading papers, scrolling Twitter, and browsing blogs — is far too time-expensive for a solo company founder.

My solution: build an AI-assisted learning system that automates information collection and initial filtering, so I only handle the final step — absorbing and making decisions. Thirty minutes a day, zero critical information missed.


Background: Where Information Anxiety Comes From

In 2024, when I was still at a big tech company, I tracked the AI industry through three channels: scrolling Twitter, subscribing to 10 newsletters, and occasionally reading papers. The problems:

Information volume grows exponentially, but my time is linear. In early 2024, I could cover the key developments in an hour a day. By year-end, the same coverage took 2.5 hours. By mid-2025, I started selectively skipping entire subfields — and then got blindsided in a client conversation by a tool I'd never heard of. Deeply embarrassing.

Scrolling a feed is not learning. I noticed I was reading 50 tweets a day but remembering fewer than 5 the next morning. Information was passing through my brain without sticking. The reason was simple: scrolling is passive reception, with no active processing.

In June 2025, I decided to redesign my learning system from scratch. The core idea: let AI handle "collecting, filtering, and summarizing," while I focus on "understanding, connecting, and applying."


Core Methodology: Three Layers of Filtering

Layer 1: Automated Collection — Let Information Come to Me

I stopped actively scrolling Twitter and blogs. Instead, I built an automated information collection pipeline:

RSS subscriptions + n8n automation: I use Feedly to subscribe to 47 information sources — including Arxiv's cs.AI and cs.CL categories, major AI company blogs (Anthropic, OpenAI, Google DeepMind, Meta AI), 6 high-quality newsletters, and 15 independent bloggers I trust. An n8n workflow pulls all updates automatically at 6 AM every morning — typically 80–120 items.
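The n8n RSS step can be approximated in plain Python. Below is a minimal standard-library sketch, not the author's actual workflow; it handles RSS 2.0 only (Arxiv's Atom feeds use a different schema), and the sample URL is illustrative:

```python
import urllib.request
import xml.etree.ElementTree as ET

def parse_rss(xml_text: str) -> list[dict]:
    """Extract title/link pairs from an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [
        {"title": item.findtext("title"), "link": item.findtext("link")}
        for item in root.iter("item")
    ]

def fetch_feed(url: str) -> list[dict]:
    """Download one feed and parse its items (makes a network call)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_rss(resp.read().decode("utf-8", errors="replace"))

# Example (illustrative URL):
# items = fetch_feed("https://www.anthropic.com/rss.xml")
```

Looping `fetch_feed` over all subscribed sources at 6 AM (via cron or an n8n schedule node) reproduces the collection step.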

Claude API for initial filtering: Reading 80–120 items isn't realistic. The n8n workflow calls Claude Haiku to quickly classify each item: high relevance (directly related to my business or tech stack), medium relevance (industry trends), low relevance (noise). The prompt includes explicit rules defining what qualifies as each level. After filtering, I'm usually left with 15–25 high- and medium-relevance items.
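The filtering step might look like this in Python. The rule text and function names are my own illustration, not the author's actual prompt; the `anthropic` SDK call is shown commented out so the sketch runs without an API key:

```python
RULES = """Classify the item as HIGH, MEDIUM, or LOW relevance.
HIGH: directly related to my business or tech stack.
MEDIUM: a notable industry trend.
LOW: everything else (noise)."""

def build_prompt(title: str, summary: str) -> str:
    """Assemble the classification prompt sent to Claude Haiku."""
    return f"{RULES}\n\nTitle: {title}\nSummary: {summary}\n\nAnswer with one word."

def parse_label(reply: str) -> str:
    """Map the model's free-text reply to one of three labels."""
    text = reply.strip().upper()
    for label in ("HIGH", "MEDIUM", "LOW"):
        if label in text:
            return label.lower()
    return "low"  # treat malformed replies as noise

# The actual call via the official SDK would be roughly:
# import anthropic
# reply = anthropic.Anthropic().messages.create(
#     model="claude-3-haiku-20240307", max_tokens=5,
#     messages=[{"role": "user", "content": build_prompt(title, summary)}],
# ).content[0].text
```

Running this over 80–120 items costs only a few cents a day at Haiku's input pricing, which is why the cheap model is the right choice for this layer.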

Output to Notion: Filtered results are automatically written to a "Daily Feed" database in Notion, sorted by date, including the title, summary (2–3 sentences auto-generated by Claude), original link, and relevance tag.
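Writing a filtered item into the Daily Feed database comes down to building one page-properties payload. A sketch follows; the property names (Title, Summary, Link, Relevance) are assumptions and would need to match your own database schema:

```python
def daily_feed_properties(title: str, summary: str,
                          url: str, relevance: str) -> dict:
    """Build a Notion page-properties payload for one Daily Feed row.
    Property names here are assumptions, not the author's schema."""
    return {
        "Title": {"title": [{"text": {"content": title}}]},
        "Summary": {"rich_text": [{"text": {"content": summary}}]},
        "Link": {"url": url},
        "Relevance": {"select": {"name": relevance}},
    }

# With the official notion-client SDK, the payload would be written as:
# from notion_client import Client
# Client(auth=TOKEN).pages.create(
#     parent={"database_id": DAILY_FEED_DB_ID},
#     properties=daily_feed_properties(title, summary, url, relevance),
# )
```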

This entire layer is fully automated. Zero manual effort.

Layer 2: Deep Digestion — 30 Minutes of Active Learning

Every morning at 8:30, I open Notion's "Daily Feed" and spend 30 minutes doing three things:

Quick scan (5 minutes): Read all high- and medium-relevance titles and summaries. Most items need nothing more than the summary — "OpenAI launched XX feature," "Anthropic updated XX API." I need to know about these, but I don't need to read the full article.

Deep reading (15 minutes): Pick 2–3 pieces that warrant careful reading. The selection criterion is simple: will this content affect my product decisions or technology choices in the next month? If yes, open the original article and read it thoroughly.

Paper processing (10 minutes): If there's a relevant paper, I process it with NotebookLM. Upload the PDF, and have it generate a summary and key findings. NotebookLM's Audio Overview feature is particularly useful — it converts the paper into a 10–15 minute "podcast-style explainer." I usually listen during lunch or exercise, using idle time for absorption.

For especially complex papers, I use Claude for interactive reading. Upload the paper, then ask targeted questions section by section: "How does this method differ from XX?" "Under what conditions does this conclusion hold?" "What implications does this have for the XX I'm building?" This conversational approach to reading is 3–5x more efficient than reading cover to cover on my own.

Layer 3: Knowledge Consolidation — From "I Know This" to "I Can Use This"

Reading something doesn't mean you've learned it. The most critical step is connecting new information to your existing knowledge framework.

My method: every Friday, I spend 20 minutes writing a "weekly learning note." This isn't a summary of the week's information (AI handles that) — it's answering three questions:

  1. What was the most important development this week?
  2. What specific impact does this have on my business or product?
  3. Do I need to adjust any decisions because of it?

These notes are written for myself, usually 300–500 words. The quality of writing doesn't matter — what matters is forcing the conversion from "information" to "judgment." Passively absorbing information creates no value; actively using information to make decisions does.

Over time, these notes also became source material for my monthly shares in the Solo Unicorn Club — many members have told me that my industry insight briefings are one of the main reasons they joined the community.


Tool Stack Breakdown

| Use Case | Tool | Monthly Cost | Why I Chose It |
| --- | --- | --- | --- |
| RSS Subscriptions | Feedly Pro | $6 | AI-assisted categorization; Board feature for easy organization |
| Workflow Automation | n8n (self-hosted) | $8 | Runs on a VPS; chains RSS → Claude → Notion |
| Initial Filtering | Claude Haiku API | ~$3 | Processes ~100 items/day; Haiku is cheap ($0.25/M input tokens) and fast |
| Paper Digestion | NotebookLM Plus | $20 | Paper summaries + Audio Overview; included in Google One AI Premium |
| Deep Q&A | Claude Pro | $20 (already using) | Interactive paper reading, technical analysis |
| Quick Search | Perplexity Pro | $20 (already using) | On-demand lookups; real-time web access with source citations |
| Knowledge Base | Notion (personal plan) | $0 | Daily Feed + weekly notes |
| Total | | ~$40/month | Excluding Claude Pro and Perplexity Pro; core incremental cost is Feedly + n8n + Haiku API = $17 |

Note: I use Claude Pro ($20) and Perplexity Pro ($20) for other work as well, not exclusively for the learning system. If you count only the incremental cost of the learning system, it's just $17/month.


Actual Results

Nine months of system performance data (Jun 2025 – Feb 2026):

| Metric | Before the System | After the System |
| --- | --- | --- |
| Daily information tracking time | 2–2.5 hours | 30 minutes |
| Papers read in depth per week | 1–2 | 4–5 |
| Critical industry events missed per month | 2–3 | 0 |
| Information sources covered | 15 | 47 |
| Knowledge utilization rate (% of what I read that I actually used) | ~10% | ~35% |

How I calculate that last metric — "knowledge utilization rate": at the end of each month, I review the product decisions and client conversations from that month and count how many times something I'd previously learned directly helped me. Before the system, roughly 10% — I read a lot but used very little. After the system, it rose to 35%. The core reason: Layer 3's weekly learning notes forced me to connect information to my business.

A concrete example: in November 2025, my feed picked up a blog post about Anthropic's launch of the Tool Use feature. After reading it, I immediately realized this feature could replace a complex function-calling module I'd hand-coded in JewelFlow. I completed the migration the following week — codebase shrank by 40%, with a noticeable improvement in stability. Without this system, I might not have stumbled across that announcement on Twitter for another two or three weeks.


Lessons Learned the Hard Way

Mistake 1: Adding too many sources — noise overwhelmed signal

Initially, I subscribed to 80+ RSS feeds. The result: 200+ daily updates, with Claude flagging 40–50 as "high relevance" — impossible to get through.

I spent a week auditing my sources: scoring each one based on content quality over the past month, then cutting 35 that scored below 60. Now at 47 sources, 80–120 daily updates, 15–25 after filtering. The right volume is what makes the system executable.

Lesson: quality of sources > quantity of sources. Ten high-quality feeds beat fifty mediocre ones.

Mistake 2: Claude's filtering prompt needed continuous refinement

The initial filtering prompt was too broad — "related to the AI industry" cast too wide a net, and nearly everything got flagged as high relevance.

I refined the rules: high relevance must meet at least one of three criteria — a) directly related to my current tech stack (Python, FastAPI, Claude API, Vercel); b) a platform-level change affecting more than 10,000 developers; c) an AI application case in my vertical (jewelry retail, luxury goods).
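Encoded as a prompt fragment, the refined rules might read as follows. This is my paraphrase of the three criteria, not the author's verbatim prompt:

```python
HIGH_RELEVANCE_RULES = """\
Classify as HIGH relevance only if the item meets at least ONE of:
a) directly related to my current tech stack
   (Python, FastAPI, Claude API, Vercel);
b) a platform-level change affecting more than 10,000 developers;
c) an AI application case in my vertical (jewelry retail, luxury goods).
Anything else is MEDIUM (notable industry trend) or LOW (noise)."""
```

The key design choice is that the criteria are concrete and checkable rather than thematic ("related to the AI industry"), which is what makes the classifier's output usable.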

After refining, high-relevance accuracy improved from 55% to 82%.

Mistake 3: NotebookLM's Audio Overview occasionally misses critical details

Audio Overview is incredibly convenient, but it's summary-oriented and doesn't adequately cover technical details in papers (mathematical formulations, experimental setups). I once cited a conclusion from an Audio Overview in a client discussion about a technical approach, only to realize I'd missed an important limitation noted in the original paper — nearly leading to a flawed solution design.

Lesson: Audio Overview is great for getting the gist of a paper quickly, but if you're using a paper's conclusions to inform decisions, you must go back and read the original experiment section and limitations.


Advice for Those Getting Started

Step 1: Define your "information objective."

Why are you tracking industry news? To make product decisions? To find new opportunities? To avoid your tech stack becoming obsolete? Different objectives dictate different sources and filtering rules. Information collection without a clear objective is just aimless Twitter scrolling — time spent with nothing to show for it.

Step 2: Start with Perplexity + NotebookLM — no automation needed.

The simplest version: every morning, open Perplexity and ask, "What were the 3 most important things that happened in the AI industry in the past 24 hours?" For any interesting papers, upload them to NotebookLM. These two tools together cost $20/month (just Perplexity Pro) and already cover 60% of the need. The automated pipeline is the advanced play — first confirm you can sustain the daily 30-minute habit before building it.

Step 3: Write weekly notes — don't just consume information.

This step seems the least important but is actually the most important. Information collection without note-taking is like attending a lecture without taking notes — you feel like you understood everything at the time, but a week later it's all gone. Three hundred words is enough. The key is answering: "How does this affect me?"


Final Thoughts

Thirty minutes a day, $40/month, covering 47 information sources, deeply digesting 4–5 papers per week, zero critical information missed.

The core of this system isn't how advanced the tools are — it's a cognitive shift: the bottleneck of learning isn't information access; it's information processing. The volume of AI industry information has already exceeded any individual's capacity for manual processing. Use AI to handle "collection and filtering," and direct your limited brainpower toward "understanding and application" — that's high-leverage learning.

I ran an "AI Learning System Workshop" in the Solo Unicorn Club and shared this methodology. Afterward, over a dozen members built their own versions. One member in cross-border e-commerce told me he uses a similar system to track supply chain developments and platform policy changes, saving 1.5 hours daily. Different contexts, same method.

How much time do you spend tracking industry news each day? If it's over an hour, it might be time to let AI handle the front-end work.