How I Use AI for Product Decisions — A Data-Driven Framework for Solo Founders

Last month, JewelFlow's new batch quoting feature hit 38% active-user adoption within 72 hours. The SaaS industry average for first-week adoption of a new feature is 10%–15%.
That wasn't great product instinct at work. Before deciding to build the feature, I used AI to analyze over 400 pieces of user feedback, pricing-page changes across 3 competitors, and 200 jewelry-industry discussion threads on Reddit. The entire analysis took under 3 hours and cost $1.20 in API fees.
A solo business has no product team, no user research department. But product decisions can't rely on guesswork. This article shares the data-driven decision framework I built with AI, and the real-world results it's produced for JewelFlow and ArkTop AI.
Background: From Gut Feeling to Data-Backed Decisions
In the first half of 2025, JewelFlow was still in its early days. Every time I decided what feature to build, I relied on two signals: what users directly told me they wanted, and my own judgment of the industry.
The problem is both signals are biased.
What users say directly tends to be surface-level. "Can you add an Export to Excel button?" — the real problem behind that might be "I need to show this data to my boss, and he only uses Excel." Building just an export button without a report-sharing feature treats the symptom, not the cause.
My own judgment isn't always reliable either. I built an inventory alert feature that only 6% of users touched after a month — I'd overestimated how much small and mid-size jewelers care about inventory management. What they actually care about is client follow-up and quoting efficiency.
After that, I started seriously thinking: how can one person systematize product decisions?
The Core Method: Three Principles
Principle 1: Every product decision needs three independent data sources
I set a hard rule for myself: before any feature gets a green light, it must be backed by three independent pieces of evidence. The three sources cannot overlap.
For example, when deciding to build JewelFlow's batch quoting feature, my three data points were:
- User feedback clustering: I fed the past 6 months of user feedback (420 entries collected via Tally) into Claude for topic clustering. "Quoting-related" appeared 67 times — the #1 topic across all categories.
- Competitor signals: Stuller (the largest jewelry supplier platform in North America) launched a bulk pricing tool in Q4 2025, a strong signal of proven demand in the industry.
- Community data: On Reddit's r/jewelers and several Facebook jewelry-trade groups, discussions around "batch pricing" and "bulk quote" had more than doubled over the previous 6 months.
Three data points converging on the same direction — only then do I start building.
In practice: I feed each type of data to Claude separately, ask each to produce a conclusion, then compare whether the conclusions converge. If the three point in different directions, the signal isn't strong enough, and I hold off.
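The comparison step is simple enough to sketch in code. This is a minimal illustration, assuming each source's AI analysis has already been reduced to a single top topic label (the labels and the all-must-match rule are illustrative of the idea, not my literal tooling):

```python
# Minimal sketch of the convergence check: each data source's AI analysis
# is reduced to one topic label, and a feature only gets a green light if
# three independent sources name the same topic.

def sources_converge(conclusions: dict[str, str]) -> bool:
    """conclusions maps source name -> top topic label from that source."""
    topics = {t.strip().lower() for t in conclusions.values()}
    # Require three independent sources, all pointing at one topic.
    return len(conclusions) >= 3 and len(topics) == 1

signals = {
    "user_feedback": "batch quoting",
    "competitors":   "Batch Quoting",
    "community":     "batch quoting",
}
print(sources_converge(signals))  # True: start building
print(sources_converge({**signals, "community": "inventory alerts"}))  # False: hold off
```

In reality the "conclusion" from each source is a paragraph, not a label, so the final convergence call is still a human judgment; the point of the structure is forcing three separate analyses before that call.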
Principle 2: Let AI do the analysis; make the judgment calls yourself
This is the concrete application of "humans steer, AI executes" in product decisions.
What AI can do: text classification, sentiment analysis, topic clustering, competitor page comparison, trend extraction. At the information-processing level, AI is faster and more thorough than a person. What AI can't do: set product direction, weigh priorities, judge timing. Those must be mine.
Every Friday afternoon, I spend 2 hours on a "product decision review":
1. Run an n8n workflow that automatically pulls the past week's user feedback from Tally, Intercom, and email
2. Claude API classifies and clusters the data, outputting a structured "weekly user voice summary"
3. I review the summary and flag which signals need deeper investigation vs. which are noise
4. For signals worth digging into, I manually supplement with competitor data and community data
5. Synthesize everything and update the product roadmap
AI handles steps 1–2 and part of step 4, saving roughly 6–8 hours. Steps 3 and 5 must be me — business judgment can't be outsourced.
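Step 2 boils down to building one clustering prompt from the week's feedback and sending it to the Claude API. A minimal sketch of the prompt-building part (the template wording and entry format are illustrative; the real version lives inside an n8n node):

```python
# Sketch of the weekly classification step: collect the week's feedback
# entries into one clustering prompt. The actual API call (e.g. via the
# official anthropic SDK) is omitted so this stays self-contained.

def build_weekly_prompt(entries: list[dict]) -> str:
    """entries: [{"source": "tally", "text": "..."}, ...]"""
    lines = [f"- [{e['source']}] {e['text']}" for e in entries]
    return (
        "Cluster the following user feedback by theme. For each theme, "
        "report: theme name, number of entries, overall sentiment, and one "
        "representative quote. Output as a markdown table.\n\n"
        + "\n".join(lines)
    )

prompt = build_weekly_prompt([
    {"source": "tally", "text": "Quoting 50 items one by one takes forever."},
    {"source": "email", "text": "Can you add an Export to Excel button?"},
])
print(prompt)
```

The returned table becomes the "weekly user voice summary" that I review by hand in step 3.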
Principle 3: Every decision needs an audit trail
For every product decision, I leave a record in Notion in a fixed format:
- Decision: what feature to build
- Evidence: what the three data sources were
- Expected metric: what number will validate this after launch
- Actual result: (filled in 2 weeks post-launch)
The value of these records is cumulative. Looking back at 30-something decision records, the patterns are obvious: features driven by user feedback average 35% adoption post-launch; features based on my gut instinct average 12%; features copying competitors average 22%. The numbers made me lean harder into feedback clustering and away from gut calls.
Tool Stack Breakdown
As of March 2026, here are the tools I use for product decisions:
| Use Case | Tool | Monthly Cost | Why I Chose It |
|---|---|---|---|
| User feedback collection | Tally (free tier) | $0 | Clean forms, easy Webhook integration with n8n |
| Feedback aggregation | n8n (self-hosted) | ~$5 (server cost) | Data stays local, flexible integrations |
| Text analysis + clustering | Claude API (Sonnet 4.5) | ~$15–20 | Strong with Chinese text, stable with long context |
| User interview analysis | Dovetail (Free) | $0 | Free tier includes unlimited transcription and AI summaries |
| Competitor monitoring | Visualping + Claude | ~$10 | Auto-alerts on webpage changes, Claude interprets the diffs |
| Community data scraping | Python scripts + Reddit API | $0 | Periodically pull posts, Claude runs trend analysis |
| Decision records | Notion | $0 (personal plan) | Structured records, easy to search |
Total monthly cost: roughly $30–35. Including occasional heavy-batch Claude API analysis, peak months don't exceed $60.
A few notes on tool selection:
Why Claude API instead of Dovetail Pro: Dovetail Pro runs $15/user/month with full-featured analysis, but my feedback volume averages 300–500 entries per month — Claude API is more flexible and cheaper at that scale. If volume hits a few thousand per month, I might switch to Dovetail Pro or Productboard (Essentials at $19/user/month + AI add-on at $20/user/month).
Why not MonkeyLearn: Starts at $299/month — not viable for a solo business. At my scale, well-crafted Claude prompts close whatever accuracy gap their pre-trained models might have.
Dovetail free tier is sufficient: I do 2–3 video interviews per month, and the free tier supports one project with unlimited transcription and AI summaries — more than enough for a solo founder's research needs.
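The community-data row above is the easiest piece to reproduce. A minimal sketch of the trend-counting step, with the fetching left out (posts would come from the Reddit API via something like PRAW; the phrase list and post fields here are illustrative):

```python
# Counts how often tracked phrases appear in community posts, per month.
# Posts are plain dicts here so the counting logic stays self-contained;
# Claude then reads the monthly counts and writes the trend commentary.
from collections import Counter

TERMS = ("batch pricing", "bulk quote")

def monthly_mentions(posts: list[dict]) -> Counter:
    """posts: [{"month": "2025-07", "title": ..., "body": ...}, ...]"""
    counts = Counter()
    for p in posts:
        text = f"{p.get('title', '')} {p.get('body', '')}".lower()
        if any(term in text for term in TERMS):
            counts[p["month"]] += 1
    return counts

posts = [
    {"month": "2025-07", "title": "Anyone doing bulk quote sheets?", "body": ""},
    {"month": "2025-12", "title": "Batch pricing workflow", "body": "..."},
    {"month": "2025-12", "title": "Bezel setting advice", "body": ""},
]
print(monthly_mentions(posts))  # one mention each in 2025-07 and 2025-12
```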
Real-World Results
Data from 8 months of using this framework:
Decision accuracy improvement: I define an "effective decision" as a feature with >20% adoption two weeks after launch. Before the framework (first half of 2025), my effective decision rate was around 40% (4 out of 10 features cleared 20%). After the framework (second half of 2025 through now), the rate is around 85% (11 out of 13 features cleared 20%).
Time investment: Weekly time spent on product decisions dropped from 8–10 hours to 2–3 hours. The reduction came mainly from eliminating manual feedback processing and competitor-site browsing.
Applying it to ArkTop AI: ArkTop AI is a different situation — its customers are B2B enterprises, and feedback mostly comes from project communication logs and renewal-stage requirement lists. I use Claude to summarize each project meeting transcript and extract requirements. After aggregating these, I found that "real-time data integration" had been independently raised by 5 different clients, which directly drove our Q4 2025 API integration feature development.
Lessons Learned the Hard Way
Mistake 1: Treating all user feedback equally
Early on, when I had AI analyze feedback, I didn't weight feedback by source. The result: AI told me "export functionality" was the #1 need — because free-trial users submitted a lot of requests for it, but paying customers actually cared more about quoting efficiency.
I later added a preprocessing step: tag users by type (paying / trial / churned) before having AI cluster each group separately. Paying users get 3x weight, trial users 1x, and churned users are analyzed separately (to understand why they left). This adjustment significantly improved the practical value of the analysis.
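The preprocessing rule can be sketched as a small function. The weights match what I described above; the field names are illustrative:

```python
# Sketch of the weighting step: expand each entry by its user-type weight
# before clustering, so paying users count 3x and trial users 1x.
# Churned users are routed to a separate churn analysis instead.
WEIGHTS = {"paying": 3, "trial": 1}

def preprocess(entries: list[dict]) -> tuple[list[dict], list[dict]]:
    weighted, churned = [], []
    for e in entries:
        if e["user_type"] == "churned":
            churned.append(e)  # analyzed separately: why did they leave?
        else:
            weighted += [e] * WEIGHTS[e["user_type"]]
    return weighted, churned

entries = [
    {"user_type": "paying", "text": "Quoting is slow."},
    {"user_type": "trial", "text": "Add Excel export."},
    {"user_type": "churned", "text": "Too expensive for us."},
]
weighted, churned = preprocess(entries)
print(len(weighted), len(churned))  # 4 1
```

Duplicating entries is the bluntest way to weight them, but it needs no changes to the clustering prompt, which is why it works well in a pipeline where the analysis step is an LLM call.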
Mistake 2: Over-relying on AI's "confidence scores"
Claude occasionally produces high-confidence wrong answers during text classification. For example, "your pricing is too high" got classified as a "pricing issue," but the context was "the feature for quoting customers costs too much" — it was actually a feature request.
Lesson: after every batch analysis, I spot-check 10% manually. Accuracy runs 88%–92% — good enough but not 100%.
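The spot-check itself is two small steps: sample 10% of the classified batch, then score the AI labels against my manual review. A sketch with hypothetical label fields:

```python
# Sketch of the 10% spot-check: randomly sample classified entries for
# manual review, then compute observed accuracy from the reviewed labels.
import math
import random

def spot_check_sample(classified: list[dict], frac: float = 0.10,
                      seed: int = 0) -> list[dict]:
    k = max(1, math.ceil(len(classified) * frac))
    return random.Random(seed).sample(classified, k)

def accuracy(reviewed: list[dict]) -> float:
    """reviewed entries carry ai_label and human_label after manual review."""
    correct = sum(r["ai_label"] == r["human_label"] for r in reviewed)
    return correct / len(reviewed)

batch = [{"id": i, "ai_label": "pricing"} for i in range(50)]
sample = spot_check_sample(batch)  # 5 entries to review by hand
print(len(sample))  # 5
```

The fixed seed just makes the sample reproducible week to week; dropping it gives a fresh random sample each run.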
Mistake 3: Too much analysis, not enough action
There was a stretch where I got addicted to analysis: slicing data by time trends, user personas, and competitor comparisons. The reports kept getting prettier while iteration speed actually slowed down.
I set a rule: each weekly product decision review produces exactly one conclusion — the single most important thing to do next week. Not three things. One. This constraint pulled me out of "analysis mode" and back into "execution mode."
Advice for Getting Started
If you're also running a solo business and want to use AI for product decisions, you don't need to build a full framework from day one. Three steps to get started:
Step 1: Centralize your user feedback in one place. Forms, emails, chat screenshots — just have one unified collection point. Tally is free, Notion is free. That's enough.
Step 2: Spend 30 minutes a week doing one AI-assisted analysis. Copy your feedback into Claude (the free web version works), and ask it to cluster by topic. The prompt doesn't need to be fancy: "Categorize this feedback by theme, count occurrences, and label each as positive or negative." Do this for a few weeks and patterns will emerge.
Step 3: Before every decision, ask yourself "do I have three independent data points?" At first you might not be able to find all three — that's fine, two is still better than a gut call. Once the habit forms, you'll gradually fill in the channels.
Several SaaS founders in the Solo Unicorn Club use similar methods, and we regularly share our user feedback analysis templates and prompts. If you're figuring out this process, joining the club will save you a lot of trial and error.
Final Thoughts
The core tension of product decisions in a solo business: you need to be data-driven, but you don't have the headcount for traditional user research. AI compresses text analysis, pattern discovery, and change tracking into minutes — one person can now do what used to require a team.
But tools are means, not ends. What ultimately decides what to build and what to skip is your understanding of users and your business judgment. AI helps you see the data clearly. You make the call.
What's the biggest pain point in your product decisions right now — lack of data, lack of analytical capability, or lack of a decision framework?