Teardown · 5 min read · Sourced from r/SaaS

How early-stage SaaS founders find product-market fit in 2026

By Michal Baloun, COO — aggregated from real Reddit discussions and verified against direct quotes.

AI-assisted research, human-edited by Michal Baloun.

TL;DR

Early-stage SaaS founders keep reaching for automated marketing funnels and polished code architecture before they have the one thing pricing actually depends on — a clear sense of who their customer is and why they buy. In recent r/SaaS threads, founders describe discovering major pricing-relevant realities (unexpected use cases, an ICP that's smaller than they thought, a feature set nobody needs) only after replacing automated onboarding with manual calls. The effective path to early traction isn't broad SEO or paid ads. It's 1:1 discovery that tells you what to charge for, who will pay, and which package tier actually maps to willingness-to-pay before you harden a pricing page around guesses.

Editor's Take — Michal Baloun, COO at Discury

The pricing question that gets asked on r/SaaS — "should I do three tiers or per-seat, should the middle plan be twice the lower one?" — is almost always the wrong question for a pre-meaningful-MRR product. Pricing at that stage isn't a packaging problem; it's an ICP-clarity problem wearing packaging clothes. Founders who manually onboard their first cohort almost always reprice within the quarter because they finally learn who actually converts, and the repricing makes their previous pricing-page debates look quaint. The page is the artifact; the discovery call is the strategy.

The second trap I'd name is the one where a founder treats automation as a sign of maturity. Early automation hides exactly the signal you need: the specific sentence a real buyer uses when they decide to pay. Automated funnels report aggregate behavior that only tells you whether your existing hypothesis is working. They cannot surface the hypothesis you didn't know to form — which is almost always where the useful pricing insight lives. Manual onboarding for the first cohort is the single highest-ROI time investment at this stage, and I've never seen a founder regret it.

What I'd do differently than most founders reading these threads is resist the pull toward CRM selection, marketing automation, and pricing tier architecture until you have fifty paying customers or roughly three months of manual onboarding notes, whichever comes first. Write down one sentence from each customer on why they paid. Group the sentences. The three or four clusters you end up with are your pricing tiers, your positioning, and your roadmap — derived from real willingness-to-pay rather than reverse-engineered from a pricing page you hoped would work.

The pricing timeline: how the r/SaaS threads keep playing out

Read chronologically, the r/SaaS threads on pricing and early growth rhyme so closely that they read like a single shared timeline. The sequence below is how that timeline keeps unfolding, rebuilt from the specific incidents founders described in public.

Month 0–1: the pricing page gets built from guesses

The canonical mistake arrives early. A founder with a few signed-up trials writes a pricing page — three tiers, middle one starred, annual discount — before they have talked to anyone who has actually paid. The page looks professional. It is also a hypothesis wearing a uniform. Automated in-app tours launch alongside it, partly to look polished, partly because the founder quietly hopes the tours will answer the discovery questions for them. The instrumentation dashboard lights up. Activation metrics read as "fine." Churn starts quietly and without explanation.

Month 2: automated onboarding gets swapped for founder calls

Something forces the swap. Usually it's churn the founder can't explain, sometimes it's a conversion rate that stalls below five percent, occasionally it's just a slow week with nothing else to do. u/Live_Young831, in an r/SaaS thread on replacing automation with founder calls, describes the switch: they started jumping on 20-minute calls with new trials, and within a fortnight insights the automated dashboard had never been capable of surfacing started landing.

"The #1 reason people didn't convert was they couldn't figure out how to import their data. A problem I'd never have found through automated NPS surveys." — u/Live_Young831

The point isn't that the dashboard was wrong. It's that the dashboard was only capable of measuring hypotheses the team already had. The manual calls surfaced the hypothesis the team didn't yet know to form — which is where the useful pricing insight almost always lives.

Month 3: an unexpected use case surfaces, and the ICP narrows

By the third month of manual calls the founder usually notices something uncomfortable: a meaningful slice of paying users showed up for a use case nobody on the team had planned for. Or the ICP they'd been targeting was one level too broad. u/Designer_Money_9377, in a thread on early-SaaS ICP discovery, describes that exact moment — having spent months marketing to agencies broadly while the real pain, and the real willingness-to-pay, sat with 2–5-person shops drowning in manual prospecting.

"I was targeting agencies broadly when the real pain was with 2-5 person shops who couldn't afford a dedicated sales person but were drowning in manual prospecting." — u/Designer_Money_9377

Content marketing and LinkedIn posting running in parallel had produced thin engagement throughout this period, and in retrospect the sequence explains itself: the team was scaling marketing against an ICP hypothesis that was never quite right, and the aggregate numbers had no way of telling them. A parallel thread on Reddit as a slow-burn channel captures the same pattern — founders describe the useful work as answering questions in niche threads rather than posting promotional links, with traction arriving weeks later rather than the same day.

Month 4: the over-engineered architecture finally shows up as a tax

Somewhere around month four the technical-debt bill lands. The monolith-vs-microservices debate that felt important in month one now reads as a quiet runway drain. u/farhadnawab flagged this specific trap in an r/SaaS thread on pre-revenue over-engineering, and u/DramaLobster8 in the same thread gave it the clearest post-mortem:

"Spent way too long building the 'right' microservice architecture for something that ended up being 3 crud operations and a stripe integration now i just slap everything in a monolith." — u/DramaLobster8

The infrastructure costs compound alongside the engineering time. In a thread on developer-tool pricing friction, founders describe even modest per-developer fees becoming contentious as free tiers tighten, pushing pre-revenue teams toward open-source alternatives like Bruno. The sane default is monolith-first, tests-second, refactor-when-real-users-complain. Energy spent on state-management library debates is energy not spent on the ICP-narrowing conversations of months two and three.

Month 5: the CRM question arrives, with the right constraint finally in place

By month five the founder has enough pipeline to need something sturdier than a spreadsheet, and the CRM-selection question surfaces. In an r/SaaS CRM-selection thread, u/AndesAndAlps makes the orienting point: for the first fifty customers the job is pipeline visibility and automated conversation capture, not forecasting. The tools most founders converge on — Attio, HubSpot, Pipedrive, all on free tiers — are usually sufficient. What matters is whether the team actually opens the tool daily.

"The people who became my first paying users weren't the ones asking for features. They were the ones venting about existing solutions. When you find someone mid-rant, you have a customer." — u/aerobase-app

u/inotused makes the companion point in the same thread: a CRM the team doesn't open is a CRM with no features. Simple reporting plus Slack-and-calendar integration beats five hundred enterprise features at this stage.

Month 6: the pricing page gets rewritten, this time from evidence

The repricing moment is almost always the same. The founder reviews three months of manual onboarding notes, clusters the "why I paid" sentences into three or four groups, and finds that the pricing page's tiers were structured around the wrong axis. Usage caps should have been seat caps, or vice versa. The middle tier was priced against a persona that barely exists in the customer base. The rewrite is boring and obvious in hindsight — which is the sign it's being driven by evidence rather than a hypothesis.

Summary of the sequence

Collapsed into a single paragraph: manual onboarding produces the sentences that eventually become the pricing tiers. Automated onboarding produces dashboards that confirm the hypotheses you already had. The first fifty customers are a research project; the pricing page is the writeup. Founders who try to draft the writeup before running the research reprice within the quarter. Founders who run the research first only draft the page once.

Minimum-viable action plan for the next 14 days

If the timeline above reads uncomfortably close to your current trajectory, the shortest intervention is also the cheapest:

  1. Turn off the automated onboarding tour for new signups. Replace the first-touch email with a plain-text one-liner offering a 20-minute call.
  2. Take every call yourself for two weeks. Ask only: what made you sign up, what did you try before, what would a win look like? Write the answers down verbatim.
  3. At the end of the fortnight, cluster the notes. Look for three or four groups of why-I-paid sentences. These are your real pricing tiers.
  4. Do not touch the pricing page yet. Run another 10–20 calls against the new ICP you just identified, confirm the clusters hold, then rewrite the page once.

Sources

This analysis draws on r/SaaS threads surfaced via Discury's cross-subreddit monitoring, prioritizing recent discussions where founders connected pricing and go-to-market decisions to concrete onboarding and ICP experience.

About the author

Michal Baloun

COO at Discury · Central Bohemia, Czechia

Co-founder and COO at Discury.io — customer intelligence built on real online conversations — and at Margly.io, which gives e-commerce operators profit visibility beyond top-line revenue. Focuses on turning community-research signal into decisions operators can actually act on.

Michal Baloun on LinkedIn →

Made by Discury

Discury scanned r/SaaS to write this.

Every quote, number, and user handle you just read came from real threads — pulled, verified, and synthesized automatically. Point Discury at any topic and get the same output in about a minute: direct quotes, concrete numbers, no fluff.

  • Monitor your competitors, category, and customer complaints on Reddit, HackerNews, and ProductHunt 24/7.
  • Weekly briefings grounded in verbatim quotes — the same methodology you see above.
  • Start free — 3 analyses on the house, no card required.