Shipped · 5 mo + 6 mo retainer · $2.7M ARR · Columbus, OH · AI intake · Pricing estimator · Smart scheduling

The chatbot the owner asked for wasn't the product the clinic needed.

Emergency Dental Clinic asked for a translatable chatbot to answer patient questions and book appointments. Discovery surfaced something different: the front desk wasn't drowning in calls; it was drowning in scheduling and price-quote work. Reframing the problem unlocked $579K in annualized impact and a more defensible product.

Combined impact
$579K
annualized · OpEx + revenue
OpEx reduction
28%
~$336K · 2 roles + retired SaaS
Revenue lift
+9%
~$243K · smart slot allocation
Web scheduling adoption
8% → 91%
over 10 weeks · new patients
01 · The brief

What was asked, and what was actually broken.

What the owner requested

"A chatbot that answers patient questions, schedules appointments, and works in Spanish."

Patients were calling for hours of operation, pricing, and "what kind of pain is this." Translatable so Spanish-speaking patients didn't have to wait for the bilingual front-desk staffer to be free.

What discovery surfaced

The chatbot was a symptom. The real cost driver was front-desk time on scheduling and quotes.

5 front-desk staff. ~70% of clinical-day delays traced to mis-sized appointments. Pricing estimates were the #1 reason patients called and the #1 reason they hung up before booking.

02 · Discovery shift

I sat with the front desk for two days before writing a single line of spec.

I ran individual stakeholder interviews with both clinic doctors and every front-desk staffer. The questions were boring on purpose: what calls take longest, what blocks scheduling, what does a patient typically ask three times before booking.

The owner's chatbot brief assumed call volume was the bottleneck. It wasn't. The bottleneck was that patients couldn't describe what was wrong with them well enough for the front desk to size the appointment correctly, which cascaded into clinician downtime, mid-day overruns, and patients waiting in the lobby for slots that should never have been booked at that length. A chatbot that just answered hours-of-operation questions wouldn't touch any of that.

One demographic check decided the language scope: 97% of patients listed English as preferred language, 3% Spanish. The clinic already had bilingual front-desk coverage. Multilingual translation moved off the roadmap; the resources got redirected to the real bottleneck.

The owner was right that the front desk needed help. He was wrong about the kind of help. The chatbot was a symptom-level fix; scheduling was the cost center.

Discovery synthesis · Week 2

Three findings reshaped the roadmap

  • Mis-sized appointments cost more than missed calls. Clinician downtime from overrun slots was 2 to 3× the labor cost of phone triage.
  • Pricing was the booking abandonment trigger. Uninsured patients hung up when front-desk couldn't quote a number; insured patients booked regardless.
  • The chatbot was a "nice to have," not a cost driver. Even at perfect performance, it wouldn't move OpEx the way scheduling automation could.
03 · Prioritization

I sequenced the roadmap by expected cost impact, not build difficulty.

RICE-style scoring on what discovery surfaced. The temptation at SMB scale is to ship the easy thing first to build owner trust. I did the opposite. Picked the bet with the largest projected OpEx delta first, even though it was the most invasive to clinic operations.
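The ranking can be sketched as a standard RICE calculation. This is an illustrative sketch only: the formula is the classic (Reach × Impact × Confidence) / Effort, but every input value below is a hypothetical placeholder, not a number from the engagement.

```python
# Hypothetical RICE-style scoring sketch. The formula is standard;
# all reach/impact/confidence/effort values are illustrative only.
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Classic RICE score: (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

bets = {
    "smart scheduling": rice(reach=100, impact=3, confidence=0.8, effort=4),
    "chatbot":          rice(reach=60,  impact=1, confidence=0.6, effort=1.5),
    "price estimator":  rice(reach=40,  impact=2, confidence=0.7, effort=3),
    "multilingual":     rice(reach=3,   impact=1, confidence=0.9, effort=2),
}

# Highest score ships first: the "largest projected OpEx delta" order.
for name, score in sorted(bets.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

With these placeholder inputs the ranking reproduces the shipping order the matrix produced: scheduling, then chatbot, then estimator, with multilingual last.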

Bet prioritization · impact vs. effort

RICE score · post-discovery

Each bet plotted on projected OpEx + revenue impact (Y) against engineering and operational effort (X). Quadrant top-right is "ship first."

Quadrant labels: quick wins (high impact, low effort) · strategic, ship (high impact, high effort) · tarpit, defer (low impact, high effort)

  • 01 · Smart scheduling · $336K OpEx + $243K revenue · shipped first
  • 02 · Chatbot (scoped down) · budget halved · still useful
  • 03 · Price estimator · conversion signal · shipped third
  • Multilingual chatbot · cut · 3% of patient base
01 · Smart scheduling: Procedure-aware durations from historical averages + 25% buffer. Shipped first because the OpEx delta dwarfed everything else.
02 · Chatbot: Pre-consult triage and FAQ. Shipped second; underperformed on adoption (11% / 17%) and got scoped down. Still useful, just not the headline.
03 · Price estimator: Private-cloud LLM trained on redacted historical pricing. Shipped third; high usage, but 8% conversion exposed a deeper problem (see Outcomes).
Multilingual: Cut against demographic data. 97% English preference, bilingual coverage already in place. Resources redirected to scheduling.

The matrix did real work in the room with the owner. It made "deprioritize the chatbot" a defensible product decision instead of a no-confidence vote on his idea. Without it, this is a much harder conversation.

04 · The three bets

What got built, what got cut, what didn't land.

1 · Smart scheduling · self-serve

Procedure-aware slot allocation, not 30-minute defaults.

Outcome · 8% → 91% adoption
Mechanism

Patient picks an issue (or describes one), enters pain, symptom, and urgency. System estimates time-to-procedure from historical averages of similar cases, adds a 25% buffer plus 5 minutes between appointments, and routes to the next viable slot. Lunch breaks and clinician availability respected automatically. Patient gets a confirmed estimated time, not a request to be reviewed.
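The sizing rule above can be sketched in a few lines. This is a minimal illustration of the described logic (historical average + 25% buffer + 5 minutes of turnover), not the production system; the procedure history and durations are invented placeholders.

```python
from statistics import mean

# Hypothetical procedure history: procedure -> observed chair times (minutes).
# These numbers are illustrative, not the clinic's data.
HISTORY = {
    "extraction": [52, 61, 48, 55],
    "cleaning": [28, 31, 26],
}

BUFFER = 0.25       # 25% safety margin on the historical average
TURNOVER_MIN = 5    # fixed gap between appointments

def slot_length(procedure: str, default: int = 30) -> int:
    """Procedure-aware slot length: historical avg + 25% buffer + turnover."""
    times = HISTORY.get(procedure)
    if not times:
        # No history for this procedure: fall back to a generic block.
        return default + TURNOVER_MIN
    return round(mean(times) * (1 + BUFFER)) + TURNOVER_MIN

print(slot_length("extraction"))  # 54 min historical avg books a 73 min slot
print(slot_length("cleaning"))    # ~28 min avg books a 40 min slot
```

The point of the buffer is asymmetry: an overrun cascades into clinician downtime for the rest of the day, while a few slack minutes cost almost nothing.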

Why this and not 30-min slots

Generic 30-minute or 1-hour blocks are why mid-day runs over. An extraction is not a cleaning. The model learns from the clinic's own historical procedure durations, so the schedule respects the clinic's actual operations, not a textbook average.

Adoption was the hard part

Early data was underwhelming: only 8% of new patients used the website to schedule in week 1, climbing to just 11% by week 2. I diagnosed it as a discoverability and education problem, not a product problem: optimized the website for SEO, simplified the booking UI, and explicitly instructed front desk to redirect inbound callers to the website rather than booking them over the phone. Adoption climbed sharply through week 4. Week 5 dipped to 25% during a 2-day Cloudflare outage that took the booking flow offline; growth resumed the following week and kept climbing to 91% by week 10.

2 · Pre-consult chatbot

Underperformed adoption: scoped down, still useful.

Outcome · 11% / 17% engaged
Mechanism

Pre-consult triage layered on a mini-WebMD-style knowledge base for emergency dental care. LLM orchestration with retrieval over a curated dental DB and FAQ corpus. Hard guardrails: no PHI stored, no personal info collected, no chat history retained beyond the session.

What the data said

After 3 weeks: only 11% of website visitors engaged for 3+ turns; 17% for 2+ turns. The bet was that pre-consult triage would meaningfully reduce front-desk Q&A volume. It didn't. Patients who land on an emergency dental site are looking to book, not chat.

The senior PM call

Cut the chatbot's tool-call ambitions (no booking, no scheduling, no patient-record actions). Halved the AI compute budget. Kept it as a thin FAQ + symptom-explainer because the maintenance cost was now low enough to justify the fraction of users who did engage. Failed bet. Useful learning. Cheaper version still in production.

3 · Price estimator · uninsured patients

High usage, low conversion: surfaced the next bet.

Outcome · ±15% accuracy · 8% conv
Mechanism

Private-cloud LLM trained on the clinic's historical pricing data plus broader dental procedure benchmarks. All patient identifiers redacted: procedure name and final cash price only. Insured patients aren't the audience; the estimator targets uninsured patients who hang up when front desk can't quote a number. Output is a dollar range with clear ±15% accuracy disclosure.
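The disclosure mechanics are simple enough to sketch. This is an illustrative fragment only: it assumes the model emits a point estimate and shows how that becomes the ±15% dollar range patients see; the `220` estimate is a made-up placeholder, not real pricing data.

```python
# Illustrative only: convert a model's point estimate (a hypothetical
# placeholder value) into the disclosed +/-15% cash-price range.
ACCURACY = 0.15  # disclosed accuracy band

def price_range(estimate: float) -> tuple[int, int]:
    """Widen a point estimate into the low/high dollar range shown to patients."""
    lo = round(estimate * (1 - ACCURACY))
    hi = round(estimate * (1 + ACCURACY))
    return lo, hi

lo, hi = price_range(220)  # hypothetical point estimate in dollars
print(f"Estimated cash price: ${lo}-${hi} (accurate to +/-15%)")
```

Showing the band, not a single number, is what makes the quote honest: the clinic commits to a range it can actually hit.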

Why scope to uninsured

Insured patients book regardless of price; they've delegated cost to their plan. Uninsured patients are the abandonment cohort: discovery showed 70%+ of "called and didn't book" cases were uninsured patients who couldn't get a number on the phone. Building an insurance-aware estimator would have been 5× the engineering effort for the audience least likely to abandon.

The honest result

Heavy usage. 8% conversion to booked appointment. The estimator works. Patients use it, get an honest number, and then the number scares them away. The product worked; the underlying business model didn't fit the audience. That insight is the next bet: in-house payment plans / a non-insurance subscription that lets uninsured patients spread cost over months. Not built yet. On the roadmap.

05 · Outcomes

Measured against dashboards I stood up at engagement start.

OpEx reduction · annualized · −$336K

28% reduction in monthly operating cost. Front-desk headcount consolidated as scheduling, reminders, and estimates moved to automation. Three SaaS subscriptions retired.

Headcount roles automated · 2 of 5
SaaS subscriptions retired · 3
Monthly OpEx delta · ~$28K
Revenue lift · annualized · +$243K

9% revenue lift on $2.7M ARR. Smart slot allocation enabled an average of 1.5 additional patients per day through better procedure-time fit and reduced clinician downtime.

Incremental patients / day · ~1.5
Slot-fill rate (post-launch) · +14%
Throughput attribution · ~9% lift
Adoption · web scheduling · 91%

By Week 10, 91% of new patients booked through the website. Week 5 dipped during a 2-day Cloudflare outage that knocked the booking flow offline; the trend recovered the following week and growth resumed. The remaining ~9% is a residual cohort that prefers phone, a feature of the patient base, not a target to push past.

Week 1 baseline · 8%
Week 3 (post SEO + UI fix) · 29%
Week 5 (Cloudflare outage) · 25%
Week 10 (steady state) · 91%
Margin (qualitative) · expanded

Margin lifted as OpEx dropped against stable-then-growing revenue. The exact margin figure wasn't within the scope of engagement access; I flagged it as a measurement gap rather than inventing a number from incomplete data.

OpEx · −28%
Revenue · +9%
Margin delta · positive · not measured

Where the OpEx actually came from

Front-desk staffing went from 5 roles to 3. Two roles whose work was fully replaced by automation were eliminated; the operator made the staffing call once the workflow had been validated. The remaining three roles consolidated patient check-in with billing, freeing dedicated coverage on the floor.

Front-desk role flow · pre vs post

5 roles → 3 roles · same coverage, automation handles the rest
Before · 5 roles
  • Patient check-in / portal · kept
  • Phone scheduling + reminders · automated
  • Billing + card payments · kept
  • Estimates + Q&A on phone · automated
  • Floor support · doctors · kept (consolidated)
After · 3 roles
  • Check-in + billing (consolidated) · same person, more capacity
  • Floor support · doctors · unchanged
  • Coverage / overflow · flexible
2 roles eliminated · 3 SaaS subscriptions retired · same patient coverage · ~$28K / mo · ~$336K / yr
06 · What I'd redo

Two things I would have done differently.

i.

I would have pushed back on the chatbot harder, sooner.

Emergency dental patients aren't browsing; they're in pain. Asking them to chat with a bot is asking them to wait. I shipped the chatbot anyway because the owner wanted it, and the data confirmed what the audience profile already implied: 11% engagement at 3+ turns. If I were starting over I'd show the owner his own KPIs in week 1, name the audience-mismatch directly, and put scheduling + pricing first without the chatbot bet on the roadmap at all.

ii.

The price estimator should have shipped with a payment plan, not before one.

The 8% conversion rate exposed a problem the estimator can't fix on its own: the price scares uninsured patients away even when the number is honest. I should have paired the estimator with an in-house, non-insurance subscription that lets patients spread the cost over months. The product would have shipped against a real commercial mechanism, the clinic would have captured a recurring-revenue cohort it currently loses, and the case study would have one more bet that landed instead of one bet that signaled a future bet.

End of case study

If this is the kind of PM thinking you're hiring for.

Available for new product roles. Same playbook as above: discovery before spec, sequence by impact, ship the bet that moves the largest number first, own the bets that don't land.