Quick Answer
Most therapy practices are not losing clients because nothing exists online. They are losing because the first public signals are weak, thin, or inconsistent. In this 100-practice sample, search context, local trust, and homepage clarity broke down much more often than therapists usually assume.
What we reviewed
How this benchmark was reviewed
The benchmark uses a cleaned sample of 100 complete practice assessments from January 1, 2026 through March 26, 2026.
Duplicate rows, placeholder enrichment emails, broken runs, and `needs_review` rows were excluded from the public sample.
AI visibility is reported only as “in the prompts we tested,” and only on the subset where the prompt test ran cleanly.
Psychology Today is discussed qualitatively in this version because the PT denominator is still too thin for a strong headline percentage.
Public source pages checked: Practice Visibility Assessment, Google Business Profile for Therapists, and How Clients Find Therapists.
Why Trust This Guide
This is useful because it shows repeated public discovery patterns, not one-off opinions
The benchmark is built from Reframe’s real assessment workflow. The goal is not to claim a perfect industry census. The goal is to show what keeps appearing when private practices lose clarity or trust before a client reaches out.
Sample size
100 practices
The public report uses a cleaned 100-practice sample drawn from Reframe’s live assessment workflow.
Strongest headline stat
91% weak search context
Weak Google-result context before the click was the most common issue in the current benchmark sample.
Local trust gap
72%
Nearly three-quarters of the sample either had no live Google Business Profile or had low visible review trust.
Sources And Method
The benchmark is based on Reframe’s real assessment workflow rather than a hypothetical checklist.
Useful proof of how fixing the discovery path changes outcomes once the public signals reinforce each other.
Supports the local trust interpretation used in the benchmark findings.
The public story in this version uses only the chart set that survived the denominator QA pass cleanly.
Quick Findings
The current benchmark is strong enough to say five things clearly.
These are the repeated problems that showed up most often in the current 100-practice sample. The AI statistic is a subset read. The other four are full-sample benchmark reads.
Weak Google-result context
91%
91 of 100 practices had weak context before the click.
Local trust gap
72%
72 of 100 either had no live Google Business Profile or had low visible review trust.
Homepage first-screen unclear
59%
59 of 100 made fit harder to understand quickly on the homepage.
Cross-surface mismatch
81%
81 of 100 showed inconsistency across search, local trust, and site surfaces.
Not named in tested AI prompts
43 / 48
Subset read only: 43 of 48 usable AI tests did not name the practice.
Read this as a reviewed sample of 100 practices, not as a national survey. The AI stat uses a 48-row subset where the prompt test ran cleanly. Psychology Today appears in the analysis, but PT is not used as a headline percentage in this version because the current PT denominator is still too thin.
Methodology
What this benchmark is, and what it is not
This report is based on a cleaned sample of 100 complete practice assessments from Reframe’s live assessment workflow between January 1, 2026 and March 26, 2026.
Included
- Complete assessments from the live `practice_assessments` table
- Google-result context, local trust, website first-screen clarity, and AI prompt data when usable
- A cleaned final sample after exclusions and dedupe
Excluded
- Duplicate practices or duplicate emails in the time window
- Placeholder enrichment rows and clearly broken runs
- `needs_review` rows that were not ready for public benchmark use
This is not a randomized national census. It is an observational benchmark based on practices Reframe actually reviewed. That is still useful as long as the claims stay narrow and the denominators stay visible.
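To make the inclusion and exclusion rules above concrete, here is a minimal Python sketch of the cleaning pass. The field names (`email`, `status`, `completed_on`) and the placeholder-email convention are illustrative assumptions, not Reframe's actual `practice_assessments` schema.

```python
from datetime import date

def clean_sample(rows, start=date(2026, 1, 1), end=date(2026, 3, 26)):
    """Apply the benchmark's exclusion rules to raw assessment rows (sketch)."""
    seen_emails = set()
    cleaned = []
    for row in rows:
        if not (start <= row["completed_on"] <= end):
            continue  # outside the benchmark time window
        if row["status"] in {"needs_review", "broken"}:
            continue  # not ready for public benchmark use
        if row["email"].endswith("@placeholder.invalid"):
            continue  # placeholder enrichment row (hypothetical convention)
        if row["email"] in seen_emails:
            continue  # duplicate practice/email within the window
        seen_emails.add(row["email"])
        cleaned.append(row)
    return cleaned

rows = [
    {"email": "a@example.com", "status": "complete", "completed_on": date(2026, 2, 1)},
    {"email": "a@example.com", "status": "complete", "completed_on": date(2026, 2, 2)},  # dupe
    {"email": "b@placeholder.invalid", "status": "complete", "completed_on": date(2026, 2, 3)},
    {"email": "c@example.com", "status": "needs_review", "completed_on": date(2026, 2, 4)},
    {"email": "d@example.com", "status": "complete", "completed_on": date(2026, 2, 5)},
]
print(len(clean_sample(rows)))  # → 2 (first a@ copy and d@ survive)
```

The point of the sketch is the order of operations: window first, quality flags second, dedupe last, so the first complete row for a practice is the one that survives.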
Discovery Stack
Most therapy practices are judged across several public surfaces, not one
A prospective client may see a Google result, notice whether there is a local profile with visible trust, click through to a homepage, compare that impression to a directory profile, and then ask AI for recommendations or summaries. That is why visibility problems are usually system problems.
1. Google result
2. Google Business Profile
3. Homepage first screen
4. Directory profile
5. AI prompts tested
The problem is rarely “nothing exists.” The problem is that one surface is vague, the next surface is thin, and the combined impression does not make the right client feel confident enough to reach out.
Finding 1
Weak Google-result context is still the default
In the current sample, 91 of 100 practices had weak Google-result context before the click.
The first leak usually happened before the site visit. Search results often had generic titles, weak or missing meta descriptions, or not enough local and specialty context to help the right client self-select.
What prospects often see
- a practice name with no clear specialty
- a result that sounds credible but generic
- no strong local signal
- no immediate reason to prefer one click over another
Smallest useful fix
Tighten the homepage title and meta description around specialty, location, and client problem before chasing broader SEO work.
Finding 2
Local trust gaps are common, even when the practice exists online
In the current sample, 72 of 100 practices had a local trust gap. Among rows with a live Google Business Profile, 26 of 54 still had low visible review trust.
The local problem was broader than “no profile.” Many practices either had no live Google Business Profile or had a live profile with too little visible trust to reassure a skeptical prospective client.
What prospects often see
- no local panel at all
- a thin profile with very few reviews
- a profile that exists but does not yet feel established
Smallest useful fix
Claim or strengthen the Google Business Profile before spending time on broader visibility expansion.
Finding 3
Homepage clarity still breaks down on the first screen
In the current sample, 59 of 100 practices made fit harder to understand quickly on the homepage.
Many sites were not empty. The issue was slower first-impression clarity: broad language, long credential stacks, or a delayed statement of who the practice helps and where it practices.
What prospects often see
- welcome language before fit language
- credentials before client problem
- several audiences named at once
- a homepage that feels polished but does not resolve quickly
Smallest useful fix
Rewrite the first screen around who you help, what you help with, and where you practice.
Finding 4
Cross-surface mismatch is more common than pure invisibility
In the current sample, 81 of 100 practices showed cross-surface mismatch.
The most repeated pattern in this benchmark was inconsistency. One surface said enough to sound relevant. The next surface felt broad, outdated, or thin. That forces the prospective client to do repair work that the practice should have handled already.
What prospects often see
- one surface signaling a specialty and another saying almost nothing specific
- one profile that feels current and another that feels neglected
- location or positioning language that changes from one place to the next
Smallest useful fix
Align the first public message across search, Google Business Profile, and the homepage before doing more expansion work.
Finding 5
AI visibility is still mostly absent in the rows where it was tested
Among the 48 rows where the AI prompt test ran cleanly, 43 were not named in the prompts tested.
This is a subset read, not a full-sample claim. It does not mean a practice is absent from every AI surface. It does mean most practices have not built enough public evidence to be surfaced reliably in the prompts Reframe tested.
How to read this safely
- AI is not a separate growth channel here. It is an outcome of the rest of the discovery system.
- Thin websites, weak local trust, and inconsistent public language make AI invisibility more likely.
- The useful fix is to improve the public evidence stack before treating AI as its own optimization project.
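The subset read above follows a simple denominator rule: only rows where the prompt test ran cleanly enter the denominator, and the stat is never projected onto the full 100. A minimal Python sketch, with field names assumed for illustration:

```python
def ai_visibility_read(rows):
    """Compute the AI stat over clean test rows only (field names are assumptions)."""
    usable = [r for r in rows if r["ai_test_status"] == "clean"]
    not_named = [r for r in usable if not r["named_in_prompts"]]
    return len(not_named), len(usable)

# Toy data shaped like the benchmark: 48 clean tests, 52 unusable rows.
rows = (
    [{"ai_test_status": "clean", "named_in_prompts": False}] * 43
    + [{"ai_test_status": "clean", "named_in_prompts": True}] * 5
    + [{"ai_test_status": "errored", "named_in_prompts": False}] * 52
)
num, den = ai_visibility_read(rows)
print(f"{num} / {den}")  # → 43 / 48
```

Keeping the denominator visible this way is what separates a subset read from a full-sample claim.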
Psychology Today
Psychology Today matters here, but not as a headline percentage yet
PT issues showed up repeatedly in the findings, but the current export does not yet support a strong PT headline chart.
In the current sample, many rows include a PT URL, but the PT read is not yet broad enough to support a publish-safe PT percentage. So this version keeps PT inside the qualitative findings: it still matters, it still contributes to visibility leakage, but it is not being overstated as a benchmark number until the denominator is stronger.
What Better Rows Shared
The stronger practices looked easier to trust and easier to understand
The better rows in this sample were not perfect. They were just easier to read, easier to trust, and less likely to create doubt between surfaces.
The common thread was not “more marketing.” It was clearer public evidence.
Fix Order
Most practices do not need to fix everything at once. They need the right fix order.
The benchmark points to a simple sequence that is more useful than random activity or channel-chasing.
1. Fix the first public mismatch.
2. Strengthen local trust if GBP is missing or thin.
3. Clarify homepage first-screen fit.
4. Align the message across surfaces.
5. Expand broader discovery only after the first layers are coherent.
How this maps to Reframe
PT-only problem
Start narrow with PT Optimization.
Google/local trust problem
Start with a Google Business Profile fix.
Cross-surface system problem
Treat it like a Visibility Foundation issue.
FAQ
Questions practice owners usually ask after reading the benchmark
These are the narrow answers the current benchmark can support without overclaiming.
Is this a national therapist survey?
No. It is a reviewed sample of 100 therapy practices from Reframe’s assessment workflow.
How was the sample chosen?
The sample came from complete assessments in a defined time window, then was cleaned to remove duplicates, placeholder rows, broken runs, and rows not ready for public benchmark use.
What does “AI visibility” mean here?
It means whether a practice was named in the prompts Reframe tested. It does not claim coverage of every AI product or every recommendation surface.
Does every therapist need a Google Business Profile?
Not every therapist will use it the same way, but local trust was weak often enough in this sample that GBP deserves a serious look for most private practices.
Is Psychology Today still worth it?
Often yes, but it works best when it reinforces the rest of the discovery system instead of carrying the whole load alone.
What is the first thing most solo practices should fix?
Usually the first public mismatch: the place where the current search result, local presence, or homepage creates doubt before the client gets a clear sense of fit.