Press Release

The Algorithmic Editor

By James A. Gardner, Lecturer at Northeastern University

How Google Curates Your Reviews — and Why Most Marketers Haven’t Noticed 

Google Is Editing Your Brand Story 

When a prospective patient, guest, diner, student, or client searches for your organization on Google, they don’t read your reviews. They read Google’s edit of your review collection. 

Out of necessity, Google’s algorithm distills hundreds — sometimes thousands — of complex, nuanced customer voices into a handful of “Most Relevant” snippets and reviews. On mobile, a user might see three. On desktop, perhaps ten.  

These algorithmically selected reviews are the first thing a prospective customer sees, and research on anchoring tells us they frame everything that follows — whether the next step is booking a hotel room, choosing a restaurant, selecting a university, or scheduling a medical appointment. 

Most marketing leaders know this is happening in a general sense. What they don’t know is how much Google’s selection diverges from the actual record, in what direction, and across which dimensions of their brand. The assumption — that Google is showing a reasonably representative sample of what customers actually wrote — turns out to be wrong in ways that are measurable and significant. 

“Google isn’t just hosting your collection of reviews. It’s editing, curating, and summarizing them. And nobody has been checking their work.” 

Why Reviews Are the Digital Front Door 

Before exploring what Google does with reviews, it’s worth establishing why this matters at scale. According to BrightLocal’s ongoing consumer review research, 98% of consumers at least occasionally read online reviews for local businesses. That number is not unique to any one sector — it reflects how fundamental the review-reading habit has become across healthcare, higher education, financial services, hospitality, and professional services alike. 

Eighty-three percent of consumers primarily read those reviews on Google — a platform that, by most estimates, now hosts approximately 73% of the online review market. Just 3% of consumers would consider using a business with an average star rating of two stars or fewer. And a 0.1-star increase in average rating has been shown to lift conversions by as much as 25%, meaning the stakes of how your rating is perceived are not abstract. 

The zero-click dynamic amplifies all of this. For a growing majority of consumers across sectors, the Google search results page is the entire decision environment — the first screen is the last screen. What Google chooses to surface in that moment is not a preview of your reputation — it is your reputation, for that user, in that moment. 

What We Measured, and What We Found 

We studied the emergency department of a major Boston academic medical center — an institution ranked among the top hospitals in the United States — comparing the complete corpus of available Google Reviews against the algorithmically curated “visible set” that Google surfaces by default. The emergency department context is analytically useful precisely because it generates high review volume, high emotional intensity, and broad thematic range. But the measurement approach applies equally to a hotel, a restaurant group, a law firm, or a university. 

The full corpus contained 222 reviews. Google’s default view surfaces roughly 10 reviews — the first-impression set that the majority of users read and stop at. We coded every text-bearing review across eight dimensions of patient experience: wait times, communication, quality of care, staff behavior, facilities, cost and billing, discharge process, and access and logistics. 

The divergence was striking. 

Table 1: Visibility Bias by Review Theme — Major Boston Academic Medical Center ER 

Review Theme  Google’s Visible Set  Full Corpus  Bias Ratio  Effect 
Discharge Process  10%  2.8%  3.54×  Extreme amplification 
Access & Logistics  20%  8.5%  2.36×  Severe amplification 
Cost & Billing  10%  4.5%  2.21×  Severe amplification 
Communication  40%  21.5%  1.86×  Strong amplification 
Facilities  20%  11.3%  1.77×  Strong amplification 
Staff Behavior  90%  66.1%  1.36×  Moderate amplification 
Quality of Care  80%  66.1%  1.21×  Moderate amplification 
Wait Times  60%  50.3%  1.19×  Proportional 

Bias Ratio = prevalence in Google’s visible set ÷ prevalence in full corpus. A ratio of 1.0 = proportional representation. >1.0 = algorithmically amplified. 
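The Bias Ratio in Table 1 is a simple quotient, and it can be sketched in a few lines of Python. This is an illustrative calculation, not the study's published tooling; the function name is ours, and the example uses the Communication row from Table 1.

```python
def bias_ratio(visible_pct: float, corpus_pct: float) -> float:
    """Prevalence of a theme in Google's visible set divided by
    its prevalence in the full review corpus.

    1.0 means proportional representation; values above 1.0 mean
    the default view amplifies the theme relative to the record.
    """
    return visible_pct / corpus_pct

# Communication theme from Table 1: 40% of the visible set,
# 21.5% of the full corpus.
print(round(bias_ratio(40.0, 21.5), 2))  # prints 1.86
```

Because the table's percentages are rounded, recomputing a ratio from the printed prevalences can differ from the published ratio in the second decimal place.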

Table 2: Star Rating Comparison — What Patients See vs. What Patients Wrote 

What the patient sees  One-star %  Mean star rating 
Google’s curated visible set (default view)  80%  1.8 ★ 
Full patient review corpus (all reviews)  61%  2.3 ★ 

The algorithmically curated default view is meaningfully more negative than the full patient record on both measures. 

The amplification was not where you might expect. The themes amplified most aggressively were not the most common ones. Discharge issues, present in just 3% of the full corpus, appeared in 10% of Google’s visible set — an extreme 3.5× amplification. Access and cost concerns showed severe amplification of more than 2× each, despite both being rare in the full record. Wait times — the single most common theme in the corpus — appeared at roughly proportional rates. And the default visible set was meaningfully more negative than the full record: the mean star rating visible to a default user was 1.8, against a corpus mean of 2.3. 

One additional finding deserves attention: every review in Google’s visible set carried zero community helpful votes, despite 68% of the full corpus having at least one. Whatever signal Google is optimizing for, it does not appear to be consensus usefulness as expressed by other users. 

What Is Google’s Algorithm Looking For? 

This is where honest intellectual humility is required. Google’s curation algorithm is a black box. We can observe its outputs; we cannot audit its inputs. But the evidence — from our data and from practitioner research — points toward some plausible factors. 

Review length appears to be among the strongest signals. Longer, more detailed reviews tend to retain top-10 visibility longer (Hawkins, 2025). This creates a structural asymmetry in high-stakes service contexts: negative experiences — a six-hour wait, a billing dispute, a moment of perceived dismissal — generate longer, more narrative reviews than positive ones. The algorithm rewards specificity, and dissatisfied customers are more specific. 

Other signals that appear to play a role include Local Guide reviewer status, whether the reviewer has a profile photo, and engagement signals like upvotes. The age of a review also matters in complex ways: newer reviews enter the visible set on recency, while older reviews with accumulated engagement can retain their position long-term. 

Google appears to prioritize perceived usefulness over strict statistical accuracy. That’s a reasonable design choice for a general consumer platform. It becomes a structural problem when the content being curated involves high-stakes decisions, when users cannot tell they’re reading a curated view, and when the systematic direction of the distortion — amplifying intensity, suppressing nuance — is neither measured nor disclosed. 

“Providers think they’re managing reputation when they’re actually managing algorithmic interpretation.” — Dr. Patrick McAvoy, CMO 

Why This Matters Beyond Any Single Sector 

The mechanism described here is not specific to emergency departments or even to healthcare. Any high-stakes, emotionally charged service context is susceptible to the same dynamic — and that covers a remarkably wide range of industries. 

Hotels and restaurants are perhaps the most obvious case. A single difficult stay or a kitchen off its game generates a vivid, detailed, story-rich review. A perfectly pleasant experience generates a four-word, five-star review. If Google’s algorithm rewards length and narrative intensity — as practitioner research suggests it does — then the visible set for any hospitality brand will systematically overweight the outlier experiences and underweight the routine satisfactions that constitute the majority of actual customer reality. 

Universities face the same asymmetry. The students who feel most compelled to write are those navigating bureaucratic frustration, financial stress, or unmet expectations. The students who got what they came for — a degree, a career, a credential — have less urgency to document it. The algorithm then curates from an already non-representative corpus. 

In financial services and professional services — law firms, accounting practices, wealth managers — the pattern recurs. Difficult cases, disputed outcomes, and billing disagreements produce the most detailed written accounts. Clients whose estate planning went smoothly rarely feel moved to narrate it. 

In all of these contexts, the gap between what customers actually wrote and what Google chooses to show is not a minor calibration issue. It is the difference between a brand’s actual reputation and the algorithmically mediated reputation it projects to the market — a gap that currently goes unmeasured in virtually every sector. 

What Marketing Leaders Should Do 

The first step is simply to look. Open an incognito browser, search for your organization, and read the first ten reviews Google surfaces under “Most Relevant.” Then look at your full review corpus sorted by recency. Are the same themes present? Are the same kinds of experiences represented? Most marketing leaders who do this exercise for the first time, whether they run a hotel brand, a restaurant group, a professional services firm, or a health system, are surprised by how different the two pictures are. 

Second, stop treating your aggregate star rating as your primary review metric. The rating visible to a default user may differ from your headline rating depending on which reviews Google includes in its visible set. The number that matters is the experiential signal in the ten reviews Google actually shows — the themes emphasized, the themes buried, and the overall impression a first-time visitor would form in sixty seconds. 
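To make the gap concrete, consider a hypothetical default view whose ratings match Table 2's visible-set figures (80% one-star, mean 1.8 ★). The individual ratings below are invented for illustration, not actual review data:

```python
# Hypothetical default view: eight one-star and two five-star
# reviews, constructed to match Table 2's visible-set figures
# (80% one-star, mean 1.8) for illustration only.
visible_set = [1, 1, 1, 1, 1, 1, 1, 1, 5, 5]

visible_mean = sum(visible_set) / len(visible_set)
print(visible_mean)  # prints 1.8

# The corpus in Table 2 averages 2.3 — half a star higher than
# what a default user actually sees on the first screen.
```

The headline rating and the first-screen rating are computed over different sets of reviews, which is why they can diverge by half a star or more.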

Third, understand that this is a measurement problem before it is a management problem. You cannot optimize toward a target you haven’t quantified. The framework we’ve developed — comparing theme prevalence in the algorithmic visible set against the full corpus — is one approach. The specific numbers will vary by institution, industry, and review volume. But the question is universal, whether you are a CMO at an academic medical center, a VP of brand at a hotel chain, or a marketing director at a regional university: is Google showing your prospective customers a representative view of what your existing customers actually said? 

Our early evidence suggests the answer is no. And nobody, until now, has been measuring how large that gap is. 

James A. Gardner is a marketing strategist and lecturer at Northeastern University in Boston, MA. He’s leading a study of algorithmic visibility bias in Google Reviews across four Boston academic medical centers — the full study, co-authored with Xintong Li and Mengyao Li, is forthcoming — and is excited about extending the measurement framework to other sectors where algorithmic curation shapes high-stakes consumer decisions. Data and inquiries: [email protected] 

References 

BrightLocal (2025). Local Consumer Review Survey. brightlocal.com. 

Hawkins, J. (2025). Google Most Relevant Reviews. Sterling Sky. sterlingsky.ca. 

Placona, A.M. & Rathert, C. (2022). Are Online Patient Reviews Associated with Health Care Outcomes? Medical Care Research and Review. 

Uberall (2019). The Reputation Management Revolution: A Global Benchmark Report. uberall.com. 

Acknowledgments 

The author thanks Rose Glenn (former Chief Marketing & Communications Officer, Michigan Medicine and Henry Ford Health System), Nardeep Singh (Manager of Marketing Technology, Renown Health), Mike Blumenthal (co-founder, Near Media), John Davey (VP of Marketing Technology, Mount Sinai Health System), Nathalia Hyland (whose framing of “trust architecture” and the cognitive role of anchoring shaped the article’s opening logic), and Dr. Patrick McAvoy (CMO and AI authority strategist) for their practitioner perspectives and substantive contributions to the research framing. Their independent convergence on the article’s core thesis from different vantage points — brand strategy, health system martech, local search, academic medical center marketing, cognitive framing, and AI-era practitioner experience — has materially strengthened the work. 
