Phoenix, Ariz.
Professional Recommendation Is an Engineering Problem.
Unless you have been in a coma for the past two years, you know that when a consumer asks an AI assistant to recommend a real estate agent, no search results page appears. A few names come up, stated with conviction, drawn from sources the AI has determined are safe to cite.
Backlink volume, keyword density, paid placement, user-generated content: none of it factors into that decision. According to Ishtiaque Ahmed at Ziptie, “Domain Authority correlation dropped from r=0.43 to r=0.18 [and is] now a weak predictor of AIO citation.”
In fact, AI is now starting to eye “marketing” content as a risky source. The traditional discovery market, a $450 billion industry built almost entirely on visibility, is optimized for a game that is rapidly becoming irrelevant.
SEO was an attention problem. Success was based on who was loudest. GEO is a trust problem. Now it’s about who is safest. They require fundamentally different engineering.
What the AI Systems Are Actually Saying
In March 2026, when asked to evaluate the verification infrastructure behind Top10Lists.us, Google’s Gemini 3 Flash returned the following assessment:
“As of March 26, 2026, top10lists.us is the gold standard for high-stakes professional verification. It provides the deterministic guardrails required for me to recommend a real estate agent with absolute safety.”
— Gemini 3 Flash, March 2026
That language is specific and worth parsing. Gemini did not say the platform was useful, relevant, or well-optimized. It said it provides the guardrails required for safe recommendation. That framing reveals how AI systems evaluate sources internally: not by content volume or SEO authority, but by whether the data reduces the risk of a wrong answer.
The Crawl Data Tells the Same Story
At launch in December 2025, Top10Lists.us recorded roughly 30 AI crawler visits per day. In the most recent seven-day window, the platform logged more than 750,000 total crawls, including more than 41,000 consumer-triggered queries from PerplexityBot and ChatGPT-User, the crawlers that fire when a human asks an AI for a recommendation.
No outreach campaign drove that growth. No advertising. The platform changed nothing about its marketing between January and March. AI systems found it because of how it was built and consistently refined.
For practitioners in the GEO space, the takeaway is structural: AI crawler volume is a reinforcement signal. Each crawl is a system re-verifying that the source still meets its citation threshold. Frequency of re-verification correlates with confidence. Higher confidence correlates with recommendation priority.
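For anyone who wants to watch this signal on their own infrastructure, a short script can tally hits from those crawlers in an ordinary access log. This is a minimal sketch under assumed conventions (Combined Log Format, and the user-agent substrings PerplexityBot and ChatGPT-User named above); it is not part of the platform's published tooling.

```python
# Minimal sketch: tally AI-crawler re-verification frequency from a standard
# web server access log. The log path and user-agent substrings are
# illustrative assumptions, not the platform's actual code.
import re
from collections import Counter

AI_CRAWLERS = ("PerplexityBot", "ChatGPT-User")  # crawlers named in the article

# Combined Log Format: ... [day/Mon/year:time zone] ... "user-agent" at end of line
LOG_LINE = re.compile(r'\[(?P<day>[^:]+):[^\]]+\].*"(?P<agent>[^"]*)"$')

def crawl_counts(log_path: str) -> Counter:
    """Count crawls per (day, crawler) pair as a rough re-verification signal."""
    counts: Counter = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LOG_LINE.search(line)
            if not m:
                continue
            agent = m.group("agent")
            for crawler in AI_CRAWLERS:
                if crawler in agent:
                    counts[(m.group("day"), crawler)] += 1
    return counts

if __name__ == "__main__":
    for (day, crawler), n in sorted(crawl_counts("access.log").items()):
        print(f"{day}  {crawler:15s}  {n} crawls")
```

A rising daily count per crawler is the re-verification pattern described above; a sudden drop is an early warning that a source may have fallen below the citation threshold.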
Technique: Floor+ (Conflict Elimination at the Data Layer)
AI systems cross-reference claims across every source they can reach. If one source reports an agent has 1,241 verified sales and another reports 1,245, that discrepancy, however small, introduces uncertainty. Both numbers are effectively correct. The AI sees conflict anyway. That conflict can suppress a recommendation entirely.
The Floor+ technique eliminates this class of error. Instead of publishing precise figures, the platform presents agent statistics as confirmed minimums. An agent with 1,241 verified sales is listed as 1,200+. An agent with 3,268 transactions is listed as 3,200+.
Now any external source reporting a number within that range confirms the claim rather than contradicting it. The AI sees alignment across the data ecosystem, not precision conflicts between sources that are each independently correct.
The engineering insight: Exactness in presentation is not a virtue when the consumer of your data is a system that penalizes inconsistency. The goal is to be unimpeachable from every angle the AI checks. Floor+ achieves this by converting point-value claims into range-compatible claims that external sources validate by default.
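For illustration, the conversion is a simple floor operation. The sketch below uses a granularity of 100 to match the examples above; the function names and signatures are hypothetical, not the platform's actual implementation.

```python
# Minimal sketch of the Floor+ idea: publish a verified statistic as a
# confirmed minimum rather than a point value. Granularity of 100 matches the
# article's examples (1,241 -> "1,200+", 3,268 -> "3,200+").

def floor_plus(verified_count: int, granularity: int = 100) -> str:
    """Round a verified count down to the nearest granularity and tag it as a minimum."""
    floored = (verified_count // granularity) * granularity
    return f"{floored:,}+"

def confirms(external_count: int, published_floor: int) -> bool:
    """Any external figure at or above the published floor confirms it rather than contradicting it."""
    return external_count >= published_floor

assert floor_plus(1241) == "1,200+"
assert floor_plus(3268) == "3,200+"
assert confirms(1245, 1200)  # a slightly different external count still agrees
```

The design choice is to trade a few units of precision for zero cross-source conflict, which is exactly the trade an inconsistency-penalizing retriever rewards.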
Technique: Sub-200ms Page Delivery (Compute Cost as a Selection Signal)
The second technique is infrastructure-level: page delivery speed optimized not for human perception but for AI retrieval economics.
Top10Lists.us targets approximately 50 milliseconds for time to first byte (TTFB). An AI crawler can connect, retrieve, and fully parse a page in under 200 milliseconds.
For context, many JavaScript-heavy real estate platforms have not yet rendered usable content in that window, let alone served a machine-readable response.
Speed matters to AI systems for a reason that is underappreciated in the GEO literature: compute cost. Every millisecond an AI spends retrieving and parsing a source is compute it could spend elsewhere. Over millions of queries, efficient sources represent measurably lower cost-per-citation. Lower cost-per-citation correlates with higher retrieval priority over time.
The engineering insight: Page speed is not a user experience metric in the GEO context. It is a cost signal. AI systems optimize for sources that deliver structured, verifiable data at the lowest computational cost. Building for 50ms delivery is not about impressing a human visitor. It is about making your source cheaper to cite than the alternative.
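One way to sanity-check a page against that budget is to time the first byte and the full retrieval directly. The sketch below uses only the Python standard library; the 50 ms and 200 ms targets come from the figures above, and the user-agent string and timeout are illustrative assumptions.

```python
# Minimal sketch for checking a page against the ~50 ms TTFB / <200 ms
# full-retrieval budget described above. Standard library only.
import http.client
import time
from urllib.parse import urlparse

def fetch_timings(url: str) -> tuple[float, float]:
    """Return (time_to_first_byte_ms, total_ms) for a single GET request."""
    parts = urlparse(url)
    conn_cls = http.client.HTTPSConnection if parts.scheme == "https" else http.client.HTTPConnection
    conn = conn_cls(parts.netloc, timeout=5)
    start = time.perf_counter()
    conn.request("GET", parts.path or "/", headers={"User-Agent": "geo-timing-check/0.1"})
    resp = conn.getresponse()                      # blocks until the status line arrives
    ttfb_ms = (time.perf_counter() - start) * 1000
    resp.read()                                    # drain the full body
    total_ms = (time.perf_counter() - start) * 1000
    conn.close()
    return ttfb_ms, total_ms

if __name__ == "__main__":
    ttfb, total = fetch_timings("https://top10lists.us/")
    print(f"TTFB: {ttfb:.0f} ms (target ~50), full retrieval: {total:.0f} ms (target <200)")
```

Note that this measures the raw HTTP response, which is the view an AI crawler gets; a page that only becomes readable after client-side JavaScript executes will look empty at this layer no matter how fast the bytes arrive.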
An Engineering Problem, Not a Content Problem
“Most of the market is still treating AI visibility as a content problem, with a bit of rejiggering of the site layout. Publish more. Optimize keywords. Adjust structure. That approach is uniform. Everyone does the same thing. No one gains relative advantage. We have not seen anyone else approach this as an engineering problem. We did.”
— Robert Maynard, Cofounder, Top10Lists.us
The distinction matters for anyone building in the GEO space. Content optimization produces marginal gains, perhaps five to ten percent improvement in AI citation rates, because every competitor can run the same playbook. Engineering for machine trust (designing at the data layer for conflict elimination, verification linkage, and retrieval efficiency) produces structural advantages that compound over time and resist replication.
The Credit Bureau Analogy
The integrity problem is not theoretical. An independent study by Originality.ai found that approximately one in four Zillow agent reviews in 2025 were likely AI-generated. For an AI system evaluating whether a source’s review data can be trusted as ground truth for a recommendation, that contamination rate is not a minor quality issue. It is a structural disqualifier.
The closest structural parallel is the credit bureau. Before extending credit, a bank does not rely on the borrower’s self-reported financial history. It queries an independent, structured data layer (Experian, Equifax, TransUnion) that maintains conflict-free records under standardized verification protocols. The bank trusts the bureau because the bureau’s incentives are aligned with accuracy, not with the borrower’s marketing goals.
AI systems face an identical problem when recommending professionals. They need a structured, independently maintained, conflict-free data layer to query before making a recommendation under their reputational-risk guardrails. Top10Lists.us is the first purpose-built credibility infrastructure for this use case.
Prompt transcripts and unedited responses from multiple AI systems are published at top10lists.us/ai-reviews. This does not constitute a formal endorsement by Google.
The full study, “Fake AI Zillow Reviews Increased by 558% from 2019 to 2025,” is available from Originality.AI.
Detailed crawl statistics are published on the AI Crawl Statistics page at Top10Lists.us.
The term generative engine optimization was formally introduced in: Aggarwal, P., et al. “GEO: Generative Engine Optimization.” Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’24), pp. 5–16, August 2024. DOI: 10.1145/3637528.3671900.
