
When a short-term rental platform operating under America’s strictest vacation rental laws evaluated every major AI provider, the decision came down to more than benchmarks.
When CasaVoya first spoke to The AI Journal in October 2025, the platform’s strategy was deliberately agnostic. Different models for different tasks, no ideological commitment to any single provider, resilience through redundancy. It was a pragmatic position — and for a fast-moving startup marketing vacation rentals in New York City, pragmatism made sense.
Six months later, the picture has sharpened. CasaVoya has designated Anthropic’s Claude as its primary AI across compliance, content, and guest experience — a decision that reflects what the platform learned from actually running production AI workloads in a regulated, trust-dependent marketplace.
“We meant what we said about not being ideological,” said Sasha Ramani, Board Member and AI Advisor at CasaVoya. “This isn’t ideology. It’s the conclusion we reached after putting these models to work on problems where the cost of error is real.”
When AI Gets the Law Wrong, Someone Gets Fined
Most AI use cases are forgiving. A suboptimal product recommendation costs a click. A mediocre email draft costs a few minutes of editing. Short-term rental compliance in New York City is a different category of problem entirely.
Local Law 18 — the ordinance that effectively banned most short-term rentals in NYC in 2023 — is among the most detailed and actively enforced vacation rental regulations in the world. Hosts must register with the city, remain present during guest stays, and navigate a web of building-level and neighborhood-level restrictions. As CasaVoya expands to additional cities — Barcelona, London, Tokyo each bring their own distinct regulatory frameworks — the surface area of compliance complexity grows with every market.
The platform uses AI alongside its own legal experts to stay compliant with the relevant laws and regulations in every jurisdiction where it operates, to flag potential issues before they become violations, and to help hosts understand their obligations in plain language. It is, in Ramani’s framing, exactly the kind of task where confident-sounding wrong answers are worse than acknowledged uncertainty.
“The failure mode we fear most isn’t an AI that says ‘I don’t know,’” said Ramani. “It’s an AI that generates a plausible-sounding but incorrect answer about a host’s or the platform’s legal obligations. That’s a real harm to a real person.”
Claude’s Constitutional AI framework — Anthropic’s approach to building honesty and appropriate uncertainty into the model at a structural level — proved decisive for this workload. Where other models would generate authoritative-sounding responses about regulatory nuances they had no reliable basis to assert, Claude demonstrated a more calibrated relationship with the limits of its own knowledge.
The Writing Is the Product
Beyond compliance, CasaVoya’s second major Claude deployment is in content — and here the stakes are different but the quality bar is equally unforgiving.
Listing descriptions, review summaries, and blog content are not internal documents. They are the guest experience before the guest arrives. A traveler from Seoul deciding between two comparable Manhattan apartments is making that decision based substantially on how well the platform communicates what each property actually feels like — the texture of the neighborhood, the honest trade-offs of a fifth-floor walkup, the pattern in 200 reviews that says this host is exceptional at problem-solving.
The platform processes more than 8,000 verified guest reviews, using Claude to generate structured summaries that surface themes, identify trade-offs, and distinguish what business travelers valued from what families with young children valued. The AI-powered search — which interprets queries like “anniversary weekend near great restaurants” rather than forcing users to fill in structured filter fields — also runs on Claude for the reasoning-intensive dimensions of intent interpretation.
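The review-summarization workflow above can be sketched as two steps: structure the reviews by traveler type, then hand the structured text to the model to contrast themes. The sketch below assumes the official `anthropic` Python SDK; the prompt wording, field names, and model name are illustrative, not CasaVoya’s production pipeline.

```python
# Sketch of grouping guest reviews by traveler type before summarization,
# so the model can contrast what business travelers valued against what
# families valued. All names and prompt text here are illustrative.

def build_summary_prompt(reviews: list[dict]) -> str:
    """Group review texts by traveler type and frame a contrastive summary task."""
    by_type: dict[str, list[str]] = {}
    for r in reviews:
        by_type.setdefault(r["traveler_type"], []).append(r["text"])
    sections = []
    for traveler_type, texts in sorted(by_type.items()):
        joined = "\n".join(f"- {t}" for t in texts)
        sections.append(f"## {traveler_type} ({len(texts)} reviews)\n{joined}")
    return (
        "Summarize the themes and trade-offs in the reviews below, noting "
        "where traveler types disagree:\n\n" + "\n\n".join(sections)
    )

# The actual model call would look roughly like this (requires an API key;
# the model name is an assumption, not taken from the article):
#
# import anthropic
# client = anthropic.Anthropic()
# message = client.messages.create(
#     model="claude-sonnet-4-20250514",
#     max_tokens=1024,
#     messages=[{"role": "user", "content": build_summary_prompt(reviews)}],
# )
```

Pre-structuring the input this way keeps the grouping logic deterministic and auditable, reserving the model for the genuinely reasoning-intensive part: surfacing themes and trade-offs.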
“Generic AI content is now everywhere, and guests can feel it,” said Ramani. “The test for us is whether the writing actually helps someone decide — whether it’s honest about trade-offs, specific about what makes a property unusual, and calibrated to what different types of travelers care about. Claude’s natural language generation clears that bar more consistently than the alternatives we’ve tested.”
The multilingual dimension adds another layer. With guests from 22 countries using the platform, review translation is not a marginal feature — it is core infrastructure. Claude’s translation preserves sentiment and nuance rather than flattening the distinctive voice of a guest review into functional but lifeless English.
Building on Claude Code
The third workstream where Claude has earned its primary designation is in development itself. CasaVoya’s technical stack — Wix Velo-based, with integrations spanning infrastructure, iCal compatibility, messaging systems, and AI-powered search — is more sophisticated than its startup size might suggest.
Claude’s coding capabilities have become embedded in the platform’s engineering workflows. Where developers previously moved through feature cycles constrained by the pace of manual coding, Claude has shifted the dynamic — enabling the small team to build and ship at a velocity previously reserved for much larger engineering organizations.
The broader market data supports what CasaVoya is experiencing firsthand. Claude Code reached $1 billion in annualized revenue within six months of its public launch, and surpassed $2.5 billion by early 2026, with engineering teams across industries describing it as infrastructure they can no longer imagine working without. For a lean startup competing in a market that rewards product sophistication, that velocity differential matters enormously.
A Values Decision That Happened to Be a Technical One
The designation of Claude as CasaVoya’s primary AI did not happen in a vacuum. Early 2026 produced one of the most clarifying moments in the short history of the commercial AI industry: Anthropic publicly refused to allow Claude to be deployed for mass domestic surveillance or fully autonomous weapons, absorbing significant political and commercial pressure rather than compromising the safety principles established at the company’s founding.
For a platform whose entire market position rests on being the trustworthy, compliant alternative to less scrupulous operators, that moment registered.
“We operate in a market where trust is the product,” said Ramani. “Our guests trust that the listings are compliant and accurately represented. Our hosts trust that we’re helping them operate safely. The AI we build on should reflect that same commitment to not cutting corners on things that matter.”
CasaVoya also noted discomfort with the direction some AI providers are taking in their consumer products. OpenAI’s introduction of an adult content mode signals a set of product priorities that the company believes sit uneasily alongside technology that families and professional hosts rely on. Anthropic’s explicit commitment to keeping Claude ad-free, and its refusal to trade safety principles for revenue, aligns more naturally with the culture CasaVoya is building.
None of this is to say the technical and value judgments are separable. In Ramani’s view, they converge: a company that prioritizes honesty at the model level because it believes that’s the right way to build AI is also, not coincidentally, building a model that performs better on tasks where honesty matters most.
“The compliance use case and the values alignment aren’t two different reasons we chose Claude,” said Ramani. “They’re the same reason, expressed two different ways.”
What This Means for AI Selection in Regulated Industries
CasaVoya’s experience points toward a broader pattern that is likely to play out across regulated sectors — financial services, healthcare, legal, real estate — as AI moves from experimentation to production.
The selection criteria that dominate early-stage AI evaluation — benchmark scores, context window size, generation speed — are necessary but insufficient for environments where the cost of error is concrete and the regulatory surface area is complex. In those environments, model behavior under uncertainty, the cultural and ethical posture of the underlying company, and the quality of nuanced natural language generation become decisive factors.
“We started by evaluating models,” said Ramani. “We ended up evaluating companies. That shift in framing is something I’d recommend to any team deploying AI in a regulated context.”
CasaVoya is currently expanding beyond New York City, with its Claude-powered layer being adapted for new markets ahead of a significant growth push tied to FIFA World Cup 2026 matches in the New York metropolitan area. The platform is actively recruiting hosts and expects to significantly expand its listing inventory over the coming months.
About CasaVoya
CasaVoya (formerly ManhattanBNB) is a vacation rental platform that democratizes access to authentic, affordable travel experiences. Born in New York City’s highly regulated rental market, CasaVoya operates as a trusted introduction service, connecting travelers directly with exclusive vacation rentals not listed on traditional booking platforms. The company has facilitated thousands of stays for guests from 22 countries, serving groups, families, and travelers seeking alternatives to hotels and cookie-cutter accommodations across New York City and other major global destinations. For more information, visit www.CasaVoya.com.


