
Purusoth Mahendran has spent his career building and scaling engineering teams at major tech companies, moving from Amazon to Cash App to his current role as Senior Engineering Manager at Thumbtack. His approach to AI leadership has evolved dramatically across these environments, shifting from integrating AI into existing systems to building AI-native products from the ground up.
Mahendran leads cross-functional teams developing multi-modal AI systems that move beyond traditional keyword search toward conversational interactions. His work involves solving the challenge of translating ambiguous natural language requests into accurate, actionable recommendations for home services. When someone asks for “help with a leaky faucet,” his systems must determine urgency, location, and the right type of professional while maintaining a trustworthy user experience.
Beyond product development, Mahendran focuses on how AI is reshaping engineering itself. He sees teams needing new skills in prompt literacy, data curation, and systems thinking with guardrails as AI touches real-world decisions. His perspective combines hands-on experience scaling infrastructure with insights into building AI products that millions of people actually use.
Mahendran discusses the evolution from searching to collaborating in service marketplaces, common misconceptions about implementing AI in traditional industries, and practical advice for engineering managers transitioning into AI-focused roles.
You’ve led engineering teams at three major tech companies: Amazon, Cash App, and now Thumbtack. How has your approach to building and scaling AI teams evolved across these different environments?
Across these companies, what’s evolved most is my approach to org design, iteration velocity, and problem framing in AI-native contexts.
At Amazon and Cash App, AI was typically integrated into existing systems. The focus was on building centralized ML infra, applying models to well-scoped problems (e.g., ranking, deduplication, fraud), and scaling teams that specialized in data plumbing, model iteration, and infra abstraction. The challenge was often navigating legacy complexity and aligning across orgs.
At Thumbtack, by contrast, we’re building AI-native products from the ground up. This demands cross-functional teams (eng, design, science, content) that work product-first, iterating on prompt UX, agentic workflows, and streaming responses before even training a model. What matters is not just model accuracy but the end-to-end user experience. Team structure, tooling, and review processes have to reflect that.
The core lesson: AI teams don’t scale linearly with headcount; they scale with clarity of learning loops. The faster we can test, learn, and ship across product, model, and infra, the more durable our advantage.
You’ve mentioned moving away from keyword search to conversational, AI-driven interactions. How do you balance the complexity of natural language understanding with the need for accurate, actionable recommendations in home services?
Natural language is inherently ambiguous. People ask for “help with a leaky faucet,” but the system needs to determine: Is this plumbing? Handyman? Urgent? Indoor? Outdoor? That’s step one. Step two is harder: surfacing the right pro who’s available, nearby, and qualified, without asking the user to repeat themselves.
The solution is to decouple language parsing from fulfillment logic. We treat NLU as a probabilistic layer that feeds into a deterministic constraint solver: availability, location, category, booking rules. This ensures robustness: even when understanding isn’t perfect, we can still return safe, actionable results.
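To make that concrete, here is a minimal Python sketch of the pattern; the `Intent` and `Pro` shapes, the category names, and the stubbed parser are illustrative assumptions, not Thumbtack’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    category: str      # e.g. "plumbing"
    urgency: str       # "urgent" or "routine"
    confidence: float  # probability estimate from the NLU layer

@dataclass
class Pro:
    name: str
    category: str
    zip_code: str
    available: bool

def parse_request(text: str) -> Intent:
    """Probabilistic layer (stubbed): in practice an LLM or intent classifier."""
    return Intent(category="plumbing", urgency="urgent", confidence=0.86)

def solve_constraints(intent: Intent, pros: list[Pro], user_zip: str) -> list[Pro]:
    """Deterministic layer: hard booking rules the model can never override."""
    return [
        p for p in pros
        if p.category == intent.category
        and p.zip_code == user_zip
        and p.available
    ]

pros = [
    Pro("Ada's Plumbing", "plumbing", "94103", True),
    Pro("Bay Handyman", "handyman", "94103", True),
]
intent = parse_request("help with a leaky faucet")
matches = solve_constraints(intent, pros, user_zip="94103")
print([p.name for p in matches])  # ["Ada's Plumbing"]
```

The key property of the design: the model only proposes, while the deterministic layer decides what is actually bookable, so a misread request can never surface an unavailable or out-of-area pro.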
Ultimately, the goal isn’t perfect comprehension; it’s progressive clarification. We design systems that learn and adapt, improving the interaction loop with every user signal while protecting the integrity of the booking experience. That’s how you scale trust in AI-driven consumer platforms.
Beyond your day job, how do you see AI changing the skills that engineering teams need to develop?
AI is redefining what it means to be an engineer. In the past, engineers focused on writing deterministic code to solve well-defined problems. Today, many of the most valuable problems are ill-defined, and the solutions require probabilistic thinking, data-centric iteration, and an ability to collaborate with models as creative partners.
This shift means teams need to develop three new core muscles:
Prompt Literacy & Model Debugging: Engineers must learn to treat LLMs and other models as “soft modules” tools that don’t behave deterministically, but can be shaped through prompting, fine-tuning, and careful evaluation. This requires a mindset closer to debugging people than debugging code.
Data as Product: The quality of your model is only as good as the data behind it. Engineers must now think like product managers for data: curating datasets, labeling edge cases, and creating synthetic inputs to improve generalization.
Systems Thinking with Guardrails: As AI systems touch real-world decisions, engineers must own safety, fairness, and feedback loops. It’s not enough to build; they must contain, steer, and audit AI behavior in production. A minimal sketch of what that can look like follows this list.
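As a rough illustration of that third muscle, here is a guardrail wrapper, assuming a hypothetical model that returns JSON with a `category` field; the allowed set and the logging format are invented for the example:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_guardrail")

ALLOWED_CATEGORIES = {"plumbing", "electrical", "handyman"}  # illustrative

def guarded_classify(raw_model_output: str) -> str:
    """Wrap a model call with validation, a safe default, and an audit trail."""
    try:
        parsed = json.loads(raw_model_output)  # models can emit malformed JSON
        category = parsed.get("category", "")
    except json.JSONDecodeError:
        category = ""

    # Contain: never let an out-of-policy answer reach the user.
    safe = category if category in ALLOWED_CATEGORIES else "needs_human_review"

    # Audit: every decision is traceable after the fact.
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "raw": raw_model_output,
        "decision": safe,
        "overridden": safe != category,
    }))
    return safe

print(guarded_classify('{"category": "plumbing"}'))  # plumbing
print(guarded_classify('{"category": "exorcism"}'))  # needs_human_review
```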
The result is a more creative, more iterative, and more interdisciplinary engineering culture. It’s not just about writing code anymore; it’s about co-designing intelligence.
Looking at the broader AI landscape, what misconceptions do you see companies having when they try to implement AI in consumer-facing applications, especially in traditional industries like home services?
The biggest misconception is assuming AI can be “plugged in” like a feature rather than designed in as a system. In traditional industries, data is often messy, fragmented, or siloed, yet teams skip the unglamorous work of aligning taxonomies, labeling edge cases, and deeply understanding user intent.
Another trap is overestimating what models can infer and underinvesting in product scaffolding: the UX and fallback logic that keep experiences trustworthy when the AI isn’t confident. You can’t just ship a smart model; you have to ship a resilient experience.
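A sketch of what that fallback logic can look like, with a hypothetical `classify_with_model` stub standing in for the real model call and made-up keyword rules:

```python
KEYWORD_RULES = {"faucet": "plumbing", "outlet": "electrical"}  # illustrative

def classify_with_model(query: str) -> tuple[str, float]:
    """Hypothetical model call (stubbed): returns (category, confidence)."""
    return ("plumbing", 0.55)  # pretend the model is unsure about this query

def route(query: str) -> str:
    # 1. Try the model, but never trust a low-confidence answer blindly.
    try:
        category, confidence = classify_with_model(query)
        if confidence >= 0.8:  # threshold is an illustrative choice
            return category
    except Exception:
        pass  # a model outage should never take the product down

    # 2. Fall back to transparent, auditable keyword rules.
    for word, fallback_category in KEYWORD_RULES.items():
        if word in query.lower():
            return fallback_category

    # 3. Last resort: ask the user rather than guess.
    return "ask_clarifying_question"

print(route("my faucet is leaking"))  # plumbing (via keyword fallback)
print(route("something is broken"))   # ask_clarifying_question
```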
Finally, I see a lot of “bolt-on AI” thinking, where a model is layered on top of legacy workflows without asking the first-principles question: does this even require AI? Sometimes the best answer is a rule-based system, tighter data modeling, or workflow automation. AI is powerful, but without disciplined problem framing, it risks adding complexity instead of solving it.
You’ve worked on both the infrastructure side (scaling systems) and the consumer AI side (personalization, search). How do these experiences inform your approach to building AI products that millions of people will actually use?
It’s given me a bias toward realism and resilience. Great AI experiences aren’t just about model accuracy; they’re about latency, cost, interpretability, fallbacks, and observability. When you’ve scaled systems, you know that every “smart” feature needs a dumb but reliable backup path.
So, when I design AI products, I don’t just ask, “Can we do this?” I ask, “What happens when it fails at scale? Can we debug it? Can we degrade gracefully?” It’s the marriage of smart algorithms and hardened infrastructure that creates AI people can actually trust.
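For instance, a smart ranking path can be wrapped with a latency budget and a deterministic backup; `smart_ranker` and the 500 ms timeout below are invented for illustration:

```python
import concurrent.futures
import time

def smart_ranker(items: list[str]) -> list[str]:
    """Hypothetical ML ranking call; may be slow or flaky under load."""
    time.sleep(2)  # simulate a model that blows its latency budget
    return items[::-1]

def simple_ranker(items: list[str]) -> list[str]:
    """The dumb but reliable backup path: deterministic, cheap, always works."""
    return sorted(items)

def rank(items: list[str], timeout_s: float = 0.5) -> tuple[list[str], str]:
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(smart_ranker, items)
    try:
        return future.result(timeout=timeout_s), "model"
    except Exception:  # timeout or model error: degrade gracefully, never fail
        return simple_ranker(items), "fallback"
    finally:
        pool.shutdown(wait=False)  # don't block the request on the slow path

ranked, path = rank(["pro_b", "pro_a", "pro_c"])
print(path, ranked)  # fallback ['pro_a', 'pro_b', 'pro_c']
```

Returning which path served the request, as above, is one cheap way to get the observability needed to debug degraded traffic later.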
What emerging trends in AI do you think will most impact how people interact with service marketplaces over the next few years? What should other engineering leaders be preparing for?
The biggest shift is from searching to collaborating. Users increasingly expect marketplaces to act less like directories and more like assistants: understanding vague intent, asking clarifying questions, and negotiating tradeoffs.
This means engineering leaders need to prepare for agentic systems (models that can reason, plan, and hold multi-turn context) and for the tooling required to monitor and steer them safely. Leaders should invest not just in LLM integration, but in orchestration layers, feedback loops, and human-in-the-loop escalation paths.
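A toy sketch of such an orchestration layer, with the planning step stubbed out (in production it would be a model call) and hard bounds that escalate to a human:

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    history: list[str] = field(default_factory=list)
    turns: int = 0

def plan_next_step(state: AgentState) -> str:
    """Hypothetical planner; in production this is an LLM call with tools."""
    if "location" not in " ".join(state.history):
        return "ask: What's your zip code?"
    return "book: plumber"

def orchestrate(state: AgentState, max_turns: int = 5) -> str:
    """Bounded agent loop: multi-turn context, audit trail, human escalation."""
    while state.turns < max_turns:
        step = plan_next_step(state)
        state.history.append(step)  # every step is recorded for later audit
        state.turns += 1
        if step.startswith("book:"):
            return step  # a safe terminal action was reached
        if step.startswith("ask:"):
            # Simulated user reply; a real system would wait for input here.
            state.history.append("user: location is 94103")
    return "escalate: hand off to a human"  # guardrail: never loop forever

print(orchestrate(AgentState(goal="fix leaky faucet")))  # book: plumber
```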
What advice would you give engineering managers looking to transition into AI-focused roles about building the right technical foundation while still maintaining their leadership responsibilities?
Start with your strength: product engineering. The best AI leaders are not just model-savvy; they deeply understand how to turn messy, real-world problems into systems that deliver value. That starts with clear problem framing, data-driven iteration, and ruthless attention to how users actually interact with the product.
You don’t need to reinvent backpropagation, but you do need to get hands-on with the applied stack. Learn how data is collected, labeled, validated, and monitored. Understand what causes models to drift or degrade in production. Get comfortable reading model outputs and tracing them back to data issues or product gaps.
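One concrete, low-cost place to start is a drift check on input distributions. Here is a sketch using the Population Stability Index, a standard drift heuristic; the category mix and the 0.2 threshold are illustrative:

```python
import math
from collections import Counter

def psi(expected: list[str], actual: list[str]) -> float:
    """Population Stability Index over a categorical feature: a simple
    signal that live traffic has drifted from the training distribution."""
    categories = set(expected) | set(actual)
    e_counts, a_counts = Counter(expected), Counter(actual)
    score = 0.0
    for c in categories:
        e = max(e_counts[c] / len(expected), 1e-6)  # avoid log(0)
        a = max(a_counts[c] / len(actual), 1e-6)
        score += (a - e) * math.log(a / e)
    return score

train = ["plumbing"] * 70 + ["electrical"] * 30
live = ["plumbing"] * 40 + ["electrical"] * 60
print(f"PSI = {psi(train, live):.3f}")  # ~0.376; > 0.2 commonly means "investigate"
```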
Equally important: don’t outsource the thinking. Resist the temptation to treat AI as “someone else’s domain.” Set aside regular time, even 30 minutes a day, to review papers, run Colab experiments, or dig into misclassified edge cases. You’ll build intuition faster than you think.
And finally: stay outcome-focused. AI is not about tech for tech’s sake. It’s a tool in the service of trust, speed, personalization, or scale. Your job as a leader is to connect the dots between what’s possible in AI and what’s valuable to users.