Prithvish Rakesh Doshi is a software engineer at Kaizen Software Systems, where he builds custom solutions for clients in the media industry. He brings a strong foundation in data structures, backend development, and practical problem-solving. His recent hands-on work with autonomous agents centers on how AI can be integrated into real-world production systems.
One of Doshi’s earliest experiences with agentic AI happened unexpectedly while prototyping a browser-based agent as part of a proof of concept. “The first feeling seeing the browser agent in action was awe,” he recalls. “It could break the webpage into DOM components, navigate through screens, maintain context, all from simple text instructions.” That early demo showed the power of autonomous agents to execute basic tasks like adding products to a cart or listing vendors at a festival. More importantly, it opened his eyes to how agents could be scaled across complex workflows with minimal human input.
But building these systems isn’t just about clever engineering. It’s about reliability. “Yes, agents can do simple tasks well, but you have to build for when they fail. Design for failure from day one,” he says. Doshi likens agent integration to plugging in third-party APIs, but with one major difference: unpredictability. The same prompt might yield slightly different outcomes, which means the backend has to catch misfires without derailing the entire process.
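A minimal Python sketch of that mindset, assuming a hypothetical agent_call that takes a prompt and returns a dict, plus a caller-supplied validate check (neither comes from Doshi's codebase): the point is simply that a misfire gets logged and retried rather than being allowed to derail the workflow.

```python
import logging
from typing import Callable, Optional

logger = logging.getLogger("agent_guard")


def run_with_guardrails(
    agent_call: Callable[[str], dict],
    prompt: str,
    validate: Callable[[dict], bool],
    max_attempts: int = 3,
) -> Optional[dict]:
    """Run an agent call, validating each result and retrying misfires.

    A failed or invalid attempt is logged and retried; after max_attempts
    the caller gets None instead of an exception, so the surrounding
    workflow can degrade gracefully rather than derail.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            result = agent_call(prompt)
        except Exception:
            logger.exception("agent call raised on attempt %d", attempt)
            continue
        if validate(result):
            return result
        logger.warning("attempt %d returned an invalid result: %r", attempt, result)
    return None
```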
B2B marketing platforms turned out to be ideal testbeds for his agentic experiments. “They’re low risk compared to domains like healthcare or finance, and marketers are already used to constant iteration. Plus, the agents can analyze trends online and suggest campaigns or content ideas with minimal risk.” Still, even here, a misclick or misfire can mean real consequences, something Doshi learned firsthand.
In one early test, the agent got stuck in an infinite loop while trying to locate an “Add to Cart” button that didn’t exist on a certain site. “It couldn’t find what it was looking for and just kept trying,” he says. That experience cemented his belief in human-in-the-loop design. “Every correction becomes training data. Every mistake is a moment to learn.”
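A sketch of that guard in code, with hypothetical names (ClickTask, find_element) standing in for whatever the real agent uses: the search is bounded, and a dead end becomes an escalation and a recorded correction instead of an infinite loop.

```python
from dataclasses import dataclass, field


@dataclass
class Escalation:
    """A point where the agent gives up and hands the task to a person."""
    goal: str
    attempts: int
    notes: str = ""


@dataclass
class ClickTask:
    goal: str                      # e.g. "Add to Cart"
    max_attempts: int = 5
    corrections: list = field(default_factory=list)

    def run(self, find_element):
        """Try to locate the target element a bounded number of times.

        find_element stands in for however the browser agent searches the
        DOM. If the element never appears, the task escalates instead of
        looping forever, and the human correction that follows is kept as
        training data for next time.
        """
        for _ in range(self.max_attempts):
            element = find_element(self.goal)
            if element is not None:
                return None  # found it; nothing to escalate
        escalation = Escalation(self.goal, self.max_attempts,
                                notes="element not found; needs human review")
        self.corrections.append(escalation)
        return escalation


# A page with no such button escalates after five tries instead of spinning.
task = ClickTask(goal="Add to Cart")
print(task.run(find_element=lambda goal: None))
```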
Transitioning from research-grade demos to production-ready agents hasn’t been simple. “Prototypes only need to work once. Production systems need to work every time,” Doshi emphasizes. He’s had to contend with latency stacking, unpredictable loads, and flaky browser DOMs. “What really matters isn’t just the model’s intelligence. It’s the orchestration layer. That’s where production lives or dies.”
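One way to picture that orchestration layer, as a hedged sketch rather than anything from Doshi's stack: each step gets bounded retries, jittered backoff, and a latency budget, so a flaky DOM or a slow model call surfaces early instead of stacking downstream.

```python
import logging
import random
import time

logger = logging.getLogger("orchestrator")


def run_step(step, *, retries=2, base_delay_s=0.5, budget_s=10.0):
    """Run one workflow step with bounded retries and a latency budget.

    step is any zero-argument callable: a model call, a browser action,
    a DOM query. Failures back off with jitter; a step that keeps failing
    or blows its budget raises, so the orchestrator decides what happens
    next instead of the slowdown silently stacking into later steps.
    """
    for attempt in range(retries + 1):
        start = time.monotonic()
        try:
            result = step()
            elapsed = time.monotonic() - start
            if elapsed > budget_s:
                # Treat over-budget results as failures so slow steps surface
                # early; a real system would enforce the deadline, this sketch
                # only measures it.
                raise TimeoutError(f"step took {elapsed:.1f}s, budget is {budget_s}s")
            return result
        except Exception:
            if attempt == retries:
                raise
            delay = base_delay_s * (2 ** attempt) + random.uniform(0, 0.1)
            logger.warning("step failed (attempt %d), retrying in %.2fs",
                           attempt + 1, delay)
            time.sleep(delay)
```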
When it comes to automation versus human control, Doshi uses a simple test: ask yourself if you’re okay with the worst possible outcome. “If I tell an agent to add a gaming chair to my Amazon cart, I can live with it adding 99 chairs. But placing the final order? That’s where I draw the line.” In his systems, agents rarely get the final say. Humans always hit the last button.
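That line is easy to encode. The sketch below is illustrative, not his implementation; the set of high-stakes actions and the confirm hook are assumptions, but the shape is the point: the agent proposes, and a person approves anything irreversible.

```python
# Actions with irreversible consequences always wait for a person.
HIGH_STAKES = {"place_order", "submit_payment", "send_campaign"}


def execute(action: str, payload: dict, confirm) -> str:
    """Dispatch an agent-proposed action, routing anything irreversible
    through a human confirmation step.

    confirm is whatever the product uses to ask a person (a UI prompt, a
    Slack approval); the agent never gets the final click.
    """
    if action in HIGH_STAKES and not confirm(action, payload):
        return f"{action} held for human review"
    # Low-stakes actions (add to cart, save a draft) run autonomously.
    return f"{action} executed"


# The agent can add 99 chairs on its own, but the order waits for a person.
print(execute("add_to_cart", {"item": "gaming chair", "qty": 99}, confirm=lambda a, p: True))
print(execute("place_order", {"cart_id": "c-42"}, confirm=lambda a, p: False))
```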
For teams trying to bring their own prototypes into production, Doshi offers a clear roadmap: “Start small. Plan for failure. Give users undo buttons. Log everything. Use gradual integration, and let data determine how autonomous you can go.” While agents may not be perfect, he argues, their supporting systems absolutely must be.
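Taken together, that roadmap almost writes its own scaffolding. This is a hypothetical sketch, not Doshi's code: an append-only action log that doubles as an undo mechanism and, over time, as the evidence for how much autonomy the agent has earned.

```python
import json
import time
from dataclasses import asdict, dataclass


@dataclass
class AgentAction:
    name: str          # what the agent did, e.g. "add_to_cart"
    params: dict
    undo_name: str     # the compensating action, e.g. "remove_from_cart"
    timestamp: float


class ActionLog:
    """Append-only record of everything the agent does, with undo hooks.

    Every action is written to disk before the next one runs, and
    undo_last() replays the compensating action for the latest entry.
    The same log is the data that later decides how much autonomy the
    agent can be trusted with.
    """

    def __init__(self, path="agent_actions.jsonl"):
        self.path = path
        self.entries = []

    def record(self, name, params, undo_name):
        entry = AgentAction(name, params, undo_name, time.time())
        self.entries.append(entry)
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(entry)) + "\n")
        return entry

    def undo_last(self, dispatch):
        """dispatch executes actions; undo hands it the compensating one."""
        if self.entries:
            last = self.entries.pop()
            dispatch(last.undo_name, last.params)
```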
That’s what it takes to bring agentic AI out of the lab and into real workflows. Not just intelligence, but resilience.