Anil Sharma on Building Scalable, Ethical AI Systems That Deliver Real-World Impact

The world of AI can be overwhelming, but when you meet someone who makes it clear, practical, and inspiring, you can’t help but share their insights.

I met Anil Sharma at a recent AI leadership summit, and what started as a casual conversation over coffee quickly turned into an accessible lesson in machine learning (ML), agentic AI, and both LLM-based and traditional software products.

Sharma is a software engineering executive who approaches machine learning and AI adoption as evolving ecosystems that demand ethical thinking, scalable design, and visionary leadership. As we spoke, it became obvious that Sharma is shaping where these technologies are going next rather than merely keeping pace with them.

As a Senior Member of IEEE and a Fellow at the Soft Computing Research Society, Sharma is helping define the direction of AI. With five patents granted and another pending, he’s clearly put his postgraduate degree in Artificial Intelligence and Machine Learning from The University of Texas at Austin to powerful, practical use.

A Framework That Changed the Game

When asked about his most impactful contribution to the field, Sharma elaborated on his proactive edge computing framework developed during his time at VMware.

This innovation solves a major bottleneck in machine learning deployment: the latency and lag that come from centralized processing. Instead, Sharma’s system processes data right at the source, using distributed machine learning models to handle terabytes of logs while enabling real-time anomaly detection.
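The core idea — analyze data where it is produced and forward only what matters — can be sketched in miniature. The toy detector below is purely illustrative (it is not Sharma’s patented implementation): it keeps a rolling window of a metric on the edge device and flags values that deviate sharply from the recent baseline, so only anomalies, not raw logs, need to travel to a central system.

```python
from collections import deque
import math

class EdgeAnomalyDetector:
    """Toy edge-side detector: keeps a rolling window of metric values
    and flags points that deviate strongly from the recent mean, so only
    anomalies (not raw telemetry) need to leave the device."""

    def __init__(self, window=100, z_threshold=3.0):
        self.values = deque(maxlen=window)   # bounded memory on the edge
        self.z_threshold = z_threshold

    def observe(self, value):
        """Return True if `value` is anomalous relative to the window."""
        if len(self.values) >= 10:  # wait for a minimal baseline first
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var)
            is_anomaly = std > 0 and abs(value - mean) / std > self.z_threshold
        else:
            is_anomaly = False
        self.values.append(value)
        return is_anomaly

# Steady readings pass silently; the spike is the only thing reported upstream.
detector = EdgeAnomalyDetector()
for v in [10.0, 10.1, 9.9] * 20 + [60.0]:
    if detector.observe(v):
        print(f"anomaly detected: {v}")
```

A real deployment would replace the z-score with a distributed ML model, but the shape is the same: decide locally, transmit selectively.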

His patented invention — Processes and Systems for Decentralization of Data Produced at the Edge of a Distributed Computing System (US12174875B2) — is more than technical wizardry. It’s a foundation for the rise of agentic AI, where smart agents collect, monitor, and respond to data on the fly. It’s a leap toward decentralizing intelligence itself.

AI Agents as First-Class Citizens

In a time when large language models are dominating headlines, Sharma believes the rise of agentic AI and intelligent microservices indicates a deeper shift underway. His architectural philosophy treats AI agents as modular, reusable building blocks: components that don’t just support workflows but actively drive them.

To bring this to life, he’s scaled teams that merge PhD researchers with software engineers, bridging academia and industry. “It’s about balancing research innovation with practical delivery,” Sharma says, and his hybrid team structure reflects that ethos.

For many organizations, the path from research to production is riddled with friction. Sharma’s answer is dual-track development.

He runs separate but collaborative research and delivery teams, combined with innovation forums and regular demo sessions. This model creates a culture where feedback loops are tight and ideas don’t stay in whitepapers; they hit the ground running.

“When AI development reaches users quickly, the system learns, improves, and shows real value,” Sharma emphasizes.

Solving the Puzzle of Scalable ML Pipelines

Handling large-scale ML pipelines is no walk in the park. Sharma breaks it down into three elements: efficient data management, smart model architectures, and hybrid infrastructure.

Using a Spark-based architecture, Sharma processes hundreds of terabytes of data while ensuring performance, cost-efficiency and speed. His key advice is to design edge-first, breaking ML components into services and always baking observability and automation into the pipeline.
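That advice — break ML components into services and bake observability in from the start — can be illustrated with a small pattern (my own sketch, not code from Sharma’s Spark system): each pipeline stage is wrapped so that timing and record counts are logged automatically, rather than bolted on later.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("pipeline")

def observed(stage):
    """Decorator that bakes basic observability (latency, record counts)
    into a pipeline stage, instead of adding instrumentation after the fact."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(records):
            start = time.perf_counter()
            out = fn(records)
            log.info("stage=%s in=%d out=%d ms=%.1f",
                     stage, len(records), len(out),
                     (time.perf_counter() - start) * 1000)
            return out
        return inner
    return wrap

# Each stage is a small, independently testable service-like unit.
@observed("filter")
def drop_empty(records):
    return [r for r in records if r.strip()]

@observed("parse")
def parse(records):
    return [r.split(",") for r in records]

result = parse(drop_empty(["a,1", "", "b,2"]))
```

At Spark scale the stages become distributed jobs and the log lines become metrics, but the principle is identical: every component reports on itself by construction.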

One of Sharma’s proudest achievements comes with numbers to back it up. At VMware, his decentralized processing approach reduced the need for centralized data handling by up to 98%, a huge leap in cost savings, processing efficiency, and performance. The patent delivered tangible business ROI.

Leading with Ethics, Not Just Algorithms

In Sharma’s world, innovation means nothing if it isn’t ethical. He’s built internal frameworks for explainable ML, developed transparent documentation, and involved everyone from product managers to engineers in ethical discussions.

He also brings academic perspectives into his practice through university partnerships, blending practical deployment with academic rigor. “Ethics is a shared responsibility,” Sharma says, “and it starts with inclusive conversations across the organization.”

With the explosion of transformer and foundation models, Sharma follows a clear decision framework:

  • Custom models for high precision and control
  • Fine-tuning for tailored performance with faster adoption
  • Open-source/pre-trained models for cost-effectiveness
  • Hybrid models to balance flexibility with scalability

But he adds one critical insight: “Understand how transformers work. That knowledge helps teams make smarter decisions about when to build or adopt.”

Author

  • Safaque Kagdi

    Safaque Kagdi is a New York-based Publicist and Freelance Journalist. With a career spanning over 12 years, she has worked with entrepreneurs, C-suite executives, brands and global corporations in the US, the UK, the Middle East and South Asia. Her PR campaigns have won international awards including SABRE Awards South Asia 2014 and PRWeek Awards Asia 2014. She was recognized as one of the Top 10 Women Entrepreneurs by Silicon India in 2018. As a freelance journalist, Safaque writes for multiple media outlets focusing on covering startup stories and entrepreneurial journeys.
