
The hype surrounding artificial intelligence continues to accelerate, driven by large language models, generative tools, and a surge of startups promising AI transformation. But amid the noise, one principle remains crystal clear: if you’re going to scale AI, do it responsibly.
Few understand this better than Dr Peter Appleby, Head of Data Science at Autotrader. Leading a team with deep AI expertise, spanning PhDs and postdocs who are all published researchers in their own right, Peter is helping guide one of the UK’s most prominent technology companies through the complexities of responsible AI development. Autotrader’s journey has centred on deploying AI that adds value without compromising efficiency, governance, or sustainability.
Here are five key lessons from Peter’s experience in scaling AI the right way.
- Start With Real Use Cases That Solve Real Problems
As a general rule of thumb, we believe the AI journey should start with use cases that improve existing workflows: AI should enhance what teams are already doing, not reinvent it. A standout example for us is Co-Driver, a suite of AI-powered tools built to help automotive retailers list and manage adverts more efficiently. One of its key features, AI Generated Descriptions, uses natural language generation to automate advert writing, a previously manual and time-consuming task. The results speak for themselves: the average time to build an advert has fallen from 28 minutes to around five. Since launching in February 2025, retailers have accepted more than 200,000 AI-generated vehicle descriptions, 96% of them on the first suggestion.

By integrating AI into an existing task, Co-Driver delivers immediate value without disrupting user workflows. Instead of requiring retailers to adopt new behaviours, the AI amplifies their productivity. This principle of enhancement ensures high adoption, clear ROI, and greater long-term sustainability, and it’s a guiding philosophy across Autotrader’s AI initiatives.
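To make the pattern concrete, here is a minimal sketch of how a description-generation feature like this might be wired up: structured listing data goes in, and the generator is instructed to use only those facts. The field names and the `generate()` stub are our own illustrative assumptions, not Autotrader’s actual Co-Driver implementation.

```python
# Hypothetical sketch only: field names and the generate() stub are
# illustrative, not Autotrader's implementation of Co-Driver.
from dataclasses import dataclass

@dataclass
class Listing:
    make: str
    model: str
    trim: str
    engine_size: str
    colour: str
    mileage: int

def build_prompt(listing: Listing) -> str:
    """Constrain the generator to the facts the retailer supplied."""
    facts = (
        f"make={listing.make}, model={listing.model}, trim={listing.trim}, "
        f"engine={listing.engine_size}, colour={listing.colour}, "
        f"mileage={listing.mileage}"
    )
    return (
        "Write a short, factual vehicle advert using ONLY these details; "
        "do not invent features.\n" + facts
    )

def generate(prompt: str) -> str:
    # Placeholder for the language-model call a production system would make.
    return "DRAFT ADVERT <- " + prompt.splitlines()[-1]

listing = Listing("Ford", "Fiesta", "Titanium", "1.0L", "blue", 24_000)
print(generate(build_prompt(listing)))  # retailer accepts or edits the draft
```

The key design point is that the AI slots into an existing step (writing the advert) rather than asking the retailer to learn a new workflow.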
- Resist AI for AI’s Sake
Not everything is an AI problem. One of the biggest mistakes organisations make is implementing AI because it sounds innovative, rather than because it solves something.
Chatbots that merely regurgitate website content are a prime example: most of the questions they field could be answered by simply reading the page. These tools consume energy, increase infrastructure costs, and offer limited user benefit. This mindset has environmental implications too: training and running large models is resource-intensive, and if they aren’t solving valuable problems, their carbon footprint is unjustified. Only deploy AI when the problem is real and the value is clear.
- Design AI Systems to Last
Speed is tempting. But in AI, scaling too fast without a solid foundation leads to technical debt. A lot of people throw a large general-purpose model at every task and call it AI. That’s inefficient, hard to govern, and hard to maintain. At Autotrader, we take the opposite approach: smaller, targeted models, each carefully architected to do one thing well. Rather than relying on monolithic AI systems, our team breaks tasks down and applies bespoke solutions that are more interpretable, more efficient, and easier to govern over time.
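As a rough illustration of the “many small models” pattern, here is a hedged Python sketch. The toy functions stand in for what would, in practice, be small fine-tuned models; the component names are hypothetical.

```python
# Illustrative sketch: compose small, single-purpose steps instead of
# sending every task to one monolithic general-purpose model.

def detect_vehicle_type(text: str) -> str:
    # In practice this would be a small, fine-tuned classifier.
    return "car" if "hatchback" in text.lower() else "unknown"

def extract_colour(text: str) -> str:
    # A narrow extraction model rather than a general-purpose LLM.
    for colour in ("blue", "red", "black"):
        if colour in text.lower():
            return colour
    return "unspecified"

def run_pipeline(text: str) -> dict[str, str]:
    """Each stage can be tested, monitored, and replaced independently."""
    return {
        "vehicle_type": detect_vehicle_type(text),
        "colour": extract_colour(text),
    }

print(run_pipeline("Blue hatchback, one careful owner"))
```

Because each component has a single responsibility, it can be evaluated, governed, and swapped out on its own, which is precisely what makes the approach easier to maintain over time.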
- Bake in Governance and Human Oversight
As AI becomes more autonomous, the importance of human oversight only grows. You need humans in the loop to ensure the content generated by AI is fact-based and reviewable. Take Co-Driver again. Its sentence generation is constrained by structured data provided by humans, covering everything from model and trim to engine size and colour. The AI doesn’t hallucinate; it recombines known facts into well-formed sentences. This reduces the risk of errors or bias and keeps the outputs grounded in fact.
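A minimal sketch of that grounding idea, assuming a naive keyword check stands in for proper claim extraction: before a draft is shown to a retailer, verify that the feature words it mentions all come from the structured record. Everything here, including the toy lexicon, is our own illustration.

```python
def grounded(sentence: str, facts: dict[str, str]) -> bool:
    """Pass only if every flagged feature word is backed by the source record."""
    known = {v.lower() for v in facts.values()}
    # Toy lexicon a checker might flag; a real system would extract
    # claims properly rather than keyword-matching.
    feature_words = {"sunroof", "panoramic", "leather", "blue", "fiesta", "1.0l"}
    mentioned = {w.strip(".,").lower() for w in sentence.split()}
    return (mentioned & feature_words) <= known

facts = {"model": "Fiesta", "colour": "blue", "engine": "1.0l"}
print(grounded("A blue Fiesta with a 1.0l engine.", facts))        # True
print(grounded("A blue Fiesta with a panoramic sunroof.", facts))  # False: unlisted feature
```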
Bias is another concern. Without careful data curation, AI models can unintentionally reflect systemic biases, such as gendered language or regional assumptions. By maintaining a tight feedback loop between AI output and human review, Autotrader preserves trust, avoids ethical pitfalls, and meets internal governance standards. This is particularly important as AI moves beyond back-office support and into consumer-facing touchpoints like AI-powered ads, viewed by millions of buyers.
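One way such a feedback loop could look in code, purely as a sketch: generated drafts sit in a queue, nothing is published without human approval, and rejections are retained for bias and quality review. The `ReviewQueue` class and its flow are hypothetical, not a description of Autotrader’s stack.

```python
# Hedged sketch of a human-in-the-loop gate: only approved drafts are
# published, and rejections feed back into bias/quality monitoring.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    pending: list[str] = field(default_factory=list)
    rejected: list[str] = field(default_factory=list)

    def submit(self, draft: str) -> None:
        self.pending.append(draft)

    def review(self, approve: bool) -> str | None:
        draft = self.pending.pop(0)
        if approve:
            return draft             # only approved drafts are published
        self.rejected.append(draft)  # kept for bias and quality review
        return None

queue = ReviewQueue()
queue.submit("A blue Fiesta Titanium with 24,000 miles.")
print(queue.review(approve=True))
```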
- Build Deep In-House Expertise
Responsible AI isn’t just about technology; it’s about people. Autotrader has invested heavily in assembling a team that understands the nuance of AI at scale. Our data science team includes postdoctoral researchers who are domain experts, working in close collaboration with engineers, product teams, and broader business stakeholders. They don’t just deploy off-the-shelf models; they create new ones, publish white papers, run simulations, and build custom infrastructure for experimentation and monitoring. This depth of expertise is essential for long-term success.
Conclusion: Responsible AI at Enterprise Scale
Autotrader’s success with AI is no accident; it’s the result of strategic restraint, careful engineering, and a deep respect for both the power and limitations of the technology.
Peter Appleby’s lessons are deceptively simple: solve real problems, avoid hype, build sustainable systems, enforce governance, and invest in your people. But in a field too often defined by reckless ambition and underbaked ideas, those lessons are more radical than they first appear.