Future of AI

Agentic AI will transform responsible sourcing, but human judgment remains essential

By Kevin Franklin, CEO, EIQ

Supply chains in flux 

From pandemic aftershocks to unpredictable tariffs, global supply chains continue to change faster than ever. Traditional sourcing territories have been disrupted and brands must rapidly re-evaluate suppliers for manufactured goods, apparel, food and energy. In this high-stakes environment, reputation and sustainability are as important as cost and quality – responsible sourcing is no longer optional. 

Supply chain risks – including forced labour, environmental violations and unethical practices – carry real consequences for growth, market access and brand trust. Increasingly, regulatory frameworks like the EU’s Corporate Sustainability Reporting Directive (CSRD) and the US Uyghur Forced Labor Prevention Act (UFLPA) make due diligence a legal requirement. 

The question isn’t whether to adopt AI, but how to deploy it responsibly to augment human expertise and decision-making and align with emerging data controls. 

AI in action: monitoring, auditing and predicting risk 

Responsible sourcing leaders are already using aspects of AI to monitor suppliers, flag geographic and product risks and evaluate compliance. Within supply chain intelligence software, AI can harmonise audit data against a patchwork of standards to create a single, actionable framework. This turns complex datasets into insights, enabling companies to act faster, more efficiently and more consistently. 
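To make the harmonisation idea concrete, here is a minimal sketch of mapping audit results reported under different standards onto one comparable 0–100 scale. The standard names are real schemes, but the conversion rules, field names and weightings are illustrative assumptions, not actual scheme scoring.

```python
# Hypothetical converters from each standard's native reporting format
# to a common 0-100 compliance score. These formulas are assumptions
# for illustration only.
CONVERTERS = {
    "SMETA":  lambda raw: 100 - raw["non_compliances"] * 10,   # count of NCs
    "SA8000": lambda raw: raw["score_pct"],                    # already 0-100
    "amfori": lambda raw: {"A": 95, "B": 80, "C": 60, "D": 40, "E": 20}[raw["grade"]],
}

def harmonise(audits):
    """Convert heterogeneous audit records into one comparable score per site."""
    results = {}
    for audit in audits:
        score = CONVERTERS[audit["standard"]](audit["raw"])
        # Clamp to the shared 0-100 range.
        results[audit["site"]] = max(0, min(100, score))
    return results

audits = [
    {"site": "Factory A", "standard": "SMETA",  "raw": {"non_compliances": 3}},
    {"site": "Factory B", "standard": "SA8000", "raw": {"score_pct": 88}},
    {"site": "Factory C", "standard": "amfori", "raw": {"grade": "C"}},
]
print(harmonise(audits))  # {'Factory A': 70, 'Factory B': 88, 'Factory C': 60}
```

Once scores sit on one scale, sites audited under different schemes can be ranked, benchmarked and prioritised within a single framework.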

Beyond audits, wider third-party data is also vital. Often, investigative journalists or social media whistleblowers raise the first alarms. AI-powered adverse media scanning tools ingest and evaluate around a million news articles weekly to flag risks such as forced labour, child labour and other ethical breaches – something humans alone could never manage at scale. 
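At its simplest, adverse-media screening works by matching article text against categorised risk vocabularies. The sketch below illustrates that core step with made-up term lists; production tools use NLP models, entity resolution and translation rather than plain keyword matching.

```python
# Illustrative risk vocabularies - the categories and terms here are
# assumptions, not a real screening taxonomy.
RISK_TERMS = {
    "forced labour": ["forced labour", "forced labor", "debt bondage"],
    "child labour": ["child labour", "child labor", "underage workers"],
    "environment": ["illegal dumping", "toxic discharge"],
}

def screen_article(text):
    """Return the risk categories whose terms appear in the article text."""
    lowered = text.lower()
    return sorted(
        category
        for category, terms in RISK_TERMS.items()
        if any(term in lowered for term in terms)
    )

article = "Inspectors allege debt bondage and underage workers at the plant."
print(screen_article(article))  # ['child labour', 'forced labour']
```

Run over a million articles a week, even this simple pattern turns an unmanageable news feed into a short, categorised queue of alerts for human review.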

In the future, AI will likely also uncover more subtle risks. For example, by linking order volumes with factory capacity, workforce size and production timelines, it can flag and forecast unauthorised subcontracting, quality or production risks – anomalies humans might miss. Over time, we should expect AI to enhance most stages of the responsible sourcing process: assessing programme maturity, scoring supplier risk, requesting and scheduling audits, following up on corrective actions, benchmarking performance and generating real-time natural-language reports and trend analysis. 
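The subcontracting check described above can be sketched as a simple capacity comparison: if an order exceeds what the declared workforce could plausibly produce in the available time, something is worth investigating. The field names, productivity figure and tolerance below are assumptions for illustration.

```python
def estimated_capacity(workers, units_per_worker_per_day, working_days):
    """Rough in-house production capacity over the order window."""
    return workers * units_per_worker_per_day * working_days

def flag_subcontracting_risk(order_units, workers,
                             units_per_worker_per_day, working_days,
                             tolerance=1.1):
    """Flag orders that exceed plausible in-house capacity.

    The tolerance factor allows for overtime and efficiency gains
    before an order is flagged.
    """
    capacity = estimated_capacity(workers, units_per_worker_per_day, working_days)
    return order_units > capacity * tolerance

# 200 workers x 15 units/day x 20 days = 60,000 units of capacity,
# so a 90,000-unit order on the same timeline is a red flag.
print(flag_subcontracting_risk(90_000, 200, 15, 20))  # True
print(flag_subcontracting_risk(55_000, 200, 15, 20))  # False
```

In practice such a rule would be one signal among many – a flag to trigger an audit or a conversation, not a verdict on its own.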

Human involvement remains essential 

AI extends our reach and insight, but it cannot replace human judgment. It can highlight risks, suggest local interventions and process massive datasets – but it cannot make decisions about risk appetites or budgets, assess trustworthiness or manage relationships. There may also be some proprietary or protected datasets that are not made available to AI tools. 

For example, AI might identify an alternative sourcing country with favourable cost and risk profiles, but it cannot make the decision to proceed, tell you whether a supplier is reliable or predict how community dynamics might affect implementation. It might support decision-making with recommendations about corrective actions, but the final course of action remains with business leaders, buyers and category managers. 

Final decisions (at least for now) must remain in human hands. While AI can highlight potential issues, people should decide on what action to take. A red flag may need engagement, not exclusion, and experts are needed to interpret findings within cultural and business contexts. Without that human lens, interventions risk being ineffective or potentially damaging to business relationships. 

It’s important to remember that responsible sourcing isn’t just about data accuracy; it’s about values, accountability and performance. Ultimately, AI and automation stop where human rights and business decisions begin. 

Data governance: the foundation for success 

Deploying AI effectively requires robust data governance. Simply layering AI over unstructured data produces meaningless results.  

Key considerations include: 

  • Data security: protecting sensitive company and supplier information. 
  • Data quality: applying controls that keep inputs accurate, since data quality directly shapes outputs. 
  • Data structure: well-organised, labelled datasets that ensure output quality. 
  • Team readiness: closing knowledge gaps and building confidence in AI tools. 
  • Change management: addressing concerns over roles, responsibilities and trust. 

Cross-functional collaboration – between IT, legal and communications teams, for example – is essential to safeguard data, ensure contractual compliance and maintain reporting quality. Legal requirements around AI are evolving rapidly, so pilot projects, clear governance and transparent communication help organisations adopt it responsibly. 

Practical steps for getting started 

Companies beginning their AI journey can build confidence with three steps: 

  1. Start small with low-risk, high-value areas like risk assessment or data analysis. Clearly defined use cases allow teams to explore AI’s potential without overwhelming the organisation or introducing undue risk. Starting small also reduces the risk of bias or misinterpretation, since outputs can be closely monitored, allowing organisations to build internal expertise and governance frameworks before scaling up. 
  2. Run pilots alongside existing processes to compare AI outputs with human judgment. This approach highlights where AI is strongest and where human expertise remains essential. It also allows teams to test accuracy, assess return on investment, and identify blind spots before integrating AI fully into core workflows. Crucially, it provides opportunities to define accountability: who reviews outputs, who signs off on actions, and how to ensure traceability when automated recommendations are used in supplier or sourcing decisions. 
  3. Support teams culturally, reassuring them that AI is here to elevate and not replace their work. Clear communication about AI’s role is vital to overcome scepticism and prevent disengagement. Training and upskilling initiatives should focus on helping employees understand how to interpret AI outputs, question findings, and apply them within ethical frameworks. Encouraging feedback from frontline teams also ensures AI tools evolve in line with real operational needs, rather than being imposed from the top down. 

The future: humans and machines in partnership 

AI is shaping business and daily life. Supply chain leaders must learn to use AI responsibly or risk falling behind. This also means being proactive about the implications for our people and teams, and evolving our functions accordingly. 

When combined thoughtfully with human oversight, AI makes responsible sourcing programmes faster, smarter and more effective. The future of ethical sourcing lies in human-machine collaboration, where technology scales insight and humans provide judgment, context and relational understanding. Together, we can ensure compliance and build resilient supply chains. 
