
Artificial Intelligence (AI) is rapidly reshaping the manufacturing landscape, driving innovation across supply chains and revolutionising how products are designed, produced, and delivered. In this article, Simon Key, partner, and Dominic Simon, senior associate, at leading East Midlands law firm Nelsons, explore the disputes risks arising from the use of AI in manufacturing supply chains and outline practical strategies manufacturers can adopt to potentially mitigate those risks.
Manufacturing supply chains involve intricate, interdependent systems that coordinate the flow of raw materials, components, and finished goods, often across multiple geographies. They can consist of suppliers of components, logistics and transportation providers, assembly hubs, storage facilities, and customers.
Consequently, a single issue can have a cascading effect. For example, a delay in a single shipment or a miscommunication with a supplier can halt production lines, increase costs, result in legal disputes, and erode customer trust. This sensitivity is amplified by the just-in-time (JIT) inventory models that many supply chains now rely on, which leave little room for error or delay.
Use of AI in supply chains
Many manufacturers and other supply chain stakeholders are using AI to enhance supply chain management. This includes improving operational visibility, forecasting (such as supply and demand predictions), and decision-making, with the aim of streamlining operations, minimising costs and optimising inventory levels. Amazon, Siemens, GXO Logistics and Ocado are a few examples.
Additionally, AI is increasingly being used in quality control across these supply chains. Component providers and manufacturers are using AI-powered visual inspection systems to detect defects in components or finished products with greater speed and accuracy than human inspectors. Similarly, logistics and transportation providers are using AI to forecast operational demands to satisfy customer needs.
Systemic supply chain risks
While AI can enhance supply chain management, it can also introduce new risks. Errors generated by AI systems can have immediate and wide-ranging consequences, and inaccurate predictions create upstream or downstream issues that remain undetected until it’s too late. For example, if an AI system used by a component supplier (upstream) fails to detect a defect, and that defect is only discovered after the product reaches the retailer or customer (downstream), the manufacturer may face reputational or legal consequences, even though the fault originated earlier in the supply chain. If the issue is more widespread, there may also be costly recalls, significant reputational damage and a high volume of legal claims.
Conversely, if the AI system is overly conservative and rejects too many acceptable components, it can create artificial shortages, delay production, and disrupt delivery schedules.
These risks may sometimes be compounded when multiple organisations within the same supply chain use their own AI systems independently. For instance, one AI system might forecast a surge in demand and trigger increased production, while another, used by a logistics partner, might predict a slowdown and scale back readiness accordingly.
Mitigating AI-driven risks
Proactively mitigating risks is far more effective than reacting after issues arise, both in preventing disruptions and in strengthening a manufacturer's legal position should a dispute occur. Manufacturers should therefore develop comprehensive and tailored risk mitigation strategies that span legal, operational and technical domains.
Legal safeguards
Contractual protections
At the contractual level, clarity is paramount. Agreements should include well-defined roles and responsibilities, particularly around the use of AI systems and liability for errors or delays.
Force majeure ("act of God") provisions, which are clauses that excuse parties from liability when extraordinary events beyond their control prevent them from fulfilling contractual obligations, may be especially useful, depending on their wording.
Additionally, limitation of liability and indemnity clauses can, depending on their wording and the context, help allocate risk appropriately; however, such clauses can be subject to challenge and may, depending on the facts, be unenforceable.
Open dialogue
Open dialogue between supply chain partners is essential to managing AI-driven risks effectively. Transparent communication about the capabilities, limitations, and intended use of AI systems helps prevent misalignment, especially where systems interact across organisational boundaries.
Early discussions around integration plans, data sharing protocols, and contingency measures can foster mutual understanding and trust, while reducing the likelihood of operational or contractual disputes.
Insurance
Manufacturers should assess the specific risks at each stage of the supply chain and ensure they have appropriate coverage, e.g., cargo insurance, business interruption policies and liability insurance. However, AI-related risks may not be expressly covered under traditional policies, so manufacturers should also engage with insurers early to negotiate tailored coverage.
Operational safeguards
Supplier due diligence
Manufacturers should conduct thorough supplier due diligence, working only with reputable vendors that are transparent about their use of AI.
Supplier due diligence should aim to identify risks and how those risks could create upstream and downstream vulnerability. It should therefore involve verifying how AI systems are used in quality control, logistics, forecasting and other relevant areas, and how those systems are monitored, tested, and audited.
Human oversight
An appropriate level of human oversight remains essential, particularly in high-impact areas, such as supplier selection, demand forecasting, inventory management and quality assurance, where the consequences of AI-driven decisions can be significant.
Manufacturers should establish clear protocols that define when and how human intervention is required. This may include thresholds for reviewing AI-generated outputs, escalation procedures for anomalies and designated oversight roles. Oversight should be proportionate to the level of risk and value offered.
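By way of illustration only, the short sketch below shows one way a threshold-and-escalation protocol of this kind might be expressed in practice. It assumes a hypothetical AI inspection system that returns a verdict and a confidence score; the threshold values, class and function names are placeholders rather than a reference to any particular product.

```python
from dataclasses import dataclass

# Hypothetical output from an AI quality-control system:
# a pass/fail verdict plus the model's confidence in that verdict.
@dataclass
class InspectionResult:
    component_id: str
    verdict: str          # "pass" or "fail"
    confidence: float     # 0.0 - 1.0

# Illustrative thresholds - in practice these would be agreed per
# component type as part of the oversight protocol.
AUTO_ACCEPT_CONFIDENCE = 0.95
AUTO_REJECT_CONFIDENCE = 0.90

def route_decision(result: InspectionResult) -> str:
    """Decide whether an AI verdict can be acted on automatically
    or must be escalated to a designated human reviewer."""
    if result.verdict == "pass" and result.confidence >= AUTO_ACCEPT_CONFIDENCE:
        return "auto-accept"
    if result.verdict == "fail" and result.confidence >= AUTO_REJECT_CONFIDENCE:
        return "auto-reject"
    # Anything the model is unsure about goes to a human inspector.
    return "escalate-to-human"

print(route_decision(InspectionResult("C-1042", "fail", 0.72)))  # escalate-to-human
```

The detail will differ from system to system, but the principle is the same: the protocol, not the AI tool, decides when a human must be involved.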
Staff training and accountability culture
Effective risk management also requires role-specific training to ensure that staff understand how AI systems function, how to use them responsibly within the scope of their roles, and when and how to escalate concerns. Where appropriate, training should be ongoing to reflect evolving technologies and risks.
Equally important is fostering a culture of accountability and openness. Employees should feel confident in questioning AI-driven decisions and reporting concerns via clear and appropriate reporting channels.
Continuous monitoring
Ongoing monitoring may help to ensure that AI systems remain reliable, compliant and aligned with operational objectives. This may include real-time oversight across the supply chain, internal and external audits, and robust record-keeping and data tracking. These practices may support the early detection of issues and inform appropriate corrective or risk prevention measures.
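As a simple illustration of what such record-keeping and monitoring might involve, the sketch below logs each AI decision to an audit file and flags when the rejection rate drifts outside an agreed band. The file name, baseline rate and alert margin are assumptions chosen purely for the example.

```python
import csv
import datetime
from collections import deque

# Rolling window of recent AI verdicts used for a simple drift check.
WINDOW_SIZE = 500
EXPECTED_REJECT_RATE = 0.02   # historical baseline (assumed)
ALERT_MARGIN = 0.03           # flag if the rate drifts this far from baseline

recent_verdicts = deque(maxlen=WINDOW_SIZE)

def record_decision(component_id: str, verdict: str, confidence: float,
                    log_path: str = "ai_decisions.csv") -> None:
    """Append every AI decision to an audit log and update the drift check."""
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.now().isoformat(), component_id, verdict, confidence]
        )
    recent_verdicts.append(verdict)
    check_for_drift()

def check_for_drift() -> None:
    """Raise an alert if the recent rejection rate departs from the baseline."""
    if len(recent_verdicts) < WINDOW_SIZE:
        return  # not enough data yet for a meaningful check
    reject_rate = sum(v == "fail" for v in recent_verdicts) / WINDOW_SIZE
    if abs(reject_rate - EXPECTED_REJECT_RATE) > ALERT_MARGIN:
        # In a real system this would feed the escalation procedure
        # described under "Human oversight".
        print(f"ALERT: rejection rate {reject_rate:.1%} outside expected range")
```

Even a lightweight log of this kind can prove valuable evidentially if a dispute later turns on when an issue could first have been detected.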
Phased integration
Phased integration involves gradually deploying AI tools across internal functions such as procurement, inventory management, production scheduling and quality control. Running pilot schemes in isolated workflows, such as demand forecasting or inbound component inspection, may prove particularly useful.
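One common way to run such a pilot is in "shadow mode", where the AI tool produces forecasts alongside the existing process but its output is not yet acted upon. The sketch below illustrates the idea under assumed names and stand-in data; it is not tied to any particular forecasting product.

```python
def run_shadow_pilot(weeks, existing_forecast, ai_forecast, actual_demand):
    """Compare an AI demand forecast against the incumbent method
    without letting the AI output drive any real ordering decisions."""
    results = []
    for week in weeks:
        current = existing_forecast(week)   # forecast the business acts on
        candidate = ai_forecast(week)       # AI forecast, recorded only
        actual = actual_demand(week)
        results.append({
            "week": week,
            "existing_error": abs(current - actual),
            "ai_error": abs(candidate - actual),
        })
    # Review period: only promote the AI tool if it consistently outperforms.
    ai_wins = sum(r["ai_error"] < r["existing_error"] for r in results)
    print(f"AI forecast was more accurate in {ai_wins} of {len(results)} weeks")
    return results

# Example with illustrative stand-in figures:
run_shadow_pilot(
    weeks=[1, 2, 3],
    existing_forecast=lambda w: 100,
    ai_forecast=lambda w: 96 + 2 * w,
    actual_demand=lambda w: 95 + 2 * w,
)
```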
Ultimately, phased integration is not only a technical strategy but also a means of building confidence. By allowing teams to observe AI performance in real-world conditions, gather feedback and make incremental changes, manufacturers may be able to foster trust in the technology and overcome internal and stakeholder resistance.
AI mapping
Manufacturers should map out how AI adoption is coordinated across the wider supply chain, to help identify gaps, capabilities and areas for potential collaboration, and to understand where each partner stands in terms of AI readiness, infrastructure and governance.
In some cases, synchronising AI systems with other supply chain stakeholders may be viable. This level of integration requires a high degree of trust and will likely involve specialist consultants, third-party platforms and closer risk monitoring.
Contingency and scenario planning
Contingency and scenario planning may help manufacturers anticipate how AI systems will perform under adverse or unexpected conditions in the supply chain, and prepare appropriate responses. Manufacturers should consider a range of "what-if" scenarios (for example, delivery failures, transport disruptions, sudden demand spikes, and geopolitical tensions creating supply shortages) and engage stakeholders from operations, IT, legal, and risk management to ensure that business continuity and damage control plans are actionable.
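As a simple illustration of how a "what-if" scenario can be tested in advance, the sketch below simulates the impact of supplier delivery failures on a JIT production line. The failure rate, buffer size and loss figures are placeholders chosen purely for the example, not benchmarks.

```python
import random

def simulate_delivery_failures(days: int = 250, failure_rate: float = 0.05,
                               buffer_days: int = 2, daily_loss: int = 40_000,
                               trials: int = 1_000) -> float:
    """Monte Carlo sketch: estimate average annual production loss if a key
    supplier misses deliveries at the given rate and buffer stock only
    covers a limited number of days."""
    total_loss = 0
    for _ in range(trials):
        consecutive_misses = 0
        loss = 0
        for _ in range(days):
            if random.random() < failure_rate:
                consecutive_misses += 1
                if consecutive_misses > buffer_days:
                    loss += daily_loss  # line stops once the buffer is exhausted
            else:
                consecutive_misses = 0
        total_loss += loss
    return total_loss / trials

print(f"Expected annual loss: £{simulate_delivery_failures():,.0f}")
```

Outputs of this kind are only as good as the assumptions fed into them, but they give operations, legal and risk teams a shared, concrete basis for deciding which contingencies justify investment.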
Technical safeguards
Specialist technical input from suppliers and consultants may be necessary to ensure that AI systems are appropriately integrated with existing infrastructure, machines and hardware, and that they possess appropriate fail-safe features, such as the ability to override or pause the AI system, or to revert to manual control, in the event of malfunction or unexpected behaviour.
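For illustration, a fail-safe wrapper of the kind described might look like the sketch below, in which the AI system's output is only used while the system is behaving as expected and operators can pause it or force a fall-back to manual control at any time. The class and method names are assumptions, not an existing API.

```python
from enum import Enum

class Mode(Enum):
    AI_CONTROL = "ai"
    PAUSED = "paused"
    MANUAL = "manual"

class FailSafeController:
    """Wraps an AI decision function with pause, override and manual fall-back."""

    def __init__(self, ai_decide, manual_decide):
        self.ai_decide = ai_decide          # e.g. AI-driven sorting decision
        self.manual_decide = manual_decide  # operator/manual procedure
        self.mode = Mode.AI_CONTROL

    def pause(self):
        self.mode = Mode.PAUSED

    def revert_to_manual(self):
        self.mode = Mode.MANUAL

    def decide(self, item):
        if self.mode == Mode.AI_CONTROL:
            try:
                return self.ai_decide(item)
            except Exception:
                # Unexpected behaviour: fail over to manual control.
                self.revert_to_manual()
        if self.mode == Mode.PAUSED:
            return None  # hold the item until an operator resumes or overrides
        return self.manual_decide(item)
```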
Manufacturers should also ensure an appropriate level of cybersecurity where AI systems handle protected, sensitive, or operational data. This is particularly important in supply chain contexts, where breaches could expose supplier information, disrupt logistics, or undermine trust between partners. Protecting against unauthorised access, data leaks, and other cyber threats is essential to maintaining both operational continuity and strong intra-supply chain relationships.
Final thoughts
AI is becoming part of modern supply chain operations, particularly in the areas of forecasting, quality control and logistics. However, where an AI system in the supply chain fails, there can be a downstream effect that exposes the manufacturer to commercial and legal risks.
While it is not possible to eliminate risk entirely, the risks should be understood, contained, and mitigated as far as possible, through legal, operational and technical safeguards. This applies not only to a manufacturer’s own AI systems but, where feasible, also to those used by partners across the wider supply chain.
By taking a proactive, collaborative, and well-governed approach to AI integration, manufacturers can harness its benefits while maintaining operational resilience and trust across their supply networks, and reducing the likelihood of legal disputes.