
AI providers: focus on these 4 key issues for enterprise customer contract success

Negotiations between start-ups / scale-ups and potential customers are often challenging, especially at the enterprise level. Nine times out of ten, enterprise customers will insist on contracting on their standard terms with minimal changes.

So, how should you ensure that their (highly generic) contract works for your artificial intelligence (AI) business? This article aims to help AI companies in scale-up mode focus on what’s important by exploring four key issues.

1. Are we liable for decisions made by our AI system?

Whether (and to what extent) it is ‘right’ for you to be financially responsible for decisions made by your AI system should always be considered. The contractual starting point will usually require you to “stand behind” your product and accept liability should things go wrong.

At first glance, that might sound perfectly reasonable – and for most non-AI software-as-a-service (SaaS) systems it usually is. But where AI underpins a SaaS system, the dynamic changes.

Reduced human involvement, combined with a reliance on potentially biased, outdated or incomplete customer data, means that you have far less control.

It follows that liability should (ordinarily) be excluded in the following circumstances:

  • Where humans are assisted by AI and remain in full control (so-called ‘human-in-the-loop’ systems), liability for recommendations made by an AI system should be excluded. After all, they are just that: recommendations. It therefore seems unreasonable to expect an AI provider to bear the financial consequences should a recommendation ultimately turn out to be misguided, especially where a human was given the option to accept or decline it.
  • Where human oversight of the AI system is required (so-called ‘human-over-the-loop’ systems), liability arising from a failure to adequately monitor should be excluded. Again, because a human remains in ultimate control, an AI provider should not pick up the bill for a customer’s lack of oversight.
  • Where humans cannot override the AI system (so-called ‘human-out-of-the-loop’ systems), it might be appropriate for liability to be excluded (or at least limited) for most, if not all, decisions. Whilst this may seem counter-intuitive, it is important to remember that no AI system is perfect and that human-out-of-the-loop systems should only be deployed by customers in their business where the risk and probability of harm are low.

2. Do we have the right to improve our AI system with customer data?

As a general rule, if you want to use customer data to improve your AI system, you need to ensure that the contract allows it. Securing appropriate permissions should therefore be a high-priority item, especially as they are unlikely to feature in many standard-form enterprise-level contracts. You may want to consider obtaining permission to:

  • De-identify customer data (so as to limit responsibilities and liabilities under GDPR).
  • Format and cleanse the de-identified data (so that it is useful).
  • Use the cleansed data to identify patterns, draw conclusions, and train / improve your AI system.

In the absence of such permissions, you may find yourself building a business on data that you have no rights to use.
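
To make the scope of these permissions concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of de-identification and cleansing pipeline they might cover. The record structure, field names and salted-hashing approach are assumptions made for illustration only, and pseudonymisation of this kind will not necessarily satisfy the GDPR in every case:

    import hashlib

    # Hypothetical customer records; all field names and values are illustrative only.
    raw_records = [
        {"customer_email": "jane@example.com", "age": "41", "outcome": "approved"},
        {"customer_email": "sam@example.com", "age": "", "outcome": "declined"},
    ]

    # Assumption: salted hashing (pseudonymisation) is an acceptable de-identification
    # technique here; whether it limits GDPR exposure in a given case is a legal question.
    SALT = "replace-with-a-secret-salt"

    def de_identify(record: dict) -> dict:
        """Swap the direct identifier for a salted hash so the row no longer names a person."""
        token = hashlib.sha256((SALT + record["customer_email"]).encode()).hexdigest()
        return {"user_token": token, "age": record["age"], "outcome": record["outcome"]}

    def cleanse(record: dict) -> dict | None:
        """Drop incomplete rows and normalise types so the data is usable for training."""
        if not record["age"]:
            return None
        return {**record, "age": int(record["age"])}

    # De-identify first, then cleanse; only the resulting rows would feed model improvement.
    training_rows = [row for row in (cleanse(de_identify(r)) for r in raw_records) if row is not None]
    print(training_rows)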

3. Do we own the analysis and improvements derived from customer data?

Analysis and improvements made by your team will generally be owned by your company. It is not uncommon, however, for customer contracts (and especially enterprise-level standard terms and conditions) to reverse this position – so always keep a lookout for overly broad clauses that result in harmful IP leakage.

This isn’t necessarily a straightforward exercise, but checking whether the definition of “Customer Data” captures improvements or derivative works is a good start. The importance of an AI company owning the rights in business-critical data cannot be overemphasised. 

4. Are we being held to an unreasonably high standard?

Even the most sophisticated AI systems will require some form of customer engagement. All too often, however, contracts fail to clearly specify the level and types of engagement that an AI provider requires.

Where the contract is deficient in this way, you may find it difficult to argue later that delays or underwhelming performance stemmed from a customer’s lack of engagement. It is therefore usually best to:

  • List all information/materials/prerequisites required from the customer in order to configure and optimise the AI system.
  • Specify that the usefulness of the AI system depends on the quality of the customer-supplied data.
  • Stipulate that the AI system may learn inappropriate responses if the customer data contains inherent biases.
  • State that no guarantee or warranty is made in connection with the perceived effectiveness of the AI system.

As we have already seen, the COVID-19 pandemic will generate opportunities as well as challenges for AI providers.

Those AI providers that take stock of their negotiation strategy and ensure they remain focused on the core issues are likely to thrive. Those that don’t may find their contracts come back to bite them.

Author

  • Ben Williams

    Ben Williams is a solicitor-advocate and senior associate in the Withers tech team in London. Ben advises software and hardware providers (and all the hybrid companies in between) on the protection, commercialisation and enforcement of new technologies. Ben has particular experience in helping AI providers to ‘scale up’ by drafting and negotiating their technology-related contracts (everything from technology development and launch-to-market to commercial partnerships). Ben also has higher rights of audience and is able to represent clients in the senior civil courts in England and Wales.
