AI is increasingly at the forefront of the insurance sector, as methods to tackle fraud, manage claims and improve customer service are developed. There are significant issues to bear in mind when addressing these developments in the contracts that underpin such arrangements. This is particularly relevant for the thriving Insurtech ecosystem in Israel, where startups are leading the way in innovative AI applications.
Contracts involving AI still take the form of a technology contract under which services are provided for specified outcomes or outputs. But there are some key concepts that warrant further consideration and that demand agility in contract drafting. With AI, we are working with complex technology that is autonomous and adaptive. And the “black box” nature of AI, where inputs and internal processes remain invisible to the user, means that it is harder to monitor AI than non-AI technologies.
Below, we take in turn a few key concepts that impact AI-related contracts for the insurance sector and consider how to address each from a contractual perspective.
Regulatory requirements
Insurance sector-specific regulatory requirements and guidelines for outsourcing, material third party arrangements and cloud continue to be relevant to contracts involving AI. In the UK this includes the PRA Supervisory Statement on outsourcing and third party risk management and, in Europe, the EIOPA Guidelines on outsourcing to cloud service providers. As a general rule, if the contract falls within the remit of these regulatory requirements, it is necessary to consider them regardless of whether or not AI is involved.
With the implementation deadlines for the EU AI Act now looming, solutions will also need to comply with this new, broad-reaching legislation if a costly and difficult procurement is to be avoided.
Accuracy
There is a need to assess the accuracy and service commitments that the provider is willing to offer. Often there is an assumption that computers, including AI systems, are inherently reliable, or at least more reliable than humans.
But AI models can produce “hallucinations”, or unexpected results (for example due to bias or limitations in the underlying data). So, before using an AI solution it is important to understand:
- the reliability of the training data;
- whether the tool has been subject to independent evaluations and auditing to verify the tool’s accuracy;
- to what extent the service provider will accept liability for errors caused by the AI solution; and
- the remedies that might be available in the event of any such errors, e.g. a commitment to retrain the model or to provide financial compensation.
Free-to-use tools are unlikely to come with any accuracy commitments, however. It is therefore important to understand the intended use cases involving AI and ensure that appropriate ring-fencing and checks and balances are in place in the contract.
Exit and lock-in
Many organisations already look to the big tech providers for a range of enterprise and cloud services across their organisation. Utilising AI solutions from a provider on which you are already heavily dependent will further increase the risk of technical lock-in with that provider. It is therefore important to think about your exit strategy from the outset, and about how easily and how quickly you could move away from the provider in question.
And, as regulators are increasingly focussed on operational resilience and concentration risk, the regulatory aspect again must not be overlooked.
IP considerations
Perhaps one of the most important contractual aspects to consider is intellectual property. It is important to understand the different types of IP at play, who owns each, and where responsibility lies. There will be IP in:
- the data used to train the model;
- the data inputted when using the model;
- the outcomes or analysis created through use of the model; and
- improvements to the AI, especially through a customer’s use of the model.
In each case it is important to think about who owns the IP and the extent to which the other party is entitled to use it. In most cases providers will insist on owning improvements to the AI model. But this can lead to unexpected consequences.
If the solution is trained on, or further learns from, your organisation’s own data but the service provider owns the improvements to the model, the provider may be able to offer the improved model to its other clients, potentially giving away your competitive advantage.
This issue can arise following proof-of-concept stages, where there is often less scrutiny because the trial is limited in both scope and duration. Care is therefore needed at that stage, as the consequences may be significant and longer lasting.
On a similar note, it is important to think about confidentiality, particularly in respect of large language models. Confidential information may be inputted, including information belonging to third parties to whom duties of confidentiality are owed. Does the AI contract expressly prevent uploaded information from being retained or accessed by unauthorised third parties?
And finally, consider the risk of third party IP infringement. The provenance of the full training data sets for many generative AI models may be unknown, so the training data, and especially the content generated by the AI, may infringe the rights of others. What protection does the contract offer in respect of any third party infringement claims?
Liability
The market position on liability is still evolving, and a common position (such as that seen in cloud contracts, for example) has not yet emerged. For standardised products, expect low levels of liability, broad exclusions and little scope for negotiation. Generally speaking, liability positions are currently very one-sided in favour of the supplier, but the potential exposure could be significant. The usual contractual approaches to liability, e.g. specifying a percentage of fees paid, are unlikely to come anywhere near compensating you for the potential losses. Think about the risk-reward balance: is the business benefit of using the AI worth the potential exposure in the event of a catastrophic failure?
Restrictions on use
Many contracts specify numerous restrictions on how the product can be used. If you want to use AI solutions broadly, especially as part of a transformation programme, it is important to understand the solution and the restrictions the provider has imposed, and to assess these against the intended use cases.
Reporting and audit
As AI technology develops, it can be difficult to explain and understand, which makes oversight and control in the contract more challenging. One effective approach is to put the onus on the supplier: requiring authorised, quality-assured systems and controls, together with a proactive obligation to notify when things go wrong, so that there is sufficient time to implement mitigating steps and reduce the impact of the issue.
As contractual approaches to the challenges and opportunities of deploying AI develop, above all it is important to apply a risk-based approach and keep in mind the principles summarised above. A carefully considered contract with sufficient flexibility, where possible to negotiate, will reduce risk and help ensure that the huge opportunities AI offers the insurance sector can be successfully leveraged.
For further information or if you have any questions, please contact Nichola Donovan or your usual DLA Piper team member.