
The challenges of regulating transformative technology: what new AI regulation can learn from life sciences

This article was co-authored by Julian Hitchcock and Alex Denoon from Bristows.

The recent publication of the European Commission's proposed Artificial Intelligence Regulation (AIR) has been met with intense interest across the globe: it is the first-ever legal framework intended to govern the use and development of AI.

The regulation aims to go above and beyond existing frameworks and position the EU as the leader in global AI development. It does so with an eye to 'high impact' sectors identified as key areas of future AI development; to the economic opportunities presented by the technology; and to the potential for AI to impinge on individual rights enshrined in the EU's Charter of Fundamental Rights.

However, this determined attempt to raise the bar on existing AI standards may seem like déjà vu to the life sciences sector. No field is more familiar with the hype around new technologies, exaggerated claims and the difficulties of future-proofing than life science regulation.

Indeed, the AIR's attempts to create a new tranche of notified bodies, to require permits for software for the first time ever, and to prohibit the use of AI for purposes other than those for which it is permitted present new challenges to software developers who have not been subject to product regulation until now. The AIR also adds extra compliance obligations for developers of technologies that are already regulated, such as medical devices and in vitro diagnostics (IVDs).

For life sciences, the ethical and social issues that now surround AI are hardly novel: they are part of the very fabric of life science policy and of the regulation that emerges from public and legislative debate. IVF, cloning, human embryonic stem cells, hybrid and synthetic embryos, GMOs, human genome editing: truly, the sector has seen it all, watching the fresh ink of new laws being pelted by the rain of new science, new technologies and new attitudes. 

The interest in AI within the life sciences is considerable and, whatever the hype, its transformative power is something to be taken extremely seriously. It is for exactly this reason that those who develop regulations for legislative approval have to keep their feet firmly on the ground, just as they have done, and must do, with advances from regenerative medicine to CRISPR-Cas9.

Raising the bar raises the difficulty

Life sciences lawyers are almost spoiled for choice when it comes to examples of previous attempts at aspirational regulatory 'bar-raising' that have not quite gone as planned.

The sector has just seen the EU Medical Devices Regulation (MDR) reach the end of its transition period on 26 May 2021, following four years of market turbulence after the requirements for gaining a CE mark were raised. Like the AIR, the MDR did "grandfather" existing lawful products, but it required re-certification of all products. Beyond this, the MDR also required the re-qualification of all notified bodies, which predictably created very painful bottlenecks.

Perhaps more concerning for the sector is the In Vitro Diagnostic Medical Devices Regulation (IVDR), scheduled to apply in the EEA from May 2022. Building on the MDR, the IVDR requires the vast majority of in vitro diagnostics to obtain a CE mark with the involvement of a notified body, whereas previously manufacturers were able to self-certify. Notified bodies, only just recovering from the MDR, are already faced with a mountain of applications. Notably, the European Federation of Pharmaceutical Industries and Associations and the European Cancer Patient Coalition recently called for a delay to the implementation of the IVDR. The MDR/IVDR experience teaches us that regulations that look good on paper can fail if there's simply too much paper and not enough infrastructure.

Regulation is warranted, necessary and vital for a healthy, trustworthy life sciences sector, but turn the regulatory throttle too far and it can quickly become counterproductive, a fact vividly illustrated by the EU's GMO Directives.

Designed to protect human health and the environment from the theoretical dangers of organisms produced using recombinant DNA techniques, the GMO Directives ended up super-bureaucratising the use of precision-edited organisms, whilst exempting the production of random mutants by ionising radiation. Far from protecting the environment or human health, the GMO Directives have spelled disaster for European research and competitiveness, at a time when genetic editing technologies have a vital role to play in meeting the global challenges of food security, climate change, biodiversity, health, energy use and sustainability.

The European Commission is well aware of the problem. Tasked by the EU Council to reassess the regime, it recently concluded that "there are strong indications that the applicable legislation is not fit for purpose for some [new genetic technologies] and their products, and that it needs to be adapted to scientific and technological progress". Critically, many senior Green politicians agree. Having pointed to EU failures in GM regulation to argue for legislative independence, the UK government is now undertaking a review similar to the Commission's.

If the MDR/IVDR story acts as a warning of regulatory overload, the GM story teaches that regulations really can be too cautious. AI regulation must learn from this experience: protect that which is at risk, but encourage technologies that offer benefits to the environment, human health and the economy. It is entirely feasible to get the balance right and to draft legislation that brings a win-win, while building in a review process to respond to future developments. That, in essence, is what the AIR is seeking to achieve. But can it succeed?

Frameworks and aspirations   

One way to get this balance right is to rely on existing frameworks and templates as foundations for more aspirational regulation.

This has not been lost on the Commission: the AIR is almost pedantically based on the template of the "New Legislative Framework" (NLF) regulations, under which conformity assessment is undertaken by notified bodies accredited for that purpose. The NLF is a tried-and-tested approach for many product areas, including medical devices and IVDs. The approach has provenance but, as we have suggested may be the case for the IVDR, if the infrastructure is overloaded it can impede development of, and access to, important technologies.

In the case of the AIR, complexity is introduced through the breadth and depth of the proposed regulatory remit. It also arises from the interplay between the 'standards and conformity' approach to certification, which seeks to encourage new AI development, and the principles-based aim of protecting individual rights that is folded into it. The test of any AI regulation will be balancing economic advantage against risks to human rights in a pragmatic way.

Responding to requests from both the European Parliament and the European Council, the Commission drafted the AIR to establish 'an ecosystem of trust' in which AI development is promoted while upholding the rights of citizens under the EU Charter. In introducing its proposal, the Commission lists 17 Charter provisions that the AIR would uphold. Indeed, the Commission was, in the words of its AI White Paper of 2020, 'closely involved' in developing the OECD's five AI Principles of the previous year.

In that same White Paper, the Commission had approved the seven 'key requirements' recommended by an independent 'High-Level Expert Group on AI' that it had set up for the purpose. In principle, therefore, the AIR could have followed the example of the GDPR by incorporating those principles. It didn't, because AI systems are products. Quite aside from human rights, a faulty AI system can kill.

But if the AIR is less explicit about its human rights underpinnings than the GDPR, the Commission nonetheless appears confident that the AIR will set an 'international gold standard' for AI regulation, just as the GDPR did with data protection. Given that forty-two countries have agreed to adopt the OECD principles, this is a bold claim. Squaring the protection of rights with making the EU 'a global leader in the development of […] artificial intelligence' may be a bolder one, and certainly more questionable. Where rights are respected, the test will be whether businesses can afford the regulatory burdens of development.

The GMO experience shows that the cost of over-regulation can deter small innovative businesses, leading to a market dominated by corporate goliaths. By contrast, the AIR aims to help SMEs by including provisions for 'regulatory sandboxes' to foster the development of AI systems in a controlled environment for a limited time. But is it enough? Will there even be enough expertise to fill the job description of an AI notified body? Are we heading for another regulatory bottleneck?

Time to sand off the rough edges

Just as AI algorithms need human oversight to function safely and effectively, regulation needs continued input from industry to be fit for purpose. Fast-approaching horizons are par for the life sciences course and must be for AI too. The review process included in the draft AIR is a welcome sign that the Commission has recognised this, hopefully future-proofing the AIR against the infamously exponential curve of technological development.

The potential of AI for gene editing, drug discovery, in vitro diagnostics and beyond is staggering, and it is up to the life sciences sector to work within the system of consultations and iterative reviews to ensure that a regulation with such potential becomes, and stays, fit for purpose. The current draft raised eyebrows; the next should raise hopes.
