Artificial intelligence, coming to a hospital near you

Complications on the horizon for EU regulators 

By Mark Swift 

07/08/2020

Artificial intelligence (AI) will soon have a huge impact on your everyday life. This is not only true in the high-profile case of autonomous vehicles, but across a broad spectrum of domains, including medicine. This disruptive technology presents numerous legal and ethical challenges. In healthcare, these could outstrip European regulators’ capacities to provide oversight as the technology develops. 

AI is the area of computer science focused on creating systems capable of cognitive tasks more commonly associated with human intelligence, such as learning and problem solving.

Recent milestones have revolutionised the AI field: machine learning (ML) and its more complex subset, deep learning (DL). The two approaches rose to prominence in the 1980s and around 2010 respectively. Deep learning in particular relies on layers of artificial neural networks, which excel at raw data-processing tasks such as sorting data and recognising images.
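For readers curious what those "layers" look like in practice, here is a minimal sketch using the PyTorch library. Everything in it (the layer sizes, the ten output classes, the random stand-in input) is invented purely for illustration; real medical models are far larger and trained on real data.

```python
import torch
import torch.nn as nn

# A minimal "deep" network: data flows through stacked layers,
# each transforming the output of the one before it.
# All sizes here are invented for illustration.
model = nn.Sequential(
    nn.Linear(784, 128),  # input layer: e.g. a flattened 28x28 image
    nn.ReLU(),            # non-linearity between layers
    nn.Linear(128, 64),   # hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer: one score per class
)

fake_image = torch.randn(1, 784)  # a random stand-in for real input data
scores = model(fake_image)
print(scores.shape)  # torch.Size([1, 10])
```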

Healthcare innovators suggest that the benefits of AI in medicine could be staggering, unlocking hidden value for professionals and patients alike. From nonclinical applications, such as AI scribes that simplify doctors’ note-taking, to algorithms capable of full diagnostic assessments and treatment suggestions, the technology could bring remarkable advances to the field.

Medical imaging is quickly becoming the AI frontier in healthcare, both because image-recognition tasks suit DL methods so well and because the data needed to train algorithms is readily accessible. For tasks in which a medical professional must study and assess images from medical scans to form a diagnostic opinion, AI algorithms show remarkable aptitude.

In 2018, the AI company DeepMind Health collaborated with London’s Moorfields Eye Hospital to develop a system capable of identifying dozens of eye diseases. After training the algorithm on hospital medical data, the company demonstrated proof of concept with a prototype. The system showed that a DL algorithm could learn to interpret a patient’s 3-D retinal scans and offer referral suggestions in as little as 30 seconds.
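As a toy illustration of that last step only, and emphatically not of DeepMind's actual architecture, the sketch below shows how a trained classifier can turn a scan volume into a referral suggestion by reading off class probabilities. The untrained model, the fake scan, and the referral category names are all stand-ins:

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained scan classifier (NOT DeepMind's model).
# It maps a 3-D scan volume to probabilities over invented referral classes.
REFERRAL_CLASSES = ["urgent", "semi-urgent", "routine", "observation"]

classifier = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),  # 3-D convolution suits volumetric scans
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),                    # pool the whole volume to one vector
    nn.Flatten(),
    nn.Linear(8, len(REFERRAL_CLASSES)),
)

scan = torch.randn(1, 1, 16, 64, 64)            # fake 3-D retinal scan volume
probs = torch.softmax(classifier(scan), dim=1)  # probabilities per referral class
suggestion = REFERRAL_CLASSES[int(probs.argmax())]
print(f"Suggested referral: {suggestion}")
```

In a real system, the classifier would be trained on thousands of labelled scans and clinically validated before any suggestion reached a doctor.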

Endless possibilities, endless challenges 

The development of DL algorithms for healthcare presents huge legal and ethical challenges for society. This is particularly true within the regulatory context of the European Union (EU) and its rigid data laws. The General Data Protection Regulation (GDPR), which came into effect in 2018, set strict data protection and privacy rules that sent regulatory shockwaves around the world.

The EU Commission produced a white paper at the start of the year outlining the strategic approach it will take as AI technology continues to mature. The paper states that the EU will “act as one and define its own way, based on European values, to promote the development and deployment of AI.” 

Ursula von der Leyen, the president of the EU Commission, made clear to MEPs that she supports the drafting of GDPR-style regulations for AI and other technologies.

“We must have mastery and ownership of key technologies in Europe. These include quantum computing, artificial intelligence, blockchain, and critical chip technologies,” von der Leyen said.

The white paper, taken together with statements from the Commission, sets the bloc on a path that contrasts starkly with those being taken in China or even in the United States.

“The innovations of AI in healthcare face several challenges [in the EU] such as issues around the ownership and control of data, regulations of data sharing, protection of sensitive data through anonymization and cybersecurity, [and] intense debate over the special accountability and responsibility requirements,” writes Filippo Pesapane, a radiologist whose research touches on the application of AI in medicine. 

Pesapane and his colleagues highlight a number of dimensions of AI that raise crucial issues for a robust, European values-based regulatory approach in the health sector.

Who owns the data produced by an algorithm? 

While big tech firms might prefer a legally loose data landscape, it has been clear since 2018 that the EU expects them to adapt to its rules and values. This makes the voracious data requirements of fledgling AI companies all the more complicated to satisfy.

Not only do algorithms need access to large quantities of personal patient data in order to acquire knowledge of a medical subject; they also produce large data sets themselves as part of their cognitive processes. For this reason, Pesapane suggests that a re-examination of regulations around data privacy and consent will ultimately be necessary before AI can be properly accounted for.

Protecting your medical data from cybercriminals 

The proliferation of medical AIs will put a growing volume of personal medical data into circulation, all of which will require careful control, monitoring, and security. Not only must this data be stored anonymously; it will also have to be policed against illicit harvesting by third parties who seek to deal in data.

Blockchain technology, currently better known for underpinning cryptocurrencies, could soon be an essential tool for securing that future abundance of health data against an equally fast-growing set of cybersecurity threats.
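The core idea is simpler than the cryptocurrency hype suggests: a blockchain is a sequence of records in which each entry commits to a cryptographic hash of the one before it, so tampering with history becomes detectable. A minimal sketch of that mechanism, with invented record fields, might look like this:

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministically hash a record's contents."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_record(chain: list, data: dict) -> None:
    """Append a record that commits to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"data": data, "prev_hash": prev}
    record["hash"] = record_hash({"data": data, "prev_hash": prev})
    chain.append(record)

def verify(chain: list) -> bool:
    """Any edit to an earlier record breaks every later hash link."""
    prev = "0" * 64
    for rec in chain:
        if rec["prev_hash"] != prev:
            return False
        if rec["hash"] != record_hash({"data": rec["data"], "prev_hash": rec["prev_hash"]}):
            return False
        prev = rec["hash"]
    return True

chain = []
append_record(chain, {"event": "scan_accessed", "by": "dr_a"})  # illustrative fields
append_record(chain, {"event": "scan_shared", "by": "dr_b"})
print(verify(chain))            # True
chain[0]["data"]["by"] = "x"    # tamper with history...
print(verify(chain))            # False: the tampering is detectable
```

Production systems layer distribution, consensus, and access control on top of this basic tamper-evidence property.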

One solution generally proffered for problems around personal data is anonymisation. If information about someone is irreversibly anonymised, it no longer constitutes personal data under the GDPR. However, truly effective anonymisation appears hard to accomplish, and effacing data brings trade-offs of its own.
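To see why it is harder than it looks, consider a naive attempt. The patient record below is invented; the point is that hashing a name is pseudonymisation rather than anonymisation, and genuinely anonymising the data means coarsening it, which costs detail:

```python
import hashlib

# An illustrative patient record; all fields are invented.
record = {
    "name": "Jane Doe",
    "postcode": "EC1A 1BB",
    "birth_year": 1954,
    "diagnosis": "diabetic retinopathy",
}

# Naive pseudonymisation: replace the name with a hash.
pseudonymised = dict(record)
pseudonymised["name"] = hashlib.sha256(record["name"].encode()).hexdigest()

# This is NOT anonymisation under the GDPR: the remaining
# quasi-identifiers (postcode + birth year) may still single
# someone out, and the hash can be reversed by anyone willing
# to guess-and-hash candidate names.

# Stronger, but lossier: generalise the quasi-identifiers.
anonymised = {
    "postcode_area": record["postcode"].split()[0],   # "EC1A 1BB" -> "EC1A"
    "birth_decade": record["birth_year"] // 10 * 10,  # 1954 -> 1950
    "diagnosis": record["diagnosis"],
}
print(anonymised)
```

The coarser the data, the better protected the patient, but also the less an algorithm can learn from it.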

A problem for regulators and healthcare professionals is finding the balance point at which identity is acceptably protected but the AI’s performance is not significantly degraded. This, of course, also suggests a loss of competitiveness for AIs produced under stricter EU regulation compared with non-EU peers less constrained by their local data laws.

My weakness is my strength 

Some commentators suggest that the EU’s regulatory approach can take a third way, distinct from state-run China and the Big Tech-dominated United States. While there are business drawbacks to a social economy, there could also be benefits.

“The stronger European regulation compared to China and the U.S. in this field might decrease our ability to scale revenue; however, in the long run, this focus on AI for the people can serve as our competitive advantage, and we become [a] role model for the rest of [the] world,” argues Anna Metsäranta, a business designer at Solita, which readies companies for AI technology. 

While the EU produces an abundance of AI literature and commentary, the concentration of AI technology companies is heavily weighted in favour of China and the US. With the bloc starting in third place, tight controls on technology companies will inevitably dampen European AI competitiveness further.

On the other hand, some studies show that European concerns about data privacy are declining. Whether this means that European high-level sensibilities about ethics and law will soften over time to accommodate AI is hard to say.
