Each October, the European Union Agency for Cybersecurity (ENISA) holds #CyberSecMonth, an annual campaign to promote cybersecurity among EU citizens and organisations, and to provide up-to-date information on online security through awareness-raising activities and the exchange of best practices. As part of this campaign, Didier Domínguez, an expert in artificial intelligence at the TIC Salut Social Foundation, and Oriol Castaño, an expert in cybersecurity at the Health Data Protection Office, have written the following article on cybersecurity and artificial intelligence in the field of health.
The integration of artificial intelligence (AI) in the field of health is now a reality with great potential to improve the diagnosis, treatment and management of patients. However, these new opportunities raise important challenges, especially in cybersecurity, which complicates the implementation of AI in hospitals and health systems in general.
Cybersecurity is central to AI-aided health care. AI systems process huge amounts of clinical data, such as medical records, images and analysis results. These data are very sensitive and must be protected to prevent misuse or unauthorised access. In fact, this information is considered a special category of personal data under data protection regulations, as it affects the most personal aspects of people’s lives and may be used to discriminate against people, harm them or commit crimes. Health data must therefore be protected with the strongest security measures appropriate to each case.
The health sector is not immune to cyberattacks, despite the serious ethical implications of an attack on any medical centre. In 2017, the WannaCry ransomware campaign paralysed parts of the UK’s National Health Service for days. And in 2019, a malicious individual leaked the personal data of thousands of HIV patients in Singapore [1]. There have also been more recent incidents, such as the cyberattack on Hospital Clínic, Barcelona in March 2023, which rendered the computer system inoperable and affected critical services, including the emergency room, outpatient consultations and the clinical analysis laboratory. It also caused oncological treatments and major medical interventions to be postponed [2]. These events highlight the health care sector’s vulnerability to cyberthreats.
Attacks on the health care sector are on the rise. They target medical systems with ransom demands ranging from thousands to millions of euros. The consequences of a “successful” attack affect all levels of medical care, resulting in rescheduled visits, treatment delays and possible diagnostic errors.
Since 2020, INTERPOL has warned of the growth in ransomware attacks targeting hospitals and other institutions involved in the global response to COVID-19. These attacks seek to block critical systems and extort payments, and the international body has alerted police forces in its member countries to this threat [3].
In addition, the proliferation of IoT devices in the medical sector, such as internet-connected pacemakers and insulin pumps, increases the attack surface, as cybersecurity protections may contain flaws owing to the race to bring new medical equipment to market.
Government institutions are taking steps to deal with the situation. Directive (EU) 2022/2555 [4], known as the NIS 2 Directive, on measures for a high common level of cybersecurity across the Union, establishes the minimum cybersecurity requirements that must be met by European organisations managing networks and information systems in sectors considered critical: energy, transport, water, health, digital infrastructure, public administration, and banking and finance, among others. It also stipulates that member states must adopt national cybersecurity strategies and appoint competent national authorities, single points of contact on cybersecurity, and cybersecurity incident response teams. The directive therefore combines multiple strategies to improve the cybersecurity of organisations in the health sector, including risk analysis, incident management, business continuity, supply chain security and the use of secure emergency communication systems within the organisation. Its measures apply from 18 October 2024, when it repeals its predecessor, the original NIS Directive (Directive (EU) 2016/1148).
Moreover, the US Food and Drug Administration (FDA) issued a series of reminders and warnings after discovering the aforementioned flaws in IoT devices in the medical sector, in an attempt to improve default safety standards. The Consolidated Appropriations Act, 2023, recently passed by the United States Congress, in particular Section 3305, ‘Ensuring Cybersecurity of Medical Devices’ [5], amended the Federal Food, Drug, and Cosmetic Act to strengthen cybersecurity regulations for medical devices. These amendments, which entered into force on 29 March 2023, underline the growing recognition of the need to prioritise cybersecurity when developing medical technologies.
In short, while no device or network is completely immune to vulnerabilities, regulatory measures such as these are critical to mitigate risks and foster safer health care technologies.
AI is having a significant impact on cybersecurity, as attackers are using AI to both design and carry out cyberintrusions. This includes the development of more sophisticated phishing e-mails, identity theft attacks, the rapid exploitation of vulnerabilities, the creation of complex malware, increased collection of information about targets, attack automation and overloading of human defences. In addition, AI is being used to expand ransomware and make it more evasive, which poses an additional challenge to cybersecurity [6].
All of these factors take the concept of cybersecurity to another level as it involves anticipating vulnerabilities instead of repairing them after the fact. Cyberthreats are constantly evolving and overcoming traditional security measures. To address this gap, organisations are increasingly turning to AI in cybersecurity. ‘Cyber AI’ is able to identify and respond to malicious activity autonomously, stopping attacks such as ransomware before they do harm by understanding what is normal and abnormal on the network. This approach, similar to the human immune system, has been successful in detecting sophisticated cyberattacks in recent years, catching attackers in the early stages [1].
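As a toy illustration of this ‘learn what is normal’ approach, the sketch below flags a traffic window whose statistics deviate sharply from a learned baseline. The feature names, values and z-score threshold are illustrative assumptions, not details from any of the cited systems:

```python
import numpy as np

def fit_baseline(samples):
    """Learn a simple per-feature baseline (mean and std) from normal traffic."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean(axis=0), samples.std(axis=0) + 1e-9

def is_anomalous(observation, mean, std, threshold=3.0):
    """Flag an observation whose z-score exceeds the threshold on any feature."""
    z = np.abs((np.asarray(observation, dtype=float) - mean) / std)
    return bool((z > threshold).any())

# Hypothetical features per time window: [connections/min, bytes sent, failed logins]
normal_traffic = [[40, 1200, 0], [38, 1100, 1], [42, 1300, 0], [39, 1250, 1]]
mean, std = fit_baseline(normal_traffic)

print(is_anomalous([41, 1220, 0], mean, std))     # typical window: False
print(is_anomalous([500, 90000, 30], mean, std))  # ransomware-like burst: True
```

Real ‘cyber AI’ products model far richer behaviour, but the principle is the same: the system learns a statistical picture of normality and raises an alert on sharp deviations, rather than matching known attack signatures.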
Input data poisoning
There are many cybersecurity concerns specifically associated with the implementation of AI in health centres, such as data poisoning: the introduction of malicious data into the training set of a machine learning application, affecting the model’s output. This can happen if malicious individuals have access to the input data used to train the model.
Potential targets in a medical care environment include data analytics systems, IoT devices, network and security monitoring systems, and facility control systems. Various methods can be used to carry out attacks during the training and testing stages, resulting in technical damage such as loss of data integrity, reduced service availability and degraded performance. To mitigate this threat, input data should be properly cleaned and reviewed during training and testing, outliers and anomalies identified, and model integrity and performance maintained with appropriate monitoring and security measures.
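As a minimal sketch of this kind of input sanitisation, the following filter drops training rows that deviate strongly from the per-class median, using the robust median absolute deviation. The lab-value features and the poisoned row are hypothetical examples, not data from any cited incident:

```python
import numpy as np

def remove_outliers(X, y, threshold=3.0):
    """Drop training rows whose features deviate strongly from their class median."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    keep = np.ones(len(X), dtype=bool)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        med = np.median(X[idx], axis=0)
        # Median absolute deviation: robust to the very outliers we want to catch
        mad = np.median(np.abs(X[idx] - med), axis=0) + 1e-9
        z = np.abs(X[idx] - med) / mad
        keep[idx] &= (z < threshold).all(axis=1)
    return X[keep], y[keep]

# Hypothetical lab values; the last row is a poisoned sample with the same label
X = [[5.1, 100], [5.3, 102], [4.9, 99], [5.0, 101], [25.0, 900]]
y = [0, 0, 0, 0, 0]
X_clean, y_clean = remove_outliers(X, y)
print(len(X_clean))  # 4: the poisoned row was filtered out
```

A simple statistical filter like this is only a first line of defence; it catches crude poisoning but not carefully crafted in-distribution samples, which is why the monitoring and integrity measures mentioned above remain necessary.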
One-pixel attack
Some papers argue that the one-pixel attack is a real danger in the context of computer vision and machine learning applied to cancer detection and diagnosis [7]. When critical tasks such as medical diagnosis are automated, malicious manipulation of this process can have devastating consequences, even leading to incorrect diagnoses and treatments. For example, researchers have demonstrated one-pixel attacks in a realistic scenario, using the real pathology dataset TUPAC16 and adversarial images targeting IBM CODAIT’s MAX breast cancer detector [7]. The results show that a minimal modification of a single pixel in a complete image can reverse the result of the automatic diagnosis. This poses a real threat from a cybersecurity perspective, since the one-pixel method could be used as an attack vector by a motivated attacker. This underlines the importance of protecting AI systems in medical applications against cyberthreats and attacks.
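The mechanics can be illustrated with a deliberately simplified toy model. The ‘classifier’ below is just a mean-intensity threshold (not the detector or dataset from [7]), but it shows how a brute-force search over single pixels can find one change that flips the output:

```python
import numpy as np

def classify(image, threshold=0.5):
    """Toy classifier: 'malignant' if mean pixel intensity exceeds the threshold."""
    return "malignant" if image.mean() > threshold else "benign"

def one_pixel_attack(image, target_label, classify_fn):
    """Search for a single-pixel change that flips the classifier's output."""
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            for value in (0.0, 1.0):  # try extreme intensities at this pixel
                candidate = image.copy()
                candidate[i, j] = value
                if classify_fn(candidate) == target_label:
                    return candidate, (i, j, value)
    return None, None

# A 3x3 "scan" sitting just below the decision boundary
image = np.full((3, 3), 0.47)
print(classify(image))  # benign
adv, change = one_pixel_attack(image, "malignant", classify)
print(classify(adv), change)
```

Real one-pixel attacks against deep networks use optimisation methods such as differential evolution rather than exhaustive search, but the underlying fragility is the same: a decision that sits close to a boundary can be tipped by a single pixel.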
Model inversion, inference and extraction attacks
Model inversion, inference, and extraction attacks focus on obtaining confidential information about a machine learning model or its training set. Examples include model inversion, which reconstructs sensitive training inputs from a model’s outputs; membership inference, which determines whether a particular record was part of the training set; and model extraction, which clones a model’s functionality by querying it repeatedly.
The threat sources of model inversion, inference and extraction attacks are likely to be external actors with access to the model’s interfaces as regular consumers, or insiders with partial knowledge of the model’s architecture, hyperparameters and training settings, or with full access to the model itself. The type and methods of attack available vary according to how much information the attacker can obtain about the model and training set, the learning architecture, and the machine-learning algorithm used. Such attacks can have a significant impact on the security and privacy of personal data: they may lead to the disclosure of personal information, and they may have legal and financial implications for the organisation that owns or controls the machine-learning model. To defend against these attacks, various security measures can be implemented, such as detecting anomalies in model queries, applying random noise, using differential privacy (DP), and regularisation to reduce model overfitting.
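As a minimal sketch of one such defence, the Laplace mechanism from differential privacy adds noise calibrated to a query’s sensitivity before the result is released, limiting what any single patient’s record can reveal. The counting query and parameter values here are illustrative assumptions:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release a query result with Laplace noise of scale sensitivity/epsilon."""
    if rng is None:
        rng = np.random.default_rng()
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical counting query: number of patients matching a rare condition.
# The sensitivity of a count is 1: one patient changes it by at most 1.
true_count = 12
rng = np.random.default_rng(42)
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(round(noisy_count, 2))
```

Smaller values of epsilon mean stronger privacy but noisier answers; in practice the parameter is chosen to balance the privacy risk against the clinical usefulness of the released statistic.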
In addition to the types of attack set out above, there is a whole set of issues that can compromise an AI system’s cybersecurity. These concerns include the introduction of adversarial data into training sets, the lack of strong regulation, opacity in model transparency, business leaders’ lack of knowledge, and the need to manage the consequences of unintentional changes to AI models. These risks can lead to unexpected results, biases in data and a lack of accountability [8].
There are challenges that can directly or indirectly affect the cybersecurity of AI systems in medical centres, such as a lack of clear articulation of outcomes and organisational expectations for performance, quality and accuracy. This is a critical point in implementing AI and machine learning: if specific technological goals are not established for providers, defining expected performance, accuracy, margin of error and outputs, and how these relate to the quality of medical care, the results can be unhelpful, have a negative impact on decision-making, and undermine trust in the technology.
Sources of potential threats include poor data quality, lack of transparency and system controls, and de-identification of data, which can lead to biases and differences in treatment plans or patient workflows. In addition, it is important to recognise the limitations of AI and the possible biases in the algorithms, as well as the risk of underfitting or overfitting when some kind of data change or transition is applied.
Various solutions have been proposed to address these challenges in the implementation of AI and machine learning [8].
In addition to the cybersecurity risks inherent in any IT system, there are some risks specific to AI solutions. The health care field is not exempt from these risks. In fact, it is the specific target of many cyberintrusions. The purpose of these attacks can range from gaining access to medical data, poisoning the data or manipulating the models to copy them, impairing performance or inferring patient information. In addition, constant advances in AI mean that it is used to develop more effective and complex attacks. But fortunately, these advances also make it possible to anticipate vulnerabilities, stop attacks in time and prevent the overloading of human defences.
In addition, there are other cybersecurity challenges in AI more closely associated with the organisational structure, opacity in model transparency, and business leaders’ lack of knowledge. To address them, measures such as model verification, suitable regulation, transparency in decision-making, education of business leaders and implementation of change management practices should be adopted.
In conclusion, cybersecurity in AI-based medical care systems is essential to guarantee the accuracy of diagnoses, as well as privacy and data integrity. These factors are fundamental to maintaining trust in these critical systems for public health.
[1] AI in Healthcare: Protecting the Systems that Protect Us. (2020). Wired. Retrieved from https://www.wired.com/brandlab/2020/04/ai-healthcare-protecting-systems-protect-us/ (Accessed 26 October 2023).
[2] Hospital Clínic. (2023). Ciberatac a l’hospital Clínic Barcelona. Retrieved from https://www.clinicbarcelona.org/ca/premsa/ultima-hora/ciberatac-a-lhospital-clinic-barcelona (Accessed 26 October 2023).
[3] INTERPOL. (2020). Cybercriminals targeting critical healthcare institutions with ransomware. INTERPOL. Retrieved from https://www.interpol.int/en/News-and-Events/News/2020/Cybercriminals-targeting-critical-healthcare-institutions-with-ransomware (Accessed 26 October 2023).
[4] EUR-Lex, Access to EU law. (2022). Cybersecurity of network and information systems. Retrieved from https://eur-lex.europa.eu/ES/legal-content/summary/cybersecurity-of-network-and-information-systems-2022.html (Accessed 26 October 2023).
[5] Federal Register, the Daily Journal of the United States Government. (2023). Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions; Guidance for Industry and Food and Drug Administration Staff; Availability. Retrieved from https://www.federalregister.gov/documents/2023/09/27/2023-20955/cybersecurity-in-medical-devices-quality-system-considerations-and-content-of-premarket-submissions (Accessed 26 October 2023).
[6] U.S. Department of Health and Human Services. (2023). Artificial Intelligence, Cybersecurity and the Health Sector. Retrieved from https://www.hhs.gov/sites/default/files/ai-cybersecurity-health-sector-tlpclear.pdf (Accessed 26 October 2023).
[7] Korpihalkola, J., Sipola, T., Puuska, S., & Kokkonen, T. (2021). One-Pixel Attack Deceives Computer-Assisted Diagnosis of Cancer. In SPML 2021 (pp. 100–106). ACM. doi: 10.1145/3483207.3483224.
[8] Healthcare & Public Health Sector Coordinating Councils. (2023). Health Industry Cybersecurity-Artificial Intelligence Machine Learning (HIC-AIM). Retrieved from https://www.aha.org/cybersecurity-government-intelligence-reports/2023-02-08-new-hscc-cwg-publication-artificial-intelligence-cybersecurity (Accessed 26 October 2023).