AI can no longer be imagined away from daily life, and health care, too, makes increasing use of it. In this fact sheet we outline, from a health care law perspective, what care providers should pay attention to when purchasing and using AI. We also give tips and recommendations for good use.
What is AI?
Artificial Intelligence (or ‘AI’) is not a clearly defined legal term. AI refers to systems, mostly software, that can perform tasks and display “intelligent” behaviour. Examples include a tool that predicts with a high degree of probability which patients will show up for a consultation, or one that supports doctors in analysing MRI scans.
What requirements are set on AI in health care?
The AI Act, which enters into effect in stages between 2024 and 2026, is relevant to all users of AI, and so is the GDPR. Specifically for AI used in health care, it is important to determine whether the application is a medical device. AI is a medical device if it has a medical purpose. Medical devices must comply with the quality requirements of the MDR (Medical Device Regulation). The MDR distinguishes four risk classes; the higher the risk class, the stricter the requirements the medical device must meet. All medical devices must bear a CE marking, which shows that the device meets the minimum requirements of the MDR.
Responsibilities and liability
Good care
Under the Healthcare Quality, Complaints and Disputes Act (‘Wkkgz’), care providers must ensure that all AI deployed in the care process is safe and of good quality: the deployment of AI must result in good care. In addition, once the AI Act has entered into effect, the AI must meet the requirements of the AI Act. It is the care provider’s responsibility to verify this before the AI is deployed.
If the AI application has a medical purpose and therefore qualifies as a medical device, it must also meet the minimum quality requirements of the MDR. When purchasing, the care provider must check that the AI application bears a CE marking and is registered in Eudamed, the European database on medical devices.
Employees working with AI applications must receive proper instructions. The care provider must also install the updates issued by the supplier, thereby providing proper ‘maintenance’.
Informing the patient
AI is often, though not always, used in the performance of a medical treatment contract.
Under the Medical Treatment Contracts Act (‘Wgbo’) and the Wkkgz, patients must be properly informed about their treatment.
If the AI is a medical device, the patient must be properly informed: what is the AI application suitable for, and what are the risks of incorrect use? If the AI is not owned by the care provider, patients must be informed of this as well.
Liability
The use of AI may be accompanied by unexpected incidents that cause damage to a patient. The bar for liability is high; whether the care provider is liable for damage to a patient will always depend on the circumstances of the case. By taking out insurance, care providers can limit liability risks, but never eliminate them completely.
The supplier is responsible for ensuring that the AI complies with the quality requirements; if it does not, the supplier will usually be liable. In practice, however, the patient will often hold the care provider or the individual carer liable rather than the supplier. If the care provider is sued over a defective AI application, it may recover the damages from the supplier, who bears product liability.
If the AI application is a medical device, the supplier must ensure that the product bears a CE marking. A CE marking is merely confirmation that the product meets the statutory minimum requirements; it does not necessarily make the product suitable for use by the care provider, and the product may still turn out to have defects later on.
Responsibilities and liabilities of individual care providers
Individual care providers who use AI in health care may in principle assume that a safe and qualitatively good tool is being used. If they follow the care provider’s policy and there are no signals that this policy is wrong, individual care providers cannot be held to account for it. When in doubt about the suitability of the AI tool or the policy, the individual care provider should raise this with the care provider. It is also important that individual care providers themselves know very well how the AI application works and can inform patients about it. If a doctor acts contrary to what may be expected of a reasonably acting and reasonably skilled professional colleague, that doctor may face disciplinary proceedings. This threshold, too, is high.
The above is not advice; it offers insight into the main aspects. Each application and each situation is different.