Artificial intelligence and laboratory medicine: at the crossroads of value, ethics, and liability
1Pôle de Recherche en Endocrinologie, Diabète et Nutrition, Institut de Recherche Expérimentale et Clinique, Cliniques Universitaires Saint-Luc and Université Catholique de Louvain, Brussels, Belgium.
2Department of Clinical Biochemistry, Cliniques Universitaires Saint-Luc and Université Catholique de Louvain, Brussels, Belgium
3Department of Biochemistry, G.B. Pant Institute of Postgraduate Medical Education & Research, Associated Maulana Azad Medical College, New Delhi, India
Corresponding Author
Prof. Damien Gruson, damien.gruson@saintluc.uclouvain.be
AI has the potential to transform laboratory medicine by enabling better decision-making, faster diagnosis, and more personalized treatment. AI can analyze and integrate vast amounts of data, identify patterns, and make predictions that assist healthcare professionals in reaching accurate diagnoses. AI can also optimize laboratory workflows, reducing turnaround time and improving patient outcomes. AI also underpins a next generation of clinical decision support systems (CDSS) that can assist healthcare professionals in making clinical decisions. CDSS can integrate patient-specific data, imaging, and clinical guidelines to provide personalized treatment recommendations. However, CDSS must be carefully designed and evaluated to ensure that they are accurate and reliable, and laboratory specialists play a fundamental role in that process.
However, these advancements also raise ethical concerns and questions of liability. The use of emerging technologies and AI in laboratory medicine raises issues such as patient privacy, informed consent, and the potential for bias. AI algorithms must be transparent, explainable, and accountable to ensure that they do not perpetuate biases or make decisions that are not in the best interest of the patient. Patient privacy must also be protected, as patient data can be vulnerable to hacking and misuse.
Another important concern in the use of AI is liability. Who is responsible if an AI algorithm makes an incorrect diagnosis or recommendation: the healthcare professional who uses the tool, the manufacturer of the tool, or the AI algorithm itself? Liability must be carefully considered and addressed to ensure that patients are protected and that healthcare professionals are not held responsible for errors beyond their control.
Specialists in laboratory medicine are central players in the adoption of emerging technologies and in the application of AI. They should be engaged, collectively and in multidisciplinary teams, to achieve this transition. They must also carefully evaluate and implement these technologies to ensure that they are accurate, reliable, and ethical. It is crucial to strike a balance between the benefits and potential risks of emerging technologies and AI in laboratory medicine to ensure the best possible outcomes for patients.