Vol V, Issue 1 Date of Publication: February 17, 2020
DOI: https://doi.org/10.20529/IJME.2019.074

LETTERS

Published online on November 9, 2019
DOI: 10.20529/IJME.2019.074

Globalising artificial intelligence for improved clinical practice

Artificial intelligence (AI) technologies are enabling modern healthcare organisations to leverage the power of big data in clinical practice (1). In most cases, AI-based systems improve clinical decision-making using multiple layers of information and pre-specified algorithms (2). In addition, recent AI technologies like machine learning can learn from existing data and perform predictive operations, resulting in robust performance in clinical settings (1, 2). Such innovations are likely to serve the healthcare industry by minimising human error, saving costs, and maximising informed decision-making (2). However, critical challenges may affect the application of AI in clinical settings, including its effects on patient-provider communication, the safety and efficacy of health services, and the humane aspects of caregiving (1, 2). These issues suggest the need for a more careful analysis of the different ethical aspects before adopting AI in clinical practice.

Several agencies have started developing guidelines and regulatory frameworks for using AI in clinical practice. For example, the High-Level Expert Group on AI of the European Commission has presented its “Ethics guidelines for trustworthy AI,” proposing that the development of AI should be lawful, ethical, and robust (3). This highlights the need to consider the multi-dimensional aspects of AI, which carry potentially complex medico-legal and ethical implications. Moreover, AI-based systems are continuously evolving, making it essential to maintain the balance between technological advancement and safe use in clinical operations. A recent regulatory framework proposed by the US Food and Drug Administration acknowledges this issue, arguing that future modifications of AI-based technologies should emphasise safe and effective use (4). While such guidelines are essential for the optimal development and implementation of AI-based clinical systems, most of them have local or regional scope rather than a global vision. In the era of continued globalisation, it is critical to recognise the pre-existing digital divide among nations; how it can be aggravated by newer technologies like AI; and how future advancements should address these challenges.

Digital health technologies are increasingly being used to strengthen health systems in many low- and middle-income countries. However, very few of those technologies are applied in clinical settings in these countries. In such resource-constrained contexts, AI-based clinical systems are likely to arrive late and to incur a high cost for users or health systems. In addition, developing nations do not have adequate resources to pursue research and development in such advanced technologies. Therefore, a digital divide continues to exist between developed and developing nations.

Furthermore, clinical practice guidelines are diverse across contexts and populations. In this scenario, separate national guidelines for AI may add further complexity to clinical practice globally. Interestingly, such problems can be prevented by the same AI technologies if AI is integrated into clinical settings under uniform guidelines worldwide, reducing complexity and improving clinical practice. In this process, AI-based systems would be exposed to the large and diverse datasets essential for training and testing, yielding greater precision in clinical decision-making across contexts. Moreover, the use of AI to integrate genomic, epigenetic, and behavioural data can better inform personalised diagnosis and treatment across populations (2). AI can also be used to analyse the economic, political, and technological challenges in a population and inform clinical decision-making accordingly, which can help achieve equality and sustainability in global health systems (1).

To unleash these opportunities, a global vision for developing and using AI in clinical practice is essential. This can be achieved by fostering collaboration among scholars and institutions across the globe, with a particular focus on developing countries, which bear a greater share of the global burden of disease, and on capacity building. Without advancing medical education in the era of digital health, clinical practitioners may not acquire the competencies to serve within a technologically advanced healthcare system. Recent initiatives by the World Health Organization and the International Telecommunication Union for benchmarking AI in healthcare hold promise for improving AI-driven processes and outcomes (5). As these procedures are developed, adopting globalised approaches within such efforts may help overcome existing digital health challenges and prevent future disparities in AI-based clinical practice.

Md Mahbub Hossain ([email protected]), School of Public Health, Texas A & M University, Texas, USA; Rachit Sharma ([email protected]), The INCLEN Trust International, Okhla Industrial Area Phase-1, New Delhi 110 020, INDIA; Abida Sultana ([email protected]), Nature Study Society of Bangladesh, Khulna 9000, BANGLADESH; Samia Tasnim ([email protected]), School of Public Health, Texas A & M University, Texas, USA; Farah Faizah ([email protected]), United Nations Population Fund, IDB Bhaban, Dhaka 1207, BANGLADESH

References

  1. Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, Wang Y. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017 Jun 21; 2(4): 230-43.
  2. Shahid N, Rappon T, Berta W. Applications of artificial neural networks in health care organizational decision-making: A scoping review. PloS One. 2019 Feb 19; 14(2): e0212356.
  3. European Commission. Ethics guidelines for trustworthy AI | Digital Single Market. 2019 Apr 8 [cited 2019 Oct 20]. Available from: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  4. Food and Drug Administration. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback. 2019 [cited 2019 Oct 21]. Available from: https://www.fda.gov/downloads/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm514737.pdf
  5. International Telecommunication Union (ITU). ITU-WHO Workshop on Artificial intelligence for health. 2018 Sep 25 [cited 2019 Oct 21]. Available from: https://www.itu.int/en/ITU-T/Workshops-and-Seminars/20180925/Pages/default.aspx