ICMR Sets Ethical Boundaries for AI in Healthcare and Research
Artificial intelligence (AI) in healthcare is rapidly transforming the industry, and the discussion around it is gaining significant momentum. As AI technologies evolve, they are becoming integral to improving patient outcomes, enhancing diagnostics, and streamlining healthcare operations. From personalized treatment to predictive analytics, AI has the potential to revolutionize how healthcare is delivered, making it more efficient and accessible. The Indian Council of Medical Research (ICMR) has periodically formulated ethical guidance documents to promote ethical, high-quality research in India. The most recent version of ICMR’s National Ethical Guidelines for Biomedical and Health Research Involving Human Participants was released in 2017.
All health and biomedical research, whether AI-based or conventional, must adhere to fundamental ethical principles: respect for persons (autonomy), beneficence (doing good), non-maleficence (doing no harm), and distributive justice. These principles ensure the protection of the dignity, rights, safety, and well-being of participants.
Responsible AI
Responsible AI frameworks emphasize core principles such as inclusivity, fairness, security, and transparency. However, the implementation and interpretation of these principles can differ across various domains.
Ethical Guidelines for the Application of AI in Healthcare
The integration of AI technology in healthcare must follow ethical principles upheld by all stakeholders. Given its direct impact on patient lives, AI applications require careful, ethical approaches, ensuring patient safety, confidentiality, and cautious deployment at every stage.

Autonomy
When AI technologies are used in healthcare, there is a possibility that the system will function independently and undermine human autonomy. Applying AI technology in healthcare may transfer the responsibility for decision-making into the hands of machines.
Safety and Risk Minimization
Before an AI technology based system is put into widespread use, it must be affirmed that the system will operate safely and reliably. The responsibility for ensuring participant safety lies with all stakeholders involved in the development and deployment of the AI technology. Protection of the dignity, rights, safety, and well-being of patients/participants must have the highest priority. The risk involved in deploying AI technologies and techniques in clinical research or patient care will differ depending on the use case and the deployment methodology used.
Trustworthiness
Trustworthiness is the most desirable quality of any diagnostic or prognostic tool used in AI-based healthcare. Clinicians need to build confidence in the tools they use, and the same applies to AI technologies.
Data privacy
Data privacy measures must aim to prevent unauthorized access, modification, and/or loss of personal data. The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty [16]. These practices are crucial in the healthcare sector, where medical information is sensitive data that, if misused, could harm patients or subject them to discrimination, even unintentionally. Individual patients’ data should preferably be anonymized unless keeping it in an identifiable format is essential for clinical or research purposes.
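As a minimal illustration of the anonymization point above (not part of the ICMR guidance), the following Python sketch pseudonymizes a patient table by dropping direct identifiers and replacing patient IDs with salted hashes. The column names, salt, and data are hypothetical, and any retained re-identification key would need to be stored separately under access control.

```python
import hashlib

import pandas as pd

# Hypothetical direct identifiers; a real project would follow its own data dictionary.
DIRECT_IDENTIFIERS = ["name", "phone", "address"]


def pseudonymize(records: pd.DataFrame, salt: str) -> pd.DataFrame:
    """Drop direct identifiers and replace patient IDs with salted SHA-256 hashes."""
    deidentified = records.drop(columns=DIRECT_IDENTIFIERS, errors="ignore").copy()
    deidentified["patient_id"] = deidentified["patient_id"].astype(str).map(
        lambda pid: hashlib.sha256((salt + pid).encode()).hexdigest()
    )
    return deidentified


# Toy example with made-up records
patients = pd.DataFrame({
    "patient_id": ["P001", "P002"],
    "name": ["A. Kumar", "B. Singh"],
    "phone": ["9999999999", "8888888888"],
    "address": ["Delhi", "Mumbai"],
    "hba1c": [6.1, 7.4],
})
print(pseudonymize(patients, salt="study-specific-secret"))
```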
Accountability
Accountability is described as the obligation of an individual or organization to account for its activities, accept responsibility for its actions, and disclose the results in a transparent manner. AI technologies intended for deployment in the health sector must be ready to undergo scrutiny by the concerned authorities at any point in time.
Optimization of Data Quality
AI is a data-driven technology, and its outcomes largely depend on the data used to train and test it. This is of particular importance in the field of AI for health, where a skewed or insufficiently large dataset can lead to bias, errors, and discrimination.
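As one illustrative sketch of this point (with hypothetical column names and thresholds, not a prescription from the guidelines), a simple audit of subgroup representation can flag a skewed training set before model development begins:

```python
import pandas as pd


def audit_representation(df: pd.DataFrame, group_cols: list[str], min_fraction: float = 0.05) -> pd.DataFrame:
    """Report the share of each subgroup and flag those below a minimum fraction."""
    counts = df.groupby(group_cols).size().rename("count").reset_index()
    counts["fraction"] = counts["count"] / len(df)
    counts["under_represented"] = counts["fraction"] < min_fraction
    return counts.sort_values("fraction")


# Toy training set deliberately skewed by sex and age band
train = pd.DataFrame({
    "sex": ["F"] * 90 + ["M"] * 10,
    "age_band": ["18-40"] * 80 + ["60+"] * 20,
})
print(audit_representation(train, ["sex", "age_band"]))
```

The appropriate grouping variables and minimum fractions would depend on the clinical question and the population the tool is meant to serve.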
Accessibility, Equity and Inclusiveness
Both the development and deployment of AI technologies in healthcare presuppose wide availability of computing infrastructure. A digital divide exists in almost all countries and is more pronounced in low- and middle-income countries (LMICs).
Collaboration
Needless to say, the collaboration among AI researchers and health professionals throughout the process of development and adoption of AI-based solutions is likely to improve the yield from this promising technology.
Validity
The performance of AI-based algorithms may diverge because of differences in the datasets used to train them. AI technology in healthcare must therefore undergo rigorous clinical and field validation before being applied to patients/participants; such validation is necessary to ensure safety and efficacy.
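A minimal sketch of the underlying idea, using synthetic stand-in data rather than any real cohort, is to evaluate a model developed on one site's data against an independent external dataset before making any claim about validity:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for data from two different sites (assumption: no real data).
X_dev, y_dev = rng.normal(size=(500, 5)), rng.integers(0, 2, 500)
X_ext, y_ext = rng.normal(loc=0.3, size=(200, 5)), rng.integers(0, 2, 200)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# Internal performance can be optimistic; evaluating on an independent
# external dataset is one part of the clinical/field validation step.
print("internal AUROC:", roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]))
print("external AUROC:", roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]))
```

Field validation would go further, assessing the tool prospectively in the clinical workflow where it is meant to be used.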
Summary
In conclusion, India’s journey toward becoming an AI-driven healthcare hub reflects its ongoing efforts to blend technology with healthcare to tackle long-standing challenges, improve patient outcomes, and expand access to quality medical care. With continued advancements in AI, the country is poised to lead the way in global healthcare innovation.
Stay tuned to TISHHA NEWS for the latest updates.