Publication-equivalent & beyond: Strengthening the ICMR-IRIS framework
sekarcandra@gmail.com
Sir,
I read with great interest the article by Bahl1 describing the ICMR-Impact of Research and Innovation Scale (ICMR-IRIS). This framework represents a commendable effort to quantify research impact through the novel concept of Publication-Equivalent (PE), which standardizes outputs such as publications, patents, clinical/public health guideline influence, and technology commercialization. It responds to the pressing need for a unified, scalable metric to ensure accountability in the use of public research funds.
The ICMR-IRIS demonstrates several strengths, including a clear rationale, transparent indicators, and scalability from individual projects to institutional and national levels. It bridges the gap between simplistic citation metrics2 (e.g., h-index, i10-index) and complex multi-indicator frameworks (e.g., Translational Research Impact Scale)3. However, its uniform application across all researchers and research stages leaves scope for refinement to ensure fairness and inclusivity.
The ICMR-IRIS seeks to establish a unified, quantifiable metric for assessing research and innovation impact across biomedical domains. Positioned as a practical alternative to existing metrics such as the h-index, i10-index, and the Translational Research Impact Scale2,3, it addresses the need for scalable and transparent impact assessment within the Indian research ecosystem, particularly to ensure accountability in the use of public funds.
While the ICMR-IRIS scale assigns fixed weights to eight indicators, its credibility would be strengthened through consensus methods (e.g., Delphi, multi-stakeholder consultation) and empirical validation. Future refinements could further justify the PE-based approach by benchmarking it against international frameworks (e.g., UK REF4, NHMRC5, CIHR6).
Enhancing field normalization would help ensure that disciplines with longer translation timelines (e.g., basic science) are equitably represented. Similarly, calibrated allocation of PE in collaborative outputs could provide balanced recognition across institutions, avoiding duplication. Expanding the reporting of initial applications from ICMR intramural and extramural programs with more detailed data would also increase transparency. Moreover, empirical validation studies—such as testing correlation with downstream health outcomes, assessing inter-rater reliability, or undertaking sensitivity analyses—would further reinforce confidence in the framework’s robustness.
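For illustration only, and as my own sketch rather than a component of the published framework, duplication in collaborative outputs could be avoided by distributing a single PE credit across the n contributing institutions according to agreed contribution shares:

\[
\mathrm{PE}_{k} = s_{k}\,\mathrm{PE}_{\text{output}}, \qquad \sum_{k=1}^{n} s_{k} = 1,
\]

where s_k denotes the contribution share of institution k, with equal shares (s_k = 1/n) as the simplest default. Under such a rule, a jointly developed guideline valued at, say, 4 PE and shared equally by two institutions would credit 2 PE to each rather than 4 PE to both.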
Although scalable from individual projects to institutional and national levels, the metric’s impact could be broadened by complementing immediate outputs with long-term outcomes such as equity improvements, capacity building, and non-publication policy influence. International comparability could also be strengthened by aligning weighting decisions with broader global dialogues while preserving context-specific priorities. Addressing potential negative or neutral impacts (e.g., retractions, research waste) would enhance comprehensiveness. In parallel, proactive measures could mitigate risks such as incentivization of publication inflation, premature patent filings, or disproportionate emphasis on short-term translational outputs at the expense of foundational research.
To enhance fairness, several refinements are recommended. First, career-stage normalization—calculating PE per scientist-year or adjusting for years post-PhD/MD—can account for variations in researcher experience. Second, incorporating stage-of-research sensitivity allows early-phase or developmental research to receive appropriate recognition. Third, expanding indicators to encompass open science practices, health equity contributions, and capacity-building outcomes provides a holistic reflection of research impact. Fourth, transparent assignment of weights through multi-stakeholder consensus ensures credibility and acceptance. Finally, establishing governance mechanisms to prevent double counting and audit high-impact claims is essential for maintaining integrity, accountability, and trust in the framework.
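As a purely illustrative formulation of the first point, and again my own assumption rather than part of the published scale, career-stage normalization could be expressed as PE accrued per scientist-year:

\[
\mathrm{PE}_{\text{per scientist-year}} = \frac{\mathrm{PE}_{\text{total}}}{\text{years since first doctoral or medical qualification}}
\]

On this basis, an early-career researcher with 10 PE five years after the PhD and a senior researcher with 40 PE twenty years after the PhD would both score 2 PE per scientist-year, permitting fairer comparison across career stages.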
In parallel, with increasing emphasis on research promotion and the use of research outcomes in accreditation and ranking systems such as NIRF and NAAC, the development of strong institutional policies on responsible conduct of research and research integrity becomes vital. Apex bodies and institutions could play a pivotal role in embedding these policies alongside evaluation frameworks such as ICMR-IRIS, ensuring that impact is pursued in alignment with ethical standards. Furthermore, prioritizing dedicated funding for bioethics and research ethics in these areas would strengthen the ecosystem by encouraging responsible innovation and safeguarding public trust.
To ensure a more equitable and context-sensitive assessment, a career-stage-adjusted version of ICMR-IRIS could be piloted across select institutes. Such a pilot would allow real-world testing of the modified framework and help identify implementation challenges. Following the pilot, a validation study should compare the standard and adjusted models, focusing on their ability to reflect both foundational and translational contributions fairly. Finally, a national expert working group could review the pilot results, refine stage-sensitive indicators, and establish a roadmap for nationwide adoption.
Overall, the ICMR-IRIS is a pragmatic step toward standardized research impact evaluation in India. Its simplicity is laudable, yet career-stage and research-phase sensitivity, broader impact inclusion, and empirical validation would help ensure fairness and global relevance. By embedding fairness, inclusivity, and integrity into its evolution, ICMR-IRIS can not only transform national research evaluation but also position India as a global leader in developing responsible and ethically grounded models of research impact assessment.
Financial support & sponsorship
None.
Conflicts of Interest
None.
Use of Artificial Intelligence (AI)-Assisted Technology for manuscript preparation
The authors confirm that there was no use of AI-assisted technology for assisting in the writing of the manuscript and no images were manipulated using AI.
References
1. Publication-Equivalent as the new single currency of research impact: The ICMR-Impact of Research and Innovation Scale (ICMR-IRIS). Indian J Med Res. 2025;162:1-4.
2. Google Scholar. Google Scholar Metrics. Available from: https://scholar.google.com/intl/en/scholar/metrics.html#metrics, accessed on September 30, 2025.
3. The translational research impact scale: development, construct validity, and reliability testing. Eval Health Prof. 2014;37:50-70.
4. UK Research and Innovation (UKRI). How Research England supports research excellence. Available from: https://www.ukri.org/who-we-are/research-england/research-excellence/research-excellence-framework/, accessed on September 30, 2025.
5. NHMRC. Building a Healthy Australia. Research impact position statement. Available from: https://www.nhmrc.gov.au/research-policy/research-translation-and-impact/research-impact, accessed on September 30, 2025.
6. Assessing health research and innovation impact: Evolution of a framework and tools in Alberta, Canada. Front Res Metr Anal. 2018;3:25.