
The double-edged sword of automation and the risks of AI’s uneven impact on healthcare professions: a comment on the OECD artificial intelligence papers report

Abstract

The integration of artificial intelligence (AI) is rapidly transforming healthcare, enhancing accuracy, personalizing care, and streamlining administrative tasks. The report published in May 2025 by the OECD (Organisation for Economic Co-operation and Development) offers a taxonomy of health roles based on their susceptibility to automation through GenAI and robotics, ranging from low risk (e.g., physicians) to high risk (e.g., orderlies, transcriptionists). While AI augments many routine clinical tasks, roles requiring
complex judgment and empathy remain largely human-driven. Key concerns arise from the potential uncritical adoption of these evaluations, which fragment healthcare roles and risk overlooking their integrative, relational nature. Additionally, industry-led implementation of AI may undervalue frontline clinical expertise and ethical considerations. To ensure responsible integration, multi-stakeholder strategies such as the European Pact for Skills and targeted upskilling initiatives are essential. Policymakers must guide AI adoption to redesign roles and education in ways that empower the workforce without sacrificing the core values of care.

Article

The integration of artificial intelligence (AI) into healthcare is rapidly transforming practice. AI enhances accuracy, enables the personalization of treatment plans, and streamlines administrative tasks. AI-driven tools have demonstrated potential in reducing the administrative and cognitive burdens that contribute to clinician burnout [1].

Despite these advancements, the adoption of AI in healthcare raises significant concerns. A study highlights the importance of addressing these ethical and safety challenges to ensure that AI technologies are developed and deployed responsibly [2]. A paper exploring the implications of AI for the nursing workforce emphasizes the need for strategic planning to manage these transitions effectively [3]. The Food and Drug Administration (FDA) recently announced the adoption of “Elsa”, a GenAI tool designed to “help employees, from scientific reviewers to investigators, work more efficiently” (https://www.fda.gov/news-events/press-announcements/fda-launches-agency-wide-ai-tool-optimize-performance-american-people).

In this context, the latest OECD report in the Artificial Intelligence Papers series (May 2025) [4] presents a data-driven taxonomy of health roles based on their susceptibility to automation through GenAI and advanced robotics (AR). Jobs are classified into four risk categories: low risk, potential augmentation, potential automation, and high risk.

Low-risk occupations include roles that demand complex decision-making, interpersonal care, and ethical judgement. These are exemplified by physicians in general practice, psychiatry, and oncology. The inherently human dimensions of empathy, diagnostic reasoning, and shared decision-making seem to spare these roles from full automation, even as AI tools increasingly support clinical workflows.

Reportedly augmentable roles such as registered nurses and physician associates are characterized by a blend of routine and high-stakes tasks. While elements like triage, documentation, and remote monitoring are already enhanced by AI systems, many tasks like patient interaction still demand a human presence.

On the other hand, roles with reported high potential for automation include pharmacy technicians, radiologic technologists, and laboratory technicians. These positions involve highly structured, repetitive tasks susceptible to automation by AI-based image recognition, diagnostic platforms, and robotic dispensing systems. The OECD estimates that about 4.3% of the US health workforce falls into this “potential automation” category.

Finally, reportedly high-risk occupations such as orderlies and medical transcriptionists are described as replaceable by process automation tools and speech-to-text systems, accounting for 0.6% of the US health workforce in 2025.

Some considerations seem necessary. The reasoning and methodology behind the automatability scores assigned to healthcare occupations, while allowing for a detailed analysis, risk fragmenting healthcare roles into discrete functions and ignoring the integrative, relational, and holistic nature of care. This may underrepresent many critical aspects of healthcare work, such as clinical intuition, ethical decision-making, and emotional intelligence. If not properly translated into healthcare workforce skilling and employment processes, this scoring could produce more harm than benefit.

Another significant concern regards how AI adoption appears predominantly driven by private technology developers and healthcare solution providers, potentially undervaluing key experiential insights and ethical considerations from frontline healthcare professionals. This industry-led adoption risks prioritizing technological capabilities over patient care needs and workforce well-being. It is therefore imperative for policymakers to proactively ensure responsible and inclusive AI integration in the healthcare sector.

In this regard, initiatives such as the European Pact for Skills [5], promoting a multi-stakeholder approach to upskilling and reskilling, offer a valuable model. The decision of the European Public Health Association (EUPHA) to create a dedicated "digital health and artificial intelligence" section (https://eupha.org/digital-health-and-artificial-intelligence), for instance, is a commendable effort to enhance the AI-readiness of a health workforce sector. This is especially relevant considering that Preventive Medicine Physicians are reported to have one of the highest GenAI automatability scores (0.45) [4]. Similarly, in the US, programs like the National Initiative for Cybersecurity Education (NICE) [6] and broader federal investments in STEM education and workforce development, including AI-focused traineeships and reskilling programs promoted by the National Science Foundation and the Department of Labor, aim to equip the workforce for technological shifts. Adapting and expanding such frameworks specifically for the healthcare sector could ensure that the AI transition is guided by a broader coalition of stakeholders, including professional bodies, educational institutions, and workers themselves, thereby aligning technological advancement with the core values and practical needs of healthcare.

In conclusion, the digital transformation of healthcare is real, present, and risks becoming uneven. The OECD report provides crucial empirical evidence that health occupations are experiencing divergent exposure to automation and augmentation. These differences, deeply rooted in task complexity and clinical context, must inform all future policy on health workforce development. Avoiding obsolescence is not enough. We must invest in redesigning roles, workflows, and education to build a digitally empowered, AI-ready health workforce, one that harnesses the power of technology without sacrificing the human complexities of health care.

Other Information

Funding information

No funding was required.

Authors’ contributions

Conceptualization: MDP, FAC; investigation: MDP, FAC; project administration: MDP; supervision: SB, WR; writing – original draft: MDP, FAC; writing – review and editing: MDP, FAC, SB, WR.

Conflict of interest statement

The Authors declare no conflict of interest.

Generative AI use disclosure

During the preparation of this work the authors used Generative AI to ease the writing process. After using this tool, the authors reviewed and edited the text as needed and took full responsibility for the content of the publication.

Address for correspondence: Marcello Di Pumpo, Sezione di Igiene, Dipartimento di Scienze della Vita e Sanità Pubblica, Università Cattolica del Sacro Cuore, Largo F. Vito 1, 00168 Rome, Italy. E-mail: marcello.dipumpo@unicatt.it.

References

  1. Pavuluri S, Sangal R, Sather J, Taylor R. Balancing act: the complex role of artificial intelligence in addressing burnout and healthcare workforce dynamics. BMJ Health Care Inform. 2024;31(1).
  2. Chustecki M. Benefits and risks of AI in health care: narrative review. Interact J Med Res. 2024;13.
  3. Rony M, Parvin M, Ferdousi S. Advancing nursing practice with artificial intelligence: enhancing preparedness for the future. Nurs Open. 2024;11(1).
  4. Manca F, Eslava D. Digital and AI skills in health occupations: what do we know about new demand? OECD. 2025.
  5. Pact for skills large scale and regional partnerships: guidance handbook – Introducing and setting up skills partnerships. Publications Office of the European Union. 2022.
  6. Appendix 7: National Initiative for Cybersecurity Education (NICE) strategic plan. Supporting the growth and sustainment of the nation's cybersecurity workforce: building the foundation for a more secure American future. 2017.

Authors

Marcello Di Pumpo - Sezione di Igiene, Dipartimento di Scienze della Vita e Sanità Pubblica, Università Cattolica del Sacro Cuore, Rome, Italy

Francesco Andrea Causio - Società Italiana di Intelligenza Artificiale in Medicina (SIIAM), Rome, Italy

Stefania Boccia - Sezione di Igiene, Dipartimento di Scienze della Vita e Sanità Pubblica, Università Cattolica del Sacro Cuore, Rome, Italy

Walter Ricciardi - Sezione di Igiene, Dipartimento di Scienze della Vita e Sanità Pubblica, Università Cattolica del Sacro Cuore, Rome, Italy

How to Cite
Di Pumpo, M., Causio, F. A., Boccia, S., & Ricciardi, W. (2026). The double-edged sword of automation and the risks of AI’s uneven impact on healthcare professions: a comment on the OECD artificial intelligence papers report. Annali dell’Istituto Superiore Di Sanità, 62(1), 13–15. https://doi.org/10.4415/ANN_26_01_04