The integration of artificial intelligence (AI) into healthcare is rapidly transforming practice. AI enhances accuracy, enables the personalization of treatment plans, and streamlines administrative tasks. AI-driven tools have demonstrated potential in reducing the administrative and cognitive burdens that contribute to clinician burnout [1].
Despite these advancements, the adoption of AI in healthcare raises significant concerns. A study highlights the importance of addressing ethical and safety challenges to ensure that AI technologies are developed and deployed responsibly [2]. A paper exploring the implications of AI for the nursing workforce emphasizes the need for strategic planning to manage these transitions effectively [3]. The Food and Drug Administration (FDA) recently announced the adoption of “Elsa”, a generative AI (GenAI) tool designed to “help employees, from scientific reviewers to investigators, work more efficiently” (https://www.fda.gov/news-events/press-announcements/fda-launches-agency-wide-ai-tool-optimize-performance-american-people).
In this context, the latest report in the OECD Artificial Intelligence Papers series (May 2025) [4] presents a data-driven taxonomy of health roles based on their susceptibility to automation through GenAI and advanced robotics (AR). Jobs are classified into four risk categories: low risk, potential augmentation, potential automation, and high risk.
Low-risk occupations include roles that demand complex decision-making, interpersonal care, and ethical judgement. These are exemplified by physicians in general practice, psychiatry, and oncology. The inherently human dimensions of empathy, diagnostic reasoning, and shared decision-making seem to spare these roles from full automation, even as AI tools increasingly support clinical workflows.
Reportedly augmentable roles, such as registered nurses and physician associates, are characterized by a blend of routine and high-stakes tasks. While elements like triage, documentation, and remote monitoring are already enhanced by AI systems, tasks such as direct patient interaction still demand a human presence.
On the other hand, roles with reported high potential for automation include pharmacy technicians, radiologic technologists, and laboratory technicians. These positions involve highly structured, repetitive tasks susceptible to automation by AI-based image recognition, diagnostic platforms, and robotic dispensing systems. The OECD estimates that about 4.3% of the US health workforce falls into this “potential automation” category.
Finally, reportedly high-risk occupations such as orderlies and medical transcriptionists are described as replaceable by process automation tools and speech-to-text systems; these roles accounted for 0.6% of the US health workforce in 2025.
Some considerations seem necessary. While the reasoning and methodology behind the automatability scores assigned to healthcare occupations allow for a detailed analysis, they risk fragmenting healthcare roles into discrete functions and ignoring the integrative, relational, and holistic nature of care. This may underrepresent critical aspects of healthcare work such as clinical intuition, ethical decision-making, and emotional intelligence. If not properly translated into healthcare workforce skilling and employment processes, such scoring could produce more harm than benefit.
Another significant concern is that AI adoption appears to be predominantly driven by private technology developers and healthcare solution providers, potentially undervaluing key experiential insights and ethical considerations from frontline healthcare professionals. This industry-led adoption risks prioritizing technological capabilities over patient care needs and workforce well-being. It is therefore imperative for policymakers to proactively ensure responsible and inclusive AI integration in the healthcare sector.
In this regard, initiatives such as the European Pact for Skills [5], which promotes a multi-stakeholder approach to upskilling and reskilling, offer a valuable model. The decision of the European Public Health Association (EUPHA) to create a dedicated section on digital health and artificial intelligence (https://eupha.org/digital-health-and-artificial-intelligence), for instance, is a commendable effort to enhance the AI-readiness of the public health workforce. This is especially relevant considering that preventive medicine physicians are reported to have one of the highest GenAI automatability scores (0.45) [4]. Similarly, in the US, programs like the National Initiative for Cybersecurity Education (NICE) [6], as well as broader federal investments in STEM education and workforce development, including AI-focused traineeships and reskilling programs promoted by the National Science Foundation and the Department of Labor, aim to equip the workforce for technological shifts. Adapting and expanding such frameworks specifically for the healthcare sector could ensure that the AI transition is guided by a broader coalition of stakeholders, including professional bodies, educational institutions, and workers themselves, thereby aligning technological advancement with the core values and practical needs of healthcare.
In conclusion, the digital transformation of healthcare is real, present, and risks becoming uneven. The report provides crucial empirical evidence that health occupations are experiencing divergent exposure to automation and augmentation. These differences, deeply rooted in task complexity and clinical context, must inform all future policy on health workforce development. Avoiding obsolescence is not enough. We must invest in redesigning roles, workflows, and education to build a digitally empowered, AI-ready health workforce, one that harnesses the power of technology without sacrificing the human complexities of health care.
Other Information
Funding information
No funding was required.
Authors’ contributions
Conceptualization: MDP, FAC; investigation: MDP, FAC; project administration: MDP; supervision: SB, WR; writing – original draft: MDP, FAC; writing – review and editing: MDP, FAC, SB, WR.
Conflict of interest statement
The Authors declare no conflict of interest.
Generative AI use disclosure
During the preparation of this work, the authors used generative AI to ease the writing process. After using this tool, the authors reviewed and edited the text as needed and take full responsibility for the content of the publication.
Address for correspondence: Marcello Di Pumpo, Sezione di Igiene, Dipartimento di Scienze della Vita e Sanità Pubblica, Università Cattolica del Sacro Cuore, Largo F. Vito 1, 00168 Rome, Italy. E-mail: marcello.dipumpo@unicatt.it.
