INTRODUCTION
Artificial intelligence (AI) is revolutionizing healthcare, ushering in a transformative era across diagnostics, therapeutics, prevention and organization. Recent scientific literature illustrates how AI tools are enhancing clinical accuracy, optimizing workflows, and reducing administrative load [1]. AI-powered algorithms now demonstrate high accuracy in multiple medical tasks. A 2023 umbrella review reported that AI systems for cancer imaging achieve strong diagnostic metrics. In breast imaging specifically, deep-learning tools improve detection rates, reduce false positives, and lighten radiologist workloads [2]. Public health and primary care also benefit greatly from this revolution [3, 4]. While much of AI’s success centers on single-use-case applications (e.g., image analysis or chatbot assistants), the bigger challenge lies in embedding AI within population health systems, i.e., those that manage the health of entire communities, chronic diseases, and service networks. Addressing this requires robust population-level strategies: comprehensive data infrastructures, interoperability, equitable algorithm design, governance frameworks, stakeholder engagement, and outcome-driven evaluation. Population health implementation must ensure that AI tools are interoperable across primary, secondary, and social care platforms, calibrated for diverse socioeconomic, age, and ethnic groups, aligned with policy standards, and dedicated to real-world application [5].
The current landscape of advanced healthcare technology is characterized by a significant gap between high-level scientific findings and standardized practical implementation. While literature demonstrates numerous cutting-edge experiences, these are typically isolated, existing as single-institution pilot studies or research trials with limited, non-diverse datasets. This restriction may compromise their generalizability and robustness when applied to new settings, a phenomenon supported by scientific evidence indicating potential performance drop-offs during external validation [6].
Comprehensive frameworks, spanning from ethics regulation (EU and Italian regulations) to implementation science (RE-AIM) [7], have been extensively developed to provide a structured, repeatable blueprint for responsible scaling. However, the practical implementation of these frameworks at the healthcare-system level remains scarce. This scarcity is principally due to systemic barriers, chief among them organizational and cultural inertia within risk-averse healthcare systems, the high cost and complexity of achieving interoperability across fragmented IT infrastructures such as electronic health records (EHRs), and the challenge of establishing standardized, sustainable reimbursement models, which prevents hospitals from demonstrating a clear, sustainable return on investment (ROI) [8, 9].
It is in this context that the Azienda ULSS 6 Euganea (AULSS6) Local Health Authority of the Padua Province has launched a comprehensive strategy to integrate AI within its healthcare and administrative systems according to the following dimensions: Governance, Ethical and Regulatory Assessment, Education, Technology Evaluation, Healthcare Organization and Public Health, Research, and Communication.
METHODS
The aim of this study was to provide a narrative account of an institutional multilevel AI implementation strategy at AULSS6. A series of structured meetings of a multidisciplinary Steering Committee was conducted between December 2024 and December 2025. Ideas, priorities and initiatives were systematically collected, categorized, and refined through consensus. The output of the analysis was reported descriptively.
RESULTS
With regard to Governance, the first step was the establishment, in November 2024 [10], of a multidisciplinary AI Steering Committee mandated to govern AI implementation in AULSS6. The Committee meets monthly and is composed of: the Director General, Administrative Director, Innovation and Organizational Development Director, Information and Communication Technology Director, Data Protection Officer, Department of Hospital Management Director, Department of Prevention Director, Department of Primary and Community Care Director, General Affairs Director, Anti-corruption Director, Clinical Engineering Director, and an AI scientific expert.
The EU AI Act [11] classifies healthcare AI as “high-risk”, mandating rigorous compliance with risk management, data quality, transparency, human oversight, and cybersecurity standards. The Steering Committee’s dedicated staff and the Data Protection Officer have provided an organizational AI Ethics and Readiness Checklist. The main principles to be upheld are: GDPR compliance, i.e., ensuring that all data processing activities, particularly the use of sensitive patient data for AI training, strictly adhere to the principles of data minimization, purpose limitation, and lawful basis for processing, alongside robust data subject rights management; data quality, i.e., guaranteeing that training, validation, and testing datasets are representative, accurate, and free from bias to prevent discriminatory outcomes; transparency and explainability, i.e., requiring clear documentation of how AI models function and the rationale behind their outputs, and implementing proactive strategies to identify and mitigate algorithmic bias and to enhance explainability (e.g., SHAP or LIME methodologies) for end-users and patients; human oversight, i.e., ensuring that autonomous decision-making remains subject to appropriate human intervention and validation; cybersecurity resilience, i.e., protecting sensitive health data and the AI infrastructure from attacks; and the human-in-the-loop (HITL) approach, i.e., mandating clearly defined protocols for human review, validation, and override of AI outputs, particularly for critical healthcare tasks, transforming the clinician’s role into that of an augmented decision-maker.
This last principle is highly emphasized by the AULSS6 strategy. Under this model, every decision is AI-supported, but AI is never intended as a substitute for humans. The final decision-makers are always healthcare professionals, and their decision is considered a healthcare act with the related responsibility. Professionals, however, never act alone, but always inside a framework of clear policies and regulations provided by the governance. Their daily duties are, in fact, embedded within a comprehensive care process, never resulting in automated or fragmented services but always upholding the importance of each healthcare-related act (e.g., diagnostic, therapeutic, etc.). AI implementation within complex clinical environments poses significant practical and organizational challenges that extend far beyond technical integration. A primary concern may be alert fatigue, where multiple AI warnings could cause clinicians to ignore truly critical alerts, and the risk of de-skilling, with over-reliance on accurate predictions (automation bias) potentially eroding fundamental professional capabilities and judgment. Moreover, the introduction of AI may complicate the handling of accountability and liability among the different actors of the process, namely technology providers, management, and end-users. Therefore, upholding the HITL principle requires not just technical integration but a fundamental, coordinated rethinking of clinical workflows, training, and risk management procedures.
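To make the transparency-and-explainability principle above more concrete, the sketch below computes model-agnostic feature importances on a synthetic dataset. It uses scikit-learn's permutation importance as a lighter-weight stand-in for the SHAP and LIME methodologies named in the checklist; the toy "readmission risk" features, the model choice, and all variable names are illustrative assumptions, not part of any AULSS6 system.

```python
# Illustrative sketch only: permutation importance as a simple,
# model-agnostic explainability proxy (the checklist names SHAP/LIME;
# this uses a lighter-weight scikit-learn alternative).
# The synthetic "readmission risk" features are invented for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
# Hypothetical standardized features: age, number of medications, prior visits
X = rng.normal(size=(n, 3))
# Synthetic outcome driven mostly by age, weakly by prior visits
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["age_z", "num_meds_z", "prior_visits_z"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In practice, importance scores of this kind would accompany each deployed model's documentation, so that the clinicians acting as final decision-makers can see which inputs drive a given prediction before validating or overriding it.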
In the domain of education, the Steering Committee developed a comprehensive training strategy to build awareness, skills, a shared vision and an ethical culture around AI in healthcare. This included the creation of multiple targeted courses and dissemination events: a foundational course on AI and large language models (LLMs) for healthcare and administrative professionals, hosted in collaboration with the Italian Society for Artificial Intelligence in Medicine (Società Italiana Intelligenza Artificiale in Medicina, SIIAM); a specialized program focused on ethics and regulatory frameworks tailored for the Scientific Committee; a regional-level Congress dedicated to AI implementation for hospital management; and the dissemination of free institutional learning materials. To enhance practical understanding and foster innovation, site visits to leading AI development centers were organized, offering hands-on exposure to real-world applications. These educational activities will also help guide the AI implementation process at various levels of the AULSS6 organization. Additionally, the results of scientific research were presented in dedicated sessions of international scientific conferences to broaden knowledge dissemination and stimulate interdisciplinary dialogue across the healthcare community.
In the area of technology evaluation, AULSS6 is producing a dedicated checklist to enable a thorough assessment of AI-based technologies under consideration for adoption, ensuring their compliance with current regulatory standards and the production of appropriate documentation, e.g., the fundamental rights impact assessment (FRIA), which systematically evaluates the potential impact of an AI system on users’ and patients’ fundamental rights (e.g., non-discrimination, privacy) [11]. AULSS6 developed and disseminated clear guidelines to support systematic technology evaluation across the organization. Additionally, AULSS6 is scanning for promising internal technologies as candidates for Health Technology Assessment (HTA) pilot studies, applying structured, dedicated frameworks [12] to assess the clinical effectiveness, cost-effectiveness, organizational impact, and ethical implications of a technology ahead of future widespread adoption. These efforts aim to promote evidence-based decision-making, enhance the value of technological investments, and ensure that innovation aligns with clinical needs and regulatory requirements.
Regarding public health and healthcare service organization, multiple tools are described in the literature addressing problems such as: staffing optimization, i.e., utilizing predictive modelling to forecast patient load and allocate clinical and administrative staff efficiently; hospital length-of-stay control, i.e., employing algorithms to identify patients at risk of prolonged stays, allowing for targeted intervention and care path modification; and emergency room (ER) overcrowding prevention, i.e., implementing real-time predictive models to forecast ER bottlenecks and manage patient flow proactively [1, 13, 14]. The same is true for public health interventions: early disease detection, i.e., leveraging AI for rapid analysis of complex data (e.g., genomic, imaging, or surveillance data) to identify early signs of disease, such as infectious disease outbreaks or chronic conditions [15]; and cancer screening programs, i.e., applying AI to enhance the efficiency and accuracy of image analysis in screening programs, improving detection rates and reducing false positives [4]. These solutions are being thoroughly evaluated at AULSS6 and selected for potential future implementation, always ensuring an ethical approach and upholding quality and safety standards.
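As a minimal illustration of the predictive-modelling approaches listed above, the sketch below fits a Poisson regression to synthetic daily ER arrival counts and flags days whose expected load exceeds a staffing threshold. The data, the day-of-week pattern, and the threshold are all invented for the example and do not reflect AULSS6 figures or any specific tool cited in the literature.

```python
# Illustrative sketch only: forecasting daily ER arrivals from a
# day-of-week pattern with a Poisson regression. All numbers are
# synthetic assumptions, not real AULSS6 data.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(42)
days = np.arange(365)
dow = days % 7                                      # day of week, 0..6
base = np.array([120, 110, 105, 100, 115, 90, 85])  # assumed daily means
visits = rng.poisson(base[dow])                     # simulated arrival counts

X = np.eye(7)[dow]                                  # one-hot day-of-week features
model = PoissonRegressor().fit(X, visits)

# Predict next week's expected load and flag days above a staffing threshold
pred = model.predict(np.eye(7))
THRESHOLD = 110                                     # hypothetical surge threshold
for d, p in enumerate(pred):
    flag = "SURGE-RISK" if p > THRESHOLD else "ok"
    print(f"day {d}: expected {p:.0f} arrivals [{flag}]")
```

A real deployment would of course use richer predictors (seasonality, epidemics, local events) and feed the flags into staff-allocation decisions reviewed by managers, consistent with the human-in-the-loop principle described earlier.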
With regard to scientific research, AULSS6 actively participated in funded research calls, engaging in a diverse set of projects addressing critical areas of innovation: from early detection of infectious disease outbreaks, i.e., utilizing advanced algorithms for the early identification and prediction of infectious disease spread patterns, to clinical outcome prediction, i.e., developing algorithms for the early prediction of critical outcomes in complex environments such as Intensive Care Units (ICUs), enabling timely interventions. To support and streamline these efforts, AULSS6 established a standardized submission process that ensures the efficient receipt, rigorous ethical and scientific evaluation, and enhanced potential for successful financing and execution of research proposals. This integrated approach not only fosters scientific collaboration with academic and industry partners but also accelerates the translation of research into practical, real-world solutions that enhance patient outcomes and strengthen the healthcare system’s preparedness and response capabilities.
Recognizing that the successful adoption of AI depends not only on technical and regulatory readiness but also on public understanding and healthcare professionals’ engagement, AULSS6 dedicated specific resources to strategic communication. Targeted internal and external communication is gradually being established to accompany the rollout of AI initiatives, addressing the specific concerns and information needs of different stakeholder groups. The dual goals of transparency and trust are maintained: the communication efforts are designed to ensure full transparency regarding the function, limitations, and impact of deployed AI systems, while actively working to enhance trust. Three core recipient groups have been identified: healthcare and administrative professionals, by clarifying AI’s role as a supportive tool rather than a replacement, focusing on the augmentation rather than the substitution of clinical practice; patients, by providing accessible information on how their data are used and how AI affects their care decisions; and institutional stakeholders, by demonstrating compliance, ethical diligence, and the strategic value of the AI investment. This integrated communication effort is crucial for fostering a culture of responsible innovation and ensuring participatory governance, both essential for the long-term, ethical, and successful integration of AI into healthcare delivery.
CONCLUSIONS
What makes the AULSS6 initiative particularly notable is that it emerges from a local health authority in direct contact with the community and its stakeholders, and directly responsible for delivering care to about one million residents of the Veneto region. By designing a comprehensive, compliant, and ethically grounded strategy for AI adoption, AULSS6 is demonstrating that innovation in AI can, and must, happen at the local level too, close to the citizens it ultimately serves. As such, AULSS6 further aims to provide a replicable and scalable model for other regional health systems across Italy and Europe, offering a powerful signal: the digital transformation of healthcare can be advanced with a bottom-up strategy complementing national and regional authorities’ guidance.
Other Information
Ethical approval
This study did not involve human participants, patient data, or experimental interventions. It was based on a descriptive analysis of institutional strategy and internal organizational processes. Therefore, ethical approval was not required.
Funding
No funding was required for the study.
Conflict of interest statement
The Authors declare no competing interests.
Address for correspondence: Marcello Di Pumpo, Azienda ULSS6 Euganea, Via Enrico degli Scrovegni 12, 35131 Padova. E-mail: marcello.dipumpo@aulss6.veneto.it
* These Authors equally contributed to the paper and share first authorship
