
Navigating the AI revolution: will radiology sink or soar?

Schlemmer HP

PubMed | Jul 31 2025
The rapid acceleration of digital transformation and artificial intelligence (AI) is fundamentally reshaping medicine. Much like previous technological revolutions, AI, driven by advances in computer technology and software including machine learning, computer vision, and generative models, is redefining cognitive work in healthcare. Radiology, as one of the first fully digitized medical specialties, is at the forefront of this transformation. AI is automating workflows, enhancing image acquisition and interpretation, and improving diagnostic precision, which collectively boost efficiency, reduce costs, and elevate patient care. Global data networks and AI-powered platforms are enabling borderless collaboration, empowering radiologists to focus on complex decision-making and patient interaction. Despite these profound opportunities, widespread AI adoption in radiology remains limited, often confined to specific use cases such as chest, neuro, and musculoskeletal imaging. Concerns persist regarding transparency, explainability, and the ethical use of AI systems, while unresolved questions about workload, liability, and reimbursement present additional hurdles. Psychological and cultural barriers, including fears of job displacement and diminished professional autonomy, also slow acceptance. However, history shows that disruptive innovations often encounter initial resistance. Just as the discovery of X-rays over a century ago ushered in a new era, today digitalization and artificial intelligence will drive another paradigm shift, this time through cognitive automation. To realize AI's full potential, radiologists must maintain clinical oversight and safeguard their professional identity, viewing AI as a supportive tool rather than a threat. Embracing AI will allow radiologists to elevate their profession, enhance interdisciplinary collaboration, and help shape the future of medicine. Achieving this vision requires not only technological readiness but also early integration of AI education into medical training. Ultimately, radiologists will not be replaced by AI, but by radiologists who effectively harness its capabilities.

A privacy preserving machine learning framework for medical image analysis using quantized fully connected neural networks with TFHE based inference.

Selvakumar S, Senthilkumar B

PubMed | Jul 30 2025
Medical image analysis using deep learning algorithms has become a cornerstone of modern healthcare, enabling early detection, diagnosis, treatment planning, and disease monitoring. However, sharing sensitive raw medical data with third parties for analysis raises significant privacy concerns. This paper presents a privacy-preserving machine learning (PPML) framework that uses a Fully Connected Neural Network (FCNN) for secure medical image analysis on the MedMNIST dataset. The proposed PPML framework leverages torus-based fully homomorphic encryption (TFHE) to ensure data privacy during inference, maintain patient confidentiality, and ensure compliance with privacy regulations. The FCNN model is trained in a plaintext environment and made FHE-compatible using Quantization-Aware Training to optimize weights and activations. The quantized FCNN model is then validated under FHE constraints through simulation and compiled into an FHE-compatible circuit for encrypted inference on sensitive data. The proposed framework is evaluated on the MedMNIST datasets to assess its accuracy and inference time in both plaintext and encrypted environments. Experimental results reveal that the PPML framework achieves a prediction accuracy of 88.2% in the plaintext setting and 87.5% during encrypted inference, with an average inference time of 150 milliseconds per image. This shows that FCNN models paired with TFHE-based encryption achieve high prediction accuracy on MedMNIST datasets with minimal performance degradation compared to unencrypted inference.
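
The paper does not include code, but the quantization-aware training step it describes can be sketched in plain PyTorch: low-bit fake quantization of weights and activations (with a straight-through gradient) is what makes a small fully connected network amenable to later compilation into a TFHE circuit. The 4-bit width, layer sizes, nine-class output, and dummy batch below are illustrative assumptions, not the authors' configuration, and the FHE compilation and encrypted inference steps are not shown.

```python
# Minimal sketch (not the authors' code): quantization-aware training of a small
# fully connected network on 28x28 medical images (MedMNIST-style input), using
# uniform fake quantization with a straight-through estimator. Low-bit weights and
# activations are what make the trained model amenable to TFHE-style encrypted
# inference; the bit width and layer sizes below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FakeQuant(torch.autograd.Function):
    """Uniform fake quantization to n_bits with a straight-through gradient."""

    @staticmethod
    def forward(ctx, x, n_bits):
        qmax = 2 ** (n_bits - 1) - 1
        scale = x.detach().abs().max().clamp(min=1e-8) / qmax
        return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None  # straight-through estimator


def fq(x, n_bits=4):
    return FakeQuant.apply(x, n_bits)


class QuantFCNN(nn.Module):
    """FCNN whose weights and activations are fake-quantized during training."""

    def __init__(self, in_dim=28 * 28, hidden=128, n_classes=9, n_bits=4):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_classes)
        self.n_bits = n_bits

    def forward(self, x):
        x = fq(x.flatten(1), self.n_bits)
        x = F.relu(F.linear(x, fq(self.fc1.weight, self.n_bits), self.fc1.bias))
        x = fq(x, self.n_bits)
        return F.linear(x, fq(self.fc2.weight, self.n_bits), self.fc2.bias)


if __name__ == "__main__":
    model = QuantFCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Dummy batch standing in for a MedMNIST data loader.
    images, labels = torch.rand(32, 1, 28, 28), torch.randint(0, 9, (32,))
    for _ in range(5):
        opt.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        opt.step()
```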

Risk inventory and mitigation actions for AI in medical imaging-a qualitative study of implementing standalone AI for screening mammography.

Gerigoorian A, Kloub M, Dembrower K, Engwall M, Strand F

PubMed | Jul 30 2025
Recent prospective studies have shown that AI may be integrated in double-reader settings to increase cancer detection. The ScreenTrustCAD study was conducted at the breast radiology department at the Capio S:t Göran Hospital, where AI is now implemented in clinical practice. This study reports on how the hospital prepared by exploring risks from an enterprise risk management perspective, i.e., applying a holistic and proactive approach, and developed risk mitigation actions. The study was conducted as an integral part of the preparations before implementing AI in a breast imaging department. Collaborative ideation sessions were conducted with personnel at the hospital, either directly or indirectly involved with AI, to identify risks. Two external experts with competencies in cybersecurity, machine learning, and the ethical aspects of AI were interviewed as a complement. The risks identified were analyzed according to an Enterprise Risk Management framework, adapted for healthcare, that assumes risks emerge from eight different domains. Finally, appropriate risk mitigation actions were identified and discussed. Twenty-three risks were identified, covering seven of the eight risk domains and in turn generating 51 suggested risk mitigation actions. Not only does the study indicate the emergence of patient safety risks, but it also shows that there are operational, strategic, financial, human capital, legal, and technological risks. The risks with the most suggested mitigation actions were ‘Radiographers unable to answer difficult questions from patients’, ‘Increased risk that patient-reported symptoms are missed by the single radiologist’, ‘Increased pressure on the single reader knowing they are the only radiologist to catch a mistake by AI’, and ‘The performance of the AI algorithm might deteriorate’. Before a clinical integration of AI, hospitals should expand, identify, and address risks beyond immediate patient safety by applying comprehensive and proactive risk management. The online version contains supplementary material available at 10.1186/s12913-025-13176-9.

Clinician Perspectives of a Magnetic Resonance Imaging-Based 3D Volumetric Analysis Tool for Neurofibromatosis Type 2-Related Schwannomatosis: Qualitative Pilot Study.

Desroches ST, Huang A, Ghankot R, Tommasini SM, Wiznia DH, Buono FD

PubMed | Jul 30 2025
Accurate monitoring of tumor progression is crucial for optimizing outcomes in neurofibromatosis type 2-related schwannomatosis. Standard 2D linear analysis on magnetic resonance imaging is less accurate than 3D volumetric analysis, but because 3D volumetric analysis is time-consuming, it is not widely used. To shorten the time required for 3D volumetric analysis, our lab has been developing an automated artificial intelligence-driven 3D volumetric tool. The objective of the study was to survey and interview clinicians treating neurofibromatosis type 2-related schwannomatosis to understand their views on current 2D analysis and to gather insights for the design of an artificial intelligence-driven 3D volumetric analysis tool. Interviews were examined for the following themes: (1) shortcomings of the currently used linear analysis, (2) utility of 3D visualizations, (3) features of an interactive 3D modeling software, and (4) lack of a gold standard to assess the accuracy of 3D volumetric analysis. A Likert scale questionnaire was used to survey clinicians' levels of agreement with 25 statements related to 2D and 3D tumor analyses. A total of 14 clinicians completed the survey, and 12 clinicians were interviewed. Specialties ranged across neurosurgery, neuroradiology, neurology, oncology, and pediatrics. Overall, clinicians expressed concerns with current linear techniques, agreeing that linear measurements can be variable, with the possibility of two different clinicians calculating two different tumor sizes (mean 4.64, SD 0.49), and that volumetric measurements would be more helpful for determining clearer thresholds of tumor growth (mean 4.50, SD 0.52). For statements discussing the capabilities of a 3D volumetric analysis and visualization software, clinicians expressed strong interest in being able to visualize tumors with respect to critical brain structures (mean 4.36, SD 0.74) and in forecasting tumor growth (mean 4.77, SD 0.44). Clinicians were overall in favor of adopting 3D volumetric analysis techniques for measuring vestibular schwannoma tumors but expressed concerns regarding the novelty of and inexperience surrounding these techniques. However, clinicians felt that the ability to visualize tumors with reference to critical structures, to overlay structures, to interact with 3D models, and to visualize areas of slow versus rapid growth in 3D would be valuable contributions to clinical practice. Overall, clinicians provided valuable insights for designing a 3D volumetric analysis tool for vestibular schwannoma tumor growth. These findings may also apply to other central nervous system tumors, offering broader utility in tumor growth assessments.
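
For readers unfamiliar with the measurement distinction the abstract draws, the contrast between 2D linear and 3D volumetric assessment can be illustrated with a minimal NumPy sketch: once a tumor mask is available (here a synthetic block, not output from the study's tool), volume follows directly from voxel count and spacing, whereas a linear size depends on which slice and diameter a reader chooses.

```python
# Minimal illustration (not the study's tool): once a tumor has been segmented on MRI,
# 3D volume follows directly from the binary mask and voxel spacing, whereas a 2D
# "linear" size depends on which slice and which diameter a reader happens to choose.
# The mask and spacing below are synthetic placeholders.
import numpy as np

mask = np.zeros((40, 256, 256), dtype=bool)      # (slices, rows, cols)
mask[15:25, 100:130, 110:150] = True             # hypothetical tumor region
spacing_mm = (2.0, 0.5, 0.5)                     # slice thickness, row spacing, col spacing

# 3D volumetric measurement: voxel count times voxel volume.
voxel_volume_mm3 = np.prod(spacing_mm)
volume_mm3 = mask.sum() * voxel_volume_mm3

# A simple 2D linear surrogate: largest in-plane extent on the largest axial slice.
areas = mask.sum(axis=(1, 2))
z = int(np.argmax(areas))
rows, cols = np.nonzero(mask[z])
diameter_mm = max((rows.max() - rows.min() + 1) * spacing_mm[1],
                  (cols.max() - cols.min() + 1) * spacing_mm[2])

print(f"volume: {volume_mm3 / 1000:.1f} cm^3, linear size on slice {z}: {diameter_mm:.1f} mm")
```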

Validating an explainable radiomics approach in non-small cell lung cancer combining high energy physics with clinical and biological analyses.

Monteleone M, Camagni F, Percio S, Morelli L, Baroni G, Gennai S, Govoni P, Paganelli C

PubMed | Jul 30 2025
This study aims to establish a validation framework for an explainable radiomics-based model, specifically targeting the classification of histopathological subtypes in non-small cell lung cancer (NSCLC) patients. We developed an explainable radiomics pipeline using open-access CT images from The Cancer Imaging Archive (TCIA). Our approach incorporates three key prongs: SHAP-based feature selection for explainability within the radiomics pipeline, a technical validation of the explainable technique using high energy physics (HEP) data, and a biological validation using RNA-sequencing data and clinical observations. Our radiomic model achieved an accuracy of 0.84 in the classification of the histological subtype. The technical validation, performed in the HEP domain over 150 numerically equivalent datasets maintaining consistent sample size and class imbalance, confirmed the reliability of the SHAP-based input features. The biological analysis found significant correlations between gene expression and CT-based radiomic features. In particular, the gene MUC21 achieved the highest correlation with the radiomic feature describing the 10th percentile of voxel intensities (r = 0.46, p < 0.05). This study presents a validation framework for explainable CT-based radiomics in lung cancer, combining HEP-driven technical validation with biological validation to enhance the interpretability, reliability, and clinical relevance of XAI models.
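
The authors' pipeline is not public, but SHAP-based feature selection of the kind described can be sketched as follows: rank candidate radiomic features by mean absolute SHAP value on held-out data and keep only the top-k. The synthetic feature matrix, the random-forest classifier, and the cutoff of ten features are assumptions for illustration, not the paper's configuration.

```python
# Hedged sketch of SHAP-based feature selection for a radiomics classifier (the
# paper's pipeline is not public; feature names, the random-forest model and the
# top-k cutoff here are illustrative assumptions). Features are ranked by mean
# absolute SHAP value on held-out data and only the top-k are retained.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_features, k = 200, 50, 10
X = rng.normal(size=(n_patients, n_features))               # stand-in radiomic features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)
feature_names = [f"radiomic_{i}" for i in range(n_features)]

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

sv = shap.TreeExplainer(model).shap_values(X_val)
sv = sv[1] if isinstance(sv, list) else sv       # older SHAP: list of per-class arrays
if sv.ndim == 3:                                 # newer SHAP: (samples, features, classes)
    sv = sv[..., 1]
mean_abs_shap = np.abs(sv).mean(axis=0)

top_k = np.argsort(mean_abs_shap)[::-1][:k]      # indices of the k most influential features
selected = [feature_names[i] for i in top_k]
print("selected features:", selected)
```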

Patient Perspectives on Artificial Intelligence in Medical Imaging.

Glenning J, Gualtieri L

PubMed | Jul 28 2025
Artificial intelligence (AI) is reshaping medical imaging with the promise of improved diagnostic accuracy and efficiency. Yet its ethical and effective adoption depends not only on technical excellence but also on aligning implementation with patient perspectives. This commentary synthesizes emerging research on how patients perceive AI in radiology: patients express cautious optimism, a desire for transparency, and a strong preference for human oversight. They consistently view AI as a supportive tool rather than a replacement for clinicians. We argue that centering patient voices is essential to sustaining trust, preserving the human connection in care, and ensuring that AI serves as a truly patient-centered innovation. The path forward requires participatory approaches, ethical safeguards, and transparent communication to ensure that AI enhances, rather than diminishes, the values patients hold most dear.

Towards trustworthy artificial intelligence in musculoskeletal medicine: A narrative review on uncertainty quantification.

Vahdani AM, Shariatnia M, Rajpurkar P, Pareek A

PubMed | Jul 28 2025
Deep learning (DL) models have achieved remarkable performance in musculoskeletal (MSK) medical imaging research, yet their clinical integration remains hindered by their black-box nature and the absence of reliable confidence measures. Uncertainty quantification (UQ) seeks to bridge this gap by providing each DL prediction with a calibrated estimate of uncertainty, thereby fostering clinician trust and safer deployment. We conducted a targeted narrative review, performing expert-driven searches in PubMed, Scopus, and arXiv and mining references from relevant publications in MSK imaging that utilize UQ, and used a thematic synthesis to derive a cohesive taxonomy of UQ methodologies. UQ approaches encompass multi-pass methods (e.g., test-time augmentation, Monte Carlo dropout, and model ensembling) that infer uncertainty from variability across repeated inferences; single-pass methods (e.g., conformal prediction and evidential deep learning) that augment each individual prediction with uncertainty metrics; and other techniques that leverage auxiliary information, such as inter-rater variability, hidden-layer activations, or generative reconstruction errors, to estimate confidence. Applications in MSK imaging include highlighting uncertain areas in cartilage segmentation and identifying uncertain predictions in joint implant design detection; downstream applications include enhanced clinical utility and more efficient data annotation pipelines. Embedding UQ into DL workflows is essential for translating high-performance models into clinical practice. Future research should prioritize robust out-of-distribution handling, computational efficiency, and standardized evaluation metrics to accelerate the adoption of trustworthy AI in MSK medicine.
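
As a concrete example of the multi-pass methods the review covers, Monte Carlo dropout can be sketched in a few lines: dropout is left active at inference time, and the spread over repeated stochastic forward passes yields a per-case uncertainty. The tiny classifier, dropout rate, and 30 passes below are illustrative assumptions, not drawn from any reviewed study.

```python
# Minimal Monte Carlo dropout sketch (one of the multi-pass UQ methods the review
# describes, not any specific paper's implementation): dropout stays active at
# inference, and the spread of repeated stochastic forward passes serves as a
# per-case uncertainty estimate. The tiny classifier and T=30 passes are assumptions.
import torch
import torch.nn as nn


class DropoutClassifier(nn.Module):
    def __init__(self, in_dim=256, n_classes=2, p=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)


@torch.no_grad()
def mc_dropout_predict(model, x, n_passes=30):
    model.train()  # keep dropout stochastic at inference time (MC dropout)
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_passes)]
    )                                            # (n_passes, batch, classes)
    mean_prob = probs.mean(dim=0)
    # Predictive entropy as a simple scalar uncertainty per case.
    entropy = -(mean_prob * mean_prob.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_prob, entropy


model = DropoutClassifier()
features = torch.rand(4, 256)                    # stand-in imaging-derived features
mean_prob, uncertainty = mc_dropout_predict(model, features)
print(mean_prob, uncertainty)
```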

Semantics versus Identity: A Divide-and-Conquer Approach towards Adjustable Medical Image De-Identification

Yuan Tian, Shuo Wang, Rongzhao Zhang, Zijian Chen, Yankai Jiang, Chunyi Li, Xiangyang Zhu, Fang Yan, Qiang Hu, XiaoSong Wang, Guangtao Zhai

arXiv preprint | Jul 25 2025
Medical imaging has significantly advanced computer-aided diagnosis, yet its re-identification (ReID) risks raise critical privacy concerns, calling for de-identification (DeID) techniques. Unfortunately, existing DeID methods neither particularly preserve medical semantics nor are flexibly adjustable to different privacy levels. To address these issues, we propose a divide-and-conquer framework comprising two steps: (1) Identity-Blocking, which blocks varying proportions of identity-related regions to achieve different privacy levels; and (2) Medical-Semantics-Compensation, which leverages pre-trained Medical Foundation Models (MFMs) to extract medical semantic features that compensate for the blocked regions. Moreover, recognizing that features from MFMs may still contain residual identity information, we introduce a feature decoupling strategy based on the Minimum Description Length principle to effectively decouple and discard such identity components. Extensive evaluations against existing approaches across seven datasets and three downstream tasks demonstrate our state-of-the-art performance.
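
The Identity-Blocking idea in the abstract can be illustrated with a hedged sketch: given per-patch identity-relevance scores from some upstream model (assumed here and filled with random values), mask the top fraction of patches, with the blocking ratio acting as the privacy level. The patch size, scoring, and fill value are assumptions, and the Medical-Semantics-Compensation step that follows in the paper is not shown.

```python
# Hedged sketch of the "Identity-Blocking" step from the abstract (not the authors'
# implementation): given an image and a per-patch identity-relevance score produced
# by some upstream model (assumed here, supplied as random values), mask the top
# `block_ratio` fraction of patches. Raising block_ratio corresponds to a stricter
# privacy level; a semantics-compensation step would then fill the blocked regions.
import numpy as np


def block_identity_patches(image, relevance, patch=16, block_ratio=0.3, fill=0.0):
    """Zero out the most identity-relevant patches of a 2D image."""
    h, w = image.shape
    gh, gw = h // patch, w // patch
    scores = relevance[:gh * gw].reshape(gh, gw)
    n_block = int(round(block_ratio * gh * gw))
    # Indices of the n_block highest-scoring patches.
    flat_idx = np.argsort(scores, axis=None)[::-1][:n_block]
    out = image.copy()
    for idx in flat_idx:
        r, c = divmod(int(idx), gw)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = fill
    return out


rng = np.random.default_rng(0)
img = rng.random((256, 256))                      # stand-in medical image
patch_scores = rng.random(16 * 16)                # hypothetical identity-relevance scores
anonymized_30 = block_identity_patches(img, patch_scores, block_ratio=0.3)
anonymized_70 = block_identity_patches(img, patch_scores, block_ratio=0.7)  # stricter level
```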

Patient Perspectives on Artificial Intelligence in Health Care: Focus Group Study for Diagnostic Communication and Tool Implementation.

Foresman G, Biro J, Tran A, MacRae K, Kazi S, Schubel L, Visconti A, Gallagher W, Smith KM, Giardina T, Haskell H, Miller K

PubMed | Jul 24 2025
Artificial intelligence (AI) is rapidly transforming health care, offering potential benefits in diagnosis, treatment, and workflow efficiency. However, limited research explores patient perspectives on AI, especially in its role in diagnosis and communication. This study examines patient perceptions of various AI applications, focusing on the diagnostic process and communication. This study aimed to examine patient perspectives on AI use in health care, particularly in diagnostic processes and communication, identifying key concerns, expectations, and opportunities to guide the development and implementation of AI tools. This study used a qualitative focus group methodology with co-design principles to explore patient and family member perspectives on AI in clinical practice. A single 2-hour session was conducted with 17 adult participants. The session included interactive activities and breakout sessions focused on five specific AI scenarios relevant to diagnosis and communication: (1) portal messaging, (2) radiology review, (3) digital scribe, (4) virtual human, and (5) decision support. The session was audio-recorded and transcribed, with facilitator notes and demographic questionnaires collected. Data were analyzed using inductive thematic analysis by 2 independent researchers (GF and JB), with discrepancies resolved via consensus. Participants reported varying comfort levels with AI applications contingent on the level of patient interaction, with digital scribe (average 4.24, range 2-5) and radiology review (average 4.00, range 2-5) being the highest, and virtual human (average 1.68, range 1-4) being the lowest. In total, five cross-cutting themes emerged: (1) validation (concerns about model reliability), (2) usability (impact on diagnostic processes), (3) transparency (expectations for disclosing AI usage), (4) opportunities (potential for AI to improve care), and (5) privacy (concerns about data security). Participants valued the co-design session and felt they had a significant say in the discussions. This study highlights the importance of incorporating patient perspectives in the design and implementation of AI tools in health care. Transparency, human oversight, clear communication, and data privacy are crucial for patient trust and acceptance of AI in diagnostic processes. These findings inform strategies for individual clinicians, health care organizations, and policy makers to ensure responsible and patient-centered AI deployment in health care.

The impacts of artificial intelligence on the workload of diagnostic radiology services: A rapid review and stakeholder contextualisation

Sutton, C., Prowse, J., Elshehaly, M., Randell, R.

medRxiv preprint | Jul 24 2025
Background: Advancements in imaging technology, alongside increasing longevity and co-morbidities, have led to heightened demand for diagnostic radiology services. However, there is a shortfall in radiology and radiography staff to acquire, read and report on such imaging examinations. Artificial intelligence (AI) has been identified, notably by AI developers, as a potential solution for positively impacting the workload of diagnostic radiology services and addressing this staffing shortfall. Methods: A rapid review, complemented with data from interviews with UK radiology service stakeholders, was undertaken. ArXiv, Cochrane Library, Embase, Medline and Scopus databases were searched for publications in English published between 2007 and 2022. Following screening, 110 full texts were included. Interviews with 15 radiology service managers, clinicians and academics were carried out between May and September 2022. Results: Most literature was published in 2021 and 2022, with a distinct focus on AI for diagnostics of lung and chest disease (n = 25), notably COVID-19 and respiratory system cancers, closely followed by AI for breast screening (n = 23). AI contributions to streamlining the workload of radiology services were categorised as autonomous, augmentative and assistive. However, percentage estimates of workload reduction varied considerably, with the most significant reduction identified in national screening programmes. AI was also recognised as aiding radiology services by providing a second opinion, assisting in the prioritisation of images for reading, and improving quantification in diagnostics. Stakeholders saw AI as having the potential to remove some of the laborious work and contribute to service resilience. Conclusions: This review has shown that there is limited data on real-world experiences from radiology services with the implementation of AI in clinical production. Autonomous, augmentative and assistive AI can, as noted in the article, decrease workload and aid reading and reporting; however, the governance surrounding these advancements lags.