Page 15 of 17169 results

Fetal origins of adult disease: transforming prenatal care by integrating Barker's Hypothesis with AI-driven 4D ultrasound.

Andonotopo W, Bachnas MA, Akbar MIA, Aziz MA, Dewantiningrum J, Pramono MBA, Sulistyowati S, Stanojevic M, Kurjak A

PubMed · May 26 2025
The fetal origins of adult disease hypothesis, widely known as Barker's Hypothesis, suggests that adverse fetal environments significantly increase the risk of developing chronic diseases, such as diabetes and cardiovascular conditions, in adulthood. Recent advancements in 4D ultrasound (4D US) and artificial intelligence (AI) technologies offer a promising avenue for improving prenatal diagnostics and validating this hypothesis. These innovations provide detailed insights into fetal behavior and neurodevelopment, linking early developmental markers to long-term health outcomes. This study synthesizes contemporary developments in AI-enhanced 4D US, focusing on their roles in detecting fetal anomalies, assessing neurodevelopmental markers, and evaluating congenital heart defects. The integration of AI with 4D US allows for real-time, high-resolution visualization of fetal anatomy and behavior, surpassing the diagnostic precision of traditional methods. Despite these advancements, challenges such as algorithmic bias, data diversity, and real-world validation persist and require further exploration. Findings demonstrate that AI-driven 4D US improves diagnostic sensitivity and accuracy, enabling earlier detection of fetal abnormalities and optimization of clinical workflows. By providing a more comprehensive understanding of fetal programming, these technologies substantiate the links between early-life conditions and adult health outcomes, as proposed by Barker's Hypothesis. The integration of AI and 4D US has the potential to revolutionize prenatal care, paving the way for personalized maternal-fetal healthcare. Future research should focus on addressing current limitations, including ethical concerns and accessibility challenges, to promote equitable implementation. Such advancements could significantly reduce the global burden of chronic diseases and foster healthier generations.

Applications of artificial intelligence in abdominal imaging.

Gupta A, Rajamohan N, Bansal B, Chaudhri S, Chandarana H, Bagga B

PubMed · May 26 2025
The rapid advancements in artificial intelligence (AI) promise to reshape abdominal imaging by offering transformative solutions to challenges in disease detection, classification, and personalized care. AI applications, particularly those leveraging deep learning and radiomics, have demonstrated remarkable accuracy in detecting a wide range of abdominal conditions, including but not limited to diffuse liver parenchymal disease, focal liver lesions, pancreatic ductal adenocarcinoma (PDAC), renal tumors, and bowel pathologies. These models excel in the automation of tasks such as segmentation, classification, and prognostication across modalities like ultrasound, CT, and MRI, often surpassing traditional diagnostic methods. Despite these advancements, widespread adoption remains limited by challenges such as data heterogeneity, lack of multicenter validation, reliance on retrospective single-center studies, and the "black box" nature of many AI models, which hinder interpretability and clinician trust. The absence of standardized imaging protocols and reference gold standards further complicates integration into clinical workflows. To address these barriers, future directions emphasize collaborative multi-center efforts to generate diverse, standardized datasets, integration of explainable AI frameworks into existing picture archiving and communication systems, and the development of automated, end-to-end pipelines capable of processing multi-source data. Targeted clinical applications, such as early detection of PDAC, improved segmentation of renal tumors, and better risk stratification in liver diseases, show potential to refine diagnostic accuracy and therapeutic planning. Ethical considerations, such as data privacy, regulatory compliance, and interdisciplinary collaboration, are essential for successful translation into clinical practice.
AI's transformative potential in abdominal imaging lies not only in complementing radiologists but also in fostering precision medicine by enabling faster, more accurate, and patient-centered care. Overcoming current limitations through innovation and collaboration will be pivotal in realizing AI's full potential to improve patient outcomes and redefine the landscape of abdominal radiology.

Optimizing the power of AI for fracture detection: from blind spots to breakthroughs.

Behzad S, Eibschutz L, Lu MY, Gholamrezanezhad A

PubMed · May 23 2025
Artificial Intelligence (AI) is increasingly being integrated into the field of musculoskeletal (MSK) radiology, from research methods to routine clinical practice. Within the field of fracture detection, AI is allowing for precision and speed previously unimaginable. Yet, AI's decision-making processes are sometimes fraught with deficiencies, undermining trust, hindering accountability, and compromising diagnostic precision. To make AI a trusted ally for radiologists, we recommend incorporating clinical history, rationalizing AI decisions through explainable AI (XAI) techniques, increasing the variety and scale of training data to approach the complexity of a clinical situation, and fostering active interaction between clinicians and developers. By bridging these gaps, the true potential of AI can be unlocked, enhancing patient outcomes and fundamentally transforming radiology through a harmonious integration of human expertise and intelligent technology. In this article, we aim to examine the factors contributing to AI inaccuracies and offer recommendations to address these challenges-benefiting both radiologists and developers striving to improve future algorithms.

Patient Reactions to Artificial Intelligence-Clinician Discrepancies: Web-Based Randomized Experiment.

Madanay F, O'Donohue LS, Zikmund-Fisher BJ

PubMed · May 22 2025
As the US Food and Drug Administration (FDA)-approved use of artificial intelligence (AI) for medical imaging rises, radiologists are increasingly integrating AI into their clinical practices. In lung cancer screening, diagnostic AI offers a second set of eyes with the potential to detect cancer earlier than human radiologists. Despite AI's promise, a potential problem with its integration is the erosion of patient confidence in clinician expertise when there is a discrepancy between the radiologist's and the AI's interpretation of the imaging findings. We examined how discrepancies between AI-derived recommendations and radiologists' recommendations affect patients' agreement with radiologists' recommendations and satisfaction with their radiologists. We also analyzed how patients' medical maximizing-minimizing preferences moderate these relationships. We conducted a randomized, between-subjects experiment with 1606 US adult participants. Assuming the role of patients, participants imagined undergoing a low-dose computed tomography scan for lung cancer screening and receiving results and recommendations from (1) a radiologist only, (2) AI and a radiologist in agreement, (3) a radiologist who recommended more testing than AI (ie, radiologist overcalled AI), or (4) a radiologist who recommended less testing than AI (ie, radiologist undercalled AI). Participants rated the radiologist on three criteria: agreement with the radiologist's recommendation, how likely they would be to recommend the radiologist to family and friends, and how good of a provider they perceived the radiologist to be. We measured medical maximizing-minimizing preferences and categorized participants as maximizers (ie, those who seek aggressive intervention), minimizers (ie, those who prefer no or passive intervention), and neutrals (ie, those in the middle).
Participants' agreement with the radiologist's recommendation was significantly lower when the radiologist undercalled AI (mean 4.01, SE 0.07, P<.001) than in the other 3 conditions, with no significant differences among them (radiologist overcalled AI [mean 4.63, SE 0.06], agreed with AI [mean 4.55, SE 0.07], or had no AI [mean 4.57, SE 0.06]). Similarly, participants were least likely to recommend (P<.001) and positively rate (P<.001) the radiologist who undercalled AI, with no significant differences among the other conditions. Maximizers agreed with the radiologist who overcalled AI (β=0.82, SE 0.14; P<.001) and disagreed with the radiologist who undercalled AI (β=-0.47, SE 0.14; P=.001). However, whereas minimizers disagreed with the radiologist who overcalled AI (β=-0.43, SE 0.18, P=.02), they did not significantly agree with the radiologist who undercalled AI (β=0.14, SE 0.17, P=.41). Radiologists who recommend less testing than AI may face decreased patient confidence in their expertise, but they may not face this same penalty for giving more aggressive recommendations than AI. Patients' reactions may depend in part on whether their general preferences to maximize or minimize align with the radiologists' recommendations. Future research should test communication strategies for radiologists' disclosure of AI discrepancies to patients.

Adversarial artificial intelligence in radiology: Attacks, defenses, and future considerations.

Dietrich N, Gong B, Patlas MN

PubMed · May 21 2025
Artificial intelligence (AI) is rapidly transforming radiology, with applications spanning disease detection, lesion segmentation, workflow optimization, and report generation. As these tools become more integrated into clinical practice, new concerns have emerged regarding their vulnerability to adversarial attacks. This review provides an in-depth overview of adversarial AI in radiology, a topic of growing relevance in both research and clinical domains. It begins by outlining the foundational concepts and model characteristics that make machine learning systems particularly susceptible to adversarial manipulation. A structured taxonomy of attack types is presented, including distinctions based on attacker knowledge, goals, timing, and computational frequency. The clinical implications of these attacks are then examined across key radiology tasks, with literature highlighting risks to disease classification, image segmentation and reconstruction, and report generation. Potential downstream consequences such as patient harm, operational disruption, and loss of trust are discussed. Current mitigation strategies are reviewed, spanning input-level defenses, model training modifications, and certified robustness approaches. In parallel, the role of broader lifecycle and safeguard strategies is considered. By consolidating current knowledge across technical and clinical domains, this review helps identify gaps, inform future research priorities, and guide the development of robust, trustworthy AI systems in radiology.
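The gradient-based attacks this review surveys can be illustrated with a minimal Fast Gradient Sign Method (FGSM) sketch. The logistic-regression "model", its weights, and the input below are all invented for illustration; a real attack would perturb a scan using the trained network's gradients rather than this toy classifier.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fixed weights standing in for a trained classifier
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def loss(x, y):
    # Binary cross-entropy for a single example
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, eps=0.1):
    # Gradient of the cross-entropy loss w.r.t. the input:
    # dL/dx = (sigmoid(w.x + b) - y) * w
    grad_x = (sigmoid(w @ x + b) - y) * w
    # FGSM: take one bounded step in the direction that increases the loss
    return x + eps * np.sign(grad_x)

x = np.array([0.2, -0.4, 1.0])
y = 1.0
x_adv = fgsm(x, y, eps=0.1)
print(loss(x, y), loss(x_adv, y))  # adversarial loss is strictly larger
```

Input-level defenses of the kind the review discusses (e.g., denoising or bounding perturbations) aim to undo exactly this sort of small, structured shift.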

Mask of Truth: Model Sensitivity to Unexpected Regions of Medical Images.

Sourget T, Hestbek-Møller M, Jiménez-Sánchez A, Junchi Xu J, Cheplygina V

PubMed · May 20 2025
The development of larger models for medical image analysis has led to increased performance. However, it has also affected our ability to explain and validate model decisions. Models can use non-relevant parts of images, also called spurious correlations or shortcuts, to obtain high performance on benchmark datasets but fail in real-world scenarios. In this work, we challenge the capacity of convolutional neural networks (CNNs) to classify chest X-rays and eye fundus images while masking out clinically relevant parts of the image. We show that all models trained on the PadChest dataset, irrespective of the masking strategy, are able to obtain an area under the curve (AUC) above random. Moreover, the models trained on full images obtain good performance on images without the region of interest (ROI), even superior to that obtained on images containing only the ROI. We also reveal a possible spurious correlation in the Chákṣu dataset, although performance there is more closely aligned with what an unbiased model would be expected to achieve. We go beyond performance analysis by using the explainability method SHAP and analyzing embeddings. We also asked a radiology resident to interpret chest X-rays under different masking conditions to complement our findings with clinical knowledge.
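The paper's masking experiment can be sketched on synthetic data. In this toy setup (image size, signal strengths, scoring rule, and the border artifact are all hypothetical), a disease signal lives inside a central ROI, but a scanner-like border artifact also correlates with the label; a scorer that only reads the border still separates the classes after the ROI is masked out, yielding an AUC well above random.

```python
import numpy as np

rng = np.random.default_rng(0)

def mann_whitney_auc(pos, neg):
    # AUC equals the probability a positive example outscores a negative one
    pos, neg = np.asarray(pos), np.asarray(neg)
    wins = sum(np.sum(p > neg) + 0.5 * np.sum(p == neg) for p in pos)
    return wins / (len(pos) * len(neg))

def make_image(label):
    # Synthetic 32x32 "scan": true signal inside a central ROI,
    # plus a spurious border marker (the shortcut) for positives
    img = rng.normal(0.0, 1.0, (32, 32))
    if label == 1:
        img[12:20, 12:20] += 2.0   # clinically relevant ROI signal
        img[0, :] += 1.0           # spurious border artifact
    return img

pos = [make_image(1) for _ in range(100)]
neg = [make_image(0) for _ in range(100)]

def masked_border_score(img):
    # Mask out the clinically relevant region, then score by the border
    # row mean -- a stand-in for a model exploiting the shortcut
    m = img.copy()
    m[12:20, 12:20] = 0.0
    return m[0, :].mean()

auc = mann_whitney_auc([masked_border_score(i) for i in pos],
                       [masked_border_score(i) for i in neg])
print(f"AUC without the ROI: {auc:.2f}")  # well above 0.5 despite masking
```

This mirrors the paper's finding only in spirit: above-random AUC with the ROI removed is evidence that something outside the ROI carries label information.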

An explainable AI-driven deep neural network for accurate breast cancer detection from histopathological and ultrasound images.

Alom MR, Farid FA, Rahaman MA, Rahman A, Debnath T, Miah ASM, Mansor S

PubMed · May 20 2025
Breast cancer represents a significant global health challenge, which makes it essential to detect breast cancer early and accurately to improve patient prognosis and reduce mortality rates. However, traditional diagnostic processes relying on manual analysis of medical images are inherently complex and subject to variability between observers, highlighting the urgent need for robust automated breast cancer detection systems. While deep learning has demonstrated potential, many current models struggle with limited accuracy and lack of interpretability. This research introduces the Deep Neural Breast Cancer Detection (DNBCD) model, an explainable AI-based framework that utilizes deep learning methods for classifying breast cancer using histopathological and ultrasound images. The proposed model employs DenseNet121 as a foundation, integrating customized Convolutional Neural Network (CNN) layers including GlobalAveragePooling2D, Dense, and Dropout layers along with transfer learning to achieve both high accuracy and interpretability for breast cancer diagnosis. The proposed DNBCD model integrates several preprocessing techniques, including image normalization and resizing, and augmentation techniques to enhance the model's robustness and address class imbalance using class weights. It employs Grad-CAM (Gradient-weighted Class Activation Mapping) to offer visual justifications for its predictions, increasing trust and transparency among healthcare providers. The model was assessed using two benchmark datasets: BreakHis-400x (B-400x) and the Breast Ultrasound Images Dataset (BUSI), containing 1820 and 1578 images, respectively. We systematically divided the datasets into training (70%), testing (20%), and validation (10%) sets, ensuring efficient model training and evaluation, and obtained accuracies of 93.97% on the B-400x dataset (benign and malignant classes) and 89.87% on the BUSI dataset (benign, malignant, and normal classes).
Experimental results demonstrate that the proposed DNBCD model significantly outperforms existing state-of-the-art approaches with potential uses in clinical environments. We also made all the materials publicly accessible for the research community at: https://github.com/romzanalom/XAI-Based-Deep-Neural-Breast-Cancer-Detection .
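The 70/20/10 split and class-weight handling described above can be sketched as follows. The per-class label counts are hypothetical (the abstract gives only the 1820-image total for B-400x, not its breakdown), and the weight formula is the standard "balanced" heuristic used by Keras and scikit-learn rather than anything the paper specifies.

```python
import numpy as np

# Hypothetical counts for an imbalanced two-class set totaling 1820 images
labels = np.array([0] * 1200 + [1] * 620)   # 0 = benign, 1 = malignant

# Balanced class weights: weight_c = n_samples / (n_classes * n_in_class),
# so each class contributes equally to the weighted loss
classes, counts = np.unique(labels, return_counts=True)
class_weight = {int(c): len(labels) / (len(classes) * n)
                for c, n in zip(classes, counts)}
print(class_weight)

# 70/20/10 train/test/validation split via shuffled indices
rng = np.random.default_rng(42)
idx = rng.permutation(len(labels))
n_train, n_test = int(0.7 * len(idx)), int(0.2 * len(idx))
train, test, val = np.split(idx, [n_train, n_train + n_test])
```

Such a `class_weight` dictionary is typically passed straight to a training call (e.g., Keras `model.fit(..., class_weight=class_weight)`), so the minority class counts more per example.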

Federated Learning for Renal Tumor Segmentation and Classification on Multi-Center MRI Dataset.

Nguyen DT, Imami M, Zhao LM, Wu J, Borhani A, Mohseni A, Khunte M, Zhong Z, Shi V, Yao S, Wang Y, Loizou N, Silva AC, Zhang PJ, Zhang Z, Jiao Z, Kamel I, Liao WH, Bai H

PubMed · May 19 2025
Deep learning (DL) models for accurate renal tumor characterization may benefit from multi-center datasets for improved generalizability; however, data-sharing constraints necessitate privacy-preserving solutions like federated learning (FL). To assess the performance and reliability of FL for renal tumor segmentation and classification in multi-institutional MRI datasets. Retrospective multi-center study. A total of 987 patients (403 female) from six hospitals were included for analysis. 73% (723/987) had malignant renal tumors, primarily clear cell carcinoma (n = 509). Patients were split into training (n = 785), validation (n = 104), and test (n = 99) sets, stratified across three simulated institutions. MRI was performed at 1.5 T and 3 T using T2-weighted imaging (T2WI) and contrast-enhanced T1-weighted imaging (CE-T1WI) sequences. Both the FL and non-FL approaches used nnU-Net for tumor segmentation and ResNet for classification. The FL approach trained models across three simulated institutional clients with central weight aggregation, while the non-FL approach used centralized training on the full dataset. Segmentation was evaluated using Dice coefficients, and classification between malignant and benign lesions was assessed using accuracy, sensitivity, specificity, and the area under the curve (AUC). FL and non-FL performance was compared using the Wilcoxon test for segmentation Dice and DeLong's test for AUC (p < 0.05). No significant difference was observed between FL and non-FL models in segmentation (Dice: 0.43 vs. 0.45, p = 0.202) or classification (AUC: 0.69 vs. 0.64, p = 0.959) on the test set. For classification, no significant difference was observed between the models in accuracy (p = 0.912), sensitivity (p = 0.862), or specificity (p = 0.847) on the test set. FL demonstrated comparable performance to non-FL approaches in renal tumor segmentation and classification, supporting its potential as a privacy-preserving alternative for multi-institutional DL models. Evidence Level: 4. Technical Efficacy: Stage 2.
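The central weight aggregation step can be sketched as sample-weighted federated averaging (FedAvg). The layer names, client counts, and weight values below are invented for illustration; the study's actual nnU-Net and ResNet parameter tensors are of course far larger.

```python
import numpy as np

# Hypothetical per-client model weights and sample counts for three
# simulated institutions (layer names are illustrative only)
clients = [
    {"n": 500, "weights": {"conv": np.array([1.0, 2.0]), "fc": np.array([0.5])}},
    {"n": 200, "weights": {"conv": np.array([2.0, 0.0]), "fc": np.array([1.0])}},
    {"n": 300, "weights": {"conv": np.array([0.0, 1.0]), "fc": np.array([2.0])}},
]

def fedavg(clients):
    # Central aggregation: average each parameter tensor, weighting every
    # client by its share of the total training samples
    total = sum(c["n"] for c in clients)
    return {k: sum(c["n"] / total * c["weights"][k] for c in clients)
            for k in clients[0]["weights"]}

global_weights = fedavg(clients)
print(global_weights)
```

In a full FL round, each client would train locally on its private data, send only these weights (never images) to the server, receive the aggregate back, and repeat.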

A Comprehensive Review of Techniques, Algorithms, Advancements, Challenges, and Clinical Applications of Multi-modal Medical Image Fusion for Improved Diagnosis

Muhammad Zubair, Muzammil Hussai, Mousa Ahmad Al-Bashrawi, Malika Bendechache, Muhammad Owais

arXiv preprint · May 18 2025
Multi-modal medical image fusion (MMIF) is increasingly recognized as an essential technique for enhancing diagnostic precision and facilitating effective clinical decision-making within computer-aided diagnosis systems. MMIF combines data from X-ray, MRI, CT, PET, SPECT, and ultrasound to create detailed, clinically useful images of patient anatomy and pathology. These integrated representations significantly advance diagnostic accuracy, lesion detection, and segmentation. This comprehensive review meticulously surveys the evolution, methodologies, algorithms, current advancements, and clinical applications of MMIF. We survey traditional fusion approaches, including pixel-, feature-, and decision-level methods, and delve into recent advancements driven by deep learning, generative models, and transformer-based architectures. A critical comparative analysis between these conventional methods and contemporary techniques highlights differences in robustness, computational efficiency, and interpretability. The article addresses extensive clinical applications across oncology, neurology, and cardiology, demonstrating MMIF's vital role in precision medicine through improved patient-specific therapeutic outcomes. Moreover, the review thoroughly investigates the persistent challenges affecting MMIF's broad adoption, including issues related to data privacy, heterogeneity, computational complexity, interpretability of AI-driven algorithms, and integration within clinical workflows. It also identifies significant future research avenues, such as the integration of explainable AI, adoption of privacy-preserving federated learning frameworks, development of real-time fusion systems, and standardization efforts for regulatory compliance.
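Pixel-level fusion is the simplest family of methods the review compares. A minimal sketch on synthetic co-registered slices (random arrays standing in for real CT and MRI data) of two classic rules, weighted averaging and maximum selection, might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two co-registered 64x64 "slices"; synthetic stand-ins for CT and MRI
ct = rng.uniform(0.0, 1.0, (64, 64))
mri = rng.uniform(0.0, 1.0, (64, 64))

def fuse_average(a, b, alpha=0.5):
    # Weighted-average rule: blend intensities pixel by pixel
    return alpha * a + (1.0 - alpha) * b

def fuse_max(a, b):
    # Maximum-selection rule: keep the stronger response per pixel
    return np.maximum(a, b)

avg = fuse_average(ct, mri)
mx = fuse_max(ct, mri)
```

Real pixel-level pipelines add registration, intensity normalization, and often multiscale transforms before applying such rules, but the per-pixel combination step is essentially this.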

Computational modeling of breast tissue mechanics and machine learning in cancer diagnostics: enhancing precision in risk prediction and therapeutic strategies.

Ashi L, Taurin S

PubMed · May 17 2025
Breast cancer remains a significant global health issue. Despite advances in detection and treatment, its complexity is driven by genetic, environmental, and structural factors. Computational methods like Finite Element Modeling (FEM) have transformed our understanding of breast cancer risk and progression. This review focuses on advanced computational approaches in breast cancer research, emphasizing FEM's role in simulating breast tissue mechanics and in enhancing the precision of therapies such as radiofrequency ablation (RFA). Machine learning (ML), particularly Convolutional Neural Networks (CNNs), has revolutionized imaging modalities like mammograms and MRIs, improving diagnostic accuracy and early detection. AI applications in analyzing histopathological images have advanced tumor classification and grading, offering consistency and reducing inter-observer variability. Explainability tools like Grad-CAM, SHAP, and LIME enhance the transparency of AI-driven models, facilitating their integration into clinical workflows. Integrating FEM and ML represents a paradigm shift in breast cancer management. FEM offers precise modeling of tissue mechanics, while ML excels in predictive analytics and image analysis. Despite challenges such as data variability and limited standardization, synergizing these approaches promises adaptive, personalized care. These computational methods have the potential to redefine diagnostics, optimize treatment, and improve patient outcomes.
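To give a flavor of the FEM side, here is a minimal one-dimensional linear-elastic bar model assembled and solved with NumPy. The material constants and load are invented, and real breast tissue models are nonlinear and three-dimensional, but the assemble-and-solve pattern is the same.

```python
import numpy as np

# Made-up material and loading values for a clamped 1D bar
E, A, L, F = 5.0e3, 1.0e-4, 0.1, 2.0   # modulus, cross-section, length, tip force
n_el = 8
h = L / n_el
# Element stiffness matrix for a 2-node linear bar element
k = E * A / h * np.array([[1.0, -1.0], [-1.0, 1.0]])

K = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):                   # assemble global stiffness matrix
    K[e:e + 2, e:e + 2] += k

f = np.zeros(n_el + 1)
f[-1] = F                               # point load at the free end

# Fix node 0 (clamped end) and solve K u = f on the remaining nodes
u = np.zeros(n_el + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# Analytic check: tip displacement of a uniform bar is F L / (E A)
print(u[-1], F * L / (E * A))
```

Tissue-mechanics FEM replaces this scalar stiffness with nonlinear hyperelastic material laws over 3D meshes, but the discretize-assemble-solve workflow carries over directly.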