Page 134 of 152 (1519 results)

Improving Breast Cancer Diagnosis in Ultrasound Images Using Deep Learning with Feature Fusion and Attention Mechanism.

Asif S, Yan Y, Feng B, Wang M, Zheng Y, Jiang T, Fu R, Yao J, Lv L, Song M, Sui L, Yin Z, Wang VY, Xu D

May 27, 2025
Early detection of malignant lesions in ultrasound images is crucial for effective cancer diagnosis and treatment. While traditional methods rely on radiologists, deep learning models can improve accuracy, reduce errors, and enhance efficiency. This study explores the application of a deep learning model for classifying benign and malignant lesions, focusing on its performance and interpretability. We proposed a feature fusion-based deep learning model for classifying benign and malignant lesions in ultrasound images. The model leverages advanced architectures such as MobileNetV2 and DenseNet121, enhanced with feature fusion and attention mechanisms to boost classification accuracy. The clinical dataset comprises 2171 images collected from 1758 patients between December 2020 and May 2024. Additionally, we utilized the publicly available BUSI dataset, consisting of 780 images from female patients aged 25 to 75, collected in 2018. To enhance interpretability, we applied Grad-CAM, saliency maps, and Shapley additive explanations (SHAP) to explain the model's decision-making. A comparative analysis with radiologists of varying expertise levels was also conducted. The proposed model exhibited the highest performance, achieving an area under the curve (AUC) of 0.9320 on our private dataset and an AUC of 0.9834 on the public dataset, significantly outperforming traditional deep convolutional neural network models. It also exceeded the diagnostic performance of radiologists, showcasing its potential as a reliable tool for medical image classification. The model's success can be attributed to its incorporation of advanced architectures, feature fusion, and attention mechanisms. Its decision-making process was further clarified using interpretability techniques (Grad-CAM, saliency maps, and SHAP), offering insights into its ability to focus on relevant image features for accurate classification.
The proposed deep learning model offers superior accuracy in classifying benign and malignant lesions in ultrasound images, outperforming traditional models and radiologists. Its strong performance, coupled with interpretability techniques, demonstrates its potential as a reliable and efficient tool for medical diagnostics. The datasets generated and analyzed during the current study are not publicly available due to the nature of this research and participants of this study, but may be available from the corresponding author on reasonable request.
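The fusion-plus-attention step described above can be sketched in plain NumPy. Everything here is an illustrative stand-in, not the authors' implementation: the two random vectors play the role of MobileNetV2 and DenseNet121 pooled embeddings, and the tiny squeeze-and-excitation-style gate (bottleneck ratio 16) is an assumed form of the attention mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation style gate: bottleneck layer -> per-channel weights."""
    squeezed = np.tanh(features @ w1)   # reduce to a small bottleneck
    gate = sigmoid(squeezed @ w2)       # per-channel weight in (0, 1)
    return features * gate              # re-weight the fused features

# Stand-ins for pooled backbone embeddings of one ultrasound image.
mobilenet_feat = rng.standard_normal(1280)   # MobileNetV2 pooled dim
densenet_feat = rng.standard_normal(1024)    # DenseNet121 pooled dim

fused = np.concatenate([mobilenet_feat, densenet_feat])   # shape (2304,)

d = fused.shape[0]
w1 = rng.standard_normal((d, d // 16)) * 0.01  # illustrative gate weights
w2 = rng.standard_normal((d // 16, d)) * 0.01

attended = channel_attention(fused, w1, w2)
# A linear head would map `attended` to benign/malignant logits.
print(attended.shape)
```

Because the gate is strictly between 0 and 1, attention here can only down-weight channels; in a trained network the gate weights are learned so that lesion-relevant channels are suppressed least.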

Automatic identification of Parkinsonism using clinical multi-contrast brain MRI: a large self-supervised vision foundation model strategy.

Suo X, Chen M, Chen L, Luo C, Kemp GJ, Lui S, Sun H

May 27, 2025
Valid non-invasive biomarkers for Parkinson's disease (PD) and Parkinson-plus syndrome (PPS) are urgently needed. Based on our recent self-supervised vision foundation model, the Shifted Window UNEt TRansformer (Swin UNETR), which uses clinical multi-contrast whole-brain MRI, we aimed to develop an efficient and practical model ('SwinClassifier') for discriminating PD from PPS using routine clinical MRI scans. We used 75,861 clinical head MRI scans, including T1-weighted, T2-weighted, and fluid-attenuated inversion recovery imaging, as a pre-training dataset to develop a foundation model, using self-supervised learning with a cross-contrast context-recovery task. Clinical head MRI scans from n = 1992 participants with PD and n = 1989 participants with PPS were then used as a downstream PD vs PPS classification dataset. We assessed SwinClassifier's performance using confusion-matrix-based metrics, compared to a self-supervised vanilla Vision Transformer (ViT) autoencoder ('ViTClassifier') and to two convolutional neural networks (DenseNet121 and ResNet50) trained from scratch. SwinClassifier showed very good performance (F1 score 0.83, 95% confidence interval [CI] 0.79-0.87, AUC 0.89) in PD vs PPS discrimination on independent test datasets (n = 173 participants with PD and n = 165 with PPS). This self-supervised classifier with pretrained weights outperformed the ViTClassifier and the convolutional classifiers trained from scratch (F1 score 0.77-0.82, AUC 0.83-0.85). Occlusion sensitivity mapping in the correctly classified cases (n = 160 PD and n = 114 PPS) highlighted the brain regions guiding discrimination, mainly sensorimotor and midline structures including the cerebellum, brain stem, ventricles, and basal ganglia. Our self-supervised model based on routine clinical head MRI discriminated PD from PPS with good accuracy and sensitivity. With incremental improvements, the approach may be diagnostically useful in early disease.
National Key Research and Development Program of China.
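The cross-contrast context-recovery pretext task above can be sketched as masked-patch reconstruction. The array size (32³), patch size, and mask ratio below are assumptions for illustration, not the paper's settings; a Swin UNETR would predict the hidden voxels from the masked contrast plus the other contrasts, whereas here a zero prediction just shows the loss bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(42)

def mask_patches(volume, patch=8, ratio=0.3, rng=rng):
    """Zero out a random subset of non-overlapping cubic patches; return
    the masked volume and the boolean mask of hidden voxels."""
    masked = volume.copy()
    hidden = np.zeros(volume.shape, dtype=bool)
    nz, ny, nx = (s // patch for s in volume.shape)
    for iz in range(nz):
        for iy in range(ny):
            for ix in range(nx):
                if rng.random() < ratio:
                    sl = (slice(iz * patch, (iz + 1) * patch),
                          slice(iy * patch, (iy + 1) * patch),
                          slice(ix * patch, (ix + 1) * patch))
                    masked[sl] = 0.0
                    hidden[sl] = True
    return masked, hidden

t1 = rng.standard_normal((32, 32, 32))   # stand-in T1-weighted volume
t1_masked, hidden = mask_patches(t1)

# The SSL objective is reconstruction error on the hidden voxels only.
recon = np.zeros_like(t1)                # trivial stand-in prediction
loss = np.mean((recon[hidden] - t1[hidden]) ** 2)
print(round(float(loss), 3))
```

The key design point is that the loss is computed only where the input was hidden, forcing the network to infer missing context rather than copy its input.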

Estimation of time-to-total knee replacement surgery with multimodal modeling and artificial intelligence.

Cigdem O, Hedayati E, Rajamohan HR, Cho K, Chang G, Kijowski R, Deniz CM

May 27, 2025
Existing methods for predicting time-to-total knee replacement (TKR) do not provide enough information to make robust and accurate predictions. We aimed to develop and evaluate an artificial intelligence-based model for predicting time-to-TKR by analyzing longitudinal knee data and identifying key features associated with accelerated knee osteoarthritis progression. A total of 547 subjects underwent TKR in the Osteoarthritis Initiative over nine years, and their longitudinal data were used for model training and testing. For external testing, 518 subjects from the Multicenter Osteoarthritis Study and 164 subjects from internal hospital data were used. Clinical variables, magnetic resonance (MR) images, radiographs, and quantitative and semi-quantitative assessments from images were analyzed. Deep learning (DL) models were used to extract features from radiographs and MR images, and the DL features were combined with clinical and image-assessment features for survival analysis. A Lasso Cox feature selection method combined with a random survival forest model was used to estimate time-to-TKR. Using only clinical variables for time-to-TKR prediction yielded an estimation accuracy of 60.4% and a C-index of 62.9%. Combining DL features extracted from radiographs and MR images with clinical, quantitative, and semi-quantitative image assessment features achieved the highest accuracy of 73.2% (p = .001) and a C-index of 77.3% for predicting time-to-TKR. The proposed predictive model demonstrates the potential of DL models and multimodal data fusion to accurately predict time-to-TKR, which may help physicians personalize treatment strategies and improve patient outcomes.
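The C-index reported above (Harrell's concordance index) measures how often the model ranks subjects' risks in the same order as their observed times-to-event, while respecting right-censoring. A minimal NumPy version on toy data (no tie-handling refinements, not the authors' evaluation code):

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Fraction of comparable pairs in which the higher-risk subject fails first.
    times: observed times; events: 1 if TKR observed, 0 if censored."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable only if subject i had the event
            # strictly before subject j's observed (or censoring) time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

times = np.array([2.0, 4.0, 5.0, 7.0, 9.0])   # years to TKR or censoring
events = np.array([1, 1, 0, 1, 0])            # 0 = censored subject
risks = np.array([0.9, 0.7, 0.2, 0.4, 0.1])   # hypothetical model risk scores

print(concordance_index(times, events, risks))  # 1.0: risks perfectly ordered
```

A C-index of 0.5 corresponds to random ranking, so the paper's 77.3% means the multimodal model correctly orders about three of every four comparable patient pairs.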

A Deep Neural Network Framework for the Detection of Bacterial Diseases from Chest X-Ray Scans.

Jain S, Jindal H, Bharti M

May 27, 2025
This research aims to develop an advanced deep-learning framework for detecting respiratory diseases, including COVID-19, pneumonia, and tuberculosis (TB), from chest X-ray scans. A Deep Neural Network (DNN)-based system was developed to analyze medical images and extract key features from chest X-rays. The system leverages various DNN learning algorithms to study color-, curve-, and edge-based features of the X-ray scans, and the Adam optimizer is employed to minimize error rates and enhance model training. A dataset of 1800 chest X-ray images, consisting of COVID-19, pneumonia, TB, and normal cases, was evaluated across multiple DNN models. The highest accuracy was achieved with the VGG19 model: the proposed system demonstrated an accuracy of 94.72%, with a sensitivity of 92.73%, a specificity of 96.68%, and an F1-score of 94.66%. The error rate was 5.28% when trained on 80% of the dataset and tested on the remaining 20%. The VGG19 model showed significant accuracy improvements of 32.69%, 36.65%, 42.16%, and 8.1% over AlexNet, GoogleNet, InceptionV3, and VGG16, respectively. Prediction time was also remarkably low, ranging between 3 and 5 seconds. The proposed deep learning model efficiently detects respiratory diseases, including COVID-19, pneumonia, and TB, within seconds. The method ensures high reliability and efficiency by optimizing feature extraction while keeping system complexity low, making it a valuable tool for clinicians in rapid disease diagnosis.
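The four reported figures (accuracy, sensitivity, specificity, F1) all follow from a single confusion matrix. The counts below are invented for illustration, chosen only so that sensitivity and specificity roughly reproduce the reported 92.73% and 96.68%; they are not the paper's data.

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # recall on diseased cases
    specificity = tn / (tn + fp)      # recall on normal cases
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Hypothetical test-fold counts: 55 diseased, 90 normal.
acc, sens, spec, f1 = binary_metrics(tp=51, fp=3, tn=87, fn=4)
print(f"acc={acc:.4f} sens={sens:.4f} spec={spec:.4f} f1={f1:.4f}")
```

Note that for a multi-class problem (COVID-19 vs pneumonia vs TB vs normal) these would typically be computed one-vs-rest per class and then averaged.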

Fetal origins of adult disease: transforming prenatal care by integrating Barker's Hypothesis with AI-driven 4D ultrasound.

Andonotopo W, Bachnas MA, Akbar MIA, Aziz MA, Dewantiningrum J, Pramono MBA, Sulistyowati S, Stanojevic M, Kurjak A

May 26, 2025
The fetal origins of adult disease, widely known as Barker's Hypothesis, suggest that adverse fetal environments significantly impact the risk of developing chronic diseases, such as diabetes and cardiovascular conditions, in adulthood. Recent advancements in 4D ultrasound (4D US) and artificial intelligence (AI) technologies offer a promising avenue for improving prenatal diagnostics and validating this hypothesis. These innovations provide detailed insights into fetal behavior and neurodevelopment, linking early developmental markers to long-term health outcomes. This study synthesizes contemporary developments in AI-enhanced 4D US, focusing on their roles in detecting fetal anomalies, assessing neurodevelopmental markers, and evaluating congenital heart defects. The integration of AI with 4D US allows for real-time, high-resolution visualization of fetal anatomy and behavior, surpassing the diagnostic precision of traditional methods. Despite these advancements, challenges such as algorithmic bias, data diversity, and real-world validation persist and require further exploration. Findings demonstrate that AI-driven 4D US improves diagnostic sensitivity and accuracy, enabling earlier detection of fetal abnormalities and optimization of clinical workflows. By providing a more comprehensive understanding of fetal programming, these technologies substantiate the links between early-life conditions and adult health outcomes, as proposed by Barker's Hypothesis. The integration of AI and 4D US has the potential to revolutionize prenatal care, paving the way for personalized maternal-fetal healthcare. Future research should focus on addressing current limitations, including ethical concerns and accessibility challenges, to promote equitable implementation. Such advancements could significantly reduce the global burden of chronic diseases and foster healthier generations.

Research-based clinical deployment of artificial intelligence algorithm for prostate MRI.

Harmon SA, Tetreault J, Esengur OT, Qin M, Yilmaz EC, Chang V, Yang D, Xu Z, Cohen G, Plum J, Sherif T, Levin R, Schmidt-Richberg A, Thompson S, Coons S, Chen T, Choyke PL, Xu D, Gurram S, Wood BJ, Pinto PA, Turkbey B

May 26, 2025
A critical limitation to the deployment and utilization of Artificial Intelligence (AI) algorithms in radiology practice is the integration of algorithms directly into the clinical Picture Archiving and Communication System (PACS). Here, we sought to integrate an AI-based pipeline for prostate organ and intraprostatic lesion segmentation within a clinical PACS environment to enable point-of-care utilization under a prospective clinical trial scenario. A previously trained, publicly available AI model for segmentation of intraprostatic findings on multiparametric Magnetic Resonance Imaging (mpMRI) was converted into a containerized environment compatible with MONAI Deploy Express. An inference server and a dedicated clinical PACS workflow were established within our institution for evaluation of real-time use of the AI algorithm. PACS-based deployment was prospectively evaluated in two phases: first, in a consecutive cohort of patients undergoing diagnostic imaging at our institution, and second, in a consecutive cohort of patients undergoing biopsy based on mpMRI findings. The AI pipeline was executed from within the PACS environment by the radiologist, and AI findings were imported into clinical biopsy planning software for target definition. Metrics analyzing deployment success, timing, and detection performance were recorded and summarized. In phase one, clinical PACS deployment was successfully executed in 57/58 cases, and results were obtained within one minute of activation (median 33 s [range 21-50 s]). Comparison with expert radiologist annotation demonstrated stable model performance relative to independent validation studies. In phase two, 40/40 cases were successfully executed via PACS deployment and results were imported for biopsy targeting. Prostate cancer detection rates were 82.1% for ROI targets detected by both AI and radiologist, 47.8% for targets proposed by AI and accepted by the radiologist, and 33.3% for targets identified by the radiologist alone.
Integration of novel AI algorithms requiring multi-parametric input into the clinical PACS environment is feasible, and model outputs can be used for downstream clinical tasks.
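The per-category cancer detection rates reported above reduce to simple proportions of biopsied targets. The denominators below are hypothetical reconstructions chosen to be consistent with the reported percentages; the study's raw counts are not given in this abstract.

```python
def detection_rate(cancers_found, targets_biopsied):
    """Cancer detection rate as a percentage of biopsied targets."""
    return 100.0 * cancers_found / targets_biopsied

# Assumed counts: 82.1% ~ 23/28, 47.8% ~ 11/23, 33.3% ~ 3/9.
rates = {
    "AI + radiologist": detection_rate(23, 28),
    "AI only, accepted": detection_rate(11, 23),
    "radiologist only": detection_rate(3, 9),
}
for category, rate in rates.items():
    print(f"{category}: {rate:.1f}%")
```

The ordering itself is the clinically interesting result: targets flagged by both the AI and the radiologist were far more likely to yield cancer than targets from either source alone.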

FROG: A Fine-Grained Spatiotemporal Graph Neural Network With Self-Supervised Guidance for Early Diagnosis of Alzheimer's Disease.

Zhang S, Wang Q, Wei M, Zhong J, Zhang Y, Song Z, Li C, Zhang X, Han Y, Li Y, Lv H, Jiang J

May 26, 2025
Functional magnetic resonance imaging (fMRI) has demonstrated significant potential in the early diagnosis and study of the pathological mechanisms of Alzheimer's disease (AD). To fit subtle cross-spatiotemporal interactions and learn pathological features from fMRI, we proposed a fine-grained spatiotemporal graph neural network with self-supervised learning (SSL) for diagnosis and biomarker extraction in early AD. First, considering the spatiotemporal interactions of the brain, we designed two masks that leverage the spatial correlation and temporal repeatability of fMRI. Next, temporal gated inception convolution and graph scalable inception convolution were proposed for the spatiotemporal autoencoder to enhance subtle cross-spatiotemporal variation and learn noise-suppressed signals. Furthermore, a spatiotemporal scalable cosine error with high selectivity for signal reconstruction was designed in SSL to guide the autoencoder to fit the fine-grained pathological features in an unsupervised manner. A total of 5,687 samples from four cross-population cohorts were involved. The accuracy of our model was 5.1% higher than that of the state-of-the-art models, which included four AD diagnostic models, four SSL strategies, and three multivariate time series models. The neuroimaging biomarkers were precisely localized to the abnormal brain regions and correlated significantly with cognitive scale scores and biomarkers (P < 0.001). Moreover, AD progression was reflected in the mask reconstruction error of our SSL strategy. The results demonstrate that our model can effectively capture spatiotemporal and pathological features, providing a novel and relevant framework for the early diagnosis of AD based on fMRI.
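The two SSL masks described above can be sketched on an fMRI signal matrix of shape (regions, timepoints): a spatial mask hides whole brain regions (exploiting spatial correlation across regions), while a temporal mask hides whole time windows (exploiting temporal repeatability). The matrix size, window length, and mask ratios are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(7)
signal = rng.standard_normal((90, 200))   # 90 ROIs x 200 timepoints (toy data)

def spatial_mask(x, ratio=0.2, rng=rng):
    """Hide a random subset of whole brain regions (rows)."""
    out = x.copy()
    rows = rng.choice(x.shape[0], size=int(ratio * x.shape[0]), replace=False)
    out[rows, :] = 0.0
    return out, rows

def temporal_mask(x, window=20, ratio=0.2, rng=rng):
    """Hide a random subset of whole time windows (column blocks)."""
    out = x.copy()
    n_win = x.shape[1] // window
    wins = rng.choice(n_win, size=int(ratio * n_win), replace=False)
    for w in wins:
        out[:, w * window:(w + 1) * window] = 0.0
    return out, wins

masked_s, rows = spatial_mask(signal)
masked_t, wins = temporal_mask(signal)

# The autoencoder reconstructs the hidden entries; reconstruction error on
# the masked portions is the SSL objective (and, per the paper, tracks AD
# progression).
print(masked_s.shape, len(rows), len(wins))
```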

Segmentation of the Left Ventricle and Its Pathologies for Acute Myocardial Infarction After Reperfusion in LGE-CMR Images.

Li S, Wu C, Feng C, Bian Z, Dai Y, Wu LM

May 26, 2025
Because of their association with a higher incidence of left ventricular dysfunction and complications, segmentation of the left ventricle and its related pathological tissues, microvascular obstruction and myocardial infarction, from late gadolinium enhancement cardiac magnetic resonance (LGE-CMR) images is crucially important. However, the lack of datasets, diverse lesion shapes and locations, extreme class imbalance, and severe overlap of intensity distributions are the main challenges. We first release an LGE-CMR benchmark dataset, LGE-LVP, containing 140 patients with left ventricular myocardial infarction and concomitant microvascular obstruction. We then propose a progressive deep learning model, LVPSegNet, to segment the left ventricle and its pathologies via adaptive region-of-interest extraction, sample augmentation, curriculum learning, and multiple receptive field fusion to address these challenges. Comprehensive comparisons with state-of-the-art models on internal and external datasets demonstrate that the proposed model performs best on both geometric and clinical metrics and most closely matches clinicians' performance. Overall, the released LGE-LVP dataset and the proposed LVPSegNet offer a practical solution for automated segmentation of the left ventricle and its pathologies by providing data support and enabling effective segmentation. The dataset and source code will be released via https://github.com/DFLAG-NEU/LVPSegNet.
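Segmenting tiny pathologies such as microvascular obstruction inside the left ventricle is an extreme class-imbalance problem, which is why overlap metrics like the Dice coefficient (a standard geometric metric in this setting, though the abstract does not name the paper's exact metrics) are used instead of plain pixel accuracy. A minimal NumPy Dice with toy masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks, in [0, 1]."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

target = np.zeros((64, 64), dtype=bool)
target[30:34, 30:34] = True          # tiny 16-pixel lesion in a 4096-pixel image

pred = np.zeros_like(target)
pred[31:35, 30:34] = True            # prediction shifted down by one pixel

print(round(dice(pred, target), 3))  # 0.75: 12 overlapping of 16+16 pixels
```

On this example, plain pixel accuracy would exceed 99% even though a quarter of the lesion is missed, which is exactly why it is uninformative under heavy imbalance.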

Training a deep learning model to predict the anatomy irradiated in fluoroscopic x-ray images.

Guo L, Trujillo D, Duncan JR, Thomas MA

May 26, 2025
Accurate patient dosimetry estimates from fluoroscopically guided interventions (FGIs) are hindered by limited knowledge of the specific anatomy that was irradiated. Current methods use data reported by the equipment to estimate the patient anatomy exposed during each irradiation event. We propose a deep learning algorithm to automatically match 2D fluoroscopic images with the corresponding anatomical regions in computational phantoms, enabling more precise patient dose estimates. Our method involves two main steps: (1) simulating 2D fluoroscopic images, and (2) developing a deep learning algorithm to predict anatomical coordinates from these images. For step (1), we utilized DeepDRR for fast and realistic simulation of 2D x-ray images from 3D computed tomography datasets, generating a diverse set of simulated fluoroscopic images from various regions with different field sizes. In step (2), we employed a Residual Neural Network (ResNet) architecture combined with metadata processing to integrate patient-specific information (age and gender) and to learn the transformation between 2D images and specific anatomical coordinates in each representative phantom. For the modified ResNet model, we defined an allowable error range of ±10 mm. The proposed method achieved over 90% of predictions within ±10 mm, with strong alignment between predicted and true coordinates as confirmed by Bland-Altman analysis. Most errors were within ±2%, with outliers beyond ±5% primarily in Z-coordinates for infant phantoms due to their limited representation in the training data. These findings highlight the model's accuracy and its potential for precise spatial localization, while emphasizing the need for improved performance in specific anatomical regions. In this work, a comprehensive simulated 2D fluoroscopy image dataset was developed, addressing the scarcity of real clinical datasets and enabling effective training of deep learning models.
The modified ResNet successfully achieved precise prediction of anatomical coordinates from the simulated fluoroscopic images, enabling the goal of more accurate patient-specific dosimetry.
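The evaluation above combines a tolerance count (fraction of predictions within ±10 mm) with Bland-Altman analysis (mean difference and 95% limits of agreement). Both are easy to compute once predicted and true coordinates are paired; the coordinate data below is synthetic, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 500 true Z-coordinates (mm) and noisy predictions.
true_z = rng.uniform(0, 1700, size=500)
pred_z = true_z + rng.normal(0, 4, size=500)

def within_tolerance(pred, true, tol=10.0):
    """Fraction of predictions whose absolute error is within tol (mm)."""
    return np.mean(np.abs(pred - true) <= tol)

def bland_altman(pred, true):
    """Mean difference (bias) and 95% limits of agreement."""
    diff = pred - true
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

frac = within_tolerance(pred_z, true_z)
bias, (lo, hi) = bland_altman(pred_z, true_z)
print(f"within 10 mm: {frac:.1%}, bias: {bias:.2f} mm, LoA: [{lo:.2f}, {hi:.2f}]")
```

Bland-Altman limits add what the tolerance count alone misses: whether errors are centered on zero (no systematic offset) or skewed in one direction, as reported for the infant-phantom Z-coordinates.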

Applications of artificial intelligence in abdominal imaging.

Gupta A, Rajamohan N, Bansal B, Chaudhri S, Chandarana H, Bagga B

May 26, 2025
The rapid advancements in artificial intelligence (AI) carry the promise to reshape abdominal imaging by offering transformative solutions to challenges in disease detection, classification, and personalized care. AI applications, particularly those leveraging deep learning and radiomics, have demonstrated remarkable accuracy in detecting a wide range of abdominal conditions, including but not limited to diffuse liver parenchymal disease, focal liver lesions, pancreatic ductal adenocarcinoma (PDAC), renal tumors, and bowel pathologies. These models excel in the automation of tasks such as segmentation, classification, and prognostication across modalities like ultrasound, CT, and MRI, often surpassing traditional diagnostic methods. Despite these advancements, widespread adoption remains limited by challenges such as data heterogeneity, lack of multicenter validation, reliance on retrospective single-center studies, and the "black box" nature of many AI models, which hinder interpretability and clinician trust. The absence of standardized imaging protocols and reference gold standards further complicates integration into clinical workflows. To address these barriers, future directions emphasize collaborative multi-center efforts to generate diverse, standardized datasets, integration of explainable AI frameworks into existing picture archiving and communication systems, and the development of automated, end-to-end pipelines capable of processing multi-source data. Targeted clinical applications, such as early detection of PDAC, improved segmentation of renal tumors, and refined risk stratification in liver diseases, show potential to improve diagnostic accuracy and therapeutic planning. Attention to ethical considerations such as data privacy and regulatory compliance, together with interdisciplinary collaboration, is essential for successful translation into clinical practice.
AI's transformative potential in abdominal imaging lies not only in complementing radiologists but also in fostering precision medicine by enabling faster, more accurate, and patient-centered care. Overcoming current limitations through innovation and collaboration will be pivotal in realizing AI's full potential to improve patient outcomes and redefine the landscape of abdominal radiology.