
Automated Fetal Biometry Assessment with Deep Ensembles using Sparse-Sampling of 2D Intrapartum Ultrasound Images

Jayroop Ramesh, Valentin Bacher, Mark C. Eid, Hoda Kalabizadeh, Christian Rupprecht, Ana IL Namburete, Pak-Hei Yeung, Madeleine K. Wyburd, Nicola K. Dinsdale

arXiv preprint, May 20 2025
The International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) advocates intrapartum ultrasound (US) imaging to monitor labour progression through changes in fetal head position. Two reliable ultrasound-derived parameters used to predict outcomes of instrumental vaginal delivery are the angle of progression (AoP) and head-symphysis distance (HSD). In this work, as part of the Intrapartum Ultrasound Grand Challenge (IUGC) 2024, we propose an automated fetal biometry measurement pipeline to reduce intra- and inter-observer variability and improve measurement reliability. Our pipeline consists of three key tasks: (i) classification of standard planes (SP) from US videos, (ii) segmentation of the fetal head and pubic symphysis from the detected SPs, and (iii) computation of the AoP and HSD from the segmented regions. We perform sparse sampling to mitigate class imbalance and reduce spurious correlations in task (i), and utilize ensemble-based deep learning methods for tasks (i) and (ii) to enhance generalizability under different US acquisition settings. Finally, to promote robustness in task (iii) with respect to the structural fidelity of measurements, we retain the largest connected components and apply ellipse fitting to the segmentations. Our solution achieved ACC: 0.9452, F1: 0.9225, AUC: 0.983, MCC: 0.8361, DSC: 0.918, HD: 19.73, ASD: 5.71, $\Delta_{AoP}$: 8.90 and $\Delta_{HSD}$: 14.35 across an unseen hold-out set of 4 patients and 224 US frames. The results from the proposed automated pipeline can improve the understanding of the causes of labour arrest and guide the development of clinical risk stratification tools for efficient and effective prenatal care.
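The abstract's task (iii) post-processing, retaining the largest connected component of each segmentation before ellipse fitting, can be sketched in a few lines. This is an illustrative assumption of how such a step might look (toy mask, function name, and `scipy.ndimage` usage are mine, not the authors'; the subsequent ellipse fit is omitted):

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask):
    """Keep only the largest connected component of a binary mask."""
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    largest_label = int(np.argmax(sizes)) + 1
    return (labeled == largest_label).astype(mask.dtype)

# Toy segmentation: a 9-pixel "fetal head" blob plus a 1-pixel speckle.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[1:4, 1:4] = 1
mask[6, 6] = 1
cleaned = keep_largest_component(mask)
print(int(cleaned.sum()))  # 9: the spurious speckle is removed
```

Discarding small spurious components this way makes the later ellipse fit, and hence the AoP/HSD geometry, far less sensitive to segmentation noise.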

XDementNET: An Explainable Attention Based Deep Convolutional Network to Detect Alzheimer Progression from MRI data

Soyabul Islam Lincoln, Mirza Mohd Shahriar Maswood

arXiv preprint, May 20 2025
A common neurodegenerative disease, Alzheimer's disease requires a precise diagnosis and efficient treatment, particularly in light of escalating healthcare expenses and the expanding use of artificial intelligence in medical diagnostics. Many recent studies show that combining brain Magnetic Resonance Imaging (MRI) with deep neural networks achieves promising results for diagnosing AD. Using deep convolutional neural networks, this paper introduces a novel deep learning architecture that incorporates multiresidual blocks, specialized spatial attention blocks, grouped query attention, and multi-head attention. The study assessed the model's performance on four publicly accessible datasets and concentrated on binary and multiclass classification problems across various categories. The paper also addresses the explainability of AD progression, comparing state-of-the-art methods, namely Gradient Class Activation Mapping (GradCAM), Score-CAM, Faster Score-CAM, and XGradCAM. Our methodology consistently outperforms current approaches, achieving 99.66% accuracy in 4-class classification, 99.63% in 3-class classification, and 100% in binary classification using Kaggle datasets. For the Open Access Series of Imaging Studies (OASIS) datasets, the accuracies are 99.92%, 99.90%, and 99.95%, respectively. The Alzheimer's Disease Neuroimaging Initiative-1 (ADNI-1) dataset was used for experiments in three planes (axial, sagittal, and coronal) and a combination of all planes. The study achieved accuracies of 99.08% for the axial plane, 99.85% for sagittal, 99.5% for coronal, and 99.17% for all planes combined, and 97.79% and 8.60%, respectively, for ADNI-2. The network's excellent accuracy in categorizing AD stages demonstrates its ability to retrieve important information from MRI images.

Effectiveness of Artificial Intelligence in detecting sinonasal pathology using clinical imaging modalities: a systematic review.

Petsiou DP, Spinos D, Martinos A, Muzaffar J, Garas G, Georgalas C

PubMed paper, May 19 2025
Sinonasal pathology can be complex and requires a systematic and meticulous approach. Artificial Intelligence (AI) has the potential to improve diagnostic accuracy and efficiency in sinonasal imaging, but its clinical applicability remains an area of ongoing research. This systematic review evaluates the methodologies and clinical relevance of AI in detecting sinonasal pathology through radiological imaging. Key search terms included "artificial intelligence," "deep learning," "machine learning," "neural network," and "paranasal sinuses." Abstract and full-text screening was conducted using predefined inclusion and exclusion criteria. Data were extracted on study design, AI architectures used (e.g., Convolutional Neural Networks (CNN), Machine Learning classifiers), and clinical characteristics, such as imaging modality (e.g., Computed Tomography (CT), Magnetic Resonance Imaging (MRI)). A total of 53 studies were analyzed, with 85% retrospective, 68% single-center, and 92.5% using internal databases. CT was the most common imaging modality (60.4%), and chronic rhinosinusitis without nasal polyposis (CRSsNP) was the most studied condition (34.0%). Forty-one studies employed neural networks, with classification as the most frequent AI task (35.8%). Key performance metrics included Area Under the Curve (AUC), accuracy, sensitivity, specificity, precision, and F1-score. Quality assessment based on CONSORT-AI yielded a mean score of 16.0 ± 2. AI shows promise in improving sinonasal imaging interpretation. However, as existing research is predominantly retrospective and single-center, further studies are needed to evaluate AI's generalizability and applicability. More research is also required to explore AI's role in treatment planning and post-treatment prediction for clinical integration.

Transformer model based on Sonazoid contrast-enhanced ultrasound for microvascular invasion prediction in hepatocellular carcinoma.

Qin Q, Pang J, Li J, Gao R, Wen R, Wu Y, Liang L, Que Q, Liu C, Peng J, Lv Y, He Y, Lin P, Yang H

PubMed paper, May 19 2025
Microvascular invasion (MVI) is strongly associated with the prognosis of patients with hepatocellular carcinoma (HCC). This study evaluated the value of Transformer models with Sonazoid contrast-enhanced ultrasound (CEUS) in the preoperative prediction of MVI. This retrospective study included 164 HCC patients. Deep learning features and radiomic features were extracted from arterial- and Kupffer-phase images, alongside the collection of clinicopathological parameters. Normality was assessed using the Shapiro-Wilk test. The Mann-Whitney U-test and the least absolute shrinkage and selection operator algorithm were applied to screen features. Transformer, radiomic, and clinical prediction models for MVI were constructed with logistic regression. Repeated random splits followed a 7:3 ratio, with model performance evaluated over 50 iterations. The area under the receiver operating characteristic curve (AUC, 95% confidence interval [CI]), sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), decision curve, and calibration curve were used to evaluate the performance of the models. The DeLong test was applied to compare performance between models, and the Bonferroni method was used to control type I error rates arising from multiple comparisons. A two-sided p-value of < 0.05 was considered statistically significant. In the training set, the diagnostic performance of the arterial-phase Transformer (AT) and Kupffer-phase Transformer (KT) models was better than that of the radiomic and clinical (Clin) models (p < 0.0001). In the validation set, both the AT and KT models outperformed the radiomic and Clin models (p < 0.05). The AUC (95% CI) for the AT model was 0.821 (0.72-0.925) with an accuracy of 80.0%, and for the KT model 0.859 (0.766-0.977) with an accuracy of 70.0%. Logistic regression analysis indicated that tumor size (p = 0.016) and alpha-fetoprotein (AFP) (p = 0.046) were independent predictors of MVI. Transformer models using Sonazoid CEUS have potential for effectively identifying MVI-positive patients preoperatively.
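The AUC reported across the repeated 7:3 splits above can itself be computed without any curve plotting, via the Mann-Whitney U identity. A minimal NumPy sketch of the metric on invented labels and scores (this is generic, not the study's pipeline):

```python
import numpy as np

def auc_mann_whitney(y_true, scores):
    """AUC via the Mann-Whitney U identity: the fraction of
    (positive, negative) pairs ranked correctly, ties counting 1/2."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = [0, 0, 1, 1]          # invented labels
s = [0.1, 0.4, 0.35, 0.8]  # invented model scores
print(auc_mann_whitney(y, s))  # 0.75
```

Of the four (positive, negative) score pairs here, three are ranked correctly, giving AUC = 3/4.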

Functional MRI Analysis of Cortical Regions to Distinguish Lewy Body Dementia From Alzheimer's Disease.

Kashyap B, Hanson LR, Gustafson SK, Sherman SJ, Sughrue ME, Rosenbloom MH

PubMed paper, May 19 2025
Cortical regions such as parietal area H (PH) and the fundus of the superior temporal sulcus (FST) are involved in higher visual function and may play a role in dementia with Lewy bodies (DLB), which is frequently associated with hallucinations. The authors evaluated functional connectivity between these two regions for distinguishing participants with DLB from those with Alzheimer's disease (AD) or mild cognitive impairment (MCI) and from cognitively normal (CN) individuals, to identify a functional connectivity MRI signature for DLB. Eighteen DLB participants completed cognitive testing and functional MRI scans and were matched to AD or MCI and CN individuals whose data were obtained from the Alzheimer's Disease Neuroimaging Initiative database (https://adni.loni.usc.edu). Images were analyzed with data from the Human Connectome Project (HCP) comparison individuals by using a machine learning-based, subject-specific HCP atlas based on diffusion tractography. Bihemispheric functional connectivity of the PH to left FST regions was reduced in the DLB group compared with the AD and CN groups (mean±SD connectivity score=0.307±0.009 vs. 0.456±0.006 and 0.433±0.006, respectively). No significant differences were detected among the groups in connectivity within basal ganglia structures, and no significant correlations were observed between neuropsychological testing results and functional connectivity between the PH and FST regions. Performances on clock-drawing and number-cancelation tests were significantly and negatively correlated with connectivity between the right caudate nucleus and right substantia nigra for DLB participants but not for AD or CN participants. The functional connectivity between PH and FST regions is uniquely affected by DLB and may help distinguish this condition from AD.
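Functional connectivity between two regions, as analyzed in this study, is commonly summarized as the Pearson correlation between their region-averaged time series. A sketch on synthetic data; the series length, coupling strength, and reuse of the names PH and FST are my assumptions, not the study's data or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic BOLD time series (200 volumes) for two regions; the names
# echo the abstract's PH and FST, but the data are entirely invented.
fst = rng.standard_normal(200)
ph = 0.6 * fst + 0.8 * rng.standard_normal(200)  # partially coupled to fst

# Functional connectivity summarized as the Pearson correlation
# between the two region-averaged time series.
connectivity = np.corrcoef(ph, fst)[0, 1]
print(round(float(connectivity), 3))
```

A connectivity score near 0.3 vs. near 0.45, as reported for DLB vs. AD/CN above, is exactly this kind of correlation-style summary, reduced coupling between the two regions.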

Expert-Like Reparameterization of Heterogeneous Pyramid Receptive Fields in Efficient CNNs for Fair Medical Image Classification

Xiao Wu, Xiaoqing Zhang, Zunjie Xiao, Lingxi Hu, Risa Higashita, Jiang Liu

arXiv preprint, May 19 2025
Efficient convolutional neural network (CNN) architecture designs have attracted growing research interest. However, they usually apply a single receptive field (RF), small asymmetric RFs, or pyramid RFs to learn different feature representations, and still face two significant challenges in medical image classification tasks: 1) they are limited in efficiently capturing diverse lesion characteristics (e.g., tiny, coordination, small, and salient), which play unique roles in results, especially in imbalanced medical image classification; 2) the predictions generated by such CNNs are often unfair/biased, posing a high risk when they are deployed in real-world medical diagnosis settings. To tackle these issues, we develop a new concept, Expert-Like Reparameterization of Heterogeneous Pyramid Receptive Fields (ERoHPRF), to simultaneously boost medical image classification performance and fairness. This concept mimics the multi-expert consultation mode by applying well-designed heterogeneous pyramid RF bags to capture different lesion characteristics effectively via convolution operations with multiple heterogeneous kernel sizes. Additionally, ERoHPRF introduces an expert-like structural reparameterization technique that merges its parameters via a two-stage strategy, ensuring computation cost and inference speed competitive with a single RF. To demonstrate the effectiveness and generalization ability of ERoHPRF, we incorporate it into mainstream efficient CNN architectures. Extensive experiments show that our method maintains a better trade-off than state-of-the-art methods in terms of medical image classification, fairness, and computation overhead. The code of this paper will be released soon.
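Structural reparameterization of parallel branches with heterogeneous kernel sizes rests on the linearity of convolution: a smaller kernel can be zero-padded to the larger size and the kernels summed offline, so inference runs a single convolution. A NumPy/SciPy sketch with random stand-in kernels (this demonstrates the general principle, not the paper's ERoHPRF implementation):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
k3 = rng.standard_normal((3, 3))  # 3x3 "expert" branch
k1 = rng.standard_normal((1, 1))  # 1x1 "expert" branch

# Merge offline: zero-pad the 1x1 kernel to 3x3 (centered) and sum.
k1_padded = np.zeros((3, 3))
k1_padded[1, 1] = k1[0, 0]
k_merged = k3 + k1_padded

x = rng.standard_normal((8, 8))  # toy feature map
two_branches = convolve2d(x, k3, mode="same") + convolve2d(x, k1, mode="same")
one_branch = convolve2d(x, k_merged, mode="same")
print(np.allclose(two_branches, one_branch))  # True
```

Because the two formulations are mathematically identical, the merged kernel pays the cost of one convolution at inference, which is why the paper can claim speed competitive with a single RF.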

Deep learning models based on multiparametric magnetic resonance imaging and clinical parameters for identifying synchronous liver metastases from rectal cancer.

Sun J, Wu PY, Shen F, Chen X, She J, Luo M, Feng F, Zheng D

PubMed paper, May 19 2025
To establish and validate deep learning (DL) models based on pre-treatment multiparametric magnetic resonance imaging (MRI) of primary rectal cancer and basic clinical data for the prediction of synchronous liver metastases (SLM) in patients with rectal cancer (RC). In this retrospective study, 176 and 31 patients with RC who underwent multiparametric MRI at two centers were enrolled in the primary and external validation cohorts, respectively. Clinical factors, including sex, primary tumor site, CEA level, and CA199 level, were assessed. A clinical feature (CF) model was first developed by multivariate logistic regression; then two residual-network DL models were constructed based on multiparametric MRI of the primary cancer, with or without CF incorporation. Finally, the SLM prediction models were validated by 5-fold cross-validation and external validation. Model performance was evaluated by decision curve analysis (DCA) and receiver operating characteristic (ROC) analysis. Among the three SLM prediction models, the combined DL model integrating primary tumor MRI and basic clinical data achieved the best performance (AUC = 0.887 in the primary study cohort; AUC = 0.876 in the external validation cohort). In the primary study cohort, the CF model, MRI DL model, and combined DL model achieved AUCs of 0.816 (95% CI: 0.750, 0.881), 0.788 (95% CI: 0.720, 0.857), and 0.887 (95% CI: 0.834, 0.940), respectively. In the external validation cohort, the CF model, DL model without CF, and DL model with CF achieved AUCs of 0.824 (95% CI: 0.664, 0.984), 0.662 (95% CI: 0.461, 0.863), and 0.876 (95% CI: 0.728, 1.000), respectively. The combined DL model demonstrates promising potential to predict SLM in patients with RC, thereby enabling individualized imaging test strategies. Accurate SLM risk stratification is important for treatment planning and prognosis improvement. The proposed DL signature may be employed to better understand an individual patient's SLM risk, aiding in treatment planning and the selection of further imaging examinations to personalize clinical decisions.
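The decision curve analysis (DCA) used to evaluate these models plots net benefit across threshold probabilities. A minimal sketch of the standard net-benefit formula on invented labels and predicted risks (generic textbook DCA, not the authors' code):

```python
import numpy as np

def net_benefit(y_true, probs, threshold):
    """Net benefit at a given threshold probability pt:
    TP/n - FP/n * pt / (1 - pt), the quantity a DCA curve plots."""
    y_true = np.asarray(y_true)
    preds = np.asarray(probs) >= threshold
    n = len(y_true)
    tp = np.sum(preds & (y_true == 1))
    fp = np.sum(preds & (y_true == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

y = np.array([1, 1, 0, 0, 0])            # invented outcomes
p = np.array([0.9, 0.6, 0.7, 0.2, 0.1])  # invented predicted risks
print(net_benefit(y, p, 0.5))  # 2/5 - 1/5 * 1 = 0.2
```

The threshold-dependent weighting of false positives is what lets DCA express how a model's clinical value shifts as clinicians become more or less willing to intervene.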

Improving Deep Learning-Based Grading of Partial-thickness Supraspinatus Tendon Tears with Guided Diffusion Augmentation.

Ni M, Jiesisibieke D, Zhao Y, Wang Q, Gao L, Tian C, Yuan H

PubMed paper, May 19 2025
To develop and validate a deep learning system with guided diffusion-based data augmentation for grading partial-thickness supraspinatus tendon (SST) tears, and to compare its performance with that of experienced radiologists, including external validation. This retrospective study included 1150 patients with arthroscopically confirmed SST tears, divided into a training set (741 patients), validation set (185 patients), and internal test set (185 patients). An independent external test set of 224 patients was used for generalizability assessment. To address data imbalance, MRI images were augmented using a guided diffusion model. A ResNet-34 model was employed for Ellman grading of bursal-sided and articular-sided partial-thickness tears across different MRI sequences (oblique coronal [OCOR], oblique sagittal [OSAG], and combined OCOR+OSAG). Performance was evaluated using AUC and precision-recall curves, and compared with three experienced musculoskeletal (MSK) radiologists. The DeLong test was used to compare performance across sequence combinations. A total of 26,020 OCOR images and 26,356 OSAG images were generated using the guided diffusion model. For bursal-sided partial-thickness tears in the internal dataset, the model achieved AUCs of 0.99, 0.98, and 0.97 for OCOR, OSAG, and combined sequences, respectively, while for articular-sided tears, AUCs were 0.99, 0.99, and 0.99. The DeLong test showed no significant differences among sequence combinations (P=0.17, 0.14, 0.07). In the external dataset, the combined-sequence model achieved AUCs of 0.99, 0.97, and 0.97 for bursal-sided tears and 0.99, 0.95, and 0.95 for articular-sided tears. Radiologists demonstrated an ICC of 0.99, but their grading performance was significantly lower than that of the ResNet-34 model (P<0.001). The deep learning system improved grading consistency and significantly reduced evaluation time, while guided diffusion augmentation enhanced model robustness. The proposed system provides a reliable and efficient method for grading partial-thickness SST tears, achieving radiologist-level accuracy with greater consistency and faster evaluation.

Development and validation of ultrasound-based radiomics deep learning model to identify bone erosion in rheumatoid arthritis.

Yan L, Xu J, Ye X, Lin M, Gong Y, Fang Y, Chen S

PubMed paper, May 19 2025
To develop and validate a deep learning radiomics fusion model (DLR) based on ultrasound (US) images to identify bone erosion in rheumatoid arthritis (RA) patients. A total of 432 patients with RA at two institutions were enrolled. Three hundred twelve patients from center 1 were randomly divided into a training set (N = 218) and an internal test set (N = 94) in a 7:3 ratio, while 124 patients from center 2 served as an external test set. Radiomics (Rad) and deep learning (DL) features were extracted using hand-crafted radiomics and deep transfer learning networks. Least absolute shrinkage and selection operator regression was employed to build the DLR fusion feature set from the Rad and DL features. Subsequently, 10 machine learning algorithms were used to construct models, and the final optimal model was selected. Model performance was evaluated using receiver operating characteristic (ROC) and decision curve analysis (DCA). The diagnostic efficacy of sonographers was compared with and without the assistance of the optimal model. Logistic regression (LR) was chosen as the optimal algorithm for model construction on account of its superior performance (Rad/DL/DLR: area under the curve [AUC] = 0.906/0.974/0.979) in the training set. In the internal test set, DLR_LR, the final model, had the highest AUC (AUC = 0.966), which was also validated in the external test set (AUC = 0.932). With the aid of the DLR_LR model, the overall performance of both junior and senior sonographers improved significantly (P < 0.05), and there was no significant difference between the junior sonographer with DLR_LR model assistance and the senior sonographer without assistance (P > 0.05). The DLR model based on US images is the best performer and is expected to become an important tool for identifying bone erosion in RA patients. Key Points • The DLR model based on US images is the best performer in identifying BE in RA patients. • The DLR model may assist sonographers in improving the accuracy of BE evaluations.

Preoperative DBT-based radiomics for predicting axillary lymph node metastasis in breast cancer: a multi-center study.

He S, Deng B, Chen J, Li J, Wang X, Li G, Long S, Wan J, Zhang Y

PubMed paper, May 19 2025
In the prognosis of breast cancer, the status of the axillary lymph nodes (ALN) is critically important. While traditional axillary lymph node dissection (ALND) provides comprehensive information, it is associated with high risks. Sentinel lymph node biopsy (SLNB), as an alternative, is less invasive but still poses a risk of overtreatment. In recent years, digital breast tomosynthesis (DBT) has emerged as a precise diagnostic tool for breast cancer, leveraging its high detection capability for lesions obscured by dense glandular tissue. This multi-center study evaluates the feasibility of preoperative DBT-based radiomics, using tumor and peritumoral features, to predict ALN metastasis in breast cancer. We retrospectively collected DBT imaging data from 536 preoperative breast cancer patients across two centers: 390 cases from one hospital and 146 from another, assigned to the internal training and external validation sets, respectively. We performed 3D region of interest (ROI) delineation on the cranio-caudal (CC) and mediolateral oblique (MLO) views of DBT images and extracted radiomic features. Using methods such as analysis of variance (ANOVA) and least absolute shrinkage and selection operator (LASSO), we selected radiomic features extracted from the tumor and its surrounding 3 mm, 5 mm, and 10 mm regions, and constructed a radiomic feature set. We then developed a combined model that includes the optimal radiomic features and clinicopathological factors. The performance of the combined model was evaluated using the area under the curve (AUC) and directly compared with the diagnostic results of radiologists. The results showed that the AUCs of the radiomic features from the regions surrounding the tumor were generally lower than those from the tumor itself. Among them, the Signature<sub>tumor+10 mm</sub> model performed best, achieving an AUC of 0.806 using a logistic regression (LR) classifier to generate the RadScore. The nomogram incorporating both Ki67 and the RadScore demonstrated a slightly higher AUC (0.813) than the Signature<sub>tumor+10 mm</sub> model alone (0.806). By integrating relevant clinical information, the nomogram enhances potential clinical utility. Moreover, it outperformed radiologists' assessments in predictive accuracy, highlighting its added value in clinical decision-making. Radiomics based on DBT imaging of the tumor and surrounding regions can provide a non-invasive auxiliary tool to guide treatment strategies for ALN metastasis in breast cancer.
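The peritumoral 3 mm, 5 mm, and 10 mm regions described above are typically built by dilating the tumor ROI and subtracting the tumor itself. A hypothetical sketch using `scipy.ndimage`; margins here are in pixels, and a real pipeline would convert mm to pixels via the DBT voxel spacing:

```python
import numpy as np
from scipy import ndimage

def peritumoral_ring(mask, margin_px):
    """Peritumoral shell: dilate the tumor mask, then subtract the tumor."""
    dilated = ndimage.binary_dilation(mask, iterations=margin_px)
    return dilated & ~mask

tumor = np.zeros((11, 11), dtype=bool)
tumor[4:7, 4:7] = True             # 3x3 toy "tumor"
ring = peritumoral_ring(tumor, 2)  # 2-pixel margin stands in for mm
print(int(tumor.sum()), int(ring.sum()))  # 9 28
```

Radiomic features computed on such a ring, rather than on the tumor itself, are what the Signature models for the 3/5/10 mm regions compare.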
