
Fetal-Net: enhancing maternal-fetal ultrasound interpretation through multi-scale convolutional neural networks and transformers.

Islam U, Ali YA, Al-Razgan M, Ullah H, Almaiah MA, Tariq Z, Wazir KM

PubMed | Jul 15 2025
Ultrasound imaging plays an important role in evaluating fetal growth and maternal-fetal health, but its interpretation is challenging due to the complicated anatomy of the fetus and fluctuations in image quality. Although deep learning approaches, including Convolutional Neural Networks (CNNs), have shown promise, they have largely been limited to single tasks, such as segmentation or detection of fetal structures, and thus lack an integrated solution that accounts for the intricate interplay between anatomical structures. To overcome these limitations, Fetal-Net, a new deep learning architecture that integrates multi-scale CNNs and transformer layers, was developed. The model was trained on a large, expertly annotated set of more than 12,000 ultrasound images across different anatomical planes for effective identification of fetal structures and anomaly detection. Fetal-Net achieved excellent performance in anomaly detection, with a precision of 96.5%, accuracy of 97.5%, and recall of 97.8%, and showed robustness across various imaging settings, making it a potent means of augmenting prenatal care through refined ultrasound image interpretation.
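The precision, recall, and accuracy figures reported above are all derived from a binary confusion matrix. A minimal sketch of that arithmetic (not the paper's code; the counts below are hypothetical, chosen only to illustrate the calculation):

```python
# Illustrative sketch: deriving precision, recall (sensitivity), and
# accuracy from confusion-matrix counts. All counts are hypothetical.

def detection_metrics(tp: int, fp: int, fn: int, tn: int):
    """Return (precision, recall, accuracy) for a binary detector."""
    precision = tp / (tp + fp)              # of flagged cases, how many are true
    recall = tp / (tp + fn)                 # of true anomalies, how many are found
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Hypothetical anomaly-detection counts on a held-out test set
p, r, a = detection_metrics(tp=193, fp=7, fn=4, tn=296)
print(round(p, 3), round(r, 3), round(a, 3))
```

With these hypothetical counts the three metrics land near the ranges the abstract reports, which is useful for sanity-checking published numbers against each other.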

Deep Learning for Osteoporosis Diagnosis Using Magnetic Resonance Images of Lumbar Vertebrae.

Mousavinasab SM, Hedyehzadeh M, Mousavinasab ST

PubMed | Jul 15 2025
This work uses T1, STIR, and T2 MRI sequences of the lumbar vertebrae, together with BMD measurements, to identify osteoporosis using deep learning. An analysis of 1350 MRI images from 50 individuals who had simultaneous BMD and MRI scans was performed, and a custom convolutional neural network was assessed for osteoporosis classification. T2-weighted MRIs were the most diagnostic: the proposed model outperformed the T1 and STIR sequences with 88.5% accuracy, 88.9% sensitivity, and a 76.1% F1-score. Its performance was compared against modern deep learning models including GoogleNet, EfficientNet-B3, ResNet50, InceptionV3, and InceptionResNetV2; these architectures performed well, but our model was more sensitive and accurate. This research shows that T2-weighted MRI is the best sequence for osteoporosis diagnosis and that deep learning can complement BMD-based approaches while avoiding ionizing radiation. These results support clinical use of deep learning with MRI for safe, accurate, and quick osteoporosis diagnosis.

Motion artifacts and image quality in stroke MRI: associated factors and impact on AI and human diagnostic accuracy.

Krag CH, Müller FC, Gandrup KL, Andersen MB, Møller JM, Liu ML, Rud A, Krabbe S, Al-Farra L, Nielsen M, Kruuse C, Boesen MP

PubMed | Jul 15 2025
To assess the prevalence of motion artifacts and the factors associated with them in a cohort of suspected stroke patients, and to determine their impact on diagnostic accuracy for both AI and radiologists. This retrospective cross-sectional study included brain MRI scans of consecutive adult suspected stroke patients from a non-comprehensive Danish stroke center between January and April 2020. An expert neuroradiologist identified acute ischemic, hemorrhagic, and space-occupying lesions as references. Two blinded radiology residents rated MRI image quality and motion artifacts. The diagnostic accuracy of a CE-marked deep learning tool was compared to that of radiology reports. Multivariate analysis examined associations between patient characteristics and motion artifacts. 775 patients (68 years ± 16, 420 female) were included. Acute ischemic, hemorrhagic, and space-occupying lesions were found in 216 (27.9%), 12 (1.5%), and 20 (2.6%) patients, respectively. Motion artifacts were present in 57 (7.4%). Increasing age (OR per decade, 1.60; 95% CI: 1.26, 2.09; p < 0.001) and limb motor symptoms (OR, 2.36; 95% CI: 1.32, 4.20; p = 0.003) were independently associated with motion artifacts in multivariate analysis. Motion artifacts significantly reduced the accuracy of detecting hemorrhage. This reduction was greater for the AI tool (from 88 to 67%; p < 0.001) than for radiology reports (from 100 to 93%; p < 0.001). Ischemic and space-occupying lesion detection was not significantly affected. Motion artifacts are common in suspected stroke patients, particularly in the elderly and patients with motor symptoms, and reduce the accuracy of hemorrhage detection by both AI and radiologists.
Question: Motion artifacts reduce the quality of MRI scans, but it is unclear which factors are associated with them and how they impact diagnostic accuracy.
Findings: Motion artifacts occurred in 7% of suspected stroke MRI scans, were associated with higher patient age and motor symptoms, and lowered hemorrhage detection by AI and radiologists.
Clinical relevance: Motion artifacts in stroke brain MRIs significantly reduce the diagnostic accuracy of human and AI detection of intracranial hemorrhages. Elderly patients and those with motor symptoms may benefit from a greater focus on motion artifact prevention and reduction.
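The age association reported above (OR per decade 1.60; 95% CI: 1.26, 2.09) is the exponentiated coefficient of a logistic regression. A minimal sketch of that conversion, using a hypothetical standard error chosen only so the interval roughly matches the reported one:

```python
import math

# Sketch: converting a logistic-regression coefficient (per decade of age)
# into an odds ratio with a 95% Wald confidence interval.
# beta and se are hypothetical illustrative values, not the study's.
beta = math.log(1.60)   # coefficient whose exponential is the odds ratio
se = 0.132              # hypothetical standard error of beta

or_point = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)
print(round(or_point, 2), round(ci_low, 2), round(ci_high, 2))
```

Note that the interval is symmetric on the log-odds scale, not on the odds-ratio scale, which is why published CIs around an OR are asymmetric.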

Preoperative prediction value of 2.5D deep learning model based on contrast-enhanced CT for lymphovascular invasion of gastric cancer.

Sun X, Wang P, Ding R, Ma L, Zhang H, Zhu L

PubMed | Jul 15 2025
To develop and validate artificial intelligence models based on venous-phase contrast-enhanced CT (CECT) images, using deep learning (DL) and radiomics approaches, to predict lymphovascular invasion (LVI) in gastric cancer prior to surgery. We retrospectively analyzed data from 351 gastric cancer patients, randomly split into training (n = 246) and testing (n = 105) cohorts in a 7:3 ratio. The tumor region of interest (ROI) was outlined on venous-phase CT images as the input for developing the radiomics, 2D DL (DL2D), and 3D DL (DL3D) models. Notably, by centering the analysis on the tumor's maximum cross-section and incorporating seven adjacent 2D images, we generated stable 2.5D data to establish a multi-instance learning (MIL) model. Clinical and feature-combined models integrating traditional CT enhancement parameters (Ratio), radiomics, and MIL features were also constructed. Model performance was evaluated by the area under the curve (AUC), confusion matrices, and detailed metrics such as sensitivity and specificity. A nomogram based on the combined model was established and applied to clinical practice. Calibration curves were used to evaluate the consistency between each model's predicted LVI and the actual LVI status of gastric cancer, and decision curve analysis (DCA) was used to evaluate each model's net benefit. Among the developed models, the 2.5D MIL and combined models outperformed the clinical, radiomics, DL2D, and DL3D models, with AUC values on the testing set of 0.820 and 0.822 versus 0.748, 0.725, 0.786, and 0.711, respectively. Additionally, the 2.5D MIL and combined models showed good calibration for LVI prediction and provided a net clinical benefit at threshold probabilities of 0.31 to 0.98 and 0.28 to 0.84, respectively, indicating their clinical usefulness.
The MIL and combined models demonstrate strong performance in predicting preoperative lymphovascular invasion in gastric cancer, offering valuable insights for clinicians in selecting appropriate treatment options for gastric cancer patients.
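The 2.5D construction described above (the slice at the tumor's maximum cross-section plus seven adjacent slices) can be sketched as follows. This is an assumption about the setup, not the paper's code; the exact slice offsets and border handling may differ:

```python
import numpy as np

def make_25d_stack(volume: np.ndarray, center: int, n_neighbors: int = 7) -> np.ndarray:
    """Illustrative sketch of 2.5D input construction: the center slice
    plus n_neighbors adjacent slices, clamped at the volume boundaries,
    stacked along a new instance axis for multi-instance learning.
    volume has shape (depth, H, W)."""
    half = n_neighbors // 2
    # n_neighbors + 1 slices in total (center included)
    offsets = range(-half, n_neighbors - half + 1)
    idx = [min(max(center + o, 0), volume.shape[0] - 1) for o in offsets]
    return np.stack([volume[i] for i in idx])

# Example: an 8-instance bag from a toy 10-slice volume
vol = np.arange(10 * 4 * 4, dtype=float).reshape(10, 4, 4)
bag = make_25d_stack(vol, center=5)
print(bag.shape)
```

Clamping at the boundary (rather than zero-padding) is one common choice when the maximum cross-section lies near the top or bottom of the scanned volume.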

Poincaré-guided geometric UNet for left atrial epicardial adipose tissue segmentation in Dixon MRI images.

Firouznia M, Ylipää E, Henningsson M, Carlhäll CJ

PubMed | Jul 15 2025
Epicardial Adipose Tissue (EAT) is a recognized risk factor for cardiovascular diseases and plays a pivotal role in the pathophysiology of Atrial Fibrillation (AF). Accurate automatic segmentation of the EAT around the Left Atrium (LA) from Magnetic Resonance Imaging (MRI) data remains challenging. While Convolutional Neural Networks excel at multi-scale feature extraction using stacked convolutions, they struggle to capture long-range self-similarity and hierarchical relationships, which are essential in medical image segmentation. In this study, we present and validate PoinUNet, a deep learning model that integrates a Poincaré embedding layer into a 3D UNet to enhance LA wall and fat segmentation from Dixon MRI data. By using hyperbolic space learning, PoinUNet captures complex LA and EAT relationships and addresses class imbalance and fat geometry challenges using a new loss function. Sixty-six participants, including forty-eight AF patients, were scanned at 1.5T. The first network identified fat regions, while the second utilized Poincaré embeddings and convolutional layers for precise segmentation, enhanced by fat fraction maps. PoinUNet achieved a Dice Similarity Coefficient of 0.87 and a Hausdorff distance of 9.42 on the test set. This performance surpasses state-of-the-art methods, providing accurate quantification of the LA wall and LA EAT.
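The hyperbolic-space learning mentioned above operates on the Poincaré ball, where distances grow rapidly near the boundary and hierarchical relationships embed with low distortion. A minimal sketch of the standard Poincaré distance (illustrative; the paper's exact formulation may differ):

```python
import numpy as np

def poincare_distance(u, v, eps: float = 1e-9) -> float:
    """Hyperbolic distance between two points strictly inside the unit
    ball:  d(u, v) = arccosh(1 + 2||u-v||^2 / ((1-||u||^2)(1-||v||^2))).
    Illustrative sketch of the geometry a Poincaré embedding layer uses."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    nu, nv = np.dot(u, u), np.dot(v, v)
    delta = np.dot(u - v, u - v)
    # eps guards against division by zero as points approach the boundary
    arg = 1.0 + 2.0 * delta / max((1.0 - nu) * (1.0 - nv), eps)
    return float(np.arccosh(arg))

# Distance from the origin reduces to 2*artanh(||x||); e.g. d(0, (0.5, 0)) = ln 3
print(poincare_distance([0.0, 0.0], [0.5, 0.0]))
```

The same Euclidean step away from the origin costs more hyperbolic distance the closer the points sit to the boundary, which is what lets tree-like anatomy embed compactly.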

Assessing MRI-based Artificial Intelligence Models for Preoperative Prediction of Microvascular Invasion in Hepatocellular Carcinoma: A Systematic Review and Meta-analysis.

Han X, Shan L, Xu R, Zhou J, Lu M

PubMed | Jul 15 2025
To evaluate the performance of magnetic resonance imaging (MRI)-based artificial intelligence (AI) in the preoperative prediction of microvascular invasion (MVI) in patients with hepatocellular carcinoma (HCC). A systematic search of PubMed, Embase, and Web of Science was conducted up to May 2025, following PRISMA guidelines. Studies using MRI-based AI models with histopathologically confirmed MVI were included. Study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework. Statistical synthesis used bivariate random-effects models. Twenty-nine studies were included, totaling 2838 internal and 1161 external validation cases. Pooled internal validation showed a sensitivity of 0.81 (95% CI: 0.76-0.85), specificity of 0.82 (95% CI: 0.78-0.85), diagnostic odds ratio (DOR) of 19.33 (95% CI: 13.15-28.42), and area under the curve (AUC) of 0.88 (95% CI: 0.85-0.91). External validation yielded a comparable AUC of 0.85. Traditional machine learning methods achieved higher sensitivity than deep learning approaches in both internal and external validation cohorts (both P < 0.05). Studies incorporating both radiomics and clinical features demonstrated superior sensitivity and specificity compared to radiomics-only models (P < 0.01). MRI-based AI demonstrates high performance for preoperative prediction of MVI in HCC, particularly for MRI-based models that combine multimodal imaging and clinical variables. However, substantial heterogeneity and low GRADE levels may affect the strength of the evidence, highlighting the need for methodological standardization and multicenter prospective validation to ensure clinical applicability.
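The pooled figures above are internally consistent: the diagnostic odds ratio is a simple function of sensitivity and specificity, DOR = (sens/(1-sens)) / ((1-spec)/spec). A quick check against the abstract's point estimates (the small gap from the reported 19.33 is expected, since the pooled DOR comes from the bivariate model rather than the marginal point estimates):

```python
def diagnostic_odds_ratio(sens: float, spec: float) -> float:
    """DOR = positive-likelihood odds over negative-likelihood odds,
    i.e. (sens / (1 - sens)) * (spec / (1 - spec))."""
    return (sens / (1.0 - sens)) * (spec / (1.0 - spec))

# Pooled internal-validation estimates from the abstract
print(round(diagnostic_odds_ratio(0.81, 0.82), 2))
```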

Identification of high-risk hepatoblastoma in the CHIC risk stratification system based on enhanced CT radiomics features.

Yang Y, Si J, Zhang K, Li J, Deng Y, Wang F, Liu H, He L, Chen X

PubMed | Jul 15 2025
Survival of patients with high-risk hepatoblastoma remains low, and early identification of high-risk hepatoblastoma is critical. To investigate the clinical value of contrast-enhanced computed tomography (CECT) radiomics in predicting high-risk hepatoblastoma, clinical and CECT imaging data were retrospectively collected from 162 children who were treated at our hospital and pathologically diagnosed with hepatoblastoma. Patients were categorized into high-risk and non-high-risk groups according to the Children's Hepatic Tumors International Collaboration - Hepatoblastoma Study (CHIC-HS), and the cases were then randomized into training and test groups in a 7:3 ratio. The region of interest (ROI) was first outlined on the pre-treatment venous-phase images; the best features were then extracted and filtered, and radiomics models were built with three machine learning methods: Bagging Decision Tree (BDT), Logistic Regression (LR), and Stochastic Gradient Descent (SGD). The AUC, 95% CI, and accuracy of each model were calculated, and model performance was evaluated by the DeLong test. The Bagging decision tree model achieved AUCs of 0.966 (95% CI: 0.938-0.994) and 0.875 (95% CI: 0.77-0.98) for the training and test sets, respectively, with accuracies of 0.841 and 0.816. The logistic regression model achieved AUCs of 0.901 (95% CI: 0.839-0.963) and 0.845 (95% CI: 0.721-0.968), with accuracies of 0.788 and 0.735. The stochastic gradient descent model achieved AUCs of 0.788 (95% CI: 0.712-0.863) and 0.742 (95% CI: 0.627-0.857), with accuracies of 0.735 and 0.653. CECT-based radiomics can identify high-risk hepatoblastoma and may provide additional imaging biomarkers for its identification.
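The AUC values compared above have a direct probabilistic reading: the chance that a randomly chosen high-risk case receives a higher model score than a randomly chosen non-high-risk case. A minimal sketch of that Mann-Whitney formulation (illustrative; the hypothetical scores below are not study data):

```python
def auc_mann_whitney(scores_pos, scores_neg) -> float:
    """AUC as the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive case scores higher,
    with ties counted as 0.5. O(n*m) -- fine for illustration."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical scores for high-risk (positive) vs non-high-risk cases
print(auc_mann_whitney([0.9, 0.8, 0.6], [0.7, 0.4, 0.2]))
```

The DeLong test used in the study compares two such AUCs on the same cases by accounting for the covariance of their paired Mann-Whitney components.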

Multimodal Radiopathomics Signature for Prediction of Response to Immunotherapy-based Combination Therapy in Gastric Cancer Using Interpretable Machine Learning.

Huang W, Wang X, Zhong R, Li Z, Zhou K, Lyu Q, Han JE, Chen T, Islam MT, Yuan Q, Ahmad MU, Chen S, Chen C, Huang J, Xie J, Shen Y, Xiong W, Shen L, Xu Y, Yang F, Xu Z, Li G, Jiang Y

PubMed | Jul 15 2025
Immunotherapy has become a cornerstone in the treatment of advanced gastric cancer (GC). However, identifying reliable predictive biomarkers remains a considerable challenge. This study demonstrates the potential of integrating multimodal baseline data, including computed tomography scan images and digital H&E-stained pathology images, with biological interpretation to predict the response to immunotherapy-based combination therapy using a multicenter cohort of 298 GC patients. By employing seven machine learning approaches, we developed a radiopathomics signature (RPS) to predict treatment response and stratify prognostic risk in GC. The RPS demonstrated area under the receiver-operating-characteristic curves (AUCs) of 0.978 (95% CI, 0.950-1.000), 0.863 (95% CI, 0.744-0.982), and 0.822 (95% CI, 0.668-0.975) in the training, internal validation, and external validation cohorts, respectively, outperforming conventional biomarkers such as CPS, MSI-H, EBV, and HER-2. Kaplan-Meier analysis revealed significant differences of survival between high- and low-risk groups, especially in advanced-stage and non-surgical patients. Additionally, genetic analyses revealed that the RPS correlates with enhanced immune regulation pathways and increased infiltration of memory B cells. The interpretable RPS provides accurate predictions for treatment response and prognosis in GC and holds potential for guiding more precise, patient-specific treatment strategies while offering insights into immune-related mechanisms.
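The high- vs low-risk survival comparison above rests on the Kaplan-Meier estimator. A self-contained sketch of that estimator (illustrative, not the study's code; the toy data below are hypothetical):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve: at each distinct event time t,
    multiply the running survival by (1 - deaths/at_risk).
    events[i] is 1 for an observed event, 0 for censoring.
    Returns a list of (time, survival probability) pairs."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = n_here = 0
        while i < len(order) and times[order[i]] == t:
            n_here += 1
            deaths += events[order[i]]
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= n_here  # censored subjects leave the risk set too
    return curve

# Hypothetical follow-up times (months) with one censored subject
print(kaplan_meier([6, 12, 12, 20], [1, 1, 0, 1]))
```

Splitting patients by the RPS into two groups and comparing such curves (e.g. with a log-rank test) is the standard way results like the abstract's risk stratification are obtained.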

LUMEN: A Deep Learning Pipeline for Analysis of the 3D Morphology of the Cerebral Lenticulostriate Arteries from Time-of-Flight 7T MRI.

Li R, Chatterjee S, Jiaerken Y, Zhou X, Radhakrishna C, Benjamin P, Nannoni S, Tozer DJ, Markus HS, Rodgers CT

PubMed | Jul 15 2025
The lenticulostriate arteries (LSAs) supply critical subcortical brain structures and are affected in cerebral small vessel disease (CSVD). Changes in their morphology are linked to cardiovascular risk factors and may indicate early pathology. 7T Time-of-Flight MR angiography (TOF-MRA) enables clear LSA visualisation. We aimed to develop a semi-automated pipeline for quantifying 3D LSA morphology from 7T TOF-MRA in CSVD patients. We used data from a local 7T CSVD study to create a pipeline, LUMEN, comprising two stages: vessel segmentation and LSA quantification. For segmentation, we fine-tuned a deep learning model, DS6, and compared it against nnU-Net and a Frangi-filter pipeline, MSFDF. For quantification, centrelines of LSAs within the basal ganglia were extracted to compute branch counts, length, tortuosity, and maximum curvature. This pipeline was applied to 69 subjects, with results compared to traditional analysis measuring LSA morphology on 2D coronal maximum intensity projection (MIP) images. For vessel segmentation, fine-tuned DS6 achieved the highest test Dice score (0.814±0.029) and sensitivity, whereas nnU-Net achieved the best balanced average Hausdorff distance and precision. Visual inspection confirmed that DS6 was most sensitive in detecting LSAs with weak signals. Across the 69 subjects, the pipeline with DS6 identified 23.5±8.5 LSA branches. Branch length inside the basal ganglia was 26.4±3.5 mm, and tortuosity was 1.5±0.1. LSA metrics extracted from the 2D MIP analysis and our 3D analysis showed fair-to-moderate correlations, and outliers highlighted the added value of 3D analysis. This open-source deep-learning-based pipeline offers a validated tool for quantifying 3D LSA morphology in CSVD patients from 7T TOF-MRA for clinical research.
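The tortuosity value reported above (1.5±0.1) is dimensionless; a common definition, assumed here for illustration (the pipeline's exact formula may differ), is centreline arc length divided by the straight-line distance between the branch endpoints:

```python
import numpy as np

def tortuosity(points: np.ndarray) -> float:
    """Illustrative sketch: tortuosity of a vessel centreline given as an
    (N, 3) array of ordered 3D points -- arc length along the polyline
    divided by the chord between its endpoints. A straight vessel gives
    exactly 1.0; curvier vessels give larger values."""
    points = np.asarray(points, dtype=float)
    segments = np.diff(points, axis=0)
    arc_length = np.linalg.norm(segments, axis=1).sum()
    chord = np.linalg.norm(points[-1] - points[0])
    return float(arc_length / chord)

# A right-angle bend: arc length 2, chord sqrt(2) -> tortuosity sqrt(2)
bend = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0]])
print(tortuosity(bend))
```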

Learning homeomorphic image registration via conformal-invariant hyperelastic regularisation.

Zou J, Debroux N, Liu L, Qin J, Schönlieb CB, Aviles-Rivero AI

PubMed | Jul 15 2025
Deformable image registration is a fundamental task in medical image analysis and plays a crucial role in a wide range of clinical applications. Recently, deep learning-based approaches have been widely studied for deformable medical image registration and have achieved promising results. However, existing deep learning registration techniques do not theoretically guarantee topology-preserving transformations. This is a key property for preserving anatomical structures and achieving plausible transformations that can be used in real clinical settings. We propose a novel framework for deformable image registration. Firstly, we introduce a novel regulariser based on conformal-invariant properties in a nonlinear elasticity setting. Our regulariser enforces the deformation field to be smooth, invertible, and orientation-preserving. More importantly, we strictly guarantee topology preservation, yielding a clinically meaningful registration. Secondly, we boost the performance of our regulariser through coordinate MLPs, under which the to-be-registered images can be viewed as continuously differentiable entities. We demonstrate, through numerical and visual experiments, that our framework outperforms current techniques for image registration.
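Invertibility and orientation preservation of a deformation field are commonly checked post hoc via the sign of the Jacobian determinant of phi(x) = x + u(x): the map is locally invertible and orientation-preserving wherever det J > 0. A minimal 2D sketch of that check (a generic diagnostic, not the paper's conformal-invariant regulariser):

```python
import numpy as np

def jacobian_determinant_2d(disp: np.ndarray) -> np.ndarray:
    """Illustrative sketch: Jacobian determinant of phi(x) = x + u(x) on a
    2D grid. disp has shape (2, H, W) with disp[0] the x-displacement and
    disp[1] the y-displacement. det J > 0 everywhere means no folding."""
    du_dy, du_dx = np.gradient(disp[0])  # gradients along rows (y), cols (x)
    dv_dy, dv_dx = np.gradient(disp[1])
    # J = I + grad u, so det J = (1 + u_x)(1 + v_y) - u_y * v_x
    return (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx

# Identity deformation: determinant is 1 everywhere (no folding anywhere)
disp = np.zeros((2, 8, 8))
print(np.allclose(jacobian_determinant_2d(disp), 1.0))
```

Frameworks without a topology guarantee typically report the fraction of voxels with non-positive determinant; a method with a strict guarantee, as claimed above, should yield zero by construction.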
