Generating Synthetic MR Spectroscopic Imaging Data with Generative Adversarial Networks to Train Machine Learning Models.

Maruyama S, Takeshima H

Sep 26 2025
To develop a new method to generate synthetic MR spectroscopic imaging (MRSI) data for training machine learning models. This study targeted routine MRI examination protocols with single voxel spectroscopy (SVS). A novel model derived from the pix2pix generative adversarial network was proposed to generate synthetic MRSI data using MRI and SVS data as inputs. T1-weighted, T2-weighted, SVS, and reference MRSI data were acquired from healthy brains with clinically available sequences, and the proposed model was trained to generate synthetic MRSI data. Quantitative evaluation involved calculating the mean squared error (MSE) against the reference and the metabolite ratio values. The effect of the location and number of SVS acquisitions on the quality of the synthetic MRSI data was investigated using the MSE. The synthetic MRSI data generated by the proposed model were visually close to the reference. The 95% confidence interval (CI) of the metabolite ratio values of the synthetic MRSI data overlapped with the reference for seven of eight metabolite ratios. The MSEs tended to be lower when the SVS data were acquired at the same location than at different locations, and did not differ significantly across groups with different numbers of SVS acquisitions. A new method was thus developed to generate MRSI data by integrating MRI and SVS data. Our method can potentially increase the volume of MRSI training data for other machine learning models by adding SVS acquisition to routine MRI examinations.
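
The core of the approach is a pix2pix-style conditional GAN that maps co-registered MRI and SVS inputs to MRSI-like output maps. Below is a minimal PyTorch sketch of that training pattern; the network depths, channel counts, and 2D input shapes are illustrative assumptions, not the authors' architecture.

```python
# Minimal pix2pix-style conditional GAN step for synthesizing MRSI-like maps
# from MRI + SVS conditioning. Shapes, channel counts, and layer choices are
# illustrative assumptions, not the authors' model.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, in_ch=3, out_ch=1):  # e.g. T1, T2, rasterized SVS -> metabolite map
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, out_ch, 4, 2, 1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=4):  # condition (3 ch) + real/fake map (1 ch)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, 1, 1),  # patch-level real/fake logits
        )
    def forward(self, cond, img):
        return self.net(torch.cat([cond, img], dim=1))

G, D = Generator(), PatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

cond = torch.randn(2, 3, 64, 64)   # stand-in for co-registered MRI + SVS inputs
real = torch.randn(2, 1, 64, 64)   # stand-in for reference MRSI maps

# Discriminator step: push real pairs toward 1, generated pairs toward 0.
fake = G(cond).detach()
d_real, d_fake = D(cond, real), D(cond, fake)
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: adversarial term plus L1 reconstruction, as in pix2pix.
fake = G(cond)
d_out = D(cond, fake)
g_loss = bce(d_out, torch.ones_like(d_out)) + 100.0 * l1(fake, real)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```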

A novel open-source ultrasound dataset with deep learning benchmarks for spinal cord injury localization and anatomical segmentation.

Kumar A, Kotkar K, Jiang K, Bhimreddy M, Davidar D, Weber-Levine C, Krishnan S, Kerensky MJ, Liang R, Leadingham KK, Routkevitch D, Hersh AM, Ashayeri K, Tyler B, Suk I, Son J, Theodore N, Thakor N, Manbachi A

Sep 26 2025
While deep learning has catalyzed breakthroughs across numerous domains, its broader adoption in clinical settings is inhibited by the costly and time-intensive nature of data acquisition and annotation. To further facilitate medical machine learning, we present an ultrasound dataset of 10,223 brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal cords (N = 25) before and after a contusion injury. We additionally benchmark the performance metrics of several state-of-the-art object detection algorithms to localize the site of injury and semantic segmentation models to label the anatomy for comparison and creation of task-specific architectures. Finally, we evaluate the zero-shot generalization capabilities of the segmentation models on human ultrasound spinal cord images to determine whether training on our porcine dataset is sufficient for accurately interpreting human data. Our results show that the YOLOv8 detection model outperforms all evaluated models for injury localization, achieving a mean Average Precision (mAP50-95) score of 0.606. Segmentation metrics indicate that the DeepLabv3 segmentation model achieves the highest accuracy on unseen porcine anatomy, with a mean Dice score of 0.587, while SAMed achieves the highest mean Dice score generalizing to human anatomy (0.445). To the best of our knowledge, this is the largest annotated dataset of spinal cord ultrasound images made publicly available to researchers and medical professionals, as well as the first public report of object detection and segmentation architectures to assess anatomical markers in the spinal cord for methodology development and clinical applications.
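
For readers reproducing the benchmarks, the two reported quantities are a segmentation Dice score and COCO-style mAP@50-95 for injury detection. The sketch below computes both on toy data; torchmetrics is an assumed dependency, not one named by the authors.

```python
# Hedged sketch of the two benchmark metrics quoted above: a plain-NumPy Dice
# score for segmentation masks, and mAP@50-95 for detection boxes via
# torchmetrics (an assumed dependency).
import numpy as np
import torch
from torchmetrics.detection import MeanAveragePrecision

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for boolean masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))

mask_pred = np.zeros((128, 128), bool); mask_pred[30:80, 30:80] = True
mask_gt   = np.zeros((128, 128), bool); mask_gt[40:90, 35:85] = True
print(f"Dice: {dice(mask_pred, mask_gt):.3f}")

metric = MeanAveragePrecision()  # averages over IoU thresholds 0.50:0.95
metric.update(
    preds=[dict(boxes=torch.tensor([[10., 10., 60., 60.]]),
                scores=torch.tensor([0.9]), labels=torch.tensor([0]))],
    target=[dict(boxes=torch.tensor([[12., 14., 58., 62.]]),
                 labels=torch.tensor([0]))],
)
print(f"mAP@50-95: {metric.compute()['map']:.3f}")
```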

MultiD4CAD: Multimodal Dataset composed of CT and Clinical Features for Coronary Artery Disease Analysis.

Prinzi F, Militello C, Sollami G, Toia P, La Grutta L, Vitabile S

Sep 26 2025
Multimodal datasets offer valuable support for developing Clinical Decision Support Systems (CDSS), which leverage predictive models to enhance clinicians' decision-making. In this observational study, we present a dataset of suspected Coronary Artery Disease (CAD) patients - called MultiD4CAD - comprising imaging and clinical data. The imaging data, obtained from Coronary Computed Tomography Angiography (CCTA), include epicardial (EAT) and pericoronary (PAT) adipose tissue segmentations. These metabolically active fat tissues play a key role in cardiovascular diseases. In addition, the clinical data include a set of biomarkers recognized as CAD risk factors. The validated EAT and PAT segmentations make the dataset suitable for training predictive models based on radiomics and deep learning architectures. The inclusion of CAD disease labels allows for its application in supervised learning algorithms to predict CAD outcomes. MultiD4CAD has revealed important correlations between imaging features, clinical biomarkers, and CAD status. The article concludes by discussing tasks, such as classification, segmentation, radiomics, and deep learning, that can be investigated and validated using the proposed dataset.
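
One workflow the dataset is designed to support is early fusion of EAT/PAT radiomic features with clinical biomarkers in a supervised CAD classifier. A hedged scikit-learn sketch on synthetic stand-in features:

```python
# Early-fusion baseline: concatenate radiomic and clinical features, then fit
# a supervised classifier. Feature counts and data are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
radiomics = rng.normal(size=(200, 50))   # stand-in for EAT/PAT texture features
clinical  = rng.normal(size=(200, 8))    # stand-in for biomarker risk factors
y = rng.integers(0, 2, size=200)         # CAD outcome labels

X = np.hstack([radiomics, clinical])     # simple early fusion
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("5-fold AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```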

An open deep learning-based framework and model for tooth instance segmentation in dental CBCT.

Zhou Y, Xu Y, Khalil B, Nalley A, Tarce M

Sep 25 2025
Current dental CBCT segmentation tools often lack accuracy, accessibility, or comprehensive anatomical coverage. To address this, we constructed a densely annotated dental CBCT dataset and developed a deep learning model, OralSeg, for tooth-level instance segmentation, which is deployed as a one-click tool and made freely accessible for non-commercial use. We established a standardized annotated dataset covering 35 key oral anatomical structures and employed UNETR as the backbone network, combining the Swin Transformer and the spatial Mamba module for multi-scale residual feature fusion. The OralSeg model was designed and optimized for precise instance segmentation of dental CBCT images and integrated into the 3D Slicer platform, providing a graphical user interface for one-click segmentation. On CBCT instance segmentation, OralSeg achieved a Dice similarity coefficient of 0.8316 ± 0.0305, compared against SwinUNETR and 3D U-Net. The model significantly improves segmentation performance, especially in complex oral anatomical structures such as apical areas, alveolar bone margins, and mandibular nerve canals. The OralSeg model presented in this study provides an effective solution for instance segmentation of dental CBCT images. The tool allows clinical dentists and researchers with no AI background to perform one-click segmentation and may be applicable in various clinical and research contexts. OralSeg offers researchers and clinicians a user-friendly tool for tooth-level instance segmentation, which may assist in clinical diagnosis, educational training, and research, and contribute to the broader adoption of digital dentistry in precision medicine.
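
The reported Dice similarity coefficient is presumably aggregated over the 35 annotated structures; below is a plain-NumPy sketch of such a per-structure mean Dice, on synthetic label volumes (label IDs and shapes are placeholders).

```python
# Per-structure mean Dice over a multi-label segmentation, the kind of
# aggregate reported for OralSeg's 35 structures. Data here are synthetic.
import numpy as np

def mean_dice(pred: np.ndarray, gt: np.ndarray, labels) -> float:
    scores = []
    for lb in labels:
        p, g = pred == lb, gt == lb
        denom = p.sum() + g.sum()
        if denom == 0:                      # structure absent in both volumes
            continue
        scores.append(2.0 * np.logical_and(p, g).sum() / denom)
    return float(np.mean(scores))

pred = np.random.default_rng(1).integers(0, 36, size=(64, 64, 64))
gt   = np.random.default_rng(2).integers(0, 36, size=(64, 64, 64))
print(f"mean Dice over 35 structures: {mean_dice(pred, gt, range(1, 36)):.3f}")
```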

Generating Brain MRI with StyleGAN2-ADA: The Effect of the Training Set Size on the Quality of Synthetic Images.

Lai M, Mascalchi M, Tessa C, Diciotti S

Sep 23 2025
The potential of deep learning for medical imaging is often constrained by limited data availability. Generative models can unlock this potential by generating synthetic data that reproduces the statistical properties of real data while being more accessible for sharing. In this study, we investigated the influence of training set size on the performance of a state-of-the-art generative adversarial network, the StyleGAN2-ADA, trained on a cohort of 3,227 subjects from the OpenBHB dataset to generate 2D slices of brain MR images from healthy subjects. The quality of the synthetic images was assessed through qualitative evaluations and state-of-the-art quantitative metrics, which are provided in a publicly accessible repository. Our results demonstrate that StyleGAN2-ADA generates realistic and high-quality images, deceiving even expert radiologists while preserving privacy, as it did not memorize training images. Notably, increasing the training set size led to slight improvements in fidelity metrics. However, training set size had no noticeable impact on diversity metrics, highlighting the persistent limitation of mode collapse. Furthermore, we observed that diversity metrics, such as coverage and β-recall, are highly sensitive to the number of synthetic images used in their computation, leading to inflated values when synthetic data significantly outnumber real ones. These findings underscore the need to carefully interpret diversity metrics and the importance of employing complementary evaluation strategies for robust assessment. Overall, while StyleGAN2-ADA shows promise as a tool for generating privacy-preserving synthetic medical images, overcoming diversity limitations will require exploring alternative generative architectures or incorporating additional regularization techniques.
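
The caveat about coverage and β-recall is easy to reproduce: these manifold-based metrics depend on how many synthetic samples are scored against the real set. A small NumPy sketch of the coverage metric (Naeem et al., 2020) on random feature vectors shows the value rising with the synthetic sample count:

```python
# The "coverage" diversity metric on random vectors, illustrating the
# abstract's caveat: the score inflates as the number of synthetic samples
# grows relative to the real set. Data are random stand-ins for image features.
import numpy as np

def coverage(real: np.ndarray, fake: np.ndarray, k: int = 5) -> float:
    # Radius of each real point = distance to its k-th nearest real neighbor.
    d_rr = np.linalg.norm(real[:, None] - real[None, :], axis=-1)
    np.fill_diagonal(d_rr, np.inf)
    radii = np.sort(d_rr, axis=1)[:, k - 1]
    # A real point is "covered" if any fake sample lies inside its ball.
    d_rf = np.linalg.norm(real[:, None] - fake[None, :], axis=-1)
    return float((d_rf.min(axis=1) <= radii).mean())

rng = np.random.default_rng(0)
real = rng.normal(size=(200, 8))
for n_fake in (50, 200, 2000):              # more fakes -> higher coverage
    fake = rng.normal(size=(n_fake, 8))
    print(n_fake, round(coverage(real, fake), 3))
```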

Enhancing Instance Feature Representation: A Foundation Model-Based Multi-Instance Approach for Neonatal Retinal Screening.

Guo J, Wang K, Tan G, Li G, Zhang X, Chen J, Hu J, Liang Y, Jiang B

Sep 22 2025
Automated analysis of neonatal fundus images presents a uniquely intricate challenge in medical imaging. Existing methodologies predominantly focus on diagnosing abnormalities from individual images, often leading to inaccuracies due to the diverse and subtle nature of neonatal retinal features. Consequently, clinical standards frequently mandate the acquisition of retinal images from multiple angles to ensure the detection of minute lesions. To accommodate this, we propose leveraging multiple fundus images captured from various regions of the retina to comprehensively screen for a wide range of neonatal ocular pathologies. We employ Multiple Instance Learning (MIL) for this task, and introduce a simple yet effective learnable structure on the existing MIL method, called Learnable Dense to Global (LD2G-MIL). Different from other methods that focus on instance-to-bag feature aggregation, the proposed method focuses on generating better instance-level representations that are co-optimized with downstream MIL targets in a learnable way. Additionally, it incorporates a bag prior-based similarity loss (BP loss) mechanism, leveraging prior knowledge to enhance performance in neonatal retinal screening. To validate the efficacy of our LD2G-MIL method, we compiled the Neonatal Fundus Images (NFI) dataset, an extensive collection comprising 115,621 retinal images from 8,886 neonatal clinical episodes. Empirical evaluations on this dataset demonstrate that our approach consistently outperforms state-of-the-art (SOTA) generic and specialized methods. The code and trained models are publicly available at https://github.com/CVIU-CSU/LD2G-MIL.
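
As a point of reference for the instance-to-bag aggregation that the abstract contrasts itself with, here is a minimal attention-based MIL pooling module in PyTorch (in the style of Ilse et al., 2018). This is a generic baseline sketch, not the LD2G-MIL architecture, whose dense-to-global structure and BP loss are described only at a high level above.

```python
# Generic attention-based MIL pooling: weight each instance feature, pool to
# a bag embedding, classify the bag. Dimensions below are illustrative.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, bag):                        # bag: (n_instances, feat_dim)
        a = torch.softmax(self.attn(bag), dim=0)   # attention over instances
        z = (a * bag).sum(dim=0)                   # weighted bag embedding
        return self.head(z), a

bag = torch.randn(12, 512)   # e.g. foundation-model features of 12 fundus views
logits, attn = AttentionMIL()(bag)
print(logits.shape, attn.squeeze(-1).shape)        # (2,), (12,)
```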

Benign vs malignant tumors classification from tumor outlines in mammography scans using artificial intelligence techniques.

Beni HM, Asaei FY

Sep 21 2025
Breast cancer is one of the leading causes of cancer death among women. With early diagnosis of this condition, the probability of survival increases. For this purpose, medical imaging methods, especially mammography, are used for screening and early diagnosis of breast abnormalities. The main goal of this study is to distinguish benign from malignant tumors based on morphological features computed from tumor outlines extracted from mammography images. Unlike previous studies, this study does not use the mammographic image itself but only the exact outline of the tumor. These outlines were extracted from a new, publicly available mammography database published in 2024. Features of the outlines were calculated using well-known pre-trained Convolutional Neural Networks (CNNs), including VGG16, ResNet50, Xception65, AlexNet, DenseNet, GoogLeNet, Inception-v3, and a combination of them to improve performance. These pre-trained networks have been used in many studies across various fields. In the classification part, well-known Machine Learning (ML) algorithms, such as Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Neural Network (NN), Naïve Bayes (NB), Decision Tree (DT), and combinations of them, were compared on outcome measures, namely accuracy, specificity, sensitivity, and precision. Using data augmentation, the dataset size was increased about 6-8 times, and K-fold cross-validation (K = 5) was used in this study. Based on the performed simulations, combining the features from all pre-trained deep networks with the NB classifier yielded the best outcomes, with 88.13% accuracy, 92.52% specificity, 83.73% sensitivity, and 92.04% precision. Furthermore, validation on the DMID dataset using ResNet50 features with the NB classifier led to 92.03% accuracy, 95.57% specificity, 88.49% sensitivity, and 95.23% precision. This study sheds light on using AI algorithms to avoid biopsy tests and speed up breast cancer tumor classification using tumor outlines in mammographic images.
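
A hedged sketch of the validated pipeline's shape: ImageNet-pretrained ResNet50 features extracted from outline images, fed to a Naive Bayes classifier under 5-fold cross-validation. The inputs below are random tensors standing in for outline images, and the torchvision/scikit-learn components are assumptions about tooling, not the authors' exact implementation.

```python
# Pre-trained CNN features + Naive Bayes with 5-fold CV, on stand-in data.
# Note: constructing the backbone downloads ImageNet weights on first use.
import numpy as np
import torch
from torchvision.models import resnet50, ResNet50_Weights
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()          # keep the 2048-d penultimate features
backbone.eval()

with torch.no_grad():
    imgs = torch.rand(60, 3, 224, 224)     # stand-in for tumor-outline images
    feats = backbone(imgs).numpy()         # (60, 2048) feature matrix

y = np.random.default_rng(0).integers(0, 2, size=60)   # benign / malignant labels
acc = cross_val_score(GaussianNB(), feats, y, cv=5, scoring="accuracy")
print("5-fold accuracy:", acc.mean())
```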

Insertion of hepatic lesions into clinical photon-counting-detector CT projection data.

Gong H, Kharat S, Wellinghoff J, El Sadaney AO, Fletcher JG, Chang S, Yu L, Leng S, McCollough CH

Sep 19 2025
To facilitate task-driven image quality assessment of lesion detectability in clinical photon-counting-detector CT (PCD-CT), it is desired to have patient image data with known pathology and precise annotation. Standard patient case collection and reference standard establishment are time- and resource-intensive. To mitigate this challenge, we aimed to develop a projection-domain lesion insertion framework that efficiently creates realistic patient cases by digitally inserting real radiopathologic features into patient PCD-CT images. 
Approach. This framework used an artificial intelligence (AI)-assisted semi-automatic annotation to generate digital lesion models from real lesion images. The x-ray energy for commercial beam-hardening correction in the PCD-CT system was estimated and used for calculating multi-energy forward projections of these lesion models at different energy thresholds. Lesion projections were subsequently added to patient projections from PCD-CT exams. The modified projections were reconstructed to form realistic lesion-present patient images, using the CT manufacturer's offline reconstruction software. Image quality was qualitatively and quantitatively validated in phantom scans and patient cases with liver lesions, using visual inspection, CT number accuracy, the structural similarity index (SSIM), and radiomic feature analysis. Statistical tests were performed using the Wilcoxon signed-rank test.
Main results. No statistically significant discrepancy (p>0.05) of CT numbers was observed between original and re-inserted tissue- and contrast-media-mimicking rods and hepatic lesions (mean ± standard deviation): rods 0.4 ± 2.3 HU, lesions -1.8 ± 6.4 HU. The original and inserted lesions showed similar morphological features at original and re-inserted locations: mean ± standard deviation of SSIM 0.95 ± 0.02. Additionally, the corresponding radiomic features presented highly similar feature clusters with no statistically significant differences (p>0.05). 
Significance. The proposed framework can generate patient PCD-CT exams with realistic liver lesions using archived patient data and lesion images. It will facilitate systematic evaluation of PCD-CT systems and advanced reconstruction and post-processing algorithms with target pathological features.
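
A toy 2D analogue of the projection-domain insertion logic, using scikit-image's parallel-beam radon/iradon in place of the clinical PCD-CT forward model and the vendor's offline reconstruction:

```python
# Forward-project a digital lesion model, add it to "acquired" patient
# projections, and reconstruct. Purely illustrative of the pipeline's logic;
# the study used a multi-energy PCD-CT forward model, not parallel-beam radon.
import numpy as np
from skimage.transform import radon, iradon

theta = np.linspace(0.0, 180.0, 180, endpoint=False)

patient = np.zeros((128, 128)); patient[32:96, 32:96] = 1.0   # stand-in anatomy
lesion  = np.zeros((128, 128)); lesion[60:70, 60:70] = 0.5    # digital lesion model

sino_patient = radon(patient, theta=theta)   # stand-in for acquired projections
sino_lesion  = radon(lesion, theta=theta)    # forward-projected lesion

recon = iradon(sino_patient + sino_lesion, theta=theta, filter_name="ramp")
print(recon.shape)   # lesion-present image reconstructed from modified sinogram
```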

DLMUSE: Robust Brain Segmentation in Seconds Using Deep Learning.

Bashyam VM, Erus G, Cui Y, Wu D, Hwang G, Getka A, Singh A, Aidinis G, Baik K, Melhem R, Mamourian E, Doshi J, Davison A, Nasrallah IM, Davatzikos C

Sep 17 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To introduce an open-source deep learning brain segmentation model for fully automated brain MRI segmentation, enabling rapid segmentation and facilitating large-scale neuroimaging research. Materials and Methods In this retrospective study, a deep learning model was developed using a diverse training dataset of 1900 MRI scans (ages 24-93 with a mean of 65 years (SD: 11.5 years) and 1007 females and 893 males) with reference labels generated using a multiatlas segmentation method with human supervision. The final model was validated using 71391 scans from 14 studies. Segmentation quality was assessed using Dice similarity and Pearson correlation coefficients with reference segmentations. Downstream predictive performance for brain age and Alzheimer's disease was evaluated by fitting machine learning models. Statistical significance was assessed using Mann-Whittney U and McNemar's tests. Results The DLMUSE model achieved high correlation (r = 0.93-0.95) and agreement (median Dice scores = 0.84-0.89) with reference segmentations across the testing dataset. Prediction of brain age using DLMUSE features achieved a mean absolute error of 5.08 years, similar to that of the reference method (5.15 years, <i>P</i> = .56). Classification of Alzheimer's disease using DLMUSE features achieved an accuracy of 89% and F1-score of 0.80, which were comparable to values achieved by the reference method (89% and 0.79, respectively). DLMUSE segmentation speed was over 10000 times faster than that of the reference method (3.5 seconds vs 14 hours). Conclusion DLMUSE enabled rapid brain MRI segmentation, with performance comparable to that of state-of-theart methods across diverse datasets. The resulting open-source tools and user-friendly web interface can facilitate large-scale neuroimaging research and wide utilization of advanced segmentation methods. ©RSNA, 2025.

Prediction of cerebrospinal fluid intervention in fetal ventriculomegaly via AI-powered normative modelling.

Zhou M, Rajan SA, Nedelec P, Bayona JB, Glenn O, Gupta N, Gano D, George E, Rauschecker AM

Sep 16 2025
Fetal ventriculomegaly (VM) is common and largely benign when isolated. However, it can occasionally progress to hydrocephalus, a more severe condition associated with increased mortality and neurodevelopmental delay that may require surgical postnatal intervention. Accurate differentiation between VM and hydrocephalus is essential but remains challenging, relying on subjective assessment and limited two-dimensional measurements. Deep learning-based segmentation offers a promising solution for objective and reproducible volumetric analysis. This work presents an AI-powered method for segmentation, volume quantification, and classification of the ventricles in fetal brain MRI to predict the need for postnatal intervention. This retrospective study included 222 patients with singleton pregnancies. An nnU-Net was trained to segment the fetal ventricles on 20 manually segmented institutional fetal brain MRIs combined with 80 studies from a publicly available dataset. The validated model was then applied to 138 normal fetal brain MRIs to generate a normative reference range across gestational ages (18-36 weeks). Finally, it was applied to 64 fetal brains with VM (14 of which required postnatal intervention). ROC curves and AUCs for predicting VM and the need for postnatal intervention were calculated. The nnU-Net-predicted segmentations of the fetal ventricles in the reference dataset were high quality and accurate (median Dice score 0.96, IQR 0.93-0.99). A normative reference range of ventricular volumes across gestational ages was developed using the automated segmentation volumes. The optimal threshold for identifying VM was 2 standard deviations from normal, with a sensitivity of 92% and specificity of 93% (AUC 0.97, 95% CI 0.91-0.98). When normalized to intracranial volume, fetal ventricular volume was higher and subarachnoid volume lower among those who required postnatal intervention (p<0.001, p=0.003). The optimal threshold for identifying the need for postnatal intervention was 11 standard deviations from normal, with a sensitivity of 86% and specificity of 100% (AUC 0.97, 95% CI 0.86-1.00). This work introduces a deep learning-based method for fast and accurate quantification of ventricular volumes in fetal brain MRI. A normative reference standard derived using this method can predict VM and the need for postnatal CSF intervention. Increased ventricular volume is a strong predictor of the need for postnatal intervention. VM = ventriculomegaly, 2D = two-dimensional, 3D = three-dimensional, ROC = receiver operating characteristic, AUC = area under the curve.
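
The decision rule reduces to a normative z-score: fit the mean and spread of ventricular volume as a function of gestational age on normal scans, then flag new cases by their deviation. A minimal NumPy sketch with synthetic data and the 2-SD and 11-SD thresholds quoted above (the quadratic fit and the volume values are illustrative assumptions):

```python
# Normative modelling as a z-score rule: fit a normative curve on normal
# scans, score new cases by standard deviations from it. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
ga = rng.uniform(18, 36, size=138)                     # gestational age, weeks
vol = 2.0 + 0.4 * ga + rng.normal(0, 1.0, size=138)    # normative volumes (mL)

coef = np.polyfit(ga, vol, deg=2)                      # normative mean curve
resid_sd = np.std(vol - np.polyval(coef, ga))          # spread around the curve

def z_score(ga_new, vol_new):
    return (vol_new - np.polyval(coef, ga_new)) / resid_sd

print("flag VM:", z_score(28.0, 16.0) > 2)             # >2 SD -> ventriculomegaly
print("flag intervention:", z_score(28.0, 60.0) > 11)  # >11 SD -> intervention
```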