Page 5 of 32317 results

Enhancing Instance Feature Representation: A Foundation Model-Based Multi-Instance Approach for Neonatal Retinal Screening.

Guo J, Wang K, Tan G, Li G, Zhang X, Chen J, Hu J, Liang Y, Jiang B

PubMed · Sep 22 2025
Automated analysis of neonatal fundus images presents a uniquely intricate challenge in medical imaging. Existing methodologies predominantly focus on diagnosing abnormalities from individual images, often leading to inaccuracies due to the diverse and subtle nature of neonatal retinal features. Consequently, clinical standards frequently mandate the acquisition of retinal images from multiple angles to ensure the detection of minute lesions. To accommodate this, we propose leveraging multiple fundus images captured from various regions of the retina to comprehensively screen for a wide range of neonatal ocular pathologies. We employ Multiple Instance Learning (MIL) for this task, and introduce a simple yet effective learnable structure on top of the existing MIL method, called Learnable Dense to Global (LD2G-MIL). Unlike other methods that focus on instance-to-bag feature aggregation, the proposed method focuses on generating better instance-level representations that are co-optimized with downstream MIL targets in a learnable way. Additionally, it incorporates a bag prior-based similarity loss (BP loss) mechanism, leveraging prior knowledge to enhance performance in neonatal retinal screening. To validate the efficacy of our LD2G-MIL method, we compiled the Neonatal Fundus Images (NFI) dataset, an extensive collection comprising 115,621 retinal images from 8,886 neonatal clinical episodes. Empirical evaluations on this dataset demonstrate that our approach consistently outperforms state-of-the-art (SOTA) generic and specialized methods. The code and trained models are publicly available at https://github.com/CVIU-CSU/LD2G-MIL.
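As background for the MIL setup above: a common instance-to-bag aggregation baseline (the step LD2G-MIL deliberately looks beyond) is attention-based MIL pooling, where a small network scores each instance and the bag embedding is the attention-weighted sum of instance features. A minimal NumPy sketch; the weights `v`, `w` and all dimensions are illustrative, not the paper's:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instances, v, w):
    """instances: (N, D) features for one bag (e.g. N fundus images
    from one clinical episode). Returns a (D,) bag embedding."""
    scores = np.tanh(instances @ v) @ w   # (N,) one attention logit per image
    alpha = softmax(scores)               # weights over instances, sum to 1
    return alpha @ instances, alpha

rng = np.random.default_rng(0)
bag = rng.normal(size=(5, 8))             # 5 instances, 8-dim features each
v, w = rng.normal(size=(8, 4)), rng.normal(size=4)
embedding, alpha = attention_mil_pool(bag, v, w)
assert embedding.shape == (8,) and abs(alpha.sum() - 1.0) < 1e-9
```

LD2G-MIL's point is that the instance features themselves, not only this pooling step, should be optimized jointly with the bag-level objective.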

Training the next generation of physicians for artificial intelligence-assisted clinical neuroradiology: ASNR MICCAI Brain Tumor Segmentation (BraTS) 2025 Lighthouse Challenge education platform

Raisa Amiruddin, Nikolay Y. Yordanov, Nazanin Maleki, Pascal Fehringer, Athanasios Gkampenis, Anastasia Janas, Kiril Krantchev, Ahmed Moawad, Fabian Umeh, Salma Abosabie, Sara Abosabie, Albara Alotaibi, Mohamed Ghonim, Mohanad Ghonim, Sedra Abou Ali Mhana, Nathan Page, Marko Jakovljevic, Yasaman Sharifi, Prisha Bhatia, Amirreza Manteghinejad, Melisa Guelen, Michael Veronesi, Virginia Hill, Tiffany So, Mark Krycia, Bojan Petrovic, Fatima Memon, Justin Cramer, Elizabeth Schrickel, Vilma Kosovic, Lorenna Vidal, Gerard Thompson, Ichiro Ikuta, Basimah Albalooshy, Ali Nabavizadeh, Nourel Hoda Tahon, Karuna Shekdar, Aashim Bhatia, Claudia Kirsch, Gennaro D'Anna, Philipp Lohmann, Amal Saleh Nour, Andriy Myronenko, Adam Goldman-Yassen, Janet R. Reid, Sanjay Aneja, Spyridon Bakas, Mariam Aboian

arXiv preprint · Sep 21 2025
High-quality reference standard image data creation by neuroradiology experts for automated clinical tools can be a powerful tool for neuroradiology and artificial intelligence education. We developed a multimodal educational approach for students and trainees during the MICCAI Brain Tumor Segmentation Lighthouse Challenge 2025, a landmark initiative to develop accurate brain tumor segmentation algorithms. Fifty-six medical students and radiology trainees volunteered to annotate brain tumor MR images for the BraTS challenges of 2023 and 2024, guided by faculty-led didactics on neuropathology MRI. Among the 56 annotators, 14 select volunteers were then paired with neuroradiology faculty for guided one-on-one annotation sessions for BraTS 2025. Lectures on neuroanatomy, pathology, and AI, journal clubs, and data scientist-led workshops were organized online. Annotators and audience members completed surveys on their perceived knowledge before and after annotations and lectures, respectively. Fourteen coordinators, each paired with a neuroradiologist, completed the data annotation process, averaging 1322.9 ± 760.7 hours per dataset per pair and 1200 segmentations in total. On a scale of 1-10, annotation coordinators reported a significant increase in familiarity with image segmentation software pre- and post-annotation, moving from an initial average of 6 ± 2.9 to a final average of 8.9 ± 1.1, and a significant increase in familiarity with brain tumor features pre- and post-annotation, moving from an initial average of 6.2 ± 2.4 to a final average of 8.1 ± 1.2. We demonstrate an innovative offering for providing neuroradiology and AI education through an image segmentation challenge to enhance understanding of algorithm development, reinforce the concept of a data reference standard, and diversify opportunities for AI-driven image analysis among future physicians.

Benign vs malignant tumors classification from tumor outlines in mammography scans using artificial intelligence techniques.

Beni HM, Asaei FY

PubMed · Sep 21 2025
Breast cancer is one of the leading causes of cancer death among women. With early diagnosis, the probability of survival increases. For this purpose, medical imaging methods, especially mammography, are used for screening and early diagnosis of breast abnormalities. The main goal of this study is to distinguish benign from malignant tumors based on morphological features extracted from tumor outlines in mammography images. Unlike previous studies, this study does not use the mammographic image itself but only the exact outline of the tumor. These outlines were extracted from a new, publicly available mammography database published in 2024. Features of these outlines were computed using well-known pre-trained Convolutional Neural Networks (CNNs), including VGG16, ResNet50, Xception65, AlexNet, DenseNet, GoogLeNet, and Inception-v3, as well as combinations of them to improve performance. These pre-trained networks have been used in many studies across various fields. In the classification stage, well-known Machine Learning (ML) algorithms, such as Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Neural Network (NN), Naïve Bayes (NB), and Decision Tree (DT), and combinations of them, were compared on the outcome measures of accuracy, specificity, sensitivity, and precision. With data augmentation, the dataset size was increased roughly 6-8-fold, and K-fold cross-validation (K = 5) was used. Based on the simulations performed, combining the features from all pre-trained deep networks with the NB classifier yielded the best outcomes: 88.13% accuracy, 92.52% specificity, 83.73% sensitivity, and 92.04% precision. Furthermore, validation on the DMID dataset using ResNet50 features with the NB classifier led to 92.03% accuracy, 95.57% specificity, 88.49% sensitivity, and 95.23% precision.
This study sheds light on using AI algorithms to avoid unnecessary biopsies and to speed up breast cancer tumor classification using tumor outlines in mammographic images.
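The four outcome measures reported above follow directly from binary confusion counts; a minimal pure-Python sketch (the function name and toy labels are illustrative, not from the paper):

```python
def binary_metrics(y_true, y_pred):
    # confusion counts for a benign (0) vs malignant (1) classifier
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),  # malignant cases correctly flagged
        "specificity": tn / (tn + fp),  # benign cases correctly cleared
        "precision": tp / (tp + fp),
    }

m = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
assert abs(m["accuracy"] - 4 / 6) < 1e-9
```

Under 5-fold cross-validation, these counts would be accumulated per held-out fold and the resulting metrics averaged across folds.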

Learning Scan-Adaptive MRI Undersampling Patterns with Pre-Optimized Mask Supervision

Aryan Dhar, Siddhant Gautam, Saiprasad Ravishankar

arXiv preprint · Sep 21 2025
Deep learning techniques have gained considerable attention for their ability to accelerate MRI data acquisition while maintaining scan quality. In this work, we present a convolutional neural network (CNN)-based framework for learning undersampling patterns directly from multi-coil MRI data. Unlike prior approaches that rely on in-training mask optimization, our method is trained with precomputed scan-adaptive optimized masks as supervised labels, enabling efficient and robust scan-specific sampling. The training procedure alternates between optimizing a reconstructor and a data-driven sampling network, which generates scan-specific sampling patterns from observed low-frequency $k$-space data. Experiments on the fastMRI multi-coil knee dataset demonstrate significant improvements in sampling efficiency and image reconstruction quality, providing a robust framework for enhancing MRI acquisition through deep learning.
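The operation such a sampling network ultimately controls is simple to state: retain a subset of k-space lines and reconstruct from the zero-filled data. A NumPy sketch with a generic center-weighted Cartesian mask (illustrative only; the paper's masks are learned and scan-adaptive, and real data are multi-coil):

```python
import numpy as np

def undersample_kspace(image, mask):
    # forward FFT to k-space, keep only sampled lines, zero-filled inverse
    k = np.fft.fftshift(np.fft.fft2(image))
    recon = np.fft.ifft2(np.fft.ifftshift(k * mask))
    return np.abs(recon)

def center_weighted_mask(n, keep_center=8, accel=4, seed=0):
    # always keep central low-frequency rows; sample the rest at rate 1/accel
    rng = np.random.default_rng(seed)
    rows = set(range(n // 2 - keep_center // 2, n // 2 + keep_center // 2))
    rows.update(rng.choice(n, size=n // accel, replace=False).tolist())
    mask = np.zeros((n, n))
    mask[sorted(rows), :] = 1.0
    return mask

img = np.random.default_rng(1).random((32, 32))
mask = center_weighted_mask(32)
# full sampling is lossless; undersampling introduces aliasing the
# reconstructor must remove
assert np.allclose(undersample_kspace(img, np.ones((32, 32))), img)
```

Always keeping the low-frequency center mirrors the paper's setup, where the sampling network conditions on observed low-frequency k-space data.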

Echo-Path: Pathology-Conditioned Echo Video Generation

Kabir Hamzah Muhammad, Marawan Elbatel, Yi Qin, Xiaomeng Li

arXiv preprint · Sep 21 2025
Cardiovascular diseases (CVDs) remain the leading cause of mortality globally, and echocardiography is critical for diagnosis of both common and congenital cardiac conditions. However, echocardiographic data for certain pathologies are scarce, hindering the development of robust automated diagnosis models. In this work, we propose Echo-Path, a novel generative framework to produce echocardiogram videos conditioned on specific cardiac pathologies. Echo-Path can synthesize realistic ultrasound video sequences that exhibit targeted abnormalities, focusing here on atrial septal defect (ASD) and pulmonary arterial hypertension (PAH). Our approach introduces a pathology-conditioning mechanism into a state-of-the-art echo video generator, allowing the model to learn and control disease-specific structural and motion patterns in the heart. Quantitative evaluation demonstrates that the synthetic videos achieve low distribution distances, indicating high visual fidelity. Clinically, the generated echoes exhibit plausible pathology markers. Furthermore, classifiers trained on our synthetic data generalize well to real data, and augmenting real training sets with synthetic data improves downstream diagnosis of ASD and PAH by 7% and 8%, respectively. Code, weights, and dataset are available at https://github.com/Marshall-mk/EchoPathv1

Insertion of hepatic lesions into clinical photon-counting-detector CT projection data.

Gong H, Kharat S, Wellinghoff J, El Sadaney AO, Fletcher JG, Chang S, Yu L, Leng S, McCollough CH

PubMed · Sep 19 2025
To facilitate task-driven image quality assessment of lesion detectability in clinical photon-counting-detector CT (PCD-CT), it is desired to have patient image data with known pathology and precise annotation. Standard patient case collection and reference standard establishment are time- and resource-intensive. To mitigate this challenge, we aimed to develop a projection-domain lesion insertion framework that efficiently creates realistic patient cases by digitally inserting real radiopathologic features into patient PCD-CT images. 
Approach. This framework used artificial intelligence (AI)-assisted semi-automatic annotation to generate digital lesion models from real lesion images. The x-ray energy for the commercial beam-hardening correction in the PCD-CT system was estimated and used to calculate multi-energy forward projections of these lesion models at different energy thresholds. Lesion projections were subsequently added to patient projections from PCD-CT exams. The modified projections were reconstructed to form realistic lesion-present patient images, using the CT manufacturer's offline reconstruction software. Image quality was qualitatively and quantitatively validated in phantom scans and patient cases with liver lesions, using visual inspection, CT number accuracy, structural similarity index (SSIM), and radiomic feature analysis. Statistical tests were performed using the Wilcoxon signed rank test.
Main results. No statistically significant discrepancy (p>0.05) of CT numbers was observed between original and re-inserted tissue- and contrast-media-mimicking rods and hepatic lesions (mean ± standard deviation): rods 0.4 ± 2.3 HU, lesions -1.8 ± 6.4 HU. The original and inserted lesions showed similar morphological features at original and re-inserted locations: mean ± standard deviation of SSIM 0.95 ± 0.02. Additionally, the corresponding radiomic features presented highly similar feature clusters with no statistically significant differences (p>0.05). 
Significance. The proposed framework can generate patient PCD-CT exams with realistic liver lesions using archived patient data and lesion images. It will facilitate systematic evaluation of PCD-CT systems and advanced reconstruction and post-processing algorithms with target pathological features.
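The property that makes projection-domain insertion valid is the linearity of line integrals: forward-projecting the lesion model alone and adding it to the patient's measured projections is equivalent to projecting a lesion-present patient. A toy NumPy projector illustrates the principle (rotations restricted to 90° steps to stay dependency-free; a real PCD-CT forward projection is polyenergetic, which is why the framework estimates the beam-hardening-correction energy):

```python
import numpy as np

def forward_project(img, n_angles=4):
    # toy parallel-beam projector: line integrals as column sums of the
    # image rotated in 90-degree steps
    return np.stack([np.rot90(img, k).sum(axis=0) for k in range(n_angles)])

rng = np.random.default_rng(1)
patient = rng.random((8, 8))   # stand-in attenuation map of a patient slice
lesion = np.zeros((8, 8))
lesion[3:5, 3:5] = 0.7         # small hypothetical lesion model

# projection-domain insertion: lesion projections added to patient projections
sino = forward_project(patient) + forward_project(lesion)

# equals projecting the lesion-present image directly, by linearity
assert np.allclose(sino, forward_project(patient + lesion))
```

Reconstructing `sino` with the scanner's own pipeline is what lets the inserted lesion inherit realistic noise and resolution properties.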

Toward Medical Deepfake Detection: A Comprehensive Dataset and Novel Method

Shuaibo Li, Zhaohu Xing, Hongqiu Wang, Pengfei Hao, Xingyu Li, Zekai Liu, Lei Zhu

arXiv preprint · Sep 19 2025
The rapid advancement of generative AI in medical imaging has introduced both significant opportunities and serious challenges, especially the risk that fake medical images could undermine healthcare systems. These synthetic images pose serious risks, such as diagnostic deception, financial fraud, and misinformation. However, research on medical forensics to counter these threats remains limited, and there is a critical lack of comprehensive datasets specifically tailored for this field. Additionally, existing media forensic methods, which are primarily designed for natural or facial images, are inadequate for capturing the distinct characteristics and subtle artifacts of AI-generated medical images. To tackle these challenges, we introduce MedForensics, a large-scale medical forensics dataset encompassing six medical modalities and twelve state-of-the-art medical generative models. We also propose DSKI, a novel Dual-Stage Knowledge Infusing detector that constructs a vision-language feature space tailored for the detection of AI-generated medical images. DSKI comprises two core components: 1) a cross-domain fine-trace adapter (CDFA) for extracting subtle forgery clues from both spatial and noise domains during training, and 2) a medical forensic retrieval module (MFRM) that boosts detection accuracy through few-shot retrieval during testing. Experimental results demonstrate that DSKI significantly outperforms both existing methods and human experts, achieving superior accuracy across multiple medical modalities.
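The few-shot retrieval idea behind a test-time module like MFRM can be illustrated as nearest-neighbour voting in an embedding space: compare a query image's feature vector against a small labeled support set and take a cosine-similarity majority vote. A pure-Python sketch (an illustrative stand-in, not the paper's actual MFRM):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def few_shot_label(query, support, k=3):
    # support: list of (feature_vector, label) pairs, label 1 = AI-generated
    ranked = sorted(support, key=lambda s: cosine(query, s[0]), reverse=True)
    votes = sum(lbl for _, lbl in ranked[:k])
    return int(2 * votes > k)  # majority vote among k nearest exemplars

support = [([1.0, 0.0], 1), ([0.9, 0.1], 1), ([0.0, 1.0], 0), ([0.1, 0.9], 0)]
assert few_shot_label([0.95, 0.05], support) == 1  # near the "fake" cluster
```

The appeal of retrieval at test time is that new generators can be covered by adding a few labeled exemplars, without retraining the detector.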

DLMUSE: Robust Brain Segmentation in Seconds Using Deep Learning.

Bashyam VM, Erus G, Cui Y, Wu D, Hwang G, Getka A, Singh A, Aidinis G, Baik K, Melhem R, Mamourian E, Doshi J, Davison A, Nasrallah IM, Davatzikos C

PubMed · Sep 17 2025
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To introduce an open-source deep learning brain segmentation model for fully automated brain MRI segmentation, enabling rapid segmentation and facilitating large-scale neuroimaging research. Materials and Methods In this retrospective study, a deep learning model was developed using a diverse training dataset of 1900 MRI scans (ages 24-93 years; mean age, 65 years; SD, 11.5 years; 1007 female, 893 male) with reference labels generated using a multiatlas segmentation method with human supervision. The final model was validated using 71,391 scans from 14 studies. Segmentation quality was assessed using Dice similarity and Pearson correlation coefficients with reference segmentations. Downstream predictive performance for brain age and Alzheimer's disease was evaluated by fitting machine learning models. Statistical significance was assessed using Mann-Whitney U and McNemar's tests. Results The DLMUSE model achieved high correlation (r = 0.93-0.95) and agreement (median Dice scores = 0.84-0.89) with reference segmentations across the testing dataset. Prediction of brain age using DLMUSE features achieved a mean absolute error of 5.08 years, similar to that of the reference method (5.15 years, P = .56). Classification of Alzheimer's disease using DLMUSE features achieved an accuracy of 89% and F1-score of 0.80, which were comparable to values achieved by the reference method (89% and 0.79, respectively). DLMUSE segmentation speed was over 10,000 times faster than that of the reference method (3.5 seconds vs 14 hours).
Conclusion DLMUSE enabled rapid brain MRI segmentation, with performance comparable to that of state-of-the-art methods across diverse datasets. The resulting open-source tools and user-friendly web interface can facilitate large-scale neuroimaging research and wide utilization of advanced segmentation methods. ©RSNA, 2025.
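The Dice similarity coefficient used to validate the segmentations is twice the overlap divided by the total foreground of the two masks. A minimal pure-Python version for flattened binary masks (illustrative; production code would operate on 3D label volumes, one score per region):

```python
def dice(a, b):
    """Dice coefficient between two equally sized binary masks,
    given as flat sequences of 0/1."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0  # two empty masks agree fully

assert dice([1, 1, 0, 0], [1, 0, 0, 0]) == 2 / 3
assert dice([1, 0, 1], [1, 0, 1]) == 1.0
```

Dice ranges from 0 (no overlap) to 1 (identical masks), so the reported medians of 0.84-0.89 indicate strong agreement with the reference segmentations.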

Prediction of cerebrospinal fluid intervention in fetal ventriculomegaly via AI-powered normative modelling.

Zhou M, Rajan SA, Nedelec P, Bayona JB, Glenn O, Gupta N, Gano D, George E, Rauschecker AM

PubMed · Sep 16 2025
Fetal ventriculomegaly (VM) is common and largely benign when isolated. However, it can occasionally progress to hydrocephalus, a more severe condition associated with increased mortality and neurodevelopmental delay that may require surgical postnatal intervention. Accurate differentiation between VM and hydrocephalus is essential but remains challenging, relying on subjective assessment and limited two-dimensional measurements. Deep learning-based segmentation offers a promising solution for objective and reproducible volumetric analysis. This work presents an AI-powered method for segmentation, volume quantification, and classification of the ventricles in fetal brain MRI to predict the need for postnatal intervention. This retrospective study included 222 patients with singleton pregnancies. An nnUNet was trained to segment the fetal ventricles on 20 manually segmented institutional fetal brain MRIs combined with 80 studies from a publicly available dataset. The validated model was then applied to 138 normal fetal brain MRIs to generate a normative reference range across gestational ages (18-36 weeks). Finally, it was applied to 64 fetal brains with VM (14 of which required postnatal intervention). ROC curves and AUCs to predict VM and the need for postnatal intervention were calculated. The nnUNet segmentations of the fetal ventricles in the reference dataset were accurate and of high quality (median Dice score 0.96, IQR 0.93-0.99). A normative reference range of ventricular volumes across gestational ages was developed using the automated segmentation volumes. The optimal threshold for identifying VM was 2 standard deviations from normal, with sensitivity of 92% and specificity of 93% (AUC 0.97, 95% CI 0.91-0.98). When normalized to intracranial volume, fetal ventricular volume was higher and subarachnoid volume lower among those who required postnatal intervention (p<0.001, p=0.003).
The optimal threshold for identifying the need for postnatal intervention was 11 standard deviations from normal, with sensitivity of 86% and specificity of 100% (AUC 0.97, 95% CI 0.86-1.00). This work introduces a deep learning-based method for fast and accurate quantification of ventricular volumes in fetal brain MRI. A normative reference standard derived using this method can predict VM and the need for postnatal CSF intervention. Increased ventricular volume is a strong predictor of postnatal intervention. VM = ventriculomegaly, 2D = two-dimensional, 3D = three-dimensional, ROC = receiver operating characteristics, AUC = area under curve.
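The normative-modelling recipe above — fit the volume-vs-gestational-age trend in normal fetuses, express a new case as a Z-score of its residual, and flag cases beyond a standard-deviation threshold — can be sketched with an ordinary least-squares line (a simplified, homoscedastic stand-in for the study's model; all numbers are synthetic):

```python
import statistics as st

def fit_normative(ages, volumes):
    # least-squares line volume ≈ a + b·age, plus the residual SD
    mx, my = st.mean(ages), st.mean(volumes)
    b = sum((x - mx) * (y - my) for x, y in zip(ages, volumes)) \
        / sum((x - mx) ** 2 for x in ages)
    a = my - b * mx
    sd = st.stdev([y - (a + b * x) for x, y in zip(ages, volumes)])
    return a, b, sd

def z_score(age, volume, a, b, sd):
    return (volume - (a + b * age)) / sd

# synthetic normal cohort: ventricular volume grows with gestational age
ages = [20, 24, 28, 32, 36]
vols = [41, 47, 57, 63, 72]
a, b, sd = fit_normative(ages, vols)

# flag ventriculomegaly at |Z| > 2, per the study's optimal VM threshold
assert z_score(28, 70, a, b, sd) > 2          # grossly enlarged: flagged
assert abs(z_score(28, 56.5, a, b, sd)) < 2   # near the trend: not flagged
```

The same Z-scores, with the study's much higher 11-SD cutoff, separate cases likely to need postnatal CSF intervention from uncomplicated VM.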

Normative Modelling of Brain Volume for Diagnostic and Prognostic Stratification in Multiple Sclerosis

Korbmacher, M., Lie, I. A., Wesnes, K., Westman, E., Espeseth, T., Andreassen, O., Westlye, L., Wergeland, S., Harbo, H. F., Nygaard, G. O., Myhr, K.-M., Hogestol, E. A., Torkildsen, O.

medRxiv preprint · Sep 15 2025
Background: Brain atrophy is a hallmark of multiple sclerosis (MS). For clinical translatability and individual-level predictions, brain atrophy needs to be put into the context of the broader population, using reference or normative models. Methods: Reference models of MRI-derived brain volumes were established from a large healthy control (HC) multi-cohort dataset (N=63,115, 51% females). The reference models were applied to two independent MS cohorts (N=362, T1w scans=953, follow-up time up to 12 years) to assess deviations from the reference, defined as Z-values. We assessed the overlap of deviation profiles and their stability over time using individual-level transitions towards or out of significant reference-deviation states (|Z|>1.96). A negative binomial model was used for case-control comparisons of the number of extreme deviations. Linear models were used to assess differences in Z-score deviations between MS and propensity-matched HCs, and associations with clinical scores at baseline and over time. The normative BrainReference models, scripts, and usage instructions are freely available. Findings: We identified a temporally stable brain morphometric phenotype of MS. The right and left thalami most consistently showed significantly lower-than-reference volumes in MS (25% and 26% overlap across the sample). The number of such extreme smaller-than-reference values was 2.70 times higher in MS than in HC (4.51 versus 1.67). Additional deviations were associated with stronger disability (Expanded Disability Status Scale: β=0.22, 95% CI 0.12 to 0.32), Paced Auditory Serial Addition Test score (β=-0.27, 95% CI -0.52 to -0.02), and Fatigue Severity Score (β=0.29, 95% CI 0.05 to 0.53) at baseline, and over time with EDSS (β=0.07, 95% CI 0.02 to 0.13).
We additionally provide detailed maps of reference deviations and their associations with clinical assessments. Interpretation: We present a heterogeneous brain phenotype of MS which is associated with clinical manifestations, particularly implicating the thalamus. The findings offer potential to aid diagnosis and prognosis of MS. Funding: Norwegian MS-union; Research Council of Norway (#223273; #324252); the South-Eastern Norway Regional Health Authority (#2022080); and the European Union's Horizon 2020 Research and Innovation Programme (#847776, #802998). Research in context. Evidence before this study: Reference values and normative models have yet to be widely applied to neuroimaging assessments of neurological disorders such as multiple sclerosis (MS). We conducted a literature search in PubMed and Embase (Jan 1, 2000-September 12, 2025) using the terms "MRI" AND "multiple sclerosis", with and without the keywords "normative model*" and "atrophy", without language restrictions. While normative models have been applied in psychiatric and developmental disorders, few studies have addressed their use in neurological conditions. Existing MS research has largely focused on global atrophy and has not provided regional reference charts or established links to clinical and cognitive outcomes. Added value of this study: We provide regionally detailed brain morphometry maps derived from a heterogeneous MS cohort spanning wide ranges of age, sex, clinical phenotype, disease duration, disability, and scanner characteristics. By leveraging normative modelling, our approach enables individualised brain phenotyping of MS in relation to a population-based normative sample. The analyses reveal clinically meaningful and spatially consistent patterns of smaller brain volumes, particularly in the thalamus and frontal cortical regions, which are linked to disability, cognitive impairment, and fatigue.
Robustness across scanners, centres, and longitudinal follow-up supports the stability and generalisability of these findings to real-world MS populations. Implications of all the available evidence: Normative modelling offers an individualised, sensitive, and interpretable approach to quantifying brain structure in MS by providing individual-specific reference values, supporting earlier detection of neurodegeneration and improved patient stratification. A consistent pattern of thalamic and fronto-parietal deviations defines a distinct morphometric profile of MS, with potential utility for early and personalised diagnosis and disease monitoring in clinical practice and clinical trials.
