
A 3D deep learning model based on MRI for predicting lymphovascular invasion in rectal cancer.

Wang T, Chen C, Liu C, Li S, Wang P, Yin D, Liu Y

PubMed · May 20, 2025
The assessment of lymphovascular invasion (LVI) is crucial in the management of rectal cancer; however, accurately evaluating LVI preoperatively with imaging remains challenging. Recent advances in radiomics have created opportunities for developing more accurate diagnostic tools. This study aimed to develop and validate a deep learning model for predicting LVI in rectal cancer patients using preoperative MR imaging. The 334 included cases were randomly divided into a training cohort (n = 233) and a validation cohort (n = 101) at a ratio of 7:3. Based on the pathological reports, patients were classified into LVI-positive and LVI-negative groups. On preoperative axial T2WI images, regions of interest (ROIs) were defined as the tumor itself and as the tumor extended outward by margins of 5, 10, 15, and 20 pixels. 2D and 3D deep learning features were extracted using the DenseNet121 architecture, and ten deep learning models were constructed: GTV (the tumor itself), GPTV5 (the tumor plus a 5-pixel margin), GPTV10, GPTV15, and GPTV20, each in 2D and 3D. Model performance was assessed using the area under the curve (AUC), and the DeLong test was used to compare models and identify the optimal model for predicting LVI in rectal cancer. Among the 2D models, 2D GPTV10 performed best, with an AUC of 0.891 (95% confidence interval [CI] 0.850-0.933) in the training cohort and 0.841 (95% CI 0.767-0.915) in the validation cohort; the differences in AUC between this model and the other 2D models were not statistically significant by the DeLong test (p > 0.05). Among the 3D models, 3D GPTV10 had the highest AUC, at 0.961 (95% CI 0.940-0.982) in the training cohort and 0.928 (95% CI 0.881-0.976) in the validation cohort. The DeLong test showed that the 3D GPTV10 model outperformed the other 3D models as well as the 2D GPTV10 model (p < 0.05). The study therefore developed a deep learning model, 3D GPTV10, that uses preoperative MRI to accurately predict LVI in rectal cancer patients. By training on a region of interest comprising the tumor and a surrounding 10-pixel margin, this model outperformed the other deep learning models. These findings have significant implications for clinicians formulating personalized treatment plans for rectal cancer patients.
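As a rough sketch of the GPTV-style ROI construction and 2D DenseNet121 feature extraction described above (the preprocessing choices, function names, and input conventions are assumptions, not the authors' code):

```python
# Illustrative sketch: build a GPTV-style ROI by dilating a binary tumor mask
# outward by N pixels, then pool DenseNet121 features over the masked T2WI slice.
import numpy as np
import torch
from scipy.ndimage import binary_dilation
from torchvision.models import densenet121

def gptv_roi(tumor_mask: np.ndarray, margin_px: int = 10) -> np.ndarray:
    """Tumor plus an outward margin of `margin_px` pixels (GPTV10 when margin_px=10)."""
    return binary_dilation(tumor_mask, iterations=margin_px)

def extract_features(t2_slice: np.ndarray, roi: np.ndarray) -> torch.Tensor:
    """Mask the slice to the ROI and return pooled DenseNet121 features (1 x 1024)."""
    masked = np.where(roi, t2_slice, 0).astype(np.float32)
    x = torch.from_numpy(masked)[None, None]              # 1 x 1 x H x W
    x = x.repeat(1, 3, 1, 1)                              # backbone expects 3 channels
    x = torch.nn.functional.interpolate(x, size=(224, 224), mode="bilinear")
    backbone = densenet121(weights="IMAGENET1K_V1").features.eval()
    with torch.no_grad():
        fmap = backbone(x)                                 # 1 x 1024 x 7 x 7
    return torch.nn.functional.adaptive_avg_pool2d(fmap, 1).flatten(1)
```

A 3D variant of the same idea would replace the 2D dilation and backbone with their volumetric counterparts.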

A multi-modal model integrating MRI habitat and clinicopathology to predict platinum sensitivity in patients with high-grade serous ovarian cancer: a diagnostic study.

Bi Q, Ai C, Meng Q, Wang Q, Li H, Zhou A, Shi W, Lei Y, Wu Y, Song Y, Xiao Z, Li H, Qiang J

PubMed · May 20, 2025
Platinum resistance in high-grade serous ovarian cancer (HGSOC) currently cannot be recognized by specific molecular biomarkers. We aimed to compare the predictive capacity of various models integrating MRI habitat, whole slide images (WSIs), and clinical parameters for predicting platinum sensitivity in HGSOC patients. A retrospective study involving 998 eligible patients from four hospitals was conducted. MRI habitats were clustered using the K-means algorithm on multi-parametric MRI. Following feature extraction and selection, a Habitat model was developed. A Vision Transformer (ViT) and multi-instance learning were trained to derive patch-level and WSI-level predictions on hematoxylin and eosin (H&E)-stained WSIs, respectively, forming a Pathology model. Logistic regression (LR) was used to create a Clinic model. A multi-modal model integrating Clinic, Habitat, and Pathology (CHP) was constructed using Multi-Head Attention (MHA) and compared with the unimodal models and Ensemble multi-modal models. The area under the curve (AUC) and integrated discrimination improvement (IDI) value were used to assess model performance and gains. In the internal validation cohort and the external test cohort, the Habitat model showed the highest AUCs (0.722 and 0.685) compared with the Clinic model (0.683 and 0.681) and the Pathology model (0.533 and 0.565), respectively. The AUCs of the MHA-based multi-modal model integrating CHP (0.789 and 0.807) were higher than those of all unimodal models and Ensemble multi-modal models, with positive IDI values. MRI-based habitat imaging showed potential to predict platinum sensitivity in HGSOC patients, and multi-modal integration of CHP based on MHA helped improve prediction performance.
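A minimal sketch of the habitat-clustering step (sequence names, normalization, and the number of clusters are assumptions; the study's actual pipeline may differ):

```python
# Cluster tumor voxels from co-registered multi-parametric MRI into habitats with K-means.
import numpy as np
from sklearn.cluster import KMeans

def cluster_habitats(volumes: list[np.ndarray], tumor_mask: np.ndarray, k: int = 3) -> np.ndarray:
    """`volumes` are co-registered sequences (e.g. T2WI, DWI, DCE); returns a habitat label map."""
    voxels = np.stack([v[tumor_mask > 0] for v in volumes], axis=1)   # N_voxels x N_sequences
    voxels = (voxels - voxels.mean(0)) / (voxels.std(0) + 1e-8)       # per-sequence z-score
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(voxels)
    habitat_map = np.zeros(tumor_mask.shape, dtype=np.int16)
    habitat_map[tumor_mask > 0] = labels + 1                          # 0 = background
    return habitat_map
```

Radiomic features would then be extracted per habitat before building the Habitat model.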

Deep-Learning Reconstruction for 7T MP2RAGE and SPACE MRI: Improving Image Quality at High Acceleration Factors.

Liu Z, Patel V, Zhou X, Tao S, Yu T, Ma J, Nickel D, Liebig P, Westerhold EM, Mojahed H, Gupta V, Middlebrooks EH

PubMed · May 20, 2025
Deep learning (DL) reconstruction has been successful in realizing otherwise impracticable acceleration factors and improving image quality at conventional MRI field strengths; however, it has seen limited application to ultra-high-field MRI. The objective of this study was to evaluate a prototype DL-based image reconstruction technique for 7T brain MRI using MP2RAGE and SPACE acquisitions, in comparison with conventional compressed sensing (CS) and controlled aliasing in parallel imaging (CAIPIRINHA) reconstructions. This retrospective study involved 60 patients who underwent 7T brain MRI between June 2024 and October 2024, comprising 30 patients with MP2RAGE data and 30 with SPACE FLAIR data. Each set of raw data was reconstructed with both DL-based and conventional reconstruction. Image quality was independently assessed by two neuroradiologists using a 5-point Likert scale covering overall image quality, artifacts, sharpness, structural conspicuity, and noise level. Inter-observer agreement was determined using top-box analysis. Contrast-to-noise ratio (CNR) and noise levels were quantitatively evaluated and compared using the Wilcoxon signed-rank test. DL-based reconstruction resulted in a significant increase in overall image quality and a reduction in subjective noise level for both MP2RAGE and SPACE FLAIR data (all P<0.001), with no significant differences in image artifacts (all P>0.05). Compared with standard reconstruction, DL-based reconstruction yielded an increase in CNR of 49.5% [95% CI 33.0-59.0%] for MP2RAGE data and 90.6% [95% CI 73.2-117.7%] for SPACE FLAIR data, along with a decrease in noise of 33.5% [95% CI 23.0-38.0%] for MP2RAGE data and 47.5% [95% CI 41.9-52.6%] for SPACE FLAIR data. DL-based reconstruction of 7T MRI significantly enhanced image quality compared with conventional reconstruction without introducing image artifacts; the achievable high acceleration factors have the potential to substantially improve image quality and resolution in 7T MRI. CAIPIRINHA = Controlled Aliasing In Parallel Imaging Results IN Higher Acceleration; CNR = contrast-to-noise ratio; CS = compressed sensing; DL = deep learning; MNI = Montreal Neurological Institute; MP2RAGE = Magnetization-Prepared 2 Rapid Acquisition Gradient Echoes; SPACE = Sampling Perfection with Application-Optimized Contrasts using Different Flip Angle Evolutions.
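The quantitative comparison described above can be illustrated with a simple per-patient CNR estimate and a paired Wilcoxon signed-rank test; the ROI definitions below are placeholders, not those used in the study:

```python
# ROI-based CNR estimate and paired comparison between DL and standard reconstructions.
import numpy as np
from scipy.stats import wilcoxon

def cnr(image: np.ndarray, roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    """Contrast-to-noise ratio between two tissue ROIs (boolean masks)."""
    signal_a, signal_b = image[roi_a].mean(), image[roi_b].mean()
    noise = image[roi_b].std()                       # noise estimated from ROI B
    return abs(signal_a - signal_b) / (noise + 1e-8)

def compare(cnr_dl: np.ndarray, cnr_std: np.ndarray) -> float:
    """Paired Wilcoxon test across patients; returns the p-value."""
    _, p_value = wilcoxon(cnr_dl, cnr_std)
    return p_value
```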

Deep learning-based radiomics and machine learning for prognostic assessment in IDH-wildtype glioblastoma after maximal safe surgical resection: a multicenter study.

Liu J, Jiang S, Wu Y, Zou R, Bao Y, Wang N, Tu J, Xiong J, Liu Y, Li Y

PubMed · May 20, 2025
Glioblastoma (GBM) is a highly aggressive brain tumor with poor prognosis. This study aimed to construct and validate a radiomics-based machine learning model for predicting overall survival (OS) in IDH-wildtype GBM after maximal safe surgical resection using magnetic resonance imaging. A total of 582 patients were retrospectively enrolled, comprising 301 in the training cohort, 128 in the internal validation cohort, and 153 in the external validation cohort. Volumes of interest (VOIs) on contrast-enhanced T1-weighted imaging (CE-T1WI) were segmented into three regions (contrast-enhancing tumor, necrotic non-enhancing core, and peritumoral edema) using a ResNet-based segmentation network. A total of 4,227 radiomic features were extracted and filtered using LASSO-Cox regression to identify signatures. The prognostic model was constructed using the Mime prediction framework, categorizing patients into high- and low-risk groups based on the median OS. Model performance was assessed using the concordance index (C-index) and Kaplan-Meier survival analysis. Independent prognostic factors were identified through multivariable Cox regression analysis, and a nomogram was developed for individualized risk assessment. The Step Cox [backward] + RSF model achieved C-indexes of 0.89, 0.81, and 0.76 in the training, internal validation, and external validation cohorts, respectively. Log-rank tests demonstrated significant survival differences between high- and low-risk groups across all cohorts (P < 0.05). Multivariable Cox analysis identified age (HR: 1.022; 95% CI: 0.979, 1.009; P < 0.05), KPS score (HR: 0.970; 95% CI: 0.960, 0.978; P < 0.05), the rad-score of the necrotic non-enhancing core (HR: 8.164; 95% CI: 2.439, 27.331; P < 0.05), and the rad-score of peritumoral edema (HR: 3.748; 95% CI: 1.212, 11.594; P < 0.05) as independent predictors of OS. A nomogram integrating these predictors provided individualized risk assessment. This deep learning segmentation-based radiomics model demonstrated robust performance in predicting OS in GBM after maximal safe surgical resection. By incorporating radiomic signatures and advanced machine learning algorithms, it offers a non-invasive tool for personalized prognostic assessment and supports clinical decision-making.
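As an illustration of the LASSO-Cox feature-selection step named above, the following sketch fits an L1-penalized Cox model with lifelines; the penalty value and column names are assumptions:

```python
# LASSO-penalized Cox regression for selecting radiomic signatures.
import pandas as pd
from lifelines import CoxPHFitter

def lasso_cox_select(features: pd.DataFrame, time: pd.Series, event: pd.Series,
                     penalty: float = 0.1):
    """Fit an L1-penalized Cox model and keep features with non-zero coefficients."""
    df = features.copy()
    df["os_months"], df["event"] = time, event
    cph = CoxPHFitter(penalizer=penalty, l1_ratio=1.0)   # l1_ratio=1.0 -> pure LASSO penalty
    cph.fit(df, duration_col="os_months", event_col="event")
    selected = cph.params_[cph.params_.abs() > 1e-6].index.tolist()
    return selected, cph.concordance_index_              # selected features and C-index
```

In practice the penalty strength would be chosen by cross-validation before the downstream survival models are built.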

End-to-end Cortical Surface Reconstruction from Clinical Magnetic Resonance Images

Jesper Duemose Nielsen, Karthik Gopinath, Andrew Hoopes, Adrian Dalca, Colin Magdamo, Steven Arnold, Sudeshna Das, Axel Thielscher, Juan Eugenio Iglesias, Oula Puonti

arXiv preprint · May 20, 2025
Surface-based cortical analysis is valuable for a variety of neuroimaging tasks, such as spatial normalization, parcellation, and gray matter (GM) thickness estimation. However, most tools for estimating cortical surfaces work exclusively on scans with at least 1 mm isotropic resolution and are tuned to a specific magnetic resonance (MR) contrast, often T1-weighted (T1w). This precludes application to most clinical MR scans, which are highly heterogeneous in contrast and resolution. Here, we use synthetic domain-randomized data to train the first neural network for explicit estimation of cortical surfaces from scans of any contrast and resolution, without retraining. Our method deforms a template mesh to the white matter (WM) surface, which guarantees topological correctness; this mesh is then further deformed to estimate the GM surface. We compare our method to recon-all-clinical (RAC), an implicit surface reconstruction method that is currently the only other tool capable of processing heterogeneous clinical MR scans, on ADNI and a large clinical dataset (n=1,332). We show an approximately 50% reduction in cortical thickness error (from 0.50 to 0.24 mm) relative to RAC and better recovery of the aging-related cortical thinning patterns detected by FreeSurfer on high-resolution T1w scans. Our method enables fast and accurate surface reconstruction of clinical scans, allowing studies (1) with sample sizes far beyond what is feasible in a research setting, and (2) of clinical populations that are difficult to enroll in research studies. The code is publicly available at https://github.com/simnibs/brainnet.
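Because the template mesh is deformed first to the WM surface and then to the GM (pial) surface, per-vertex thickness can be read off directly; a minimal sketch, assuming matched vertex correspondence between the two surfaces:

```python
# Per-vertex cortical thickness and thickness-error computation (illustrative only).
import numpy as np

def cortical_thickness(wm_vertices: np.ndarray, pial_vertices: np.ndarray) -> np.ndarray:
    """Per-vertex thickness in mm; both arrays are N x 3 with matched vertex order."""
    return np.linalg.norm(pial_vertices - wm_vertices, axis=1)

def mean_thickness_error(pred: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute thickness error against a reference (e.g. FreeSurfer on T1w)."""
    return float(np.mean(np.abs(pred - reference)))
```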

NOVA: A Benchmark for Anomaly Localization and Clinical Reasoning in Brain MRI

Cosmin I. Bercea, Jun Li, Philipp Raffler, Evamaria O. Riedel, Lena Schmitzer, Angela Kurz, Felix Bitzer, Paula Roßmüller, Julian Canisius, Mirjam L. Beyrle, Che Liu, Wenjia Bai, Bernhard Kainz, Julia A. Schnabel, Benedikt Wiestler

arXiv preprint · May 20, 2025
In many real-world applications, deployed models encounter inputs that differ from the data seen during training. Out-of-distribution detection identifies whether an input stems from an unseen distribution, while open-world recognition flags such inputs to ensure the system remains robust as ever-emerging, previously unknown categories appear and must be addressed without retraining. Foundation and vision-language models are pre-trained on large and diverse datasets with the expectation of broad generalization across domains, including medical imaging. However, benchmarking these models on test sets with only a few common outlier types silently collapses the evaluation back to a closed-set problem, masking failures on rare or truly novel conditions encountered in clinical use. We therefore present NOVA, a challenging, real-life, evaluation-only benchmark of approximately 900 brain MRI scans that span 281 rare pathologies and heterogeneous acquisition protocols. Each case includes rich clinical narratives and double-blinded expert bounding-box annotations. Together, these enable joint assessment of anomaly localisation, visual captioning, and diagnostic reasoning. Because NOVA is never used for training, it serves as an extreme stress-test of out-of-distribution generalisation: models must bridge a distribution gap both in sample appearance and in semantic space. Baseline results with leading vision-language models (GPT-4o, Gemini 2.0 Flash, and Qwen2.5-VL-72B) reveal substantial performance drops across all tasks, establishing NOVA as a rigorous testbed for advancing models that can detect, localize, and reason about truly unknown anomalies.
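A minimal sketch of box-level anomaly-localisation scoring of the kind such a benchmark enables (this is not the official NOVA evaluation code):

```python
# IoU between a predicted and an expert bounding box, boxes given as (x1, y1, x2, y2).
def box_iou(pred: tuple, gt: tuple) -> float:
    x1, y1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    x2, y2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / (area_p + area_g - inter + 1e-8)

# A detection can then be counted as a hit when IoU exceeds a chosen threshold, e.g. 0.5.
```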

XDementNET: An Explainable Attention Based Deep Convolutional Network to Detect Alzheimer Progression from MRI data

Soyabul Islam Lincoln, Mirza Mohd Shahriar Maswood

arXiv preprint · May 20, 2025
Alzheimer's disease (AD), a common neurodegenerative disease, requires precise diagnosis and efficient treatment, particularly in light of escalating healthcare expenses and the expanding use of artificial intelligence in medical diagnostics. Many recent studies show that combining brain magnetic resonance imaging (MRI) with deep neural networks achieves promising results for diagnosing AD. This paper introduces a novel deep convolutional architecture that incorporates multi-residual blocks, specialized spatial attention blocks, grouped query attention, and multi-head attention. The study assessed the model's performance on four publicly accessible datasets, addressing both binary and multiclass classification across various categories. The paper also addresses the explainability of AD progression, comparing against state-of-the-art methods, namely Gradient Class Activation Mapping (GradCAM), Score-CAM, Faster Score-CAM, and XGRADCAM. Our methodology consistently outperforms current approaches, achieving 99.66% accuracy in 4-class classification, 99.63% in 3-class classification, and 100% in binary classification on Kaggle datasets. For the Open Access Series of Imaging Studies (OASIS) datasets, the accuracies are 99.92%, 99.90%, and 99.95%, respectively. The Alzheimer's Disease Neuroimaging Initiative-1 (ADNI-1) dataset was used for experiments in three planes (axial, sagittal, and coronal) and a combination of all planes, achieving accuracies of 99.08% for the axial plane, 99.85% for the sagittal plane, 99.5% for the coronal plane, and 99.17% for all planes combined, with accuracies of 97.79% and 8.60%, respectively, on ADNI-2. The network's ability to retrieve important information from MRI images is demonstrated by its excellent accuracy in categorizing AD stages.
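As a hedged illustration of one named ingredient, a generic CBAM-style spatial attention block in PyTorch is shown below; this is not the exact XDementNET implementation:

```python
# Spatial attention: re-weight feature maps with a mask derived from channel-wise
# average and max pooling, a common building block in attention-based CNNs.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)            # B x 1 x H x W
        max_pool = x.amax(dim=1, keepdim=True)            # B x 1 x H x W
        mask = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * mask                                    # attended features
```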

Dynadiff: Single-stage Decoding of Images from Continuously Evolving fMRI

Marlène Careil, Yohann Benchetrit, Jean-Rémi King

arXiv preprint · May 20, 2025
Brain-to-image decoding has recently been propelled by progress in generative AI models and the availability of large, ultra-high-field functional magnetic resonance imaging (fMRI) datasets. However, current approaches depend on complicated multi-stage pipelines and preprocessing steps that typically collapse the temporal dimension of brain recordings, thereby limiting time-resolved brain decoders. Here, we introduce Dynadiff (Dynamic Neural Activity Diffusion for Image Reconstruction), a new single-stage diffusion model designed for reconstructing images from dynamically evolving fMRI recordings. Our approach offers three main contributions. First, Dynadiff simplifies training compared to existing approaches. Second, our model outperforms state-of-the-art models on time-resolved fMRI signals, especially on high-level semantic image reconstruction metrics, while remaining competitive on preprocessed fMRI data that collapse time. Third, this approach allows a precise characterization of how image representations evolve in brain activity. Overall, this work lays the foundation for time-resolved brain-to-image decoding.
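For context, a single-stage diffusion decoder of this kind trains a denoiser with the standard noise-prediction objective, conditioned on the brain recording; the sketch below is generic and uses a placeholder denoiser, not Dynadiff itself:

```python
# One training step of epsilon-prediction: denoiser(noisy image, t, fMRI) -> noise estimate.
import torch
import torch.nn as nn

def diffusion_step(denoiser: nn.Module, images: torch.Tensor, fmri: torch.Tensor,
                   alphas_cumprod: torch.Tensor) -> torch.Tensor:
    b = images.size(0)
    t = torch.randint(0, alphas_cumprod.numel(), (b,), device=images.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(images)
    noisy = a_bar.sqrt() * images + (1 - a_bar).sqrt() * noise   # forward diffusion
    pred = denoiser(noisy, t, fmri)                              # conditioned on brain activity
    return nn.functional.mse_loss(pred, noise)
```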

GuidedMorph: Two-Stage Deformable Registration for Breast MRI

Yaqian Chen, Hanxue Gu, Haoyu Dong, Qihang Li, Yuwen Chen, Nicholas Konz, Lin Li, Maciej A. Mazurowski

arXiv preprint · May 19, 2025
Accurately registering breast MR images from different time points enables the alignment of anatomical structures and tracking of tumor progression, supporting more effective breast cancer detection, diagnosis, and treatment planning. However, the complexity of dense tissue and its highly non-rigid nature pose challenges for conventional registration methods, which primarily focus on aligning general structures while overlooking intricate internal details. To address this, we propose GuidedMorph, a novel two-stage registration framework designed to better align dense tissue. In addition to a single-scale network for global structure alignment, we introduce a framework that utilizes dense tissue information to track breast movement. The learned transformation fields are fused by introducing the Dual Spatial Transformer Network (DSTN), improving overall alignment accuracy. A novel warping method based on the Euclidean distance transform (EDT) is also proposed to accurately warp the registered dense tissue and breast masks, preserving fine structural details during deformation. The framework supports both paradigms that require external segmentation models and those that work from image data only. It also operates effectively with the VoxelMorph and TransMorph backbones, offering a versatile solution for breast registration. We validate our method on the ISPY2 dataset and an internal dataset, demonstrating superior performance in dense tissue alignment, overall breast alignment, and breast structural similarity index measure (SSIM), with improvements of over 13.01% in dense tissue Dice, 3.13% in breast Dice, and 1.21% in breast SSIM compared to the best learning-based baseline.
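A hedged sketch of EDT-based mask warping as described above: warp a signed distance map of the mask with the estimated displacement field and re-threshold (the field convention of per-voxel offsets is an assumption):

```python
# Warp a binary mask via its signed Euclidean distance transform to preserve fine structure.
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates

def warp_mask_edt(mask: np.ndarray, displacement: np.ndarray) -> np.ndarray:
    """`displacement` has shape (3, D, H, W), giving per-voxel offsets in voxel units."""
    m = mask.astype(np.uint8)
    sdf = distance_transform_edt(m) - distance_transform_edt(1 - m)   # signed distance map
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in mask.shape], indexing="ij"))
    warped_sdf = map_coordinates(sdf, grid + displacement, order=1, mode="nearest")
    return (warped_sdf > 0).astype(mask.dtype)                        # back to a binary mask
```

Interpolating the distance map rather than the binary labels avoids the stair-stepping that nearest-neighbour mask warping introduces.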

Improving Deep Learning-Based Grading of Partial-thickness Supraspinatus Tendon Tears with Guided Diffusion Augmentation.

Ni M, Jiesisibieke D, Zhao Y, Wang Q, Gao L, Tian C, Yuan H

PubMed · May 19, 2025
To develop and validate a deep learning system with guided diffusion-based data augmentation for grading partial-thickness supraspinatus tendon (SST) tears, and to compare its performance with that of experienced radiologists, including external validation. This retrospective study included 1150 patients with arthroscopically confirmed SST tears, divided into a training set (741 patients), a validation set (185 patients), and an internal test set (185 patients). An independent external test set of 224 patients was used to assess generalizability. To address data imbalance, MRI images were augmented using a guided diffusion model. A ResNet-34 model was employed for Ellman grading of bursal-sided and articular-sided partial-thickness tears across different MRI sequences (oblique coronal [OCOR], oblique sagittal [OSAG], and combined OCOR+OSAG). Performance was evaluated using AUC and precision-recall curves and compared with three experienced musculoskeletal (MSK) radiologists; the DeLong test was used to compare performance across sequence combinations. A total of 26,020 OCOR images and 26,356 OSAG images were generated using the guided diffusion model. For bursal-sided partial-thickness tears in the internal dataset, the model achieved AUCs of 0.99, 0.98, and 0.97 for OCOR, OSAG, and combined sequences, respectively, while for articular-sided tears the AUCs were 0.99, 0.99, and 0.99. The DeLong test showed no significant differences among sequence combinations (P=0.17, 0.14, 0.07). In the external dataset, the combined-sequence model achieved AUCs of 0.99, 0.97, and 0.97 for bursal-sided tears and 0.99, 0.95, and 0.95 for articular-sided tears. Radiologists demonstrated an ICC of 0.99, but their grading performance was significantly lower than that of the ResNet-34 model (P<0.001). The deep learning system improved grading consistency and significantly reduced evaluation time, while guided diffusion augmentation enhanced model robustness. The proposed deep learning system provides a reliable and efficient method for grading partial-thickness SST tears, achieving radiologist-level accuracy with greater consistency and faster evaluation.
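As an illustration of the grading backbone named above, a ResNet-34 can be fine-tuned for Ellman grading as sketched below; the class count, loss, and optimizer settings are assumptions, not the study's values:

```python
# Fine-tune a ResNet-34 classifier for tear grading (illustrative training step only).
import torch
import torch.nn as nn
from torchvision.models import resnet34

def build_grader(num_grades: int = 3) -> nn.Module:
    model = resnet34(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, num_grades)   # replace the classifier head
    return model

model = build_grader()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, grades: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = criterion(model(images), grades)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Diffusion-generated images would simply be mixed into the training batches to rebalance under-represented grades.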
