Page 106 of 117 · 1162 results

MRI-based radiomics for differentiating high-grade from low-grade clear cell renal cell carcinoma: a systematic review and meta-analysis.

Broomand Lomer N, Ghasemi A, Ahmadzadeh AM, Torigian DA

pubmed logopapers · May 17 2025
High-grade clear cell renal cell carcinoma (ccRCC) is linked to lower survival rates and more aggressive disease progression. This study aims to assess the diagnostic performance of MRI-derived radiomics as a non-invasive approach for pre-operative differentiation of high-grade from low-grade ccRCC. A systematic search was conducted across PubMed, Scopus, and Embase. Quality assessment was performed using QUADAS-2 and METRICS. Pooled sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR), and area under the curve (AUC) were estimated using a bivariate model. Separate meta-analyses were conducted for radiomics models and combined models, where the latter integrated clinical and radiological features with radiomics. Subgroup analysis was performed to identify potential sources of heterogeneity. Sensitivity analysis was conducted to identify potential outliers. A total of 15 studies comprising 2,265 patients were included, with seven and six studies contributing to the meta-analysis of radiomics and combined models, respectively. The pooled estimates of the radiomics model were as follows: sensitivity, 0.78; specificity, 0.84; PLR, 4.17; NLR, 0.28; DOR, 17.34; and AUC, 0.84. For the combined model, the pooled sensitivity, specificity, PLR, NLR, DOR, and AUC were 0.87, 0.81, 3.78, 0.21, 28.57, and 0.90, respectively. Radiomics models trained on smaller cohorts exhibited a significantly higher pooled specificity and PLR than those trained on larger cohorts. Also, radiomics models based on single-user segmentation demonstrated a significantly higher pooled specificity compared to multi-user segmentation. Radiomics has demonstrated potential as a non-invasive tool for grading ccRCC, with combined models achieving superior performance.
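The likelihood ratios and diagnostic odds ratio reported above follow standard definitions from sensitivity and specificity. A minimal sketch of those relationships (note: the abstract's pooled PLR/NLR/DOR come from a bivariate meta-analysis model, so they will not reproduce exactly from the pooled sensitivity and specificity alone):

```python
def diagnostic_metrics(sensitivity: float, specificity: float) -> dict:
    """Standard definitions of the likelihood ratios and diagnostic odds ratio."""
    plr = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    nlr = (1.0 - sensitivity) / specificity   # negative likelihood ratio
    dor = plr / nlr                           # diagnostic odds ratio
    return {"PLR": plr, "NLR": nlr, "DOR": dor}

# Illustration with the pooled radiomics-model estimates from the abstract
m = diagnostic_metrics(sensitivity=0.78, specificity=0.84)
print({k: round(v, 2) for k, v in m.items()})
```

A PLR above 1 and an NLR below 1 both indicate a test that shifts post-test probability in the expected direction; the DOR summarizes the two in a single number.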

An integrated deep learning model for early and multi-class diagnosis of Alzheimer's disease from MRI scans.

Vinukonda ER, Jagadesh BN

pubmed logopapers · May 17 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that severely affects memory, behavior, and cognitive function. Early and accurate diagnosis is crucial for effective intervention, yet detecting subtle changes in the early stages remains a challenge. In this study, we propose a hybrid deep learning-based multi-class classification system for AD using magnetic resonance imaging (MRI). The proposed approach integrates an improved DeepLabV3+ (IDeepLabV3+) model for lesion segmentation, followed by feature extraction using the LeNet-5 model. A novel feature selection method based on average correlation and error probability is employed to enhance classification efficiency. Finally, an Enhanced ResNext (EResNext) model is used to classify AD into four stages: non-dementia (ND), very mild dementia (VMD), mild dementia (MD), and moderate dementia (MOD). The proposed model achieves an accuracy of 98.12%, demonstrating its superior performance over existing methods. The area under the ROC curve (AUC) further validates its effectiveness, with the highest score of 0.97 for moderate dementia. This study highlights the potential of hybrid deep learning models in improving early AD detection and staging, contributing to more accurate clinical diagnosis and better patient care.
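The abstract's feature selection combines average correlation with an error-probability criterion whose exact form is not given. A hedged sketch of the correlation half only, dropping features that are highly redundant with the rest of the feature set (the threshold and data are illustrative):

```python
import numpy as np

def select_low_redundancy_features(X: np.ndarray, max_avg_corr: float = 0.3):
    """Keep feature columns whose mean |correlation| with the other
    features is below max_avg_corr. X has shape (samples, features)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))   # feature-feature |r|
    np.fill_diagonal(corr, 0.0)                   # ignore self-correlation
    avg_corr = corr.sum(axis=1) / (corr.shape[0] - 1)
    return np.where(avg_corr < max_avg_corr)[0]

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
X = np.hstack([base,
               base + 0.01 * rng.normal(size=(200, 1)),  # redundant copy
               rng.normal(size=(200, 2))])               # independent features
print(select_low_redundancy_features(X))
```

On this toy data the near-duplicate pair (columns 0 and 1) is dropped, while the independent columns survive.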

A self-supervised multimodal deep learning approach to differentiate post-radiotherapy progression from pseudoprogression in glioblastoma.

Gomaa A, Huang Y, Stephan P, Breininger K, Frey B, Dörfler A, Schnell O, Delev D, Coras R, Donaubauer AJ, Schmitter C, Stritzelberger J, Semrau S, Maier A, Bayer S, Schönecker S, Heiland DH, Hau P, Gaipl US, Bert C, Fietkau R, Schmidt MA, Putz F

pubmed logopapers · May 17 2025
Accurate differentiation of pseudoprogression (PsP) from true progression (TP) following radiotherapy (RT) in glioblastoma patients is crucial for optimal treatment planning. However, this task remains challenging due to the overlapping imaging characteristics of PsP and TP. This study therefore proposes a multimodal deep learning approach utilizing complementary information from routine anatomical MR images, clinical parameters, and RT treatment planning information for improved predictive accuracy. The approach utilizes a self-supervised Vision Transformer (ViT) to encode multi-sequence MR brain volumes, effectively capturing both global and local context from the high-dimensional input. The encoder is trained in a self-supervised upstream task on unlabeled glioma MRI datasets from the open BraTS2021, UPenn-GBM, and UCSF-PDGM datasets (n = 2317 MRI studies) to generate compact, clinically relevant representations from FLAIR and T1 post-contrast sequences. These encoded MR inputs are then integrated with clinical data and RT treatment planning information through guided cross-modal attention, improving progression classification accuracy. This work was developed using two datasets from different centers: the Burdenko Glioblastoma Progression Dataset (n = 59) for training and validation, and the GlioCMV progression dataset from the University Hospital Erlangen (UKER) (n = 20) for testing. The proposed method achieved competitive performance, with an AUC of 75.3%, outperforming current state-of-the-art data-driven approaches. Importantly, the proposed approach relies solely on readily available anatomical MRI sequences, clinical data, and RT treatment planning information, enhancing its clinical feasibility. The proposed approach addresses the challenge of limited data availability for PsP and TP differentiation and could allow for improved clinical decision-making and optimized treatment plans for glioblastoma patients.
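A minimal sketch of the guided cross-modal attention idea, assuming the common scaled-dot-product form: queries derived from the clinical/RT features, keys and values from the ViT-encoded MRI tokens. The single-head design, shapes, and random projection matrices are illustrative, not the paper's architecture:

```python
import numpy as np

def cross_attention(query_feats, mri_tokens, Wq, Wk, Wv):
    """query_feats: (1, d_c) clinical/RT vector; mri_tokens: (n, d_m) MRI tokens."""
    q = query_feats @ Wq                     # (1, d) projected query
    k = mri_tokens @ Wk                      # (n, d) projected keys
    v = mri_tokens @ Wv                      # (n, d) projected values
    scores = q @ k.T / np.sqrt(q.shape[-1])  # (1, n) scaled similarities
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over MRI tokens
    return weights @ v                       # (1, d) clinically guided MRI summary

rng = np.random.default_rng(1)
d = 16
clinical = rng.normal(size=(1, 8))           # toy clinical/RT feature vector
tokens = rng.normal(size=(32, 24))           # toy ViT token embeddings
out = cross_attention(clinical, tokens,
                      rng.normal(size=(8, d)),
                      rng.normal(size=(24, d)),
                      rng.normal(size=(24, d)))
print(out.shape)
```

The output is a single fused vector that can be concatenated with the clinical features before the classification head.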

Evaluation of synthetic images derived from a neural network in pediatric brain magnetic resonance imaging.

Nagaraj UD, Meineke J, Sriwastwa A, Tkach JA, Leach JL, Doneva M

pubmed logopapers · May 17 2025
Synthetic MRI (SyMRI) is a technique used to estimate tissue properties and generate multiple MR sequence contrasts from a single acquisition; however, image quality can be suboptimal. The aim of this study was to evaluate a neural network approach using artificial intelligence-based direct contrast synthesis (AI-DCS) of the multi-contrast weighted images to improve image quality. This prospective, IRB-approved study enrolled 50 pediatric patients undergoing clinical brain MRI. In addition to the standard of care (SOC) clinical protocol, a 2D multi-delay multi-echo (MDME) sequence was obtained. SOC 3D T1-weighted (T1W), 2D T2-weighted (T2W), and 2D T2W fluid-attenuated inversion recovery (FLAIR) images from 35 patients were used to train a neural network generating synthetic T1W, T2W, and FLAIR images. Quantitative analysis of grey matter (GM) and white matter (WM) apparent signal-to-noise ratio (aSNR) and grey-white matter (GWM) apparent contrast-to-noise ratio (aCNR) was performed. Eight patients were evaluated. When compared to SyMRI, T1W AI-DCS had better overall image quality, reduced noise/artifacts, and better subjective SNR in 100% (16/16) of evaluations. When compared to SyMRI, T2W AI-DCS overall image quality and diagnostic confidence were better in 93.8% (15/16) and 87.5% (14/16) of evaluations, respectively. When compared to SyMRI, FLAIR AI-DCS was better in 93.8% (15/16) of evaluations for overall image quality and in 100% (16/16) of evaluations for noise/artifacts and subjective SNR. Quantitative analysis revealed higher WM aSNR compared with SyMRI (p < 0.05) for T1W, T2W, and FLAIR. AI-DCS demonstrates better overall image quality than SyMRI on T1W, T2W, and FLAIR images.
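A hedged sketch of the apparent SNR/CNR measures, assuming the common region-of-interest definitions: aSNR as mean tissue signal over the standard deviation of a background/noise ROI, and aCNR as the GM-WM mean difference over that same noise estimate. The intensity values below are made up for illustration:

```python
import numpy as np

def asnr(tissue_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Apparent SNR: mean tissue signal / std of a background noise ROI."""
    return float(tissue_roi.mean() / noise_roi.std())

def acnr(gm_roi: np.ndarray, wm_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Apparent grey-white CNR: |mean(GM) - mean(WM)| / std of noise ROI."""
    return float(abs(gm_roi.mean() - wm_roi.mean()) / noise_roi.std())

rng = np.random.default_rng(2)
gm = rng.normal(120, 5, size=1000)    # illustrative GM intensities
wm = rng.normal(90, 5, size=1000)     # illustrative WM intensities
noise = rng.normal(0, 3, size=1000)   # background noise ROI
print(round(asnr(gm, noise), 1), round(acnr(gm, wm, noise), 1))
```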

ML-Driven Alzheimer's disease prediction: A deep ensemble modeling approach.

Jumaili MLF, Sonuç E

pubmed logopapers · May 17 2025
Alzheimer's disease (AD) is a progressive neurological disorder characterized by cognitive decline due to brain cell death, typically manifesting later in life. Early and accurate detection is critical for effective disease management and treatment. This study proposes an ensemble learning framework that combines five deep learning architectures (VGG16, VGG19, ResNet50, InceptionV3, and EfficientNetB7) to improve the accuracy of AD diagnosis. We use a comprehensive dataset of 3,714 MRI brain scans collected from specialized clinics in Iraq, categorized into three classes: NonDemented (834 images), MildDemented (1,824 images), and VeryDemented (1,056 images). The proposed voting ensemble model achieves a diagnostic accuracy of 99.32% on our dataset. The effectiveness of the model is further validated on two external datasets: OASIS (achieving 86.6% accuracy) and ADNI (achieving 99.5% accuracy), demonstrating competitive performance compared to existing approaches. Moreover, the proposed model exhibits high precision and recall across all stages of dementia, providing a reliable and robust tool for early AD detection. This study highlights the effectiveness of ensemble learning in AD diagnosis and shows promise for clinical applications.
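The voting step of an ensemble like the one above can be sketched in a few lines: each backbone (VGG16, VGG19, ResNet50, InceptionV3, EfficientNetB7) emits a class label and the majority wins. The per-model predictions here are invented for illustration; the paper does not specify hard vs. soft voting:

```python
from collections import Counter

CLASSES = ["NonDemented", "MildDemented", "VeryDemented"]

def majority_vote(predictions: list[str]) -> str:
    """Hard voting: return the most common predicted label."""
    return Counter(predictions).most_common(1)[0][0]

# One hypothetical prediction per backbone, in the order listed above
votes = ["MildDemented", "MildDemented", "NonDemented",
         "MildDemented", "VeryDemented"]
print(majority_vote(votes))  # MildDemented
```

A soft-voting variant would instead average the five softmax probability vectors and take the argmax, which usually behaves better when the backbones are well calibrated.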

Lightweight hybrid transformers-based dyslexia detection using cross-modality data.

Sait ARW, Alkhurayyif Y

pubmed logopapers · May 16 2025
Early and precise diagnosis of dyslexia is crucial for implementing timely intervention to reduce its effects. Timely identification can improve the individual's academic and cognitive performance. Traditional dyslexia detection (DD) relies on lengthy, subjective, and restricted behavioral evaluations and interviews. Due to these limitations, deep learning (DL) models have been explored to improve DD by analyzing complex neurological, behavioral, and visual data. DL architectures, including convolutional neural networks (CNNs) and vision transformers (ViTs), encounter challenges in extracting meaningful patterns from cross-modality data. The lack of model interpretability and limited computational power restricts these models' generalizability across diverse datasets. To overcome these limitations, we propose an innovative model for DD using magnetic resonance imaging (MRI), electroencephalography (EEG), and handwriting images. The model leverages hybrid transformer-based feature extraction, including SWIN-Linformer for MRI, LeViT-Performer for handwriting images, and graph transformer networks (GTNs) with multi-attention mechanisms for EEG data. A multi-modal attention-based feature fusion network was used to fuse the extracted features and guarantee the integration of key multi-modal features. We enhance Dartbooster XGBoost (DXB)-based classification using the Bayesian optimization with Hyperband (BOHB) algorithm. To reduce computational overhead, we employ a quantization-aware training technique. The local interpretable model-agnostic explanations (LIME) technique and gradient-weighted class activation mapping (Grad-CAM) were adopted to enable model interpretability. Five public repositories were used to train and test the proposed model. The experimental outcomes demonstrate that the proposed model achieves an accuracy of 99.8% with limited computational overhead, outperforming baseline models. It sets a new standard for DD, offering potential for early identification and timely intervention. In the future, advanced feature fusion and quantization techniques can be utilized to achieve optimal results in resource-constrained environments.
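The BOHB tuning mentioned above pairs a Bayesian configuration sampler with Hyperband's successive-halving schedule. A hedged sketch of the successive-halving core only (the score function, learning-rate search space, and eta=3 ratio are illustrative; a real run would train the model under the given budget):

```python
import random

def successive_halving(configs, score_fn, budget=1, eta=3):
    """Evaluate all configs, keep the top 1/eta, and raise the budget
    for the survivors; repeat until one config remains."""
    while len(configs) > 1:
        scored = sorted(configs, key=lambda c: score_fn(c, budget),
                        reverse=True)                  # best score first
        configs = scored[: max(1, len(configs) // eta)]
        budget *= eta                                  # survivors get more resources
    return configs[0]

random.seed(3)
candidates = [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(9)]
# Toy objective: peak score at lr = 1e-2 (ignores the budget argument)
best = successive_halving(candidates, lambda c, b: -abs(c["lr"] - 1e-2))
print(best)
```

With a deterministic objective the globally best candidate always survives each round; in practice scores improve with budget, which is what makes the early aggressive pruning a gamble Hyperband hedges across brackets.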

UGoDIT: Unsupervised Group Deep Image Prior Via Transferable Weights

Shijun Liang, Ismail R. Alkhouri, Siddhant Gautam, Qing Qu, Saiprasad Ravishankar

arxiv logopreprint · May 16 2025
Recent advances in data-centric deep generative models have led to significant progress in solving inverse imaging problems. However, these models (e.g., diffusion models (DMs)) typically require large amounts of fully sampled (clean) training data, which is often impractical in medical and scientific settings such as dynamic imaging. On the other hand, training-data-free approaches like the Deep Image Prior (DIP) do not require clean ground-truth images but suffer from noise overfitting and can be computationally expensive as the network parameters need to be optimized for each measurement set independently. Moreover, DIP-based methods often overlook the potential of learning a prior using a small number of sub-sampled measurements (or degraded images) available during training. In this paper, we propose UGoDIT, an Unsupervised Group DIP via Transferable weights, designed for the low-data regime where only a very small number, M, of sub-sampled measurement vectors are available during training. Our method learns a set of transferable weights by optimizing a shared encoder and M disentangled decoders. At test time, we reconstruct the unseen degraded image using a DIP network, where part of the parameters are fixed to the learned weights, while the remaining are optimized to enforce measurement consistency. We evaluate UGoDIT on both medical (multi-coil MRI) and natural (super resolution and non-linear deblurring) image recovery tasks under various settings. Compared to recent standalone DIP methods, UGoDIT provides accelerated convergence and notable improvement in reconstruction quality. Furthermore, our method achieves performance competitive with SOTA DM-based and supervised approaches, despite not requiring large amounts of clean training data.
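A heavily simplified toy sketch of the test-time idea: part of the parameters is frozen to transferable "pretrained" weights while the rest is optimized for measurement consistency. Here the "network" is just a linear map x = W_shared z + w_free, with W_shared frozen and only w_free updated; a real DIP uses a CNN, and all names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 32, 12, 8
A = rng.normal(size=(m, n))                  # sub-sampling measurement operator
x_true = rng.normal(size=n)
y = A @ x_true                               # observed degraded measurements

W_shared = rng.normal(size=(n, k)) / np.sqrt(k)  # frozen transferable weights
z = rng.normal(size=k)                           # fixed latent input
w_free = np.zeros(n)                             # part optimized at test time

lr = 0.01
for _ in range(500):
    x = W_shared @ z + w_free
    grad_x = A.T @ (A @ x - y)   # gradient of the data term ||A x - y||^2
    w_free -= lr * grad_x        # update only the free parameters

residual = np.linalg.norm(A @ (W_shared @ z + w_free) - y)
print(round(float(residual), 6))
```

The frozen part constrains the solution toward the learned prior while the free part enforces consistency with this particular measurement set, which is the division of labor UGoDIT exploits.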

Diff-Unfolding: A Model-Based Score Learning Framework for Inverse Problems

Yuanhao Wang, Shirin Shoushtari, Ulugbek S. Kamilov

arxiv logopreprint · May 16 2025
Diffusion models are extensively used for modeling image priors for inverse problems. We introduce Diff-Unfolding, a principled framework for learning posterior score functions of conditional diffusion models by explicitly incorporating the physical measurement operator into a modular network architecture. Diff-Unfolding formulates posterior score learning as the training of an unrolled optimization scheme, where the measurement model is decoupled from the learned image prior. This design allows our method to generalize across inverse problems at inference time by simply replacing the forward operator without retraining. We theoretically justify our unrolling approach by showing that the posterior score can be derived from a composite model-based optimization formulation. Extensive experiments on image restoration and accelerated MRI show that Diff-Unfolding achieves state-of-the-art performance, improving PSNR by up to 2 dB and reducing LPIPS by 22.7%, while being both compact (47M parameters) and efficient (0.72 seconds per 256×256 image). An optimized C++/LibTorch implementation further reduces inference time to 0.63 seconds, underscoring the practicality of our approach.
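The unrolled scheme described above alternates a physics step that uses the forward operator with a learned prior step. A sketch of one such iteration, with a simple soft-threshold standing in for the learned score network (so this is classical ISTA, not Diff-Unfolding itself; operator, sparsity level, and step sizes are illustrative):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: stand-in for the learned prior step."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_iteration(x, y, A, step=0.1, thresh=0.01):
    x = x - step * A.T @ (A @ x - y)   # physics step using the operator A
    return soft_threshold(x, thresh)   # prior step (a trained network in the paper)

rng = np.random.default_rng(5)
A = rng.normal(size=(40, 64)) / np.sqrt(40)   # undersampled forward operator
support = rng.choice(64, size=5, replace=False)
x_true = np.zeros(64)
x_true[support] = 1.0                          # sparse ground truth
y = A @ x_true

x = np.zeros(64)
for _ in range(200):
    x = unrolled_iteration(x, y, A)
print(round(float(np.linalg.norm(x - x_true)), 3))
```

Because the operator A enters only the physics step, swapping in a different forward model leaves the prior step untouched, which is the decoupling that lets such schemes transfer across inverse problems.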

The imaging crisis in axial spondyloarthritis.

Diekhoff T, Poddubnyy D

pubmed logopapers · May 16 2025
Imaging holds a pivotal yet contentious role in the early diagnosis of axial spondyloarthritis. Although MRI has enhanced our ability to detect early inflammatory changes, particularly bone marrow oedema in the sacroiliac joints, the poor specificity of this finding introduces a substantial risk of overdiagnosis. The well-intentioned push by rheumatologists towards earlier intervention could inadvertently lead to the misclassification of mechanical or degenerative conditions (eg, osteitis condensans ilii) as inflammatory disease, especially in the absence of structural lesions. Diagnostic uncertainty is further fuelled by anatomical variability, sex differences, and suboptimal imaging protocols. Current strategies, such as quantifying bone marrow oedema, analysing its distribution patterns, and integrating clinical and laboratory data, offer partial guidance for avoiding overdiagnosis but fall short of resolving the core diagnostic dilemma. Emerging imaging technologies, including high-resolution sequences, quantitative MRI, radiomics, and artificial intelligence, could improve diagnostic precision, but these tools remain exploratory. This Viewpoint underscores the need for a shift in imaging approaches, recognising that although timely diagnosis and treatment are essential to prevent long-term structural damage, robust and reliable imaging criteria are also needed. Without such advances, the imaging field risks repeating past missteps seen in other rheumatological conditions.

FlowMRI-Net: A Generalizable Self-Supervised 4D Flow MRI Reconstruction Network.

Jacobs L, Piccirelli M, Vishnevskiy V, Kozerke S

pubmed logopapers · May 16 2025
Image reconstruction from highly undersampled 4D flow MRI data can be very time-consuming and may result in significant underestimation of velocities depending on regularization, thereby limiting the applicability of the method. The objective of the present work was to develop a generalizable self-supervised deep learning-based framework for fast and accurate reconstruction of highly undersampled 4D flow MRI and to demonstrate the utility of the framework for aortic and cerebrovascular applications. The proposed deep learning-based framework, called FlowMRI-Net, employs physics-driven unrolled optimization using a complex-valued convolutional recurrent neural network and is trained in a self-supervised manner. The generalizability of the framework is evaluated using aortic and cerebrovascular 4D flow MRI acquisitions acquired on systems from two different vendors for various undersampling factors (R=8,16,24) and compared to compressed sensing (CS-LLR) reconstructions. Evaluation includes an ablation study and a qualitative and quantitative analysis of image and velocity magnitudes. FlowMRI-Net outperforms CS-LLR for aortic 4D flow MRI reconstruction, resulting in significantly lower vectorial normalized root mean square error and mean directional errors for velocities in the thoracic aorta. Furthermore, the feasibility of FlowMRI-Net's generalizability is demonstrated for cerebrovascular 4D flow MRI reconstruction. Reconstruction times ranged from 3 to 7 minutes on commodity CPU/GPU hardware. FlowMRI-Net enables fast and accurate reconstruction of highly undersampled aortic and cerebrovascular 4D flow MRI, with possible applications to other vascular territories.
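A hedged sketch of the vectorial normalized RMSE used above to compare velocity fields, assuming a common definition: RMS of the per-voxel vector error magnitude, normalized by the maximum true velocity magnitude (the paper may normalize differently, e.g. by venc or mean speed; the data below is synthetic):

```python
import numpy as np

def vnrmse(v_est: np.ndarray, v_true: np.ndarray) -> float:
    """v_est, v_true: (n_voxels, 3) velocity vectors in consistent units."""
    err = np.linalg.norm(v_est - v_true, axis=1)   # per-voxel vector error
    vmax = np.linalg.norm(v_true, axis=1).max()    # normalization constant
    return float(np.sqrt(np.mean(err ** 2)) / vmax)

rng = np.random.default_rng(6)
v_true = rng.normal(size=(500, 3))                    # toy reference velocities
v_est = v_true + 0.05 * rng.normal(size=(500, 3))     # toy reconstruction
print(round(vnrmse(v_est, v_true), 3))
```

Unlike a per-component RMSE, the vectorial form penalizes directional errors and magnitude errors jointly, which matters for flow quantification.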