Reducing motion artifacts in the aorta: super-resolution deep learning reconstruction with motion reduction algorithm.

Yasaka K, Tsujimoto R, Miyo R, Abe O

PubMed | Aug 9 2025
To assess the efficacy of super-resolution deep learning reconstruction (SR-DLR) with a motion reduction algorithm (SR-DLR-M) in mitigating aortic motion artifacts compared to SR-DLR and deep learning reconstruction with a motion reduction algorithm (DLR-M). This retrospective study included 86 patients (mean age, 65.0 ± 14.1 years; 53 males) who underwent contrast-enhanced CT including the chest region. CT images were reconstructed with SR-DLR-M, SR-DLR, and DLR-M. Circular or ovoid regions of interest were placed on the aorta, and the standard deviation of the CT attenuation was recorded as quantitative noise. From the CT attenuation profile along a line region of interest intersecting the left common carotid artery wall, edge rise slope and edge rise distance were calculated. Two readers assessed the images for artifact, sharpness, noise, structure depiction, and diagnostic acceptability (for aortic dissection). Quantitative noise was 7.4/5.4/8.3 Hounsfield units (HU) in SR-DLR-M/SR-DLR/DLR-M. Significant differences were observed between SR-DLR-M and both SR-DLR and DLR-M (p < 0.001). Edge rise slope and edge rise distance were 107.1/108.8/85.8 HU/mm and 1.6/1.5/2.0 mm, respectively, in SR-DLR-M/SR-DLR/DLR-M. Statistically significant differences were detected between SR-DLR-M and DLR-M (p ≤ 0.001 for both). Two readers scored artifacts in SR-DLR-M as significantly better than those in SR-DLR (p < 0.001). Scores for sharpness, noise, and structure depiction in SR-DLR-M were significantly better than those in DLR-M (p ≤ 0.005). Diagnostic acceptability in SR-DLR-M was significantly better than that in SR-DLR and DLR-M (p < 0.001). SR-DLR-M provided significantly better CT images for diagnosing aortic dissection compared to SR-DLR and DLR-M.
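
As an illustration of the edge measurements described in this abstract, the sketch below derives an edge rise slope and edge rise distance from a CT attenuation profile sampled along a line ROI. The 10%-90% edge convention, the synthetic profile, and the `edge_metrics` helper are assumptions for illustration, not the study's measurement code.

```python
import numpy as np

def edge_metrics(profile_hu: np.ndarray, spacing_mm: float):
    """Return (edge rise slope [HU/mm], edge rise distance [mm]) for a
    profile that rises monotonically across a vessel wall."""
    lo, hi = profile_hu.min(), profile_hu.max()
    t10 = lo + 0.1 * (hi - lo)                 # 10% level of the edge amplitude
    t90 = lo + 0.9 * (hi - lo)                 # 90% level of the edge amplitude
    x = np.arange(profile_hu.size) * spacing_mm
    x10 = x[np.argmax(profile_hu >= t10)]      # first crossing of the 10% level
    x90 = x[np.argmax(profile_hu >= t90)]      # first crossing of the 90% level
    distance = abs(x90 - x10)
    slope = (t90 - t10) / distance if distance > 0 else float("nan")
    return slope, distance

# Synthetic profile crossing from mediastinal fat into contrast-enhanced blood.
profile = np.array([-60, -55, -40, 10, 120, 260, 330, 345, 350], dtype=float)
print(edge_metrics(profile, spacing_mm=0.5))   # sharper edge -> higher slope, shorter distance
```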

BrainATCL: Adaptive Temporal Brain Connectivity Learning for Functional Link Prediction and Age Estimation

Yiran Huang, Amirhossein Nouranizadeh, Christine Ahrends, Mengjia Xu

arXiv preprint | Aug 9 2025
Functional Magnetic Resonance Imaging (fMRI) is an imaging technique widely used to study human brain activity. fMRI signals in areas across the brain transiently synchronise and desynchronise their activity in a highly structured manner, even when an individual is at rest. These functional connectivity dynamics may be related to behaviour and neuropsychiatric disease. To model these dynamics, temporal brain connectivity representations are essential, as they reflect evolving interactions between brain regions and provide insight into transient neural states and network reconfigurations. However, conventional graph neural networks (GNNs) often struggle to capture long-range temporal dependencies in dynamic fMRI data. To address this challenge, we propose BrainATCL, an unsupervised, nonparametric framework for adaptive temporal brain connectivity learning, enabling functional link prediction and age estimation. Our method dynamically adjusts the lookback window for each snapshot based on the rate of newly added edges. Graph sequences are subsequently encoded using a GINE-Mamba2 backbone to learn spatial-temporal representations of dynamic functional connectivity in resting-state fMRI data of 1,000 participants from the Human Connectome Project. To further improve spatial modeling, we incorporate brain structure and function-informed edge attributes, i.e., the left/right hemispheric identity and subnetwork membership of brain regions, enabling the model to capture biologically meaningful topological patterns. We evaluate our BrainATCL on two tasks: functional link prediction and age estimation. The experimental results demonstrate superior performance and strong generalization, including in cross-session prediction scenarios.
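
The adaptive lookback idea can be pictured with a small sketch: a snapshot's window is shortened when many previously unseen edges appear, which suggests a network reconfiguration. The linear mapping rule, the window bounds, and the `new_edge_rate` helper below are illustrative assumptions, not BrainATCL's actual rule.

```python
from typing import List, Set, Tuple

Edge = Tuple[int, int]

def new_edge_rate(snapshot: Set[Edge], seen: Set[Edge]) -> float:
    """Fraction of edges in the current snapshot that have not appeared before."""
    if not snapshot:
        return 0.0
    return len(snapshot - seen) / len(snapshot)

def adaptive_lookback(snapshots: List[Set[Edge]],
                      min_window: int = 2, max_window: int = 10) -> List[int]:
    """Assign a lookback window per snapshot: more new edges -> shorter memory."""
    windows, seen = [], set()
    for snap in snapshots:
        rate = new_edge_rate(snap, seen)
        # High edge turnover suggests a reconfiguration, so look back less.
        window = int(round(max_window - rate * (max_window - min_window)))
        windows.append(window)
        seen |= snap
    return windows

# Example with three toy functional-connectivity snapshots.
snaps = [{(0, 1), (1, 2)}, {(0, 1), (2, 3)}, {(2, 3), (0, 1), (1, 2)}]
print(adaptive_lookback(snaps))  # [2, 6, 10]
```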

Multi-institutional study for comparison of detectability of hypovascular liver metastases between 70- and 40-keV images: DELMIO study.

Ichikawa S, Funayama S, Hyodo T, Ozaki K, Ito A, Kakuya M, Kobayashi T, Tanahashi Y, Kozaka K, Igarashi S, Suto T, Noda Y, Matsuo M, Narita A, Okada H, Suzuki K, Goshima S

PubMed | Aug 9 2025
To compare the lesion detectability of hypovascular liver metastases between 70-keV and 40-keV images from dual-energy computed tomography (CT) reconstructed with deep-learning image reconstruction (DLIR). This multi-institutional, retrospective study included adult patients both pre- and post-treatment for gastrointestinal adenocarcinoma. All patients underwent contrast-enhanced CT with reconstruction at 40 keV and 70 keV. Liver metastases were confirmed using gadoxetic acid-enhanced magnetic resonance imaging. Four radiologists independently assessed lesion conspicuity (per-patient and per-lesion) using a 5-point scale. A radiologic technologist measured image noise, tumor-to-liver contrast, and contrast-to-noise ratio (CNR). Quantitative and qualitative results were compared between 70-keV and 40-keV images. The study included 138 patients (mean age, 69 ± 12 years; 80 men) with 208 liver metastases. Seventy-one patients had liver metastases, while 67 did not. Primary cancer sites were pancreas (n = 68), colorectal (n = 50), stomach (n = 12), and gallbladder/bile duct (n = 8). No significant difference in per-patient lesion detectability was found between 70-keV images (sensitivity, 71.8-90.1%; specificity, 61.2-85.1%; accuracy, 73.9-79.7%) and 40-keV images (sensitivity, 76.1-90.1%; specificity, 53.7-82.1%; accuracy, 71.7-79.0%) (p = 0.18 to >0.99). Similarly, no significant difference in per-lesion detectability was observed between 70-keV (sensitivity, 67.3-82.2%) and 40-keV images (sensitivity, 68.8-81.7%) (p = 0.20 to >0.99). However, image noise was significantly higher at 40 keV, as were tumor-to-liver contrast and CNRs for both hepatic parenchyma and tumors (p < 0.01). There was no significant difference in the detectability of hypovascular liver metastases between 70-keV and 40-keV images using the DLIR technology.
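
For reference, the quantitative metrics reported here are commonly computed from ROI statistics as in the sketch below: tumor-to-liver contrast as the difference between ROI mean attenuations, and CNR as that contrast divided by image noise (the SD within a homogeneous ROI). The exact ROI placement and formulas used in the study may differ.

```python
import numpy as np

def roi_stats(image_hu: np.ndarray, mask: np.ndarray):
    """Mean and standard deviation of CT attenuation inside a boolean ROI mask."""
    vals = image_hu[mask]
    return float(vals.mean()), float(vals.std(ddof=1))

def tumor_to_liver_contrast_and_cnr(image_hu, tumor_mask, liver_mask, noise_mask):
    mean_tumor, _ = roi_stats(image_hu, tumor_mask)
    mean_liver, _ = roi_stats(image_hu, liver_mask)
    _, noise_sd = roi_stats(image_hu, noise_mask)   # noise from a homogeneous ROI
    contrast = abs(mean_tumor - mean_liver)
    return contrast, contrast / noise_sd

# Usage: image_hu is a 2D/3D HU array; the three masks are boolean arrays of the same shape.
```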

Spatio-Temporal Conditional Diffusion Models for Forecasting Future Multiple Sclerosis Lesion Masks Conditioned on Treatments

Gian Mario Favero, Ge Ya Luo, Nima Fathi, Justin Szeto, Douglas L. Arnold, Brennan Nichyporuk, Chris Pal, Tal Arbel

arXiv preprint | Aug 9 2025
Image-based personalized medicine has the potential to transform healthcare, particularly for diseases that exhibit heterogeneous progression such as Multiple Sclerosis (MS). In this work, we introduce the first treatment-aware spatio-temporal diffusion model that is able to generate future masks demonstrating lesion evolution in MS. Our voxel-space approach incorporates multi-modal patient data, including MRI and treatment information, to forecast new and enlarging T2 (NET2) lesion masks at a future time point. Extensive experiments on a multi-centre dataset of 2131 patient 3D MRIs from randomized clinical trials for relapsing-remitting MS demonstrate that our generative model is able to accurately predict NET2 lesion masks for patients across six different treatments. Moreover, we demonstrate our model has the potential for real-world clinical applications through downstream tasks such as future lesion count and location estimation, binary lesion activity classification, and generating counterfactual future NET2 masks for several treatments with different efficacies. This work highlights the potential of causal, image-based generative models as powerful tools for advancing data-driven prognostics in MS.
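
The downstream lesion-count and location task mentioned above can be approximated with generic connected-component post-processing of a predicted binary NET2 mask, as sketched below; this is a standard SciPy recipe, not the authors' evaluation code.

```python
import numpy as np
from scipy import ndimage

def lesion_count_and_locations(pred_mask: np.ndarray):
    """Count discrete lesions in a 3D binary mask and return their centroids."""
    labels, n_lesions = ndimage.label(pred_mask)
    centroids = ndimage.center_of_mass(pred_mask, labels, list(range(1, n_lesions + 1)))
    return n_lesions, centroids

# Toy volume with two separated "lesions".
vol = np.zeros((10, 10, 10), dtype=np.uint8)
vol[2:4, 2:4, 2:4] = 1
vol[7:9, 7:9, 7:9] = 1
count, locs = lesion_count_and_locations(vol)
print(count, locs)   # 2 lesions with their voxel-space centroids
```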

FoundBioNet: A Foundation-Based Model for IDH Genotyping of Glioma from Multi-Parametric MRI

Somayeh Farahani, Marjaneh Hejazi, Antonio Di Ieva, Sidong Liu

arXiv preprint | Aug 9 2025
Accurate, noninvasive detection of isocitrate dehydrogenase (IDH) mutation is essential for effective glioma management. Traditional methods rely on invasive tissue sampling, which may fail to capture a tumor's spatial heterogeneity. While deep learning models have shown promise in molecular profiling, their performance is often limited by scarce annotated data. In contrast, foundation deep learning models offer a more generalizable approach for glioma imaging biomarkers. We propose a Foundation-based Biomarker Network (FoundBioNet) that utilizes a SWIN-UNETR-based architecture to noninvasively predict IDH mutation status from multi-parametric MRI. Two key modules are incorporated: Tumor-Aware Feature Encoding (TAFE) for extracting multi-scale, tumor-focused features, and Cross-Modality Differential (CMD) for highlighting subtle T2-FLAIR mismatch signals associated with IDH mutation. The model was trained and validated on a diverse, multi-center cohort of 1705 glioma patients from six public datasets. Our model achieved AUCs of 90.58%, 88.08%, 65.41%, and 80.31% on independent test sets from EGD, TCGA, Ivy GAP, RHUH, and UPenn, consistently outperforming baseline approaches (p <= 0.05). Ablation studies confirmed that both the TAFE and CMD modules are essential for improving predictive accuracy. By integrating large-scale pretraining and task-specific fine-tuning, FoundBioNet enables generalizable glioma characterization. This approach enhances diagnostic accuracy and interpretability, with the potential to enable more personalized patient care.
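
As a rough picture of the signal the CMD module is described as highlighting, the sketch below computes a voxel-wise differential between intensity-normalised T2 and FLAIR volumes; large positive values correspond to T2-hyperintense but FLAIR-suppressed tissue (the mismatch pattern). The z-score normalisation and the `t2_flair_differential` helper are illustrative stand-ins, not FoundBioNet's learned module.

```python
import numpy as np

def zscore(vol: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Normalise a volume by the mean/SD of intensities inside a brain mask."""
    vals = vol[mask]
    return (vol - vals.mean()) / (vals.std() + 1e-8)

def t2_flair_differential(t2: np.ndarray, flair: np.ndarray, brain_mask: np.ndarray) -> np.ndarray:
    """Voxel-wise T2-minus-FLAIR map after per-volume normalisation."""
    diff = zscore(t2, brain_mask) - zscore(flair, brain_mask)
    return diff * brain_mask   # zero outside the brain
```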

Machine learning diagnostic model for amyotrophic lateral sclerosis analysis using MRI-derived features.

Gil Chong P, Mazon M, Cerdá-Alberich L, Beser Robles M, Carot JM, Vázquez-Costa JF, Martí-Bonmatí L

PubMed | Aug 8 2025
Amyotrophic lateral sclerosis (ALS) is a devastating motor neuron disease that remains difficult to diagnose, and no reliable diagnostic biomarkers currently exist. Our purpose was to apply machine learning algorithms to MRI-derived imaging variables to develop diagnostic models that facilitate and shorten the diagnostic process. A dataset of 211 patients (114 ALS, 45 mimics, 22 genetic carriers, and 30 controls) was analyzed, with MRI-derived features of volumetry, cortical thickness, and local iron (via T2* mapping and visual assessment of susceptibility imaging). A binary classification approach was taken to distinguish patients with and without ALS. A sequential, iteratively refined modeling methodology was followed, analyzing each group's performance separately. Feature filtering techniques, dimensionality reduction techniques (PCA, kernel PCA), oversampling techniques (SMOTE, ADASYN), and classification techniques (logistic regression, LASSO, Ridge, ElasticNet, support vector classifier, K-nearest neighbors, random forest) were included. Three subsets of the available data were used for each proposed architecture: automatically retrieved MRI-derived data, variables from the visual analysis of susceptibility imaging, and all features combined. The best results were attained with all available features using a voting classifier composed of five different classifiers: accuracy = 0.896, AUC = 0.929, sensitivity = 0.886, specificity = 0.929. These results confirm the potential of ML techniques applied to imaging variables of volumetry, cortical thickness, and local iron for the development of diagnostic models as clinical decision-support tools.
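
The described pipeline (feature scaling, dimensionality reduction, oversampling, and a five-member voting classifier) maps naturally onto an imblearn/scikit-learn pipeline; the sketch below is a schematic with assumed components and hyperparameters, not the study's tuned models.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Five-member soft-voting ensemble (illustrative member choices).
voting = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=5000)),
        ("l1", LogisticRegression(penalty="l1", solver="liblinear")),  # LASSO-style penalty
        ("svc", SVC(probability=True)),
        ("knn", KNeighborsClassifier()),
        ("rf", RandomForestClassifier(n_estimators=300)),
    ],
    voting="soft",
)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=0.95)),     # keep 95% of the variance
    ("smote", SMOTE(random_state=0)),    # oversample the minority class during training
    ("clf", voting),
])

# Usage (X: MRI-derived features, y: 1 = ALS, 0 = non-ALS):
# pipeline.fit(X_train, y_train)
# y_prob = pipeline.predict_proba(X_test)[:, 1]
```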

Text Embedded Swin-UMamba for DeepLesion Segmentation

Ruida Cheng, Tejas Sudharshan Mathai, Pritam Mukherjee, Benjamin Hou, Qingqing Zhu, Zhiyong Lu, Matthew McAuliffe, Ronald M. Summers

arXiv preprint | Aug 8 2025
Segmentation of lesions on CT enables automatic measurement for clinical assessment of chronic diseases (e.g., lymphoma). Integrating large language models (LLMs) into the lesion segmentation workflow offers the potential to combine imaging features with descriptions of lesion characteristics from the radiology reports. In this study, we investigate the feasibility of integrating text into the Swin-UMamba architecture for the task of lesion segmentation. The publicly available ULS23 DeepLesion dataset was used along with short-form descriptions of the findings from the reports. On the test dataset, a high Dice score of 82% and a low Hausdorff distance of 6.58 pixels were obtained for lesion segmentation. The proposed Text-Swin-UMamba model outperformed prior approaches: a 37% improvement over the LLM-driven LanGuideMedSeg model (p < 0.001), and it surpassed the purely image-based xLSTM-UNet and nnUNet models by 1.74% and 0.22%, respectively. The dataset and code can be accessed at https://github.com/ruida/LLM-Swin-UMamba
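
For context, the two reported metrics are commonly computed as in the sketch below: Dice overlap between binary masks and the symmetric Hausdorff distance between foreground pixel coordinates. The paper's exact implementation may differ (e.g., boundary extraction or percentile variants).

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance (in pixels) between foreground point sets."""
    p = np.argwhere(pred)   # (N, 2) pixel coordinates of foreground
    g = np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```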

XAG-Net: A Cross-Slice Attention and Skip Gating Network for 2.5D Femur MRI Segmentation

Byunghyun Ko, Anning Tian, Jeongkyu Lee

arXiv preprint | Aug 8 2025
Accurate segmentation of femur structures from Magnetic Resonance Imaging (MRI) is critical for orthopedic diagnosis and surgical planning but remains challenging due to the limitations of existing 2D and 3D deep learning-based segmentation approaches. In this study, we propose XAG-Net, a novel 2.5D U-Net-based architecture that incorporates pixel-wise cross-slice attention (CSA) and skip attention gating (AG) mechanisms to enhance inter-slice contextual modeling and intra-slice feature refinement. Unlike previous CSA-based models, XAG-Net applies pixel-wise softmax attention across adjacent slices at each spatial location for fine-grained inter-slice modeling. Extensive evaluations demonstrate that XAG-Net surpasses baseline 2D, 2.5D, and 3D U-Net models in femur segmentation accuracy while maintaining computational efficiency. Ablation studies further validate the critical role of the CSA and AG modules, establishing XAG-Net as a promising framework for efficient and accurate femur MRI segmentation.
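
A minimal PyTorch sketch of pixel-wise cross-slice attention in the spirit described here: a per-pixel score for each adjacent slice is turned into softmax weights over the slice axis and used to pool the slice features. The 1x1-convolution scorer and the (B, S, C, H, W) layout are assumptions for illustration, not XAG-Net's definition.

```python
import torch
import torch.nn as nn

class PixelwiseCrossSliceAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-pixel slice score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, S, C, H, W) features from S adjacent slices
        b, s, c, h, w = x.shape
        scores = self.score(x.reshape(b * s, c, h, w)).reshape(b, s, 1, h, w)
        weights = torch.softmax(scores, dim=1)   # softmax across slices at each pixel
        return (weights * x).sum(dim=1)          # (B, C, H, W) fused feature map

# Example: fuse 3 adjacent slices of 16-channel feature maps.
fuse = PixelwiseCrossSliceAttention(16)
out = fuse(torch.randn(2, 3, 16, 64, 64))
print(out.shape)  # torch.Size([2, 16, 64, 64])
```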

GAN-MRI enhanced multi-organ MRI segmentation: a deep learning perspective.

Channarayapatna Srinivasa A, Bhat SS, Baduwal D, Sim ZTJ, Patil SS, Amarapur A, Prakash KNB

PubMed | Aug 8 2025
Clinical magnetic resonance imaging (MRI) is a high-resolution tool widely used for detailed anatomical imaging. However, prolonged scan times often lead to motion artefacts and patient discomfort. Fast acquisition techniques can reduce scan times but often produce noisy, low-contrast images, compromising segmentation accuracy essential for diagnosis and treatment planning. To address these limitations, we developed an end-to-end framework that incorporates a BIDS-based data organiser and anonymizer, a GAN-based MR image enhancement model (GAN-MRI), AssemblyNet for brain region segmentation, and an attention-residual U-Net with Guided loss for abdominal and thigh segmentation. Thirty brain scans (5,400 slices), 32 abdominal scans (1,920 slices), and 55 thigh scans (2,200 slices) acquired from multiple MRI scanners (GE, Siemens, Toshiba) were evaluated. Image quality improved significantly, with SNR and CNR for brain scans increasing from 28.44 to 42.92 (p < 0.001) and 11.88 to 18.03 (p < 0.001), respectively. Abdominal scans exhibited SNR increases from 35.30 to 50.24 (p < 0.001) and CNR from 10,290.93 to 93,767.22 (p < 0.001). Double-blind evaluations highlighted improved visualisation of anatomical structures and bias field correction. Segmentation performance improved substantially in the thigh (muscle: +21%, IMAT: +9%) and abdominal regions (SSAT: +1%, DSAT: +2%, VAT: +12%), while brain segmentation metrics remained largely stable, reflecting the robustness of the baseline model. The proposed framework is designed to handle data from multiple anatomies and from different MRI scanners and centres, enhancing MRI scans and improving segmentation accuracy, diagnostic precision, and treatment planning while reducing scan times and maintaining patient comfort.
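
The BIDS-based organisation step mentioned here amounts to copying raw scans into a standard sub-<ID>/anat/ layout with BIDS-compliant names, as sketched below; the ID mapping, file naming, and the separate anonymisation step are illustrative assumptions, not the framework's implementation.

```python
import shutil
from pathlib import Path

def organise_to_bids(raw_files: dict, bids_root: str) -> None:
    """raw_files maps (subject_id, modality) -> path to a raw .nii.gz file,
    e.g. ("01", "T1w") -> "raw/patient_A_t1.nii.gz"."""
    root = Path(bids_root)
    for (sub, modality), src in raw_files.items():
        dest_dir = root / f"sub-{sub}" / "anat"
        dest_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest_dir / f"sub-{sub}_{modality}.nii.gz")

# Example:
# organise_to_bids({("01", "T1w"): "raw/patient_A_t1.nii.gz"}, "bids_dataset")
```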

GPT-4 for automated sequence-level determination of MRI protocols based on radiology request forms from clinical routine.

Terzis R, Kaya K, Schömig T, Janssen JP, Iuga AI, Kottlors J, Lennartz S, Gietzen C, Gözdas C, Müller L, Hahnfeldt R, Maintz D, Dratsch T, Pennig L

PubMed | Aug 8 2025
This study evaluated GPT-4's accuracy in MRI sequence selection based on radiology request forms (RRFs), comparing its performance to radiology residents. This retrospective study included 100 RRFs across four subspecialties (cardiac imaging, neuroradiology, musculoskeletal, and oncology). GPT-4 and two radiology residents (R1: 2 years, R2: 5 years MRI experience) selected sequences based on each patient's medical history and clinical questions. Considering imaging society guidelines, five board-certified specialized radiologists assessed the protocols in consensus for completeness, quality, and utility, using 5-point Likert scales. Clinical applicability was rated binarily by the institution's lead radiographer. GPT-4 achieved median scores of 3 (1-5) for completeness, 4 (1-5) for quality, and 4 (1-5) for utility, comparable to R1 (3 (1-5), 4 (1-5), 4 (1-5); each p > 0.05) but inferior to R2 (4 (1-5), 5 (1-5); p < 0.01, respectively, and 5 (1-5); p < 0.001). Subspecialty protocol quality varied: GPT-4 matched R1 (4 (2-4) vs. 4 (2-5), p = 0.20) and R2 (4 (2-5); p = 0.47) in cardiac imaging; showed no differences in neuroradiology (all 5 (1-5), p > 0.05); scored lower than R1 and R2 in musculoskeletal imaging (3 (2-5) vs. 4 (3-5); p < 0.01, and 5 (3-5); p < 0.001); and matched R1 (4 (1-5) vs. 2 (1-4), p = 0.12) as well as R2 (5 (2-5); p = 0.20) in oncology. GPT-4-based protocols were clinically applicable in 95% of cases, comparable to R1 (95%) and R2 (96%). GPT-4 generated MRI protocols with notable completeness, quality, utility, and clinical applicability, excelling in standardized subspecialties like cardiac and neuroradiology imaging while yielding lower accuracy in musculoskeletal examinations. Question: Long MRI acquisition times limit patient access, making accurate protocol selection crucial for efficient diagnostics, though it is time-consuming and error-prone, especially for inexperienced residents. Findings: GPT-4 generated MRI protocols of remarkable yet inconsistent quality, performing on par with an experienced resident in standardized fields, but moderately in musculoskeletal examinations. Clinical relevance: The large language model can assist less experienced radiologists in determining detailed MRI protocols and counteract increasing workloads. The model could function as a semi-automatic tool, generating MRI protocols for radiologists' confirmation, optimizing resource allocation, and improving diagnostics and cost-effectiveness.
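
A minimal sketch of how an RRF might be passed to GPT-4 for sequence-level protocol suggestions via the OpenAI Python client. The prompt wording, model name, and example RRF are assumptions, not the study's setup, and any generated protocol would still require radiologist confirmation as the abstract suggests.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical, de-identified request form text for illustration only.
rrf_text = (
    "Clinical history: 54-year-old with suspected lumbar disc herniation, "
    "left L5 radiculopathy. Question: nerve root compression?"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a radiology assistant. Propose an MRI protocol "
                    "as a list of sequences with plane and contrast phase."},
        {"role": "user", "content": rrf_text},
    ],
)
print(response.choices[0].message.content)  # proposed sequences, for radiologist review
```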