
Enhancing cerebral infarct classification by automatically extracting relevant fMRI features.

Dobromyslin VI, Zhou W

PubMed · Jun 17, 2025
Accurate detection of cortical infarcts is critical for timely treatment and improved patient outcomes. Current brain imaging methods often require invasive procedures that primarily assess blood vessel and structural white matter damage; there is a need for non-invasive approaches, such as functional MRI (fMRI), that better reflect neuronal viability. This study used automated machine learning (auto-ML) techniques to identify novel fMRI biomarkers specific to chronic cortical infarcts. We analyzed resting-state fMRI data from the multi-center ADNI dataset, which included 20 chronic infarct patients and 30 cognitively normal (CN) controls. Surface-based registration methods were applied to minimize the partial-volume effects typically associated with lower-resolution fMRI data. We evaluated the performance of 7 previously known fMRI biomarkers alongside 107 new auto-generated fMRI biomarkers across 33 classification models. Our analysis identified 6 new fMRI biomarkers that substantially improved infarct detection compared with previously established metrics. The best-performing combination of biomarkers and classifier achieved a cross-validation ROC score of 0.791, closely matching the accuracy of diffusion-weighted imaging methods used in acute stroke detection. The proposed auto-ML fMRI infarct-detection technique was robust across diverse imaging sites and scanner types, highlighting the potential of automated feature extraction to substantially enhance non-invasive infarct detection.
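A minimal sketch (not the authors' pipeline) of the kind of evaluation the abstract describes: screening candidate fMRI-derived features across several classifiers by cross-validated ROC AUC. The feature matrix, classifier choices, and fold count are placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 114))      # 50 subjects x (7 known + 107 auto) features (placeholder data)
y = np.array([1] * 20 + [0] * 30)   # 20 infarct patients, 30 CN controls

classifiers = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=200, random_state=0),
    "gbm": GradientBoostingClassifier(random_state=0),
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)
    auc = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc").mean()
    print(f"{name}: mean CV ROC AUC = {auc:.3f}")
```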

Transformer-augmented lightweight U-Net (UAAC-Net) for accurate MRI brain tumor segmentation.

Varghese NE, John A, C UDA, Pillai MJ

PubMed · Jun 17, 2025
Accurate segmentation of brain tumor images, particularly gliomas in MRI scans, is crucial for early diagnosis, monitoring progression, and evaluating tumor structure and therapeutic response. This work proposes a novel lightweight, transformer-based U-Net model for brain tumor segmentation that integrates attention mechanisms and multi-layer feature extraction via atrous convolution to capture long-range relationships and contextual information across image regions. Model performance is evaluated on the publicly accessible BraTS 2020 dataset using the Dice coefficient, accuracy, mean Intersection over Union (IoU), sensitivity, and specificity. The proposed model outperforms many existing methods, such as MimicNet, Swin Transformer-based UNet, and hybrid multiresolution-based UNet, and is capable of handling a variety of segmentation challenges. The experimental results demonstrate that the proposed model achieves an accuracy of 98.23%, a Dice score of 0.9716, and a mean IoU of 0.8242 during training when compared with current state-of-the-art methods.
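An illustrative sketch only (not the published UAAC-Net): a lightweight block combining parallel atrous (dilated) convolutions with a simple channel-attention gate, the two ingredients the abstract highlights. Channel sizes and dilation rates are assumptions.

```python
import torch
import torch.nn as nn

class AtrousAttentionBlock(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        # Parallel dilated convs capture context at multiple receptive fields.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d) for d in dilations
        )
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1)
        # Squeeze-and-excitation-style channel attention.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // 4, out_ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        fused = self.fuse(feats)
        return fused * self.attn(fused)

block = AtrousAttentionBlock(in_ch=64, out_ch=64)
print(block(torch.randn(1, 64, 128, 128)).shape)  # torch.Size([1, 64, 128, 128])
```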

Risk factors and prognostic indicators for progressive fibrosing interstitial lung disease: a deep learning-based CT quantification approach.

Lee K, Lee JH, Koh SY, Park H, Goo JM

PubMed · Jun 17, 2025
To investigate the value of deep learning-based quantitative CT (QCT) in predicting progressive fibrosing interstitial lung disease (PF-ILD) and assessing prognosis. This single-center retrospective study included ILD patients with CT examinations between January 2015 and June 2021. Each ILD finding (ground-glass opacity (GGO), reticular opacity (RO), honeycombing) and fibrosis (sum of RO and honeycombing) was quantified from baseline and follow-up CTs. Logistic regression was performed to identify predictors of PF-ILD, defined as radiologic progression along with forced vital capacity (FVC) decline ≥ 5% predicted. Cox proportional hazard regression was used to assess mortality. The added value of incorporating QCT into FVC was evaluated using C-index. Among 465 ILD patients (median age [IQR], 65 [58-71] years; 238 men), 148 had PF-ILD. After adjusting for clinico-radiological variables, baseline RO (OR: 1.096, 95% CI: 1.042, 1.152, p < 0.001) and fibrosis extent (OR: 1.035, 95% CI: 1.004, 1.067, p = 0.025) were PF-ILD predictors. Baseline RO (HR: 1.063, 95% CI: 1.013, 1.115, p = 0.013), honeycombing (HR: 1.074, 95% CI: 1.034, 1.116, p < 0.001), and fibrosis extent (HR: 1.067, 95% CI: 1.043, 1.093, p < 0.001) predicted poor prognosis. The Cox models combining baseline percent predicted FVC with QCT (each ILD finding, C-index: 0.714, 95% CI: 0.660, 0.764; fibrosis, C-index: 0.703, 95% CI: 0.649, 0.752; both p-values < 0.001) outperformed the model without QCT (C-index: 0.545, 95% CI: 0.500, 0.599). Deep learning-based QCT for ILD findings is useful for predicting PF-ILD and its prognosis. Question Does deep learning-based CT quantification of interstitial lung disease (ILD) findings have value in predicting progressive fibrosing ILD (PF-ILD) and improving prognostication? Findings Deep learning-based CT quantification of baseline reticular opacity and fibrosis predicted the development of PF-ILD. In addition, CT quantification demonstrated value in predicting all-cause mortality. Clinical relevance Deep learning-based CT quantification of ILD findings is useful for predicting PF-ILD and its prognosis. Identifying patients at high risk of PF-ILD through CT quantification enables closer monitoring and earlier treatment initiation, which may lead to improved clinical outcomes.
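A hedged sketch of the two modeling steps the abstract describes: logistic regression for PF-ILD status and a Cox proportional hazards model reporting a concordance index (C-index). The data frame, column names, and covariate set are assumptions, not the study's actual variables.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "fvc_pct_pred": rng.normal(75, 15, 200),       # baseline % predicted FVC
    "fibrosis_extent": rng.uniform(0, 40, 200),    # QCT fibrosis extent (%)
    "reticular_opacity": rng.uniform(0, 20, 200),  # QCT RO extent (%)
    "pf_ild": rng.integers(0, 2, 200),             # progressive fibrosing ILD (0/1)
    "time_months": rng.uniform(1, 72, 200),
    "death": rng.integers(0, 2, 200),
})

# 1) Which baseline QCT measures predict PF-ILD?
logit = LogisticRegression(max_iter=1000).fit(
    df[["fvc_pct_pred", "fibrosis_extent", "reticular_opacity"]], df["pf_ild"]
)

# 2) Does adding QCT to FVC improve the survival model's C-index?
cph = CoxPHFitter().fit(
    df[["time_months", "death", "fvc_pct_pred", "fibrosis_extent"]],
    duration_col="time_months", event_col="death",
)
print("C-index:", round(cph.concordance_index_, 3))
```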

A Robust Residual Three-dimensional Convolutional Neural Networks Model for Prediction of Amyloid-β Positivity by Using FDG-PET.

Ardakani I, Yamada T, Iwano S, Kumar Maurya S, Ishii K

PubMed · Jun 17, 2025
Widely used in oncology, 2-deoxy-2-[18F]fluoro-D-glucose (FDG) PET is more accessible and affordable than amyloid PET, which is a crucial tool for determining amyloid positivity in the diagnosis of Alzheimer disease (AD). This study aimed to leverage deep learning with residual 3D convolutional neural networks (3DCNN) to develop a robust model that predicts amyloid-β positivity from FDG-PET. A cohort of 187 patients was used for model development, ranging from cognitively normal individuals to those with dementia and other cognitive impairments, all of whom underwent T1-weighted MRI, 18F-FDG PET, and 11C-Pittsburgh compound B (PiB) PET scans. A residual 3DCNN model was configured using non-exhaustive grid search and trained on repeated random splits of the development dataset. We evaluated the performance of the model, and particularly its robustness, using a multisite dataset of 99 patients of different ethnicities with images at different levels of site harmonization. The model achieved mean AUC scores of 0.815 and 0.840 on images without and with site harmonization, respectively. It achieved higher AUC scores of 0.801 and 0.834 in the cognitively normal (CN) group, compared with 0.777 and 0.745 in the dementia group. The corresponding mean F1 scores were 0.770 and 0.810 on images without and with site harmonization. In the CN group, the model achieved lower F1 scores of 0.580 and 0.658, compared with 0.907 and 0.931 in the dementia group. We demonstrated that a residual 3DCNN can learn complex 3D spatial patterns in FDG-PET images and robustly predict amyloid-β positivity with significantly less reliance on site harmonization preprocessing.
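A minimal sketch of a residual 3D convolutional block of the kind the abstract describes for FDG-PET volumes; the published model's depth, channel widths, and input size are not given here, so the values below are assumptions.

```python
import torch
import torch.nn as nn

class Residual3DBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.BatchNorm3d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # identity shortcut

net = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
    Residual3DBlock(16), nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(16, 1),                      # logit for amyloid-beta positivity
)
print(net(torch.randn(2, 1, 64, 64, 64)).shape)  # torch.Size([2, 1])
```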

A Digital Twin Framework for Adaptive Treatment Planning in Radiotherapy

Chih-Wei Chang, Sri Akkineni, Mingzhe Hu, Keyur D. Shah, Jun Zhou, Xiaofeng Yang

arXiv preprint · Jun 17, 2025
This study aims to develop and evaluate a digital twin (DT) framework to enhance adaptive proton therapy for prostate stereotactic body radiotherapy (SBRT), focusing on improving treatment precision for dominant intraprostatic lesions (DILs) while minimizing organ-at-risk (OAR) toxicity. We propose a DT framework combining deep learning (DL)-based deformable image registration (DIR) with a prior treatment database to generate synthetic CTs (sCTs) for predicting interfractional anatomical changes. Using daily CBCT from five prostate SBRT patients with DILs, the framework precomputes multiple plans with high- (DT-H) and low-similarity (DT-L) sCTs. Plan optimization is performed in RayStation 2023B, assuming a constant RBE of 1.1 and robustly accounting for positional and range uncertainties. Plan quality is evaluated via a modified ProKnow score across two fractions, with reoptimization limited to 10 minutes. Daily CBCT evaluation showed that clinical plans often violated OAR constraints (e.g., bladder V20.8Gy, rectum V23Gy), with DIL V100 < 90% in 2 patients, indicating SIFB failure. DT-H plans, using high-similarity sCTs, achieved better or comparable DIL/CTV coverage and lower OAR doses, with reoptimization completed within 10 min (e.g., DT-H-REopt-A score: 154.3-165.9). DT-L plans showed variable outcomes; lower similarity correlated with reduced DIL coverage (e.g., Patient 4: 84.7%). DT-H consistently outperformed clinical plans within time limits, while extended optimization brought DT-L and clinical plans closer to DT-H quality. This DT framework enables rapid, personalized adaptive proton therapy, improving DIL targeting and reducing toxicity. By addressing geometric uncertainties, it supports outcome gains in ultra-hypofractionated prostate RT and lays the groundwork for future multimodal anatomical prediction.
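Illustrative decision logic only, not the authors' framework: pick the precomputed plan whose synthetic CT best matches the day's CBCT, and fall back to reoptimization when similarity is low. The similarity metric, threshold, and plan records are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class PrecomputedPlan:
    name: str
    sct_similarity: float  # e.g., a DIR-based similarity score in [0, 1]
    proknow_score: float

def select_plan(plans, similarity_threshold=0.85):
    """Choose the highest-similarity precomputed plan, else reoptimize."""
    best = max(plans, key=lambda p: p.sct_similarity)
    if best.sct_similarity >= similarity_threshold:
        return f"deliver {best.name} (sCT similarity {best.sct_similarity:.2f})"
    return "reoptimize within the 10-minute window"

plans = [PrecomputedPlan("DT-H", 0.92, 160.1), PrecomputedPlan("DT-L", 0.71, 148.4)]
print(select_plan(plans))
```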

SCISSOR: Mitigating Semantic Bias through Cluster-Aware Siamese Networks for Robust Classification

Shuo Yang, Bardh Prenkaj, Gjergji Kasneci

arXiv preprint · Jun 17, 2025
Shortcut learning undermines model generalization to out-of-distribution data. While the literature attributes shortcuts to biases in superficial features, we show that imbalances in the semantic distribution of sample embeddings induce spurious semantic correlations, compromising model robustness. To address this issue, we propose SCISSOR (Semantic Cluster Intervention for Suppressing ShORtcut), a Siamese network-based debiasing approach that remaps the semantic space by discouraging latent clusters exploited as shortcuts. Unlike prior data-debiasing approaches, SCISSOR eliminates the need for data augmentation and rewriting. We evaluate SCISSOR on 6 models across 4 benchmarks: Chest-XRay and Not-MNIST in computer vision, and GYAFC and Yelp in NLP tasks. Compared to several baselines, SCISSOR reports +5.3 absolute points in F1 score on GYAFC, +7.3 on Yelp, +7.7 on Chest-XRay, and +1 on Not-MNIST. SCISSOR is also highly advantageous for lightweight models with ~9.5% improvement on F1 for ViT on computer vision datasets and ~11.9% for BERT on NLP. Our study redefines the landscape of model generalization by addressing overlooked semantic biases, establishing SCISSOR as a foundational framework for mitigating shortcut learning and fostering more robust, bias-resistant AI systems.
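One possible, heavily simplified reading of the idea (not the SCISSOR objective itself): cluster frozen embeddings, then train a small Siamese projection so that pairs sharing a latent cluster but differing in label are pushed apart, weakening that cluster as a shortcut. The cluster count, pairing scheme, and loss weights are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

torch.manual_seed(0)
emb = torch.randn(256, 64)                          # frozen encoder embeddings
labels = torch.randint(0, 2, (256,))
clusters = torch.as_tensor(KMeans(n_clusters=4, n_init=10).fit_predict(emb.numpy()))

proj = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 32))
opt = torch.optim.Adam(proj.parameters(), lr=1e-3)

for step in range(200):
    i, j = torch.randint(0, 256, (2, 128))          # random pairs of indices
    zi, zj = F.normalize(proj(emb[i]), dim=1), F.normalize(proj(emb[j]), dim=1)
    sim = (zi * zj).sum(dim=1)                      # cosine similarity per pair
    same_label = (labels[i] == labels[j]).float()
    same_cluster = (clusters[i] == clusters[j]).float()
    # Pull same-label pairs together; push same-cluster, different-label pairs apart.
    loss = ((1 - sim) * same_label
            + sim.clamp(min=0) * same_cluster * (1 - same_label)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final pair loss: {loss.item():.3f}")
```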

Integrating Radiomics with Deep Learning Enhances Multiple Sclerosis Lesion Delineation

Nadezhda Alsahanova, Pavel Bartenev, Maksim Sharaev, Milos Ljubisavljevic, Taleb Al. Mansoori, Yauhen Statsenko

arXiv preprint · Jun 17, 2025
Background: Accurate lesion segmentation is critical for multiple sclerosis (MS) diagnosis, yet current deep learning approaches face robustness challenges. Aim: This study improves MS lesion segmentation by combining data fusion and deep learning techniques. Materials and Methods: We proposed novel radiomic features (concentration rate and Rényi entropy) to characterize different MS lesion types and fused these with raw imaging data. The study integrated radiomic features with imaging data through a ResNeXt-UNet architecture and an attention-augmented U-Net architecture. Our approach was evaluated on scans from 46 patients (1102 slices), comparing performance before and after data fusion. Results: The radiomics-enhanced ResNeXt-UNet demonstrated high segmentation accuracy, achieving significant improvements in precision and sensitivity over the MRI-only baseline and a Dice score of 0.774 ± 0.05 (p < 0.001, Bonferroni-adjusted Wilcoxon signed-rank tests). The radiomics-enhanced attention-augmented U-Net showed greater model stability, evidenced by reduced performance variability (SDD = 0.18 ± 0.09 vs. 0.21 ± 0.06; p = 0.03) and smoother validation curves with radiomics integration. Conclusion: These results validate our hypothesis that fusing radiomics with raw imaging data boosts segmentation performance and stability in state-of-the-art models.
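A worked sketch of one of the named radiomic features: the Rényi entropy of a lesion's intensity histogram, H_α = 1/(1−α) · log(Σ_i p_i^α) for α ≠ 1, which tends to the Shannon entropy as α → 1. How the authors bin intensities, choose α, or define the "concentration rate" feature is not specified here; the values below are assumptions.

```python
import numpy as np

def renyi_entropy(values, alpha=2.0, bins=32):
    """Rényi entropy of the intensity histogram of a lesion's voxel values."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                 # drop empty bins
    if np.isclose(alpha, 1.0):
        return float(-(p * np.log(p)).sum())     # Shannon limit
    return float(np.log((p ** alpha).sum()) / (1.0 - alpha))

lesion_voxels = np.random.default_rng(0).normal(0.6, 0.1, 5000)  # placeholder intensities
print(round(renyi_entropy(lesion_voxels, alpha=2.0), 3))
```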

BRISC: Annotated Dataset for Brain Tumor Segmentation and Classification with Swin-HAFNet

Amirreza Fateh, Yasin Rezvani, Sara Moayedi, Sadjad Rezvani, Fatemeh Fateh, Mansoor Fateh

arXiv preprint · Jun 17, 2025
Accurate segmentation and classification of brain tumors from Magnetic Resonance Imaging (MRI) remain key challenges in medical image analysis, largely due to the lack of high-quality, balanced, and diverse datasets. In this work, we present a new curated MRI dataset designed specifically for brain tumor segmentation and classification tasks. The dataset comprises 6,000 contrast-enhanced T1-weighted MRI scans annotated by certified radiologists and physicians, spanning three major tumor types (glioma, meningioma, and pituitary) as well as non-tumorous cases. Each sample includes high-resolution labels and is categorized across axial, sagittal, and coronal imaging planes to facilitate robust model development and cross-view generalization. To demonstrate the utility of the dataset, we propose a transformer-based segmentation model and benchmark it against established baselines. Our method achieves the highest weighted mean Intersection-over-Union (IoU) of 82.3%, with improvements observed across all tumor categories. Importantly, this study serves primarily as an introduction to the dataset, establishing foundational benchmarks for future research. We envision this dataset as a valuable resource for advancing machine learning applications in neuro-oncology, supporting both academic research and clinical decision-support development. Dataset link: https://www.kaggle.com/datasets/briscdataset/brisc2025/
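A hedged sketch of the reported metric: a class-frequency-weighted mean IoU over segmentation masks. The exact weighting the authors use is not stated, so per-class pixel frequency in the ground truth is assumed here.

```python
import numpy as np

def weighted_mean_iou(pred, target, num_classes):
    """Mean IoU over classes, weighted by ground-truth pixel frequency."""
    ious, weights = [], []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue                              # class absent from both masks
        ious.append(np.logical_and(p, t).sum() / union)
        weights.append(t.sum())
    return float(np.average(ious, weights=weights))

rng = np.random.default_rng(0)
target = rng.integers(0, 4, (128, 128))           # 0 = background, 1-3 = tumor types
pred = target.copy()
pred[rng.random((128, 128)) < 0.1] = 0            # corrupt 10% of predictions
print(round(weighted_mean_iou(pred, target, num_classes=4), 3))
```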

Latent Anomaly Detection: Masked VQ-GAN for Unsupervised Segmentation in Medical CBCT

Pengwei Wang

arXiv preprint · Jun 17, 2025
Advances in treatment technology now allow for the use of customizable 3D-printed hydrogel wound dressings for patients with osteoradionecrosis (ORN) of the jaw (ONJ). Meanwhile, deep learning has enabled precise segmentation of 3D medical images using tools like nnUNet. However, the scarcity of labeled data in ONJ imaging makes supervised training impractical. This study aims to develop an unsupervised training approach for automatically identifying anomalies in imaging scans. We propose a novel two-stage training pipeline. In the first stage, a VQ-GAN is trained to accurately reconstruct normal subjects. In the second stage, random cube masking and ONJ-specific masking are applied to train a new encoder capable of recovering the data. The proposed method achieves successful segmentation on both simulated and real patient data. This approach provides a fast initial segmentation solution, reducing the burden of manual labeling. Additionally, it has the potential to be directly used for 3D printing when combined with hand-tuned post-processing.
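A sketch of the anomaly-scoring step only: a model trained to reconstruct normal anatomy is applied to a new scan, and voxels with high reconstruction error are flagged as anomalous. The reconstruction function here is a trivial stand-in for the trained masked VQ-GAN, and the error threshold is an assumption.

```python
import numpy as np

def anomaly_mask(volume, reconstruct, threshold=0.15):
    """Flag voxels whose reconstruction error exceeds a threshold."""
    recon = reconstruct(volume)
    error = np.abs(volume - recon)        # voxel-wise reconstruction error
    return error > threshold              # binary anomaly segmentation

def fake_normal_reconstruction(volume):
    # Stand-in for the trained model: returns "normal-looking" intensities.
    return np.full_like(volume, 0.5)

rng = np.random.default_rng(0)
scan = rng.normal(0.5, 0.02, (64, 64, 64))
scan[20:30, 20:30, 20:30] += 0.4          # simulated lesion
mask = anomaly_mask(scan, fake_normal_reconstruction)
print("anomalous voxels:", int(mask.sum()))
```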