
Development of Multiparametric Prognostic Models for Stereotactic Magnetic Resonance Guided Radiation Therapy of Pancreatic Cancers.

Michalet M, Valenzuela G, Nougaret S, Tardieu M, Azria D, Riou O

PubMed · Jul 1, 2025
Stereotactic magnetic resonance guided adaptive radiation therapy (SMART) is a new option for local treatment of unresectable pancreatic ductal adenocarcinoma, showing encouraging survival and local control (LC) results. Nevertheless, some patients experience early local and/or metastatic recurrence leading to death. We aimed to develop multiparametric prognostic models for these patients. All patients treated in our institution with SMART for an unresectable pancreatic ductal adenocarcinoma between October 21, 2019, and August 5, 2022 were included. Several initial clinical characteristics as well as dosimetric data of SMART were recorded. Radiomics data from 0.35-T simulation magnetic resonance imaging were extracted. All these data were combined to build prognostic models of overall survival (OS) and LC using machine learning algorithms. Eighty-three patients with a median age of 64.9 years were included. A majority of patients had locally advanced pancreatic cancer (77%). The median OS was 21 months after SMART completion and 27 months after chemotherapy initiation. The 6- and 12-month post-SMART OS rates were 87.8% (95% CI, 78.2%-93.2%) and 70.9% (95% CI, 58.8%-80.0%), respectively. The best model for OS was the Cox proportional hazards survival analysis using clinical data, with an inverse probability of censoring weighted concordance index (IPCW C-index) of 0.87. Tested on its 12-month OS prediction capacity, this model had good performance (sensitivity 67%, specificity 71%, and area under the curve 0.90). The median LC was not reached. The 6- and 12-month post-SMART LC rates were 92.4% (95% CI, 83.7%-96.6%) and 76.3% (95% CI, 62.6%-85.5%), respectively. The best model for LC was the component-wise gradient boosting survival analysis using clinical and radiomics data, with an IPCW C-index of 0.80. Tested on its 9-month LC prediction capacity, this model had good performance (sensitivity 50%, specificity 97%, and area under the curve 0.78). Combining clinical and radiomics data in multiparametric prognostic models using machine learning algorithms showed good performance for the prediction of OS and LC. External validation of these models will be needed.
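For readers who want to reproduce this kind of analysis, the sketch below fits a Cox proportional hazards model and scores it with the IPCW concordance index using the scikit-survival library; the covariates and synthetic data are placeholders, not the authors' actual clinical features.

```python
# Minimal sketch: Cox PH survival model scored with the IPCW concordance index,
# using scikit-survival. Features and data here are synthetic placeholders.
import numpy as np
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_ipcw
from sksurv.util import Surv

rng = np.random.default_rng(0)
n = 83  # cohort size from the abstract; the covariates below are invented
X = rng.normal(size=(n, 4))              # e.g., age, CA 19-9, tumor size, stage
time = rng.exponential(21.0, size=n)     # follow-up in months
event = rng.random(n) < 0.6              # True = death observed

y = Surv.from_arrays(event=event, time=time)
train, test = np.arange(0, 60), np.arange(60, n)

model = CoxPHSurvivalAnalysis().fit(X[train], y[train])
risk = model.predict(X[test])            # higher score = higher predicted risk

# IPCW C-index evaluated up to a 24-month horizon (tau)
cindex = concordance_index_ipcw(y[train], y[test], risk, tau=24.0)[0]
print(f"IPCW C-index: {cindex:.2f}")
```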

Phantom-based evaluation of image quality in Transformer-enhanced 2048-matrix CT imaging at low and ultralow doses.

Li Q, Liu L, Zhang Y, Zhang L, Wang L, Pan Z, Xu M, Zhang S, Xie X

PubMed · Jul 1, 2025
To compare the quality of standard 512-matrix, standard 1024-matrix, and Swin2SR-based 2048-matrix phantom images under different scanning protocols. The Catphan 600 phantom was scanned using a multidetector CT scanner under two protocols: 120 kV/100 mA (CT dose index volume = 3.4 mGy) to simulate low-dose CT, and 70 kV/40 mA (0.27 mGy) to simulate ultralow-dose CT. Raw data were reconstructed into standard 512-matrix images using three methods: filtered back projection (FBP), adaptive statistical iterative reconstruction at 40% intensity (ASIR-V), and deep learning image reconstruction at high intensity (DLIR-H). The Swin2SR super-resolution model and a super-resolution convolutional neural network (SRCNN) were each used to generate 2048-matrix images (Swin2SR-2048 and SRCNN-2048, respectively), and the quality of the two sets of images was compared. Image quality was evaluated with ImQuest software (v7.2.0.0, Duke University) based on line-pair clarity, task-based transfer function (TTF), image noise, and noise power spectrum (NPS). At matched radiation dose and reconstruction method, Swin2SR-2048 images resolved more line pairs than both standard-512 and standard-1024 images. Except for the 0.27 mGy/DLIR-H/standard kernel sequence, the TTF-50% of Teflon increased after super-resolution processing. Statistically significant differences in TTF-50% were observed among the standard 512, standard 1024, and Swin2SR-2048 images (all p < 0.05). Swin2SR-2048 images exhibited lower image noise and peak NPS than both standard 512- and 1024-matrix images, with significant differences among the three matrix types (all p < 0.05). Swin2SR-2048 images also demonstrated superior quality compared with SRCNN-2048, with significant differences in image noise (p < 0.001), peak NPS (p < 0.05), and TTF-50% of Teflon (p < 0.05). Transformer-enhanced 2048-matrix CT images improve spatial resolution and reduce image noise compared with standard 512- and 1024-matrix images.
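For experimentation with the same model family, the following sketch runs a publicly released Swin2SR checkpoint through Hugging Face transformers on a single image; the study's CT-specific fine-tuned weights are not public, so the generic ×2 checkpoint and the file names here are stand-ins.

```python
# Minimal sketch: super-resolving one image with a public Swin2SR checkpoint
# (Hugging Face transformers). The paper's CT-trained weights are not public;
# this generic x2 model and the file names are stand-ins.
import torch
from PIL import Image
from transformers import Swin2SRForImageSuperResolution, Swin2SRImageProcessor

model_id = "caidas/swin2SR-classical-sr-x2-64"   # generic x2 model, not the paper's
processor = Swin2SRImageProcessor.from_pretrained(model_id)
model = Swin2SRForImageSuperResolution.from_pretrained(model_id).eval()

image = Image.open("ct_slice_512.png").convert("RGB")  # hypothetical 512-matrix slice
inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.reconstruction has shape (1, 3, 2H, 2W) with values in [0, 1]
sr = outputs.reconstruction.squeeze().clamp(0, 1).permute(1, 2, 0).numpy()
Image.fromarray((sr * 255).round().astype("uint8")).save("ct_slice_1024.png")
```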

MedScale-Former: Self-guided multiscale transformer for medical image segmentation.

Karimijafarbigloo S, Azad R, Kazerouni A, Merhof D

PubMed · Jul 1, 2025
Accurate medical image segmentation is crucial for enabling automated clinical decision procedures. However, existing supervised deep learning methods for medical image segmentation face significant challenges due to their reliance on extensive labeled training data. To address this limitation, our novel approach introduces a dual-branch transformer network operating on two scales, strategically encoding global contextual dependencies while preserving local information. To promote self-supervised learning, our method leverages semantic dependencies between the two scales, generating a supervisory signal for inter-scale consistency. Additionally, it incorporates a spatial stability loss within each scale, fostering self-supervised content clustering. While intra-scale and inter-scale consistency losses enhance feature uniformity within clusters, we introduce a cross-entropy loss function atop the clustering score map to effectively model cluster distributions and refine decision boundaries. Furthermore, to account for pixel-level similarities between organ or lesion subregions, we propose a selective kernel regional attention module as a plug-and-play component. This module adeptly captures and outlines organ or lesion regions, slightly enhancing the definition of object boundaries. Our experimental results on skin lesion, lung, and multiple myeloma plasma cell segmentation tasks demonstrate the superior performance of our method compared to state-of-the-art approaches.
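To make the loss design concrete, here is a minimal PyTorch sketch of one plausible reading of the inter-scale consistency idea: cluster score maps from the two scale branches are aligned with a KL term, and a cross-entropy loss is applied atop the clustering score map; shapes, cluster count, and weights are illustrative assumptions rather than the authors' exact formulation.

```python
# Minimal sketch of an inter-scale consistency loss plus a clustering
# cross-entropy, loosely following the idea described above. Shapes,
# cluster count, and loss weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def interscale_losses(logits_fine, logits_coarse, w_consist=1.0, w_ce=1.0):
    """logits_*: (B, K, H, W) cluster score maps from the two scale branches."""
    # Bring the coarse-scale map to the fine resolution before comparing.
    logits_coarse = F.interpolate(
        logits_coarse, size=logits_fine.shape[-2:], mode="bilinear",
        align_corners=False)

    log_p_fine = F.log_softmax(logits_fine, dim=1)
    p_coarse = F.softmax(logits_coarse, dim=1)

    # Inter-scale consistency: fine-scale assignments should match coarse ones.
    consist = F.kl_div(log_p_fine, p_coarse, reduction="batchmean")

    # Cross-entropy on the score map, using the coarse argmax as pseudo-labels.
    pseudo = p_coarse.argmax(dim=1)                 # (B, H, W)
    ce = F.cross_entropy(logits_fine, pseudo)

    return w_consist * consist + w_ce * ce

# Example with K = 8 clusters on a 2-image batch
fine = torch.randn(2, 8, 64, 64)
coarse = torch.randn(2, 8, 32, 32)
loss = interscale_losses(fine, coarse)
```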

A vision transformer-convolutional neural network framework for decision-transparent dual-energy X-ray absorptiometry recommendations using chest low-dose CT.

Kuo DP, Chen YC, Cheng SJ, Hsieh KL, Li YT, Kuo PC, Chang YC, Chen CY

PubMed · Jul 1, 2025
This study introduces an ensemble framework that integrates Vision Transformer (ViT) and convolutional neural network (CNN) models to leverage their complementary strengths, generating visualized, decision-transparent recommendations for dual-energy X-ray absorptiometry (DXA) scans from chest low-dose computed tomography (LDCT). The framework was developed using data from 321 individuals and validated with an independent test cohort of 186 individuals. It addresses two classification tasks: (1) distinguishing normal from abnormal bone mineral density (BMD) and (2) differentiating osteoporosis from non-osteoporosis. Three field-of-view (FOV) settings (fitFOV, the entire vertebra; halfFOV, the vertebral body only; and largeFOV, fitFOV + 20%) were analyzed to assess their impact on model performance. Model predictions were weighted and combined to enhance classification accuracy, and visualizations were generated to improve decision transparency. DXA scans were recommended for individuals classified as having abnormal BMD or osteoporosis. The ensemble framework significantly outperformed the individual models in both classification tasks (McNemar test, p < 0.001). In the development cohort, it achieved 91.6% accuracy for task 1 with largeFOV (area under the receiver operating characteristic curve [AUROC]: 0.97) and 86.0% accuracy for task 2 with fitFOV (AUROC: 0.94). In the test cohort, it demonstrated 86.6% accuracy for task 1 (AUROC: 0.93) and 76.9% accuracy for task 2 (AUROC: 0.99). DXA recommendation accuracy was 91.6% and 87.1% in the development and test cohorts, respectively, with notably high accuracy for osteoporosis detection (98.7% and 100%). This combined ViT-CNN framework effectively assesses bone status from LDCT images, particularly when utilizing fitFOV and largeFOV settings. By visualizing classification confidence and vertebral abnormalities, the proposed framework enhances decision transparency and supports clinicians in making informed DXA recommendations following opportunistic osteoporosis screening.
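The weighted-combination step is straightforward to picture in code; the sketch below soft-votes ViT and CNN class probabilities and thresholds the abnormal-BMD probability, with weights and threshold invented for illustration.

```python
# Minimal sketch: weighted soft-voting ensemble of ViT and CNN predictions.
# Weights and the decision threshold are illustrative, not the paper's values.
import numpy as np

def ensemble_predict(p_vit: np.ndarray, p_cnn: np.ndarray,
                     w_vit: float = 0.6, w_cnn: float = 0.4,
                     threshold: float = 0.5) -> np.ndarray:
    """p_*: (N, 2) softmax probabilities for [normal BMD, abnormal BMD]."""
    p = w_vit * p_vit + w_cnn * p_cnn
    # Recommend DXA when the abnormal-BMD probability crosses the threshold.
    return (p[:, 1] >= threshold).astype(int)

p_vit = np.array([[0.30, 0.70], [0.85, 0.15]])
p_cnn = np.array([[0.40, 0.60], [0.70, 0.30]])
print(ensemble_predict(p_vit, p_cnn))  # -> [1 0]: first case flagged for DXA
```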

MDAL: Modality-difference-based active learning for multimodal medical image analysis via contrastive learning and pointwise mutual information.

Wang H, Jin Q, Du X, Wang L, Guo Q, Li H, Wang M, Song Z

PubMed · Jul 1, 2025
Multimodal medical images reveal different characteristics of the same anatomy or lesion, offering significant clinical value. Deep learning has achieved widespread success in medical image analysis with large-scale labeled datasets. However, annotating medical images is expensive and labor-intensive for doctors, and the variations between modalities further increase the annotation cost for multimodal images. This study aims to minimize the annotation cost for multimodal medical image analysis. We propose MDAL, a novel active learning framework based on modality differences for multimodal medical images. MDAL quantifies sample-wise modality differences through pointwise mutual information estimated by multimodal contrastive learning. We hypothesize that samples with larger modality differences are more informative for annotation and propose two sampling strategies based on these differences: MaxMD and DiverseMD. Moreover, MDAL can select informative samples in one shot without initial labeled data. We evaluated MDAL on public brain glioma and meningioma segmentation datasets and an in-house ovarian cancer classification dataset. MDAL outperforms other advanced active learning competitors. Furthermore, when using only 20%, 20%, and 15% of labeled samples in these datasets, MDAL reaches 99.6%, 99.9%, and 99.3% of the performance of supervised training with the fully labeled datasets, respectively. These results show that our proposed MDAL can significantly reduce the annotation cost for multimodal medical image analysis. We expect MDAL can be further extended to other multimodal medical data for lower annotation costs.
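As one plausible reading of the sampling strategy, the sketch below derives per-sample modality-difference scores from paired embeddings via an InfoNCE-style pointwise mutual information estimate and applies MaxMD-style top-k selection; the estimator and all names are assumptions based on the abstract, not the released implementation.

```python
# Minimal sketch: PMI-style modality-difference scores from paired embeddings,
# followed by MaxMD-style top-k selection. An assumption-laden reading of the
# abstract, not the authors' released code.
import torch
import torch.nn.functional as F

def modality_difference_scores(z_a: torch.Tensor, z_b: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """z_a, z_b: (N, D) embeddings of the same samples in two modalities."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.T / temperature          # (N, N) similarity matrix
    # InfoNCE critic: log-ratio of the matched pair to the average over
    # candidates approximates pointwise mutual information.
    pmi = logits.diag() - torch.logsumexp(logits, dim=1) + torch.log(
        torch.tensor(float(logits.shape[1])))
    # Low PMI = weak cross-modal agreement = large modality difference.
    return -pmi

def max_md_select(z_a, z_b, budget: int):
    scores = modality_difference_scores(z_a, z_b)
    return torch.topk(scores, k=budget).indices  # indices to send for annotation

z_a, z_b = torch.randn(100, 128), torch.randn(100, 128)
picked = max_md_select(z_a, z_b, budget=15)
```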

Transformer-based skeletal muscle deep-learning model for survival prediction in gastric cancer patients after curative resection.

Chen Q, Jian L, Xiao H, Zhang B, Yu X, Lai B, Wu X, You J, Jin Z, Yu L, Zhang S

PubMed · Jul 1, 2025
We developed and evaluated a skeletal muscle deep-learning (SMDL) model using skeletal muscle computed tomography (CT) imaging to predict survival in patients with gastric cancer (GC). This multicenter retrospective study included patients who underwent curative resection of GC between April 2008 and December 2020. Preoperative CT images at the third lumbar vertebra were used to develop a Transformer-based SMDL model for predicting recurrence-free survival (RFS) and disease-specific survival (DSS). The predictive performance of the SMDL model was assessed using the area under the curve (AUC) and benchmarked against both alternative artificial intelligence models and conventional body composition parameters. The association between the model score and survival was assessed using Cox regression analysis. An integrated model combining the SMDL signature with clinical variables was constructed, and its discrimination and fairness were evaluated. A total of 1242, 311, and 94 patients were assigned to the training, internal, and external validation cohorts, respectively. The Transformer-based SMDL model yielded AUCs of 0.791-0.943 for predicting RFS and DSS across all three cohorts and significantly outperformed other models and body composition parameters. The model score was a strong independent prognostic factor for survival. Incorporating the SMDL signature into the clinical model resulted in better prognostic prediction performance. The false-negative and false-positive rates of the integrated model were similar across sex and age subgroups, indicating robust fairness. The Transformer-based SMDL model could accurately predict survival in patients with GC and identify those at high risk of recurrence or death, thereby assisting clinical decision-making.
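The final integration step, entering the deep-learning score as a covariate alongside clinical variables in a Cox regression, can be sketched with the lifelines library; the column names and synthetic data below are placeholders.

```python
# Minimal sketch: Cox regression combining a deep-learning model score with
# clinical covariates, using lifelines. Column names and data are placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "smdl_score": rng.normal(size=n),          # deep-learning model output
    "age": rng.integers(40, 85, size=n),
    "stage": rng.integers(1, 4, size=n),       # hypothetical clinical variable
    "time_months": rng.exponential(36, size=n),
    "event": rng.random(n) < 0.4,              # recurrence or death observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()                            # hazard ratios per covariate
print("C-index:", cph.concordance_index_)
```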

Quantitative Ischemic Lesions of Portable Low-Field Strength MRI Using Deep Learning-Based Super-Resolution.

Bian Y, Wang L, Li J, Yang X, Wang E, Li Y, Liu Y, Xiang L, Yang Q

PubMed · Jul 1, 2025
Deep learning-based synthetic super-resolution magnetic resonance imaging (SynthMRI) may improve the quantitative lesion performance of portable low-field strength magnetic resonance imaging (LF-MRI). The aim of this study was to evaluate whether SynthMRI improves the diagnostic performance of LF-MRI in assessing ischemic lesions. We retrospectively included 178 stroke patients and 104 healthy controls with both LF-MRI and high-field strength magnetic resonance imaging (HF-MRI) examinations. Using HF-MRI as the ground truth, the deep learning-based super-resolution framework SCUNet (Swin-Conv-UNet) was pretrained on large-scale open-source datasets to generate SynthMRI images from LF-MRI images. Participants were split into a training set (64.2%) to fine-tune the pretrained SCUNet and a testing set (35.8%) to evaluate the performance of SynthMRI. Sensitivity and specificity of LF-MRI and SynthMRI were assessed. Agreement with HF-MRI for the Alberta Stroke Program Early CT Score on diffusion-weighted imaging in the anterior circulation (DWI-ASPECTS) and in the posterior circulation (DWI-pc-ASPECTS) was evaluated using intraclass correlation coefficients (ICCs). Agreement with HF-MRI for lesion volume and mean apparent diffusion coefficient (ADC) within lesions was assessed using both ICCs and Pearson correlation coefficients. SynthMRI demonstrated significantly higher sensitivity and specificity than LF-MRI (89.0% [83.3%-94.6%] versus 77.1% [69.5%-84.7%], P<0.001; and 91.3% [84.7%-98.0%] versus 71.0% [60.3%-81.7%], P<0.001, respectively). The ICC of DWI-ASPECTS between SynthMRI and HF-MRI was also better than that between LF-MRI and HF-MRI (0.952 [0.920-0.972] versus 0.797 [0.678-0.876], P<0.001). For lesion volume and mean ADC within lesions, SynthMRI showed significantly higher agreement (P<0.001) with HF-MRI (ICC>0.85, r>0.78) than LF-MRI (ICC>0.45, r>0.35). Furthermore, across the poststroke phases, SynthMRI exhibited significantly higher agreement with HF-MRI than LF-MRI during the early hyperacute and subacute phases. SynthMRI demonstrates high agreement with HF-MRI in detecting and quantifying ischemic lesions and outperforms LF-MRI, particularly for lesions in the early hyperacute and subacute phases.
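The agreement statistics used here are easy to reproduce; below is a sketch computing the ICC with pingouin and the Pearson correlation with SciPy, using synthetic lesion-volume pairs in place of real measurements.

```python
# Minimal sketch: ICC and Pearson r between paired lesion-volume measurements
# (e.g., SynthMRI vs. HF-MRI), using pingouin and SciPy. Data are synthetic.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
hf = rng.gamma(shape=2.0, scale=10.0, size=50)        # HF-MRI volumes (mL)
synth = hf + rng.normal(scale=3.0, size=50)           # SynthMRI volumes (mL)

long = pd.DataFrame({
    "subject": np.tile(np.arange(50), 2),
    "rater": ["HF"] * 50 + ["Synth"] * 50,
    "volume": np.concatenate([hf, synth]),
})
icc = pg.intraclass_corr(data=long, targets="subject",
                         raters="rater", ratings="volume")
print(icc[["Type", "ICC", "CI95%"]])                  # e.g., ICC2 for agreement

r, p = pearsonr(hf, synth)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")
```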

CASCADE-FSL: Few-shot learning for collateral evaluation in ischemic stroke.

Aktar M, Tampieri D, Xiao Y, Rivaz H, Kersten-Oertel M

PubMed · Jul 1, 2025
Assessing collateral circulation is essential in determining the best treatment for ischemic stroke patients, as good collaterals open up treatment options such as thrombectomy, whereas poor collaterals can adversely affect treatment by leading to excess bleeding and, eventually, death. To reduce inter- and intra-rater variability and save time in radiologist assessments, computer-aided methods, mainly using deep neural networks, have gained popularity. The current literature demonstrates effectiveness when using balanced and extensive datasets in deep learning; however, such datasets are scarce for stroke, and the number of samples for poor collateral cases is often limited compared to those for good collaterals. We propose a novel approach called CASCADE-FSL to distinguish poor collaterals effectively. Using a small, unbalanced dataset, we employ a few-shot learning approach for training, using a 2D ResNet-50 as a backbone and designating good and intermediate cases as two normal classes. We identify poor collaterals as anomalies in comparison to the normal classes. Our novel approach achieves an overall accuracy, sensitivity, and specificity of 0.88, 0.88, and 0.89, respectively, demonstrating its effectiveness in addressing the imbalanced dataset challenge and accurately identifying poor collateral circulation cases.
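The anomaly-detection framing can be sketched in a few lines: embed images with a ResNet-50 backbone, build prototypes for the two normal classes (good and intermediate collaterals), and flag images far from both prototypes as poor; the distance metric and threshold below are illustrative assumptions.

```python
# Minimal sketch: prototype-based anomaly scoring with a ResNet-50 backbone.
# Good/intermediate collaterals form the "normal" prototypes; images far from
# both are flagged as poor. Metric and threshold are illustrative.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()           # expose 2048-d features
backbone.eval()

@torch.no_grad()
def embed(x: torch.Tensor) -> torch.Tensor:
    return F.normalize(backbone(x), dim=1)  # (N, 2048), unit norm

# Few-shot support sets for the two normal classes (synthetic stand-ins)
support_good = embed(torch.randn(5, 3, 224, 224))
support_inter = embed(torch.randn(5, 3, 224, 224))
prototypes = torch.stack([support_good.mean(0), support_inter.mean(0)])

@torch.no_grad()
def anomaly_score(x: torch.Tensor) -> torch.Tensor:
    z = embed(x)
    # Cosine distance to the nearest normal prototype; high = likely poor.
    sim = z @ F.normalize(prototypes, dim=1).T
    return 1.0 - sim.max(dim=1).values

query = torch.randn(4, 3, 224, 224)
is_poor = anomaly_score(query) > 0.35       # threshold chosen for illustration
```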

Tailored self-supervised pretraining improves brain MRI diagnostic models.

Huang X, Wang Z, Zhou W, Yang K, Wen K, Liu H, Huang S, Lyu M

PubMed · Jul 1, 2025
Self-supervised learning has shown potential in enhancing deep learning methods, yet its application in brain magnetic resonance imaging (MRI) analysis remains underexplored. This study seeks to leverage large-scale, unlabeled public brain MRI datasets to improve the performance of deep learning models in various downstream tasks for the development of clinical decision support systems. To enhance training efficiency, data filtering methods based on image entropy and slice positions were developed, condensing a combined dataset of approximately 2 million images from fastMRI-brain, OASIS-3, IXI, and BraTS21 into a more focused set of 250K images enriched with brain features. The Momentum Contrast (MoCo) v3 algorithm was then employed to learn these image features, resulting in robustly pretrained models specifically tailored to brain MRI. The pretrained models were subsequently evaluated in tumor classification, lesion detection, hippocampal segmentation, and image reconstruction tasks. The results demonstrate that our brain MRI-oriented pretraining outperformed both ImageNet pretraining and pretraining on larger multi-organ, multi-modality medical datasets, achieving a ∼2.8% increase in 4-class tumor classification accuracy, a ∼0.9% improvement in tumor detection mean average precision, a ∼3.6% gain in adult hippocampal segmentation Dice score, and a ∼0.1 PSNR improvement in reconstruction at 2-fold acceleration. This study underscores the potential of self-supervised learning for brain MRI using large-scale, tailored datasets derived from public sources.
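The entropy- and position-based filtering step might look like the following sketch, which keeps centrally located slices whose Shannon entropy exceeds a cutoff, assuming scikit-image; both thresholds are invented for illustration.

```python
# Minimal sketch: entropy- and position-based slice filtering for building a
# brain-focused pretraining set. Thresholds here are illustrative only.
import numpy as np
from skimage.measure import shannon_entropy

def keep_slice(img: np.ndarray, slice_idx: int, n_slices: int,
               min_entropy: float = 3.5,
               central_fraction: float = 0.7) -> bool:
    """Keep informative, centrally located slices from a 3D volume."""
    lo = int(n_slices * (1 - central_fraction) / 2)
    hi = n_slices - lo
    if not (lo <= slice_idx < hi):          # drop extreme top/bottom slices
        return False
    return shannon_entropy(img) >= min_entropy  # drop near-empty slices

volume = np.random.rand(160, 256, 256)       # synthetic stand-in for one scan
kept = [i for i in range(volume.shape[0])
        if keep_slice(volume[i], i, volume.shape[0])]
print(f"kept {len(kept)} of {volume.shape[0]} slices")
```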

ConnectomeAE: Multimodal brain connectome-based dual-branch autoencoder and its application in the diagnosis of brain diseases.

Zheng Q, Nan P, Cui Y, Li L

PubMed · Jul 1, 2025
Exploring the dependencies between multimodal brain networks and integrating node features to enhance brain disease diagnosis remains a significant challenge. Some work has examined only brain connectivity changes in patients, ignoring important radiomics features such as the shape and texture of individual brain regions in structural images. To this end, this study proposed a novel deep learning approach that integrates multimodal brain connectome information and regional radiomics features for brain disease diagnosis. A dual-branch autoencoder (ConnectomeAE) based on multimodal brain connectomes was proposed for brain disease diagnosis. Specifically, a matrix of radiomics features extracted from structural magnetic resonance imaging (MRI) was used as input to the Rad_AE branch for learning important brain region features. The functional brain network built from functional MRI was used as input to the Cycle_AE branch for capturing brain disease-related connections. By separately learning node features and connection features from multimodal brain networks, the method demonstrates strong adaptability in diagnosing different brain diseases. ConnectomeAE was validated on two publicly available datasets. The experimental results show that ConnectomeAE achieved excellent diagnostic performance, with an accuracy of 70.7% for autism spectrum disorder and 90.5% for Alzheimer's disease. A comparison of training time with other methods indicated that ConnectomeAE exhibits the simplicity and efficiency suitable for clinical applications. Furthermore, the interpretability analysis of the model aligned with previous studies, further supporting the biological basis of ConnectomeAE. ConnectomeAE could effectively leverage the complementary information between multimodal brain connectomes for brain disease diagnosis. By separately learning radiomic node features and connectivity features, ConnectomeAE demonstrated good adaptability to different brain disease classification tasks.
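A minimal PyTorch sketch of the dual-branch idea follows: one autoencoder branch for the radiomics feature matrix, one for the functional connectome, with the latent codes concatenated for classification; layer sizes and the fusion scheme are assumptions, not the published architecture.

```python
# Minimal sketch of a dual-branch autoencoder for brain-disease diagnosis:
# one branch encodes regional radiomics features, the other the functional
# connectome. Layer sizes and fusion are assumptions, not the paper's design.
import torch
import torch.nn as nn

def mlp(d_in, d_hidden, d_out):
    return nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                         nn.Linear(d_hidden, d_out))

class DualBranchAE(nn.Module):
    def __init__(self, d_rad, d_conn, d_latent=64, n_classes=2):
        super().__init__()
        self.enc_rad = mlp(d_rad, 256, d_latent)    # radiomics branch
        self.dec_rad = mlp(d_latent, 256, d_rad)
        self.enc_conn = mlp(d_conn, 256, d_latent)  # connectome branch
        self.dec_conn = mlp(d_latent, 256, d_conn)
        self.classifier = nn.Linear(2 * d_latent, n_classes)

    def forward(self, x_rad, x_conn):
        z_rad, z_conn = self.enc_rad(x_rad), self.enc_conn(x_conn)
        recon = (self.dec_rad(z_rad), self.dec_conn(z_conn))
        logits = self.classifier(torch.cat([z_rad, z_conn], dim=1))
        return logits, recon  # train with classification + reconstruction losses

# 90 regions: 90*20 flattened radiomics features, 90*89/2 connectivity edges
model = DualBranchAE(d_rad=1800, d_conn=4005)
logits, (rec_rad, rec_conn) = model(torch.randn(8, 1800), torch.randn(8, 4005))
```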