
CASCADE-FSL: Few-shot learning for collateral evaluation in ischemic stroke.

Aktar M, Tampieri D, Xiao Y, Rivaz H, Kersten-Oertel M

PubMed · Jul 1, 2025
Assessing collateral circulation is essential in determining the best treatment for ischemic stroke patients, as good collaterals open up treatment options such as thrombectomy, whereas poor collaterals can adversely affect treatment by leading to excess bleeding and eventually death. To reduce inter- and intra-rater variability and save time in radiologist assessments, computer-aided methods, mainly using deep neural networks, have gained popularity. The current literature demonstrates effectiveness when using balanced and extensive datasets in deep learning; however, such datasets are scarce for stroke, and the number of data samples for poor collateral cases is often limited compared to those for good collaterals. We propose a novel approach called CASCADE-FSL to distinguish poor collaterals effectively. Using a small, unbalanced dataset, we employ a few-shot learning approach for training, using a 2D ResNet-50 as a backbone and designating good and intermediate cases as two normal classes. We identify poor collaterals as anomalies relative to the normal classes. Our approach achieves an overall accuracy, sensitivity, and specificity of 0.88, 0.88, and 0.89, respectively, demonstrating its effectiveness in addressing the imbalanced-dataset challenge and accurately identifying poor collateral circulation cases.
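A minimal sketch of how such prototype-based anomaly scoring with a ResNet-50 backbone could look in practice is given below; the prototype construction, distance metric, and data handling are illustrative assumptions, not the authors' implementation.

```python
# Prototype-based anomaly scoring with a ResNet-50 backbone (sketch only).
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Identity()          # keep the 2048-d embedding
backbone.eval()

@torch.no_grad()
def embed(x):                        # x: (N, 3, H, W) tensor of image slices
    return nn.functional.normalize(backbone(x), dim=1)

@torch.no_grad()
def anomaly_score(query, support_good, support_intermediate):
    # Prototypes of the two "normal" classes (good / intermediate collaterals).
    protos = torch.stack([embed(support_good).mean(0),
                          embed(support_intermediate).mean(0)])
    q = embed(query)
    # Cosine distance to the nearest normal prototype; high = likely poor collaterals.
    return 1.0 - (q @ protos.T).max(dim=1).values
```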

Quantitative Ischemic Lesions of Portable Low-Field Strength MRI Using Deep Learning-Based Super-Resolution.

Bian Y, Wang L, Li J, Yang X, Wang E, Li Y, Liu Y, Xiang L, Yang Q

PubMed · Jul 1, 2025
Deep learning-based synthetic super-resolution magnetic resonance imaging (SynthMRI) may improve the quantitative lesion performance of portable low-field strength magnetic resonance imaging (LF-MRI). The aim of this study is to evaluate whether SynthMRI improves the diagnostic performance of LF-MRI in assessing ischemic lesions. We retrospectively included 178 stroke patients and 104 healthy controls with both LF-MRI and high-field strength magnetic resonance imaging (HF-MRI) examinations. Using HF-MRI as the ground truth, the deep learning-based super-resolution framework (SCUNet [Swin-Conv-UNet]) was pretrained using large-scale open-source datasets to generate SynthMRI images from LF-MRI images. Participants were split into a training set (64.2%) to fine-tune the pretrained SCUNet, and a testing set (35.8%) to evaluate the performance of SynthMRI. Sensitivity and specificity of LF-MRI and SynthMRI were assessed. Agreement with HF-MRI for Alberta Stroke Program Early CT Score in the anterior and posterior circulation (diffusion-weighted imaging-Alberta Stroke Program Early CT Score and diffusion-weighted imaging-posterior circulation Alberta Stroke Program Early CT Score) was evaluated using intraclass correlation coefficients (ICCs). Agreement with HF-MRI for lesion volume and mean apparent diffusion coefficient (ADC) within lesions was assessed using both ICCs and Pearson correlation coefficients. SynthMRI demonstrated significantly higher sensitivity and specificity than LF-MRI (89.0% [83.3%-94.6%] versus 77.1% [69.5%-84.7%]; P<0.001 and 91.3% [84.7%-98.0%] versus 71.0% [60.3%-81.7%]; P<0.001, respectively). The ICCs of diffusion-weighted imaging-Alberta Stroke Program Early CT Score between SynthMRI and HF-MRI were also better than those between LF-MRI and HF-MRI (0.952 [0.920-0.972] versus 0.797 [0.678-0.876], P<0.001). For lesion volume and mean ADC within lesions, SynthMRI showed significantly higher agreement (P<0.001) with HF-MRI (ICC>0.85, r>0.78) than LF-MRI (ICC>0.45, r>0.35). Furthermore, for lesions during various poststroke phases, SynthMRI exhibited significantly higher agreement with HF-MRI than LF-MRI during the early hyperacute and subacute phases. SynthMRI demonstrates high agreement with HF-MRI in detecting and quantifying ischemic lesions and is better than LF-MRI, particularly for lesions during the early hyperacute and subacute phases.
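The agreement analysis described above could be reproduced along the lines of the sketch below, using Pearson correlation and intraclass correlation coefficients; the column names and the pingouin-based ICC call are assumptions rather than the study's analysis code.

```python
# Agreement between SynthMRI- and HF-MRI-derived lesion volumes (sketch only).
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

def agreement(df: pd.DataFrame):
    # df has one row per patient: columns 'patient', 'synth_volume', 'hf_volume'.
    r, p = pearsonr(df['synth_volume'], df['hf_volume'])
    long = df.melt(id_vars='patient',
                   value_vars=['synth_volume', 'hf_volume'],
                   var_name='rater', value_name='volume')
    icc = pg.intraclass_corr(data=long, targets='patient',
                             raters='rater', ratings='volume')
    # ICC(2,1): two-way random effects, absolute agreement, single measurement.
    icc21 = icc.loc[icc['Type'] == 'ICC2', 'ICC'].item()
    return r, p, icc21
```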

Development and Validation of an AI Model to Improve the Diagnosis of Deep Infiltrating Endometriosis for Junior Sonologists.

Xu J, Zhang A, Zheng Z, Cao J, Zhang X

PubMed · Jul 1, 2025
This study aims to develop and validate an artificial intelligence (AI) model based on ultrasound (US) videos and images to improve the performance of junior sonologists in detecting deep infiltrating endometriosis (DE). In this retrospective study, data were collected from female patients who underwent US examinations and had DE. The US image records were divided into two parts. First, during the model development phase, an AI-DE model was trained employing YOLOv8 to detect pelvic DE nodules. Subsequently, its clinical applicability was evaluated by comparing the diagnostic performance of junior sonologists with and without AI-model assistance. The AI-DE model was trained using 248 images and demonstrated high performance, with a mAP50 (mean average precision at an IoU threshold of 0.5) of 0.9779 on the test set. A total of 147 images were used to evaluate the diagnostic performance. The diagnostic performance of junior sonologists improved with the assistance of the AI-DE model. The area under the receiver operating characteristic curve (AUROC) improved from 0.748 (95% CI, 0.624-0.867) to 0.878 (95% CI, 0.792-0.964; p < 0.0001) for junior sonologist A, and from 0.713 (95% CI, 0.592-0.835) to 0.798 (95% CI, 0.677-0.919; p < 0.0001) for junior sonologist B. Notably, the sensitivity of both sonologists increased significantly, with the largest increase from 77.42% to 94.35%. The AI-DE model based on US images showed good performance in DE detection and significantly improved the diagnostic performance of junior sonologists.
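For context, training and running a YOLOv8 detector of this kind typically follows the Ultralytics API sketched below; the dataset YAML, model variant, and confidence threshold are placeholders, not the study's configuration.

```python
# YOLOv8 training and inference for nodule detection (sketch only).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                      # pretrained weights (assumed variant)
model.train(data="de_nodules.yaml",             # hypothetical dataset config
            epochs=100, imgsz=640)
metrics = model.val()                           # reports mAP50 among other metrics
results = model.predict("pelvic_us_frame.png", conf=0.25)
for box in results[0].boxes:
    print(box.xyxy, float(box.conf))            # predicted DE nodule boxes and scores
```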

A comparison of an integrated and image-only deep learning model for predicting the disappearance of indeterminate pulmonary nodules.

Wang J, Cai J, Tang W, Dudurych I, van Tuinen M, Vliegenthart R, van Ooijen P

PubMed · Jul 1, 2025
Indeterminate pulmonary nodules (IPNs) require follow-up CT to assess potential growth; however, benign nodules may disappear. Accurately predicting whether IPNs will resolve is a challenge for radiologists. Therefore, we aim to utilize deep-learning (DL) methods to predict the disappearance of IPNs. This retrospective study utilized data from the Dutch-Belgian Randomized Lung Cancer Screening Trial (NELSON) and Imaging in Lifelines (ImaLife) cohort. Participants underwent follow-up CT to determine the evolution of baseline IPNs. The NELSON data was used for model training. External validation was performed in ImaLife. We developed integrated DL-based models that incorporated CT images and demographic data (age, sex, smoking status, and pack years). We compared the performance of integrated methods with those limited to CT images only and calculated sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). From a clinical perspective, ensuring high specificity is critical, as it minimizes false predictions of non-resolving nodules that should be monitored for evolution on follow-up CTs. Feature importance was calculated using SHapley Additive exPlanations (SHAP) values. The training dataset included 840 IPNs (134 resolving) in 672 participants. The external validation dataset included 111 IPNs (46 resolving) in 65 participants. On the external validation set, the performance of the integrated model (sensitivity, 0.50; 95% CI, 0.35-0.65; specificity, 0.91; 95% CI, 0.80-0.96; AUC, 0.82; 95% CI, 0.74-0.90) was comparable to that of the model trained solely on CT images (sensitivity, 0.41; 95% CI, 0.27-0.57; specificity, 0.89; 95% CI, 0.78-0.95; AUC, 0.78; 95% CI, 0.69-0.86; P = 0.39). The top 10 most important features were all image-related. Deep learning-based models can predict the disappearance of IPNs with high specificity. Integrated models using CT scans and clinical data had comparable performance to those using only CT images.
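An integrated model of this type is often built by concatenating a CNN image embedding with the tabular clinical features before a small classification head, as in the sketch below; the backbone, feature dimensions, and layer sizes are illustrative assumptions, not the authors' architecture.

```python
# Fusion of a CT-patch embedding with demographic features (sketch only).
import torch
import torch.nn as nn
from torchvision import models

class IntegratedNoduleModel(nn.Module):
    def __init__(self, n_clinical=4):            # age, sex, smoking status, pack-years
        super().__init__()
        cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        cnn.fc = nn.Identity()                    # 512-d image embedding
        self.cnn = cnn
        self.head = nn.Sequential(
            nn.Linear(512 + n_clinical, 64), nn.ReLU(),
            nn.Linear(64, 1))                     # logit for P(nodule resolves)

    def forward(self, image, clinical):
        z = self.cnn(image)                       # (N, 512)
        return self.head(torch.cat([z, clinical], dim=1)).squeeze(1)
```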

Comparison of CNNs and Transformer Models in Diagnosing Bone Metastases in Bone Scans Using Grad-CAM.

Pak S, Son HJ, Kim D, Woo JY, Yang I, Hwang HS, Rim D, Choi MS, Lee SH

PubMed · Jul 1, 2025
Convolutional neural networks (CNNs) have been studied for detecting bone metastases on bone scans; however, the application of ConvNeXt and transformer models has not yet been explored. This study aims to evaluate the performance of various deep learning models, including the ConvNeXt and transformer models, in diagnosing metastatic lesions from bone scans. We retrospectively analyzed bone scans from patients with cancer obtained at 2 institutions: the training and validation sets (n=4626) were from Hospital 1 and the test set (n=1428) was from Hospital 2. The deep learning models evaluated included ResNet18, the Data-Efficient Image Transformer (DeiT), the Vision Transformer (ViT Large 16), the Swin Transformer (Swin Base), and ConvNeXt Large. Gradient-weighted class activation mapping (Grad-CAM) was used for visualization. In both the validation and test sets, the ConvNeXt Large model exhibited the best performance (0.969 and 0.885, respectively), followed by the Swin Base model (0.965 and 0.840, respectively), and both significantly outperformed ResNet18 (0.892 and 0.725, respectively). Subgroup analyses revealed that all the models demonstrated greater diagnostic accuracy for patients with polymetastasis compared with those with oligometastasis. Grad-CAM visualization revealed that the ConvNeXt Large model focused more on identifying local lesions, whereas the Swin Base model focused on global areas such as the axial skeleton and pelvis. Compared with traditional CNN and transformer models, the ConvNeXt model demonstrated superior diagnostic performance in detecting bone metastases from bone scans, especially in cases of polymetastasis, suggesting its potential in medical image analysis.
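Grad-CAM visualization for a ConvNeXt classifier can be produced roughly as sketched below with the pytorch-grad-cam package; the fine-tuned weights, preprocessing, and choice of target layer are assumptions rather than the authors' setup.

```python
# Grad-CAM saliency for a two-class ConvNeXt classifier (sketch only).
import torch
from torchvision import models
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = models.convnext_large(weights=models.ConvNeXt_Large_Weights.DEFAULT)
model.classifier[2] = torch.nn.Linear(1536, 2)     # metastasis vs. no metastasis head
model.eval()

cam = GradCAM(model=model, target_layers=[model.features[-1]])
scan = torch.randn(1, 3, 224, 224)                 # placeholder for a preprocessed bone scan
heatmap = cam(input_tensor=scan,
              targets=[ClassifierOutputTarget(1)]) # class 1 = metastasis (assumed)
print(heatmap.shape)                               # (1, 224, 224) saliency map
```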

Transformer-based skeletal muscle deep-learning model for survival prediction in gastric cancer patients after curative resection.

Chen Q, Jian L, Xiao H, Zhang B, Yu X, Lai B, Wu X, You J, Jin Z, Yu L, Zhang S

PubMed · Jul 1, 2025
We developed and evaluated a skeletal muscle deep-learning (SMDL) model using skeletal muscle computed tomography (CT) imaging to predict the survival of patients with gastric cancer (GC). This multicenter retrospective study included patients who underwent curative resection of GC between April 2008 and December 2020. Preoperative CT images at the third lumbar vertebra were used to develop a Transformer-based SMDL model for predicting recurrence-free survival (RFS) and disease-specific survival (DSS). The predictive performance of the SMDL model was assessed using the area under the curve (AUC) and benchmarked against both alternative artificial intelligence models and conventional body composition parameters. The association between the model score and survival was assessed using Cox regression analysis. An integrated model combining the SMDL signature with clinical variables was constructed, and its discrimination and fairness were evaluated. A total of 1242, 311, and 94 patients were assigned to the training, internal, and external validation cohorts, respectively. The Transformer-based SMDL model yielded AUCs of 0.791-0.943 for predicting RFS and DSS across all three cohorts and significantly outperformed other models and body composition parameters. The model score was a strong independent prognostic factor for survival. Incorporating the SMDL signature into the clinical model resulted in better prognostic prediction performance. The false-negative and false-positive rates of the integrated model were similar across sex and age subgroups, indicating robust fairness. The Transformer-based SMDL model could accurately predict the survival of patients with GC and identify those at high risk of recurrence or death, thereby assisting clinical decision-making.
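The Cox regression step linking the model score to survival could be carried out as sketched below with lifelines; the column names and toy data are placeholders, not the study's statistical code.

```python
# Relating a deep-learning muscle score to recurrence-free survival (sketch only).
import pandas as pd
from lifelines import CoxPHFitter

# One toy row per patient: SMDL score, a clinical covariate, and follow-up data.
df = pd.DataFrame({
    "smdl_score": [0.12, 0.87, 0.45, 0.91, 0.30, 0.66, 0.55, 0.20],
    "age":        [62, 71, 58, 66, 70, 64, 59, 73],
    "rfs_months": [48.0, 9.5, 60.0, 14.0, 22.0, 36.0, 30.0, 55.0],
    "recurrence": [0, 1, 0, 1, 1, 0, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="rfs_months", event_col="recurrence")
cph.print_summary()   # hazard ratio for smdl_score adjusted for age
```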

Virtual lung screening trial (VLST): An in silico study inspired by the national lung screening trial for lung cancer detection.

Tushar FI, Vancoillie L, McCabe C, Kavuri A, Dahal L, Harrawood B, Fryling M, Zarei M, Sotoudeh-Paima S, Ho FC, Ghosh D, Harowicz MR, Tailor TD, Luo S, Segars WP, Abadi E, Lafata KJ, Lo JY, Samei E

PubMed · Jul 1, 2025
Clinical imaging trials play a crucial role in advancing medical innovation but are often costly, inefficient, and ethically constrained. Virtual Imaging Trials (VITs) present a solution by simulating clinical trial components in a controlled, risk-free environment. The Virtual Lung Screening Trial (VLST), an in silico study inspired by the National Lung Screening Trial (NLST), illustrates the potential of VITs to expedite clinical trials, minimize risks to participants, and promote optimal use of imaging technologies in healthcare. This study aimed to show that a virtual imaging trial platform could investigate some key elements of a major clinical trial, specifically the NLST, which compared computed tomography (CT) and chest radiography (CXR) for lung cancer screening. With simulated cancerous lung nodules, a virtual patient cohort of 294 subjects was created using XCAT human models. Each virtual patient underwent both CT and CXR imaging, with deep learning models, the AI CT-Reader and AI CXR-Reader, acting as virtual readers that recalled patients with suspected lung cancer. The primary outcome was the difference in diagnostic performance between CT and CXR, measured by the area under the curve (AUC). The AI CT-Reader showed superior diagnostic accuracy, achieving an AUC of 0.92 (95% CI: 0.90-0.95) compared to the AI CXR-Reader's AUC of 0.72 (95% CI: 0.67-0.77). Furthermore, at the same 94% CT sensitivity reported by the NLST, the VLST specificity of 73% was similar to the NLST specificity of 73.4%. This CT performance highlights the potential of VITs to replicate certain aspects of clinical trials effectively, paving the way toward a safe and efficient method for advancing imaging-based diagnostics.
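The primary AUC comparison and the fixed-sensitivity operating point can be computed as sketched below with scikit-learn; the score arrays here are synthetic placeholders, not outputs of the virtual readers.

```python
# AUC comparison and specificity at a fixed sensitivity (sketch only).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=294)    # 1 = cancer present in the virtual patient
ct_scores = y * rng.normal(0.8, 0.2, 294) + (1 - y) * rng.normal(0.3, 0.2, 294)
cxr_scores = y * rng.normal(0.6, 0.3, 294) + (1 - y) * rng.normal(0.4, 0.3, 294)

print("CT AUC :", roc_auc_score(y, ct_scores))
print("CXR AUC:", roc_auc_score(y, cxr_scores))

# Specificity at ~94% sensitivity for the CT reader (matching the NLST operating point).
fpr, tpr, _ = roc_curve(y, ct_scores)
idx = np.argmax(tpr >= 0.94)
print("CT specificity at 94% sensitivity:", 1 - fpr[idx])
```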

Integrated brain connectivity analysis with fMRI, DTI, and sMRI powered by interpretable graph neural networks.

Qu G, Zhou Z, Calhoun VD, Zhang A, Wang YP

PubMed · Jul 1, 2025
Multimodal neuroimaging data modeling has become a widely used approach but confronts considerable challenges due to the heterogeneity of the data, which encompasses variability in data types, scales, and formats across modalities. This variability necessitates the deployment of advanced computational methods to integrate and interpret diverse datasets within a cohesive analytical framework. In our research, we combine functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and structural MRI (sMRI) for joint analysis. This integration capitalizes on the unique strengths of each modality and their inherent interconnections, aiming for a comprehensive understanding of the brain's connectivity and anatomical characteristics. Utilizing the Glasser atlas for parcellation, we integrate imaging-derived features from multiple modalities - functional connectivity from fMRI, structural connectivity from DTI, and anatomical features from sMRI - within consistent regions. Our approach incorporates a masking strategy to differentially weight neural connections, thereby facilitating an amalgamation of multimodal imaging data. This technique enhances interpretability at the connectivity level, transcending traditional analyses centered on singular regional attributes. The model is applied to the Human Connectome Project's Development study to elucidate the associations between multimodal imaging and cognitive functions throughout youth. The analysis demonstrates improved prediction accuracy and uncovers crucial anatomical features and neural connections, deepening our understanding of brain structure and function. This study not only advances multimodal neuroimaging analytics by offering a novel method for the integrative analysis of diverse imaging modalities but also improves the understanding of the intricate relationships between the brain's structural and functional networks and cognitive development.
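A heavily simplified version of the masking idea, fusing functional and structural connectivity with a learnable edge mask before one graph-convolution step, is sketched below; apart from the 360 Glasser regions mentioned in the abstract, every dimension and layer choice is an illustrative assumption rather than the authors' architecture.

```python
# Learnable edge mask over fused fMRI/DTI connectivity with sMRI node features (sketch only).
import torch
import torch.nn as nn

class MaskedFusionGCN(nn.Module):
    def __init__(self, n_nodes=360, node_feat=4, hidden=32):
        super().__init__()
        self.edge_mask = nn.Parameter(torch.zeros(n_nodes, n_nodes))  # learned connection weights
        self.lin = nn.Linear(node_feat, hidden)
        self.readout = nn.Linear(hidden, 1)        # e.g., a cognitive score

    def forward(self, fc, sc, x):
        # fc, sc: (N, 360, 360) connectivity matrices; x: (N, 360, node_feat) sMRI features
        adj = torch.sigmoid(self.edge_mask) * (fc + sc) / 2   # differentially weighted edges
        h = torch.relu(adj @ self.lin(x))                      # one propagation step
        return self.readout(h.mean(dim=1)).squeeze(1)
```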

MDAL: Modality-difference-based active learning for multimodal medical image analysis via contrastive learning and pointwise mutual information.

Wang H, Jin Q, Du X, Wang L, Guo Q, Li H, Wang M, Song Z

PubMed · Jul 1, 2025
Multimodal medical images reveal different characteristics of the same anatomy or lesion, offering significant clinical value. Deep learning has achieved widespread success in medical image analysis with large-scale labeled datasets. However, annotating medical images is expensive and labor-intensive for doctors, and the variations between different modalities further increase the annotation cost for multimodal images. This study aims to minimize the annotation cost for multimodal medical image analysis. We propose a novel active learning framework, MDAL, based on modality differences for multimodal medical images. MDAL quantifies the sample-wise modality differences through pointwise mutual information estimated by multimodal contrastive learning. We hypothesize that samples with larger modality differences are more informative for annotation and further propose two sampling strategies based on these differences: MaxMD and DiverseMD. Moreover, MDAL could select informative samples in one shot without initial labeled data. We evaluated MDAL on public brain glioma and meningioma segmentation datasets and an in-house ovarian cancer classification dataset. MDAL outperforms other advanced active learning competitors. Furthermore, when using only 20%, 20%, and 15% of labeled samples in these datasets, MDAL reaches 99.6%, 99.9%, and 99.3% of the performance of supervised training with the fully labeled dataset, respectively. The results show that our proposed MDAL could significantly reduce the annotation cost for multimodal medical image analysis. We expect MDAL could be further extended to other multimodal medical data for lower annotation costs.
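A crude version of the MaxMD selection idea, scoring each unlabeled case by a PMI-like cross-modality similarity computed from contrastive embeddings, is sketched below; the similarity-as-PMI proxy and all names are assumptions, not the paper's estimator.

```python
# PMI-style modality-difference scoring and MaxMD selection (sketch only).
import torch
import torch.nn.functional as F

def modality_difference(z_mod1, z_mod2, temperature=0.07):
    # z_mod1, z_mod2: (N, D) embeddings of the same cases from two modality encoders
    # trained with a contrastive (InfoNCE) objective.
    z1 = F.normalize(z_mod1, dim=1)
    z2 = F.normalize(z_mod2, dim=1)
    logits = z1 @ z2.T / temperature
    # Diagonal logit minus the log-partition: a PMI-like score up to constants.
    pmi = logits.diag() - torch.logsumexp(logits, dim=1)
    return -pmi                                   # larger value = larger modality difference

def select_maxmd(z_mod1, z_mod2, budget=20):
    scores = modality_difference(z_mod1, z_mod2)
    return torch.topk(scores, k=budget).indices   # indices of cases to annotate
```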