Besson A, Cao K, Kokelaar R, Hajdarevic E, Wirth L, Yeung J, Yeung JM

PubMed paper · Sep 15, 2025
Perineal wound complications following abdominoperineal resection (APR) significantly impact patient morbidity. Despite various closure techniques, no method has proven superior. Body composition is a key factor influencing postoperative outcomes, and AI-assisted CT scan analysis is an accurate and efficient approach to assessing it. This study aimed to evaluate whether body composition characteristics can predict perineal wound complications following APR. A retrospective cohort study of APR patients from 2012 to 2024 was conducted, comparing outcomes of primary closure and inferior gluteal artery myocutaneous (IGAM) flap closure. Preoperative CT scans were analyzed using a validated AI model to measure lumbosacral skeletal muscle (SM), intramuscular adipose tissue (IMAT), visceral adipose tissue, and subcutaneous adipose tissue. Greater IMAT volume correlated with increased wound dehiscence in males undergoing IGAM closure (40% vs. 4.8%, p = 0.027). A lower SM-to-IMAT volume ratio was associated with higher wound infection rates (60% vs. 19%, p = 0.04). Closure technique did not significantly affect wound infection or dehiscence rates. This study is the first to use AI-derived 3D body composition analysis to assess perineal wound complications after APR. IMAT volume significantly influences wound healing in male patients undergoing IGAM reconstruction.
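
The abstract does not describe how the volumetric measures are computed; as a rough illustration, the sketch below shows how an SM-to-IMAT volume ratio could be derived from binary segmentation masks such as those an AI body-composition model might output. The masks, voxel spacing, and values are entirely hypothetical.

```python
import numpy as np

def tissue_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary segmentation mask in millilitres."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0  # mm^3 -> mL

# Hypothetical masks standing in for AI body-composition output.
spacing = (0.8, 0.8, 2.5)                              # voxel spacing in mm
sm_mask = np.zeros((512, 512, 60), dtype=np.uint8)     # skeletal muscle (SM)
imat_mask = np.zeros((512, 512, 60), dtype=np.uint8)   # intramuscular adipose tissue
sm_mask[200:320, 200:320, 10:50] = 1
imat_mask[230:270, 230:270, 10:50] = 1

sm_vol = tissue_volume_ml(sm_mask, spacing)
imat_vol = tissue_volume_ml(imat_mask, spacing)
print(f"SM {sm_vol:.0f} mL, IMAT {imat_vol:.0f} mL, SM/IMAT ratio {sm_vol / imat_vol:.2f}")
```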

Li Y, Liu H, Liu Y, Li J, Suzuki HH, Liu Y, Tao J, Qiu X

PubMed paper · Sep 15, 2025
Deep learning (DL) based on MRI of medulloblastoma (MB) enables risk stratification, potentially aiding therapeutic decisions. This study aims to develop DL models that identify the four MB molecular subgroups and prognosis-related genetic signatures. This retrospective study enrolled 325 patients for model development and an independent external validation cohort of 124 patients, totaling 449 MB patients from two medical institutes. Consecutive patients with newly diagnosed MB at MRI (T1-weighted, T2-weighted, and contrast-enhanced T1-weighted) at the two institutes between January 2015 and June 2023 were identified. Two-stage sequential DL models were designed: MB-CNN first identifies the wingless (WNT), sonic hedgehog (SHH), Group 3, and Group 4 subgroups; DL models for prognosis-related genetic signatures (MB-CNN_TP53/MYC/Chr11) were then developed to predict TP53 mutation, MYC amplification, and chromosome 11 loss status. A hybrid model combining MB-CNN and conventional data (clinical information and MRI features) was compared to a logistic regression model constructed only with conventional data. Four-class classification tasks were evaluated with confusion matrices (accuracy) and two-class classification tasks with ROC curves (area under the curve (AUC)). The datasets comprised 449 patients (mean age at diagnosis ± SD, 13.55 years ± 2.33; 249 males). MB-CNN accurately classified MB subgroups in the external test dataset, achieving a median accuracy of 77.50% (range, 76.29% to 78.71%). The MB-CNN_TP53/MYC/Chr11 models effectively predicted the signatures (AUC of TP53 in SHH: 0.91; MYC amplification in Group 3: 0.87; chromosome 11 loss in Group 4: 0.89). The hybrid model outperformed the logistic regression model in accuracy (82.20% vs. 59.14%, P = .009) and showed comparable performance to MB-CNN (82.20% vs. 77.50%, P = .105). MRI-based DL models allowed identification of the molecular medulloblastoma subgroups and prognosis-related genetic signatures.
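
The fusion mechanism of the hybrid model is not specified in the abstract; the sketch below shows one plausible late-fusion scheme, concatenating CNN-derived subgroup probabilities with conventional clinical features before a simple classifier. All data, labels, and feature names are stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-ins: MB-CNN subgroup probabilities (WNT, SHH, Group 3, Group 4)
# and conventional data (e.g., age, sex, tumor location) for 325 patients.
cnn_probs = rng.dirichlet(np.ones(4), size=325)
clinical = rng.standard_normal((325, 3))
labels = cnn_probs.argmax(axis=1)          # placeholder subgroup labels

# Late fusion: concatenate DL outputs with conventional features.
X = np.hstack([cnn_probs, clinical])
hybrid = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", accuracy_score(labels, hybrid.predict(X)))
```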

Nabipoorashrafi SA, Seyedi A, Bahri RA, Yadegar A, Shomal-Zadeh M, Mohammadi F, Afshari SA, Firoozeh N, Noroozzadeh N, Khosravi F, Asadian S, Chalian H

PubMed paper · Sep 15, 2025
Several artificial intelligence (AI) algorithms have been designed to detect pulmonary embolism (PE) on computed tomographic pulmonary angiography (CTPA). Given the rapid development of this field and the lack of an updated meta-analysis, we aimed to systematically review the available literature on the accuracy of AI-based algorithms for diagnosing PE via CTPA. We searched EMBASE, PubMed, Web of Science, and Cochrane for studies assessing the accuracy of AI-based algorithms; studies that reported sensitivity and specificity were included. The R software was used for univariate meta-analysis and for drawing summary receiver operating characteristic (sROC) curves based on bivariate analysis. To explore the source of heterogeneity, subgroup analysis was performed (PROSPERO: CRD42024543107). A total of 1722 articles were found; after removing duplicate records, 1185 were screened. Twenty studies with 26 AI models/populations met inclusion criteria, encompassing 11,950 participants. Univariate meta-analysis showed a pooled sensitivity of 91.5% (95% CI 85.5-95.2) and specificity of 84.3% (95% CI 74.9-90.6) for PE detection. Additionally, in the bivariate sROC analysis, the pooled area under the curve (AUC) was 0.923, indicating very high accuracy of AI algorithms in the detection of PE. Subgroup meta-analysis identified geographical area as a potential source of heterogeneity: the I² values for sensitivity and specificity in the Asian-article subgroup were 60% and 6.9%, respectively. These findings highlight the promising role of AI in accurately diagnosing PE while emphasizing the need for further research to address regional variations and improve generalizability.
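
The analysis itself was performed in R with bivariate random-effects models; purely as an illustration of the univariate pooling idea, here is a minimal Python sketch of fixed-effect inverse-variance pooling of proportions on the logit scale. The per-study sensitivities and sample sizes are invented.

```python
import numpy as np

def pool_logit(props, ns):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale."""
    p, n = np.asarray(props, float), np.asarray(ns, float)
    logit = np.log(p / (1 - p))
    var = 1.0 / (n * p * (1 - p))         # delta-method variance of each logit
    w = 1.0 / var
    pooled = (w * logit).sum() / w.sum()
    return 1.0 / (1.0 + np.exp(-pooled))  # back-transform to a proportion

# Invented per-study sensitivities and sample sizes.
sens = [0.93, 0.88, 0.95, 0.90]
sizes = [420, 310, 880, 150]
print(f"pooled sensitivity = {pool_logit(sens, sizes):.3f}")
```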

Zhou W, Yang Q, Zhang H

PubMed paper · Sep 15, 2025
This study investigates the correlation between chest computed tomography (CT) radiomics features and breast density classification, aiming to develop an automated radiomics model for breast density assessment from chest CT images. Diagnostic performance was evaluated to establish a CT-based alternative for breast density classification in clinical practice. A retrospective analysis was conducted on patients who underwent both mammography and chest CT scans. The breast density classifications derived from mammography were used to guide the development of the CT-based models. Radiomic features were extracted from breast regions of interest (ROIs) segmented on chest CT images. Following dimensionality reduction and selection of dominant radiomic features, four four-class classification models were established: Extreme Gradient Boosting (XGBoost), One-vs-Rest Logistic Regression, Gradient Boosting, and Random Forest. The performance of these models in classifying breast density from CT images was then evaluated. A total of 330 patients, aged 23-79 years, were included. The breast ROIs were automatically segmented using a U-net neural network model and subsequently refined and calibrated manually. A total of 1427 radiomic features were extracted; after dimensionality reduction and feature selection, 28 dominant features closely associated with breast density classification were retained to construct the four classification models. Among the tested models, XGBoost achieved the best performance, with a classification accuracy of 86.6%. Receiver operating characteristic analysis showed area under the curve (AUC) values of 1.00, 0.93, 0.93, and 0.99 for the four breast density categories, along with a micro-averaged AUC of 0.97 and a macro-averaged AUC of 0.96. Chest CT scans, combined with radiomics models, can accurately classify breast density, providing information relevant to breast cancer risk stratification. The proposed classification model offers a promising tool for automated breast density assessment, which could enhance personalized breast cancer screening and clinical decision-making.
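
As a sketch of the final modeling step (not the authors' code), the following trains a four-class XGBoost classifier on a synthetic stand-in for the 28 selected radiomic features and computes per-class and macro-averaged AUCs as reported in the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from xgboost import XGBClassifier

# Synthetic stand-in for 330 patients x 28 selected radiomic features,
# with four breast density categories as labels.
X, y = make_classification(n_samples=330, n_features=28, n_informative=15,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = XGBClassifier(eval_metric="mlogloss", random_state=0)
model.fit(X_tr, y_tr)
proba = model.predict_proba(X_te)

y_bin = label_binarize(y_te, classes=[0, 1, 2, 3])
print("accuracy:", round(model.score(X_te, y_te), 3))
print("per-class AUC:", [round(roc_auc_score(y_bin[:, k], proba[:, k]), 3) for k in range(4)])
print("macro AUC:", round(roc_auc_score(y_bin, proba, average="macro"), 3))
```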

Fazle Rafsani, Jay Shah, Catherine D. Chong, Todd J. Schwedt, Teresa Wu

arXiv preprint · Sep 15, 2025
Anomaly detection and classification in medical imaging are critical for early diagnosis but remain challenging due to limited annotated data, class imbalance, and the high cost of expert labeling. Emerging vision foundation models such as DINOv2, pretrained on extensive, unlabeled datasets, offer generalized representations that can potentially alleviate these limitations. In this study, we propose an attention-based global aggregation framework tailored specifically for 3D medical image anomaly classification. Leveraging the self-supervised DINOv2 model as a pretrained feature extractor, our method processes individual 2D axial slices of brain MRIs, assigning adaptive slice-level importance weights through a soft attention mechanism. To further address data scarcity, we employ a composite loss function combining supervised contrastive learning with class-variance regularization, enhancing inter-class separability and intra-class consistency. We validate our framework on the ADNI dataset and an institutional multi-class headache cohort, demonstrating strong anomaly classification performance despite limited data availability and significant class imbalance. Our results highlight the efficacy of utilizing pretrained 2D foundation models combined with attention-based slice aggregation for robust volumetric anomaly detection in medical imaging. Our implementation is publicly available at https://github.com/Rafsani/DinoAtten3D.git.
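
A minimal sketch of the attention-based slice aggregation idea, assuming precomputed per-slice DINOv2 embeddings; the dimensions, gating architecture, and classifier head are illustrative guesses, not the authors' implementation (which also adds a composite contrastive loss):

```python
import torch
import torch.nn as nn

class AttentionSliceAggregator(nn.Module):
    """Soft-attention pooling over per-slice embeddings (e.g., from DINOv2).

    An attention head scores each 2D axial slice; the volume embedding is the
    attention-weighted sum of slice embeddings, fed to a linear classifier.
    """
    def __init__(self, embed_dim: int = 768, hidden_dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, slice_feats: torch.Tensor):
        # slice_feats: (batch, n_slices, embed_dim), one row per axial slice
        scores = self.attn(slice_feats)                    # (batch, n_slices, 1)
        weights = torch.softmax(scores, dim=1)             # adaptive slice importance
        volume_feat = (weights * slice_feats).sum(dim=1)   # (batch, embed_dim)
        return self.classifier(volume_feat), weights.squeeze(-1)

feats = torch.randn(4, 160, 768)  # 4 volumes x 160 slices x embedding dim
logits, slice_weights = AttentionSliceAggregator()(feats)
print(logits.shape, slice_weights.shape)  # torch.Size([4, 2]) torch.Size([4, 160])
```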

Bo Cao, Fan Yu, Mengmeng Feng, SenHao Zhang, Xin Meng, Yue Zhang, Zhen Qian, Jie Lu

arXiv preprint · Sep 15, 2025
Multimodal learning has attracted much attention in recent years due to its ability to effectively exploit data features from a variety of modalities. Diagnosing the vulnerability of atherosclerotic plaques directly from carotid 3D MRI images is challenging for both radiologists and conventional 3D vision networks. In clinical practice, radiologists assess patient condition using a multimodal approach that incorporates various imaging modalities and domain-specific expertise, paving the way for the creation of multimodal diagnostic networks. In this paper, we develop an effective strategy to leverage radiologists' domain knowledge to automate the diagnosis of carotid plaque vulnerability through Variational inference and Multimodal knowledge Distillation (VMD). This method excels in harnessing cross-modality prior knowledge from the limited image annotations and radiology reports in the training data, thereby enhancing the diagnostic network's accuracy on unannotated 3D MRI images. We conducted in-depth experiments on an in-house dataset and verified the effectiveness of the proposed VMD strategy.
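
The VMD training objective is not given in the abstract; as a generic point of reference only, the sketch below shows a standard knowledge-distillation loss in which a unimodal student matches the softened predictions of a multimodal teacher. The temperature and weighting are arbitrary.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic knowledge-distillation loss: cross-entropy on ground truth plus
    KL divergence to the teacher's temperature-softened predictions."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kd

s = torch.randn(8, 2, requires_grad=True)  # student (3D MRI-only) logits
t = torch.randn(8, 2)                      # teacher (multimodal) logits
y = torch.randint(0, 2, (8,))
print(distillation_loss(s, t, y).item())
```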

Alessandro Crimi, Andrea Brovelli

arXiv preprint · Sep 15, 2025
Time-series forecasting and causal discovery are central in neuroscience, as predicting brain activity and identifying causal relationships between neural populations and circuits can shed light on the mechanisms underlying cognition and disease. With the rise of foundation models, an open question is how they compare to traditional methods for brain signal forecasting and causality analysis, and whether they can be applied in a zero-shot setting. In this work, we evaluate a foundation model against classical methods for inferring directional interactions from spontaneous brain activity measured with functional magnetic resonance imaging (fMRI) in humans. Traditional approaches often rely on Wiener-Granger causality. We tested the forecasting ability of the foundation model in both zero-shot and fine-tuned settings, and assessed causality by comparing Granger-like estimates from the model with standard Granger causality. We validated the approach using synthetic time series generated from ground-truth causal models, including logistic map coupling and Ornstein-Uhlenbeck processes. The foundation model achieved competitive zero-shot forecasting of fMRI time series (mean absolute percentage error of 0.55 in controls and 0.27 in patients). Although standard Granger causality did not show clear quantitative differences between models, the foundation model provided a more precise detection of causal interactions. Overall, these findings suggest that foundation models offer versatility, strong zero-shot performance, and potential utility for forecasting and causal discovery in time-series data.
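
As a self-contained illustration of the Granger-style comparison (not the paper's pipeline), the sketch below computes a log-variance-ratio statistic from ordinary least squares fits on a toy coupled system in which y drives x with a one-step delay:

```python
import numpy as np

def granger_logratio(x, y, lag=2):
    """Granger-style statistic for 'y -> x': log of restricted vs. full residual
    variance when predicting x from its own past alone vs. from the past of
    both x and y, fitted by ordinary least squares."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    T = len(x)
    X_own = np.array([x[t - lag:t] for t in range(lag, T)])
    X_oth = np.array([y[t - lag:t] for t in range(lag, T)])
    target = x[lag:]

    def resid_var(design):
        design = np.column_stack([design, np.ones(len(design))])  # add intercept
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ beta)

    return np.log(resid_var(X_own) / resid_var(np.column_stack([X_own, X_oth])))

# Toy coupled system: y drives x with a one-step delay.
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
x = 0.8 * np.roll(y, 1) + 0.2 * rng.standard_normal(500)
print(f"y->x: {granger_logratio(x, y):.3f}, x->y: {granger_logratio(y, x):.3f}")
```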

Kulkarni SV, Poornapushpakala S

PubMed paper · Sep 15, 2025
Medical imaging has undergone significant advancements with the integration of deep learning techniques, leading to enhanced accuracy in image analysis. These methods autonomously extract relevant features from medical images, improving the detection and classification of various diseases. Among imaging modalities, Magnetic Resonance Imaging (MRI) is particularly valuable due to its high contrast resolution, which enables the differentiation of soft tissues, making it indispensable in the diagnosis of brain disorders. Accurate classification of brain tumors is crucial for diagnosing many neurological conditions. However, conventional classification techniques are often limited by high computational complexity and suboptimal accuracy. Motivated by these issues, this work proposes an innovative model for segmenting and classifying brain tumors. The research aims to develop a robust and efficient deep learning framework that can assist clinicians in making precise and early diagnoses, ultimately leading to more effective treatment planning. The proposed methodology begins with the acquisition of MRI images from standardized medical imaging databases. Abnormal regions are then segmented using the Multiscale Bilateral Awareness Network (MBANet), which incorporates multi-scale operations to enhance feature representation and image quality. The segmented images are then processed by a novel classification architecture, termed Region Vision Transformer-based Adaptive EfficientNetB7 with Atrous Spatial Pyramid Pooling (RVAEB7-ASPP). To optimize the performance of the classification model, hyperparameters are fine-tuned using the Modified Random Parameter-based Hippopotamus Optimization Algorithm (MRP-HOA). The model's effectiveness is verified through a comprehensive experimental evaluation using various performance metrics and comparison with current state-of-the-art methods. The proposed MRP-HOA-RVAEB7-ASPP model achieves a classification accuracy of 98.2%, significantly outperforming conventional approaches in brain tumor classification. MBANet performs the segmentation effectively, while the RVAEB7-ASPP model provides reliable classification. The integration of advanced segmentation, adaptive feature extraction, and optimal parameter tuning enhances the reliability, accuracy, and robustness of the model. This framework provides a more effective and trustworthy solution for the early detection and clinical assessment of brain tumors, leading to improved patient outcomes through timely intervention.
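
Of the named components, Atrous Spatial Pyramid Pooling (ASPP) is a standard building block; a minimal generic sketch follows, unrelated to the authors' specific RVAEB7-ASPP design. Channel counts and dilation rates are illustrative.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Minimal Atrous Spatial Pyramid Pooling: parallel dilated 3x3 convolutions
    capture context at several receptive-field sizes, then fuse via a 1x1 conv."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

feat = torch.randn(1, 256, 32, 32)  # hypothetical backbone feature map
print(ASPP(256, 128)(feat).shape)   # torch.Size([1, 128, 32, 32])
```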

Boudi A, He J, Abd El Kader I, Liu X, Mouhafid M

PubMed paper · Sep 15, 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that currently affects over 55 million individuals worldwide. Conventional diagnostic approaches often rely on subjective clinical assessments and isolated biomarkers, limiting their accuracy and early-stage effectiveness. With the rising global burden of AD, there is an urgent need for objective, automated tools that enhance diagnostic precision using neuroimaging data. This study proposes a novel diagnostic framework combining a fine-tuned VGG19 deep convolutional neural network with an eXtreme Gradient Boosting (XGBoost) classifier. The model was trained and validated on the OASIS MRI dataset (Dataset 2), which was manually balanced to ensure equitable class representation across the four AD stages. The VGG19 model was pre-trained on ImageNet and fine-tuned by unfreezing its last ten layers. Data augmentation strategies, including random rotation and zoom, were applied to improve generalization. Extracted features were classified using XGBoost, incorporating class weighting, early stopping, and adaptive learning. Model performance was evaluated using accuracy, precision, recall, F1-score, and ROC-AUC. The proposed VGG19-XGBoost model achieved a test accuracy of 99.6%, with an average precision of 1.00, a recall of 0.99, and an F1-score of 0.99 on the balanced OASIS dataset. ROC curves indicated high separability across AD stages, confirming strong discriminatory power and robustness in classification. The integration of deep feature extraction with ensemble learning demonstrated substantial improvement over conventional single-model approaches. The hybrid model effectively mitigated issues of class imbalance and overfitting, offering stable performance across all dementia stages. These findings suggest the method's practical viability for clinical decision support in early AD diagnosis. This study presents a high-performing, automated diagnostic tool for Alzheimer's disease based on neuroimaging. The VGG19-XGBoost hybrid architecture demonstrates exceptional accuracy and robustness, underscoring its potential for real-world applications. Future work will focus on integrating multimodal data and validating the model on larger and more diverse populations to enhance clinical utility and generalizability.
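
A simplified sketch of the overall recipe (pretrained VGG19 as feature extractor feeding XGBoost) is shown below; the paper fine-tunes the last ten VGG19 layers and trains on OASIS MRI, whereas this sketch freezes the backbone and uses random stand-in tensors. Requires torchvision >= 0.13 for the weights enum.

```python
import numpy as np
import torch
from torchvision import models
from xgboost import XGBClassifier

# Pretrained VGG19 with the final classification layer removed -> 4096-d features.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])
vgg.eval()

@torch.no_grad()
def extract_features(batch: torch.Tensor) -> np.ndarray:
    # batch: (N, 3, 224, 224); MRI slices replicated to 3 channels and normalized
    return vgg(batch).numpy()

# Stand-in data: 32 images across 4 dementia-stage labels.
images = torch.randn(32, 3, 224, 224)
labels = np.random.randint(0, 4, size=32)

features = extract_features(images)
clf = XGBClassifier(eval_metric="mlogloss", random_state=0)
clf.fit(features, labels)
print("train accuracy:", clf.score(features, labels))
```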