
Wu Z, Gong L, Luo J, Chen X, Yang F, Wen J, Hao Y, Wang Z, Gu R, Zhang Y, Liao H, Wen G

PubMed · Jul 31 2025
This study aimed to develop an interpretable 3-year disease-free survival risk prediction tool to stratify patients with stage II colorectal cancer (CRC) by integrating CT images and clinicopathological factors. A total of 769 patients with pathologically confirmed stage II CRC and disease-free survival (DFS) follow-up information were recruited from three medical centers and divided into training (n = 442), test (n = 190), and validation (n = 137) cohorts. CT-based tumor radiomics features were extracted, selected, and used to calculate a Radscore. A combined model was developed using an artificial neural network (ANN) algorithm that integrates the Radscore with significant clinicoradiological factors to classify patients into high- and low-risk groups. Model performance was assessed using the area under the curve (AUC), and feature contributions were quantified using the Shapley additive explanations (SHAP) algorithm. Kaplan-Meier survival analysis revealed the prognostic stratification value of the risk groups. Fourteen radiomics features and five clinicoradiological factors were selected to construct the radiomics and clinicoradiological models, respectively. The combined model demonstrated the best performance, with AUCs of 0.811 and 0.846 in the test and validation cohorts, respectively. Kaplan-Meier curves confirmed effective patient stratification (p < 0.001) in both test and validation cohorts. A high Radscore, a rough intestinal outer edge, and advanced age were identified as key prognostic risk factors using SHAP. The combined model effectively stratified patients with stage II CRC into distinct prognostic risk groups, aiding clinical decision-making. Integrating CT images with clinicopathological information can facilitate the identification of patients with stage II CRC who are most likely to benefit from adjuvant chemotherapy. The effectiveness of adjuvant chemotherapy for stage II colorectal cancer remains debated.
A combined model successfully identified high-risk stage II colorectal cancer patients. Shapley additive explanations enhance the interpretability of the model's predictions.
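The Kaplan-Meier analysis used above estimates one survival curve per risk group. A minimal sketch of the estimator (toy, hypothetical follow-up times and event flags, not study data; ties and censoring handling are simplified):

```python
# Minimal Kaplan-Meier estimator for right-censored follow-up data,
# illustrating how high- vs. low-risk groups are compared.
def kaplan_meier(times, events):
    """Return (event_time, survival_probability) pairs.

    events[i] is 1 if the event (e.g., recurrence) occurred at times[i],
    0 if the patient was censored at that time.
    """
    s = 1.0
    curve = []
    at_risk = len(times)
    for t, e in sorted(zip(times, events)):
        if e:  # event occurred: survival drops by the at-risk fraction
            s *= (at_risk - 1) / at_risk
            curve.append((t, s))
        at_risk -= 1  # patient leaves the risk set (event or censoring)
    return curve

high_risk = kaplan_meier([5, 8, 12, 20], [1, 1, 0, 1])
low_risk = kaplan_meier([15, 22, 30, 36], [0, 1, 0, 0])
```

A log-rank test (as behind the study's p < 0.001) would then compare the two curves.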

Herpe G, Vesoul T, Zille P, Pluot E, Guillin R, Rizk B, Ardon R, Adam C, d'Assignies G, Gondim Teixeira PA

PubMed · Jul 31 2025
Knee injuries frequently require magnetic resonance imaging (MRI) evaluation, increasing radiologists' workload. This study evaluates the impact of a knee AI assistant on radiologists' diagnostic accuracy and efficiency in detecting anterior cruciate ligament (ACL), meniscus, cartilage, and medial collateral ligament (MCL) lesions on knee MRI exams. This retrospective reader study was conducted from January 2024 to April 2024. Knee MRI studies were evaluated with and without AI assistance by six radiologists with between 2 and 10 years of experience in musculoskeletal imaging, in two sessions 1 month apart. The AI algorithm was trained on 23,074 MRI studies separate from the study dataset and tested on various knee structures, including the ACL, MCL, menisci, and cartilage. The reference standard was established by the consensus of three expert MSK radiologists. Statistical analysis included sensitivity, specificity, accuracy, and Fleiss' kappa. The study dataset comprised 165 knee MRIs (89 males, 76 females; mean age, 42.3 ± 15.7 years). AI assistance improved sensitivity from 81% (134/165, 95% CI = [79.7, 83.3]) to 86% (142/165, 95% CI = [84.2, 87.5]) (p < 0.001), accuracy from 86% (142/165, 95% CI = [85.4, 86.9]) to 91% (150/165, 95% CI = [90.7, 92.1]) (p < 0.001), and specificity from 88% (145/165, 95% CI = [87.1, 88.5]) to 93% (153/165, 95% CI = [92.7, 93.8]) (p < 0.001). Sensitivity and accuracy improvements were observed across all knee structures, with p-values ranging from < 0.001 to 0.28. Fleiss' kappa among readers increased from 54% (95% CI = [53.0, 55.3]) to 78% (95% CI = [76.6, 79.0]) (p < 0.001) post-AI integration. The integration of AI improved diagnostic accuracy, efficiency, and inter-reader agreement in knee MRI interpretation, highlighting the value of this approach in clinical practice.
Question Can artificial intelligence (AI) assistance improve the diagnostic accuracy and efficiency of radiologists in detecting anterior cruciate ligament, meniscus, cartilage, and medial collateral ligament lesions on knee MRI? Findings AI assistance in knee MRI interpretation increased radiologists' sensitivity from 81% to 86% and accuracy from 86% to 91% for detecting knee lesions while improving inter-reader agreement (p < 0.001). Clinical relevance AI-assisted knee MRI interpretation enhances diagnostic precision and consistency among radiologists, potentially leading to more accurate injury detection, improved patient outcomes, and reduced diagnostic variability in musculoskeletal imaging.
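Inter-reader agreement in the study is summarized with Fleiss' kappa, which corrects raw agreement for chance. A minimal sketch of the statistic (toy rating counts, not study data):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a (n_items, n_categories) count matrix.

    counts[i, c] = number of raters who assigned item i to category c;
    every row must sum to the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                    # raters per item
    p_cat = counts.sum(axis=0) / counts.sum()    # overall category proportions
    # Mean observed pairwise agreement per item:
    p_bar = ((counts * (counts - 1)).sum(axis=1) / (n * (n - 1))).mean()
    p_e = (p_cat ** 2).sum()                     # chance agreement
    return (p_bar - p_e) / (1 - p_e)
```

Kappa is 1 for perfect agreement, 0 for chance-level agreement, and negative when raters agree less than chance would predict.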

Li H, Zhang T, Han G, Huang Z, Xiao H, Ni Y, Liu B, Lin W, Lin Y

PubMed · Jul 31 2025
Stroke is one of the leading causes of death and disability worldwide, with a significantly elevated incidence among individuals with hypertension. Conventional risk assessment methods primarily rely on a limited set of clinical parameters and often exclude imaging-derived structural features, resulting in suboptimal predictive accuracy. This study aimed to develop a deep learning-based multimodal stroke risk prediction model by integrating carotid ultrasound imaging with multidimensional clinical data to enable precise identification of high-risk individuals among hypertensive patients. A total of 2,176 carotid artery ultrasound images from 1,088 hypertensive patients were collected. ResNet50 was employed to automatically segment the carotid intima-media and extract key structural features. These imaging features, along with clinical variables such as age, blood pressure, and smoking history, were fused using a Vision Transformer (ViT) and fed into a Radial Basis Probabilistic Neural Network (RBPNN) for risk stratification. The model's performance was systematically evaluated using metrics including AUC, Dice coefficient, IoU, and Precision-Recall curves. The proposed multimodal fusion model achieved outstanding performance on the test set, with an AUC of 0.97, a Dice coefficient of 0.90, and an IoU of 0.80. Ablation studies demonstrated that the inclusion of ViT and RBPNN modules significantly enhanced predictive accuracy. Subgroup analysis further confirmed the model's robust performance in high-risk populations, such as those with diabetes or smoking history. The deep learning-based multimodal fusion model effectively integrates carotid ultrasound imaging and clinical features, significantly improving the accuracy of stroke risk prediction in hypertensive patients. The model demonstrates strong generalizability and clinical application potential, offering a valuable tool for early screening and personalized intervention planning for stroke prevention. 
Not applicable.
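The AUC of 0.97 reported above is, equivalently, the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative case. A minimal rank-based sketch of that equivalence (toy scores; the O(n·m) double loop is for clarity, not efficiency):

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC = P(score_pos > score_neg) + 0.5 * P(score_pos == score_neg)."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))
```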

Bao S, Zheng F, Jiang L, Wang Q, Lyu Y

PubMed · Jul 31 2025
Early diagnosis of Alzheimer's disease (AD) and its precursor, mild cognitive impairment (MCI), is critical for effective prevention and treatment. Computer-aided diagnosis using magnetic resonance imaging (MRI) provides a cost-effective and objective approach. However, existing methods often segment 3D MRI images into 2D slices, leading to spatial information loss and reduced diagnostic accuracy. To overcome this limitation, we propose TA-SSM Net, a deep learning model that leverages tri-directional attention and structured state-space model (SSM) for improved MRI-based diagnosis of AD and MCI. The tri-directional attention mechanism captures spatial and contextual information from forward, backward, and vertical directions in 3D MRI images, enabling effective feature fusion. Additionally, gradient checkpointing is applied within the SSM to enhance processing efficiency, allowing the model to handle whole-brain scans while preserving spatial correlations. To evaluate our method, we construct a dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI), consisting of 300 AD patients, 400 MCI patients, and 400 normal controls. TA-SSM Net achieved an accuracy of 90.24% for MCI detection and 95.83% for AD detection. The results demonstrate that our approach not only improves classification accuracy but also enhances processing efficiency and maintains spatial correlations, offering a promising solution for the diagnosis of Alzheimer's disease.
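The tri-directional idea above, attending along the depth, height, and width axes of a 3D feature volume and fusing the results, can be sketched in plain NumPy. This is an illustrative simplification, not the TA-SSM Net implementation (which couples attention with a structured state-space model and gradient checkpointing):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def axis_self_attention(vol, axis):
    """Scaled dot-product self-attention along one spatial axis
    of a (D, H, W, C) feature volume."""
    v = np.moveaxis(vol, axis, -2)                    # (..., L, C)
    scores = v @ v.swapaxes(-1, -2) / np.sqrt(v.shape[-1])
    out = softmax(scores, axis=-1) @ v                # (..., L, C)
    return np.moveaxis(out, -2, axis)

def tri_directional_attention(vol):
    """Fuse attention responses from the depth, height, and width axes."""
    return sum(axis_self_attention(vol, a) for a in range(3)) / 3.0
```

A learned version would add query/key/value projections per direction; averaging here stands in for the feature-fusion step.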

Lin X, Zou E, Chen W, Chen X, Lin L

PubMed · Jul 31 2025
This study aimed to develop and assess an advanced Attention-Based Residual U-Net (ResUNet) model for accurately segmenting different types of brain hemorrhages from CT images. The goal was to overcome the limitations of manual segmentation and current automated methods regarding precision and generalizability. A dataset of 1,347 patient CT scans was collected retrospectively, covering six types of hemorrhages: subarachnoid hemorrhage (SAH, 231 cases), subdural hematoma (SDH, 198 cases), epidural hematoma (EDH, 236 cases), cerebral contusion (CC, 230 cases), intraventricular hemorrhage (IVH, 188 cases), and intracerebral hemorrhage (ICH, 264 cases). The dataset was divided into 80% for training using a 10-fold cross-validation approach and 20% for testing. All CT scans were standardized to a common anatomical space, and intensity normalization was applied for uniformity. The ResUNet model included attention mechanisms to enhance focus on important features and residual connections to support stable learning and efficient gradient flow. Model performance was assessed using the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), and directed Hausdorff distance (dHD). The ResUNet model showed excellent performance during both training and testing. On training data, the model achieved DSC scores of 95 ± 1.2 for SAH, 94 ± 1.4 for SDH, 93 ± 1.5 for EDH, 91 ± 1.4 for CC, 89 ± 1.6 for IVH, and 93 ± 2.4 for ICH. IoU values ranged from 88 to 93, with dHD between 2.1 and 2.7 mm. Testing results confirmed strong generalization, with DSC scores of 93 for SAH, 93 for SDH, 92 for EDH, 90 for CC, 88 for IVH, and 92 for ICH. IoU values were also high, indicating precise segmentation and minimal boundary errors. The ResUNet model outperformed standard U-Net variants, achieving higher multi-label segmentation accuracy. This makes it a valuable tool for clinical applications that require fast and reliable brain hemorrhage analysis.
Future research could investigate semi-supervised techniques and 3D segmentation to further enhance clinical use.
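The DSC and IoU figures above compare predicted masks against reference masks. A minimal sketch of both overlap metrics for binary masks (illustrative; the study reports them as percentages):

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice Similarity Coefficient and Intersection over Union
    for two binary masks of the same shape (assumes at least one
    foreground voxel, so denominators are nonzero)."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    return dice, inter / union
```

Dice weights the overlap against the two mask sizes, while IoU divides by the union, so Dice ≥ IoU always holds.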

Priyadharshini S, Bhoopalan R, Manikandan D, Ramaswamy K

PubMed · Jul 31 2025
Accurate identification and segmentation of brain tumors in magnetic resonance imaging (MRI) images are critical for timely diagnosis and treatment. MRI is frequently used to diagnose these disorders; however, medical professionals find it challenging to evaluate MRI images manually because of time constraints and inter-reader variability. Computerized methods such as R-CNN, attention models, and earlier YOLO variants face limitations due to high computational demands and suboptimal segmentation performance. To overcome these limitations, this study proposes a framework that successively evaluates YOLOv9, YOLOv10, and YOLOv11 for tumor detection and segmentation using the Figshare Brain Tumor dataset (2100 images) and the BraTS2020 dataset (3170 MRI slices). Preprocessing involves log transformation for intensity normalization, histogram equalization for contrast enhancement, and edge-based ROI extraction. The models were trained on 80% of the combined dataset and evaluated on the remaining 20%. YOLOv11 demonstrated superior performance, achieving 96.22% classification accuracy on BraTS2020 and 96.41% on Figshare, with an F1-score of 0.990, recall of 0.984, mAP@0.5 of 0.993, and mAP@[0.5:0.95] of 0.801 during testing. With a fast inference time of 5.3 ms and a balanced precision-recall profile, YOLOv11 proves to be a robust, real-time solution for brain tumor detection in clinical applications.
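Detection metrics such as mAP@0.5 score each predicted bounding box against ground truth by Intersection over Union. A minimal sketch of the box-IoU computation underlying those metrics (illustrative, not the YOLOv11 code):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# At mAP@0.5, a detection counts as a true positive when IoU >= 0.5;
# mAP@[0.5:0.95] averages AP over IoU thresholds from 0.5 to 0.95.
```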

Kim HB, Tan HQ, Nei WL, Tan YCRS, Cai Y, Wang F

PubMed · Jul 31 2025
This study aims to explore deep learning methods, namely large language models (LLMs) and computer vision models, to accurately predict the neoadjuvant rectal (NAR) score for locally advanced rectal cancer (LARC) treated with neoadjuvant chemoradiation (NACRT). The NAR score is a validated surrogate endpoint for LARC. 160 CT scans of patients were used in this study, along with 4 different types of radiology reports: 2 generated from CT scans and the other 2 from MRI scans, both before and after NACRT. For the CT scans, two convolutional neural network approaches were used, one processing the 3D scan as a whole and the other processing it slice by slice. For the radiology reports, an encoder-architecture LLM was used. Performance was quantified by the area under the receiver operating characteristic curve (AUC). The two CT approaches yielded [Formula: see text] and [Formula: see text], while the LLM trained on post-NACRT MRI reports showed the most predictive potential at [Formula: see text], a statistically significant improvement, p = 0.03, over the baseline clinical approach (from [Formula: see text] to [Formula: see text]). This study showcases the potential of large language models and the inadequacies of CT scans in predicting NAR values. Clinical trial number: Not applicable.

Nan Y, Federico FN, Humphries S, Mackintosh JA, Grainge C, Jo HE, Goh N, Reynolds PN, Hopkins PMA, Navaratnam V, Moodley Y, Walters H, Ellis S, Keir G, Zappala C, Corte T, Glaspole I, Wells AU, Yang G, Walsh SL

PubMed · Jul 31 2025
Predicting shorter life expectancy is crucial for prioritizing antifibrotic therapy in fibrotic lung diseases, where progression varies widely, from stability to rapid deterioration. This heterogeneity complicates treatment decisions, emphasizing the need for reliable baseline measures. This study leverages an artificial intelligence model to address heterogeneity in disease outcomes, focusing on mortality as the ultimate measure of disease trajectory. This retrospective study included 1744 anonymised patients who underwent high-resolution CT scanning. The AI model, SABRE (Smart Airway Biomarker Recognition Engine), was developed using data from patients with various lung diseases (n=460, including lung cancer, pneumonia, emphysema, and fibrosis). Then, 1284 high-resolution CT scans with evidence of diffuse fibrotic lung disease (FLD) from the Australian IPF Registry and OSIC were used for clinical analyses. Airway branches were categorized and quantified by anatomic structure and volume, followed by multivariable analysis to explore the associations between these categories and patients' progression and mortality, adjusting for disease severity or traditional measurements. Cox regression identified SABRE-based variables as independent predictors of mortality and progression, even after adjusting for disease severity (fibrosis extent, traction bronchiectasis extent, and ILD extent), traditional measures (FVC%, DLCO%, and CPI), and previously reported deep learning algorithms for fibrosis quantification and morphological analysis. Combining SABRE with DLCO significantly improved prognostic utility, yielding an AUC of 0.852 at the first year and a C-index of 0.752. SABRE-based variables capture prognostic signals beyond those provided by traditional measurements, disease severity scores, and established AI-based methods, reflecting the progressiveness and pathogenesis of the disease.
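The C-index of 0.752 above measures concordance between predicted risk and observed survival: among comparable patient pairs, how often the higher-risk patient experiences the event first. A minimal Harrell's C-index sketch (toy data, O(n²) for clarity):

```python
def c_index(times, events, risk_scores):
    """Harrell's concordance index: higher risk should mean earlier event."""
    conc = ties = total = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Usable pair: patient i had the event before patient j's time,
            # so we know i's outcome was worse.
            if events[i] and times[i] < times[j]:
                total += 1
                if risk_scores[i] > risk_scores[j]:
                    conc += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    return (conc + 0.5 * ties) / total
```

A C-index of 0.5 corresponds to random risk ordering, 1.0 to perfect ordering.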

Zhang L, Li D, Su T, Xiao T, Zhao S

PubMed · Jul 31 2025
Pancreatic ductal adenocarcinoma (PDAC) and mass-forming pancreatitis (MFP) share similar clinical, laboratory, and imaging features, making accurate diagnosis challenging. Nevertheless, PDAC is highly malignant with a poor prognosis, whereas MFP is an inflammatory condition typically responding well to medical or interventional therapies. Some investigators have explored radiomics-based machine learning (ML) models for distinguishing PDAC from MFP. However, systematic evidence supporting the feasibility of these models is insufficient, presenting a notable challenge for clinical application. This study aimed to review the diagnostic performance of radiomics-based ML models in differentiating PDAC from MFP, summarize the methodological quality of the included studies, and provide evidence-based guidance for optimizing radiomics-based ML models and advancing their clinical use. PubMed, Embase, Cochrane, and Web of Science were searched for relevant studies up to June 29, 2024. Eligible studies comprised English cohort, case-control, or cross-sectional designs that applied fully developed radiomics-based ML models, including traditional and deep radiomics, to differentiate PDAC from MFP, while also reporting their diagnostic performance. Studies without full text, limited to image segmentation, or with insufficient outcome metrics were excluded. Methodological quality was appraised using the radiomics quality score. Owing to the limited applicability of QUADAS-2 to radiomics-based ML studies, the risk of bias was not formally assessed. Pooled sensitivity, specificity, area under the summary receiver operating characteristic (SROC) curve, likelihood ratios, and diagnostic odds ratio were estimated with a bivariate mixed-effects model. Results were presented with forest plots, SROC curves, and Fagan's nomogram.
Subgroup analysis was performed to appraise the diagnostic performance of radiomics-based ML models across imaging modalities, including computed tomography (CT), magnetic resonance imaging, positron emission tomography-CT, and endoscopic ultrasound. This meta-analysis included 24 studies with 14,406 cases, including 7635 PDAC cases. All studies adopted a case-control design, with 5 conducted across multiple centers. Most studies used CT as the primary imaging modality. Radiomics quality scores ranged from 5 points (14%) to 17 points (47%), with an average of 9 points (25%). The radiomics-based ML models demonstrated high diagnostic performance. Based on the independent validation sets, the pooled sensitivity, specificity, area under the SROC curve, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.92 (95% CI 0.91-0.94), 0.90 (95% CI 0.85-0.94), 0.94 (95% CI 0.74-0.99), 9.3 (95% CI 6.0-14.2), 0.08 (95% CI 0.07-0.11), and 110 (95% CI 62-194), respectively. Radiomics-based ML models demonstrate high diagnostic accuracy in differentiating PDAC from MFP, underscoring their potential as noninvasive tools for clinical decision-making. Nonetheless, the overall methodological quality was moderate due to limitations in external validation, standardized protocols, and reproducibility. These findings support the promise of radiomics in clinical diagnostics while highlighting the need for more rigorous, multicenter research to enhance model generalizability and clinical applicability.
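The likelihood ratios and diagnostic odds ratio reported above follow from sensitivity and specificity. A minimal sketch of the point formulas (note the meta-analytic estimates come from a bivariate mixed-effects model over the per-study 2×2 tables, so they need not equal these point-formula values exactly):

```python
def diagnostic_summary(sens, spec):
    """Likelihood ratios and diagnostic odds ratio from sensitivity
    and specificity (both as fractions in (0, 1))."""
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    dor = lr_pos / lr_neg        # diagnostic odds ratio
    return lr_pos, lr_neg, dor
```

With the pooled sensitivity of 0.92 and specificity of 0.90, the point formulas give LR+ ≈ 9.2, LR− ≈ 0.089, and DOR ≈ 104, close to the bivariate-model estimates of 9.3, 0.08, and 110.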

Harper JP, Lee GR, Pan I, Nguyen XV, Quails N, Prevedello LM

PubMed · Jul 31 2025
The Radiological Society of North America has actively promoted artificial intelligence (AI) challenges since 2017. Algorithms emerging from the recent RSNA 2022 Cervical Spine Fracture Detection Challenge demonstrated state-of-the-art performance in the competition's data set, surpassing results from prior publications. However, their performance in real-world clinical practice is not known. As an initial step toward the goal of assessing feasibility of these models in clinical practice, we conducted a generalizability test by using one of the leading algorithms of the competition. The deep learning algorithm was selected due to its performance, portability, and ease of use, and installed locally. One hundred examinations (50 consecutive cervical spine CT scans with at least 1 fracture present and 50 consecutive negative CT scans) from a level 1 trauma center not represented in the competition data set were processed at 6.4 seconds per examination. Ground truth was established based on the radiology report with retrospective confirmation of positive fracture cases. Sensitivity, specificity, F1 score, and area under the curve were calculated. The external validation data set comprised older patients in comparison to the competition set (53.5 ± 21.8 years versus 58 ± 22.0, respectively; P < .05). Sensitivity and specificity were 86% and 70% in the external validation group and 85% and 94% in the competition group, respectively. Fractures misclassified by the convolutional neural networks frequently had features of advanced degenerative disease, subtle nondisplaced fractures not easily identified on the axial plane, and malalignment. The model performed with a similar sensitivity on the test and external data set, suggesting that such a tool could be potentially generalizable as a triage tool in the emergency setting. Discordant factors such as age-associated comorbidities may affect accuracy and specificity of AI models when used in certain populations.
Further research should be encouraged to help elucidate the potential contributions and pitfalls of these algorithms in supporting clinical care.
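The reported sensitivity and specificity can be reproduced from confusion-matrix counts. Since the external validation set had 50 fracture-positive and 50 negative examinations, 86% sensitivity and 70% specificity correspond to TP=43, FN=7, TN=35, FP=15; a minimal sketch:

```python
def classifier_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sens = tp / (tp + fn)                 # sensitivity / recall
    spec = tn / (tn + fp)                 # specificity
    prec = tp / (tp + fp)                 # precision (PPV)
    f1 = 2 * prec * sens / (prec + sens)  # harmonic mean of precision, recall
    acc = (tp + tn) / (tp + fp + fn + tn)
    return sens, spec, prec, f1, acc

# External validation group: 50 positive and 50 negative examinations
metrics = classifier_metrics(tp=43, fp=15, fn=7, tn=35)
```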
