Page 143 of 2102095 results

Deep-learning based multi-modal models for brain age, cognition and amyloid pathology prediction.

Wang C, Zhang W, Ni M, Wang Q, Liu C, Dai L, Zhang M, Shen Y, Gao F

PubMed | May 31, 2025
Magnetic resonance imaging (MRI), combined with artificial intelligence techniques, has improved our understanding of brain structural change and enabled the estimation of brain age. Neurodegenerative disorders, such as Alzheimer's disease (AD), have been linked to accelerated brain aging. In this study, we aimed to develop a deep-learning framework that processes and integrates MRI images to more accurately predict brain age, cognitive function, and amyloid pathology. We collected over 10,000 T1-weighted MRI scans from more than 7,000 individuals across six cohorts. We designed a multi-modal deep-learning framework that employs 3D convolutional neural networks to analyze MRI and additional neural networks to evaluate demographic data. Our initial model focused on predicting brain age, serving as a foundation from which we developed separate models for cognitive function and amyloid plaque prediction through transfer learning. The brain age prediction model achieved a mean absolute error (MAE) of 3.302 years for the cognitively normal population in the ADNI test dataset. The gap between predicted brain age and chronological age increased significantly as cognition declined. The cognition prediction model exhibited a root mean square error (RMSE) of 0.334 for the Clinical Dementia Rating (CDR) regression task and achieved an area under the curve (AUC) of approximately 0.95 in identifying dementia patients. Dementia-related brain regions, such as the medial temporal lobe, were identified by our model. Finally, the amyloid prediction model achieved an AUC of about 0.8 for dementia patients.
These findings indicate that the present predictive models can identify subtle changes in brain structure, enabling precise estimates of brain age, cognitive status, and amyloid pathology. Such models could facilitate the use of MRI as a non-invasive diagnostic tool for neurodegenerative diseases, including AD.
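The brain-age gap and MAE reported above are simple quantities to compute once a model emits a predicted age per subject. A minimal Python sketch, using hypothetical predicted and chronological ages (not the study's data):

```python
def mean_absolute_error(predicted, actual):
    """MAE between predicted brain ages and chronological ages, in years."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

def brain_age_gap(predicted, actual):
    """Per-subject gap: predicted brain age minus chronological age.
    Positive values suggest accelerated aging."""
    return [p - a for p, a in zip(predicted, actual)]

# Hypothetical example values for three subjects
pred = [72.1, 68.4, 80.3]
chron = [70.0, 69.0, 76.0]
mae = mean_absolute_error(pred, chron)
gaps = brain_age_gap(pred, chron)
```

In the study's framing, a systematically positive gap in a cognitively declining group is the signal of interest, while the MAE on cognitively normal subjects (3.302 years in ADNI) measures baseline calibration.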

Accelerated proton resonance frequency-based magnetic resonance thermometry by optimized deep learning method.

Xu S, Zong S, Mei CS, Shen G, Zhao Y, Wang H

PubMed | May 31, 2025
Proton resonance frequency (PRF)-based magnetic resonance (MR) thermometry plays a critical role in thermal ablation therapies delivered through focused ultrasound (FUS). For clinical applications, accurate and rapid temperature feedback is essential to ensure both the safety and effectiveness of these treatments. This work aims to improve temporal resolution in dynamic MR temperature map reconstruction using an enhanced deep-learning method, thereby supporting the real-time monitoring required for effective FUS treatments. Five classical neural network architectures-cascade net, complex-valued U-Net, shift window transformer for MRI, real-valued U-Net, and U-Net with residual blocks-along with training-optimized methods were applied to reconstruct temperature maps from 2-fold and 4-fold undersampled k-space data. The training enhancements included pre-training/training-phase data augmentation, knowledge distillation, and a novel amplitude-phase decoupling loss function. Phantom and ex vivo tissue heating experiments were conducted using a FUS transducer. Ground truth was the fully sampled complex MR images with accurate temperature changes; the datasets were manually undersampled to simulate acceleration. Separate testing datasets were used to evaluate real-time performance and temperature accuracy. Furthermore, the proposed deep learning-based rapid reconstruction approach was validated on a clinical dataset obtained from patients with uterine fibroids, demonstrating its clinical applicability. Acceleration factors of 1.9 and 3.7 were achieved for 2× and 4× k-space undersampling, respectively. The deep learning-based reconstruction using ResUNet, incorporating the four optimizations, showed superior performance. For 2-fold acceleration, the RMSEs of temperature map patches were 0.89°C and 1.15°C for the phantom and ex vivo testing datasets, respectively.
The DICE coefficient for the 43°C isotherm-enclosed regions was 0.81, and Bland-Altman analysis indicated a bias of -0.25°C with limits of agreement of ±2.16°C. In the 4-fold undersampling case, these evaluation metrics showed approximately a 10% reduction in accuracy. Additionally, the DICE coefficients measuring the overlap between the reconstructed temperature maps (using the optimized ResUNet) and the ground truth in regions where the temperature exceeded the 43°C threshold were 0.77 and 0.74 for the 2× and 4× undersampling scenarios, respectively. This study demonstrates that deep learning-based reconstruction significantly enhances the accuracy and efficiency of MR thermometry, particularly in the context of FUS-based clinical treatments for uterine fibroids. The approach could also be extended to other applications, such as essential tremor and prostate cancer treatments, where MRI-guided FUS plays a critical role.
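PRF thermometry converts the phase difference between two gradient-echo images into a temperature change via ΔT = Δφ / (2π · γ · α · B0 · TE). A minimal sketch, assuming the commonly cited PRF coefficient of about -0.01 ppm/°C (the abstract does not give its exact constants, so treat these as illustrative):

```python
import math

GAMMA_HZ_PER_T = 42.58e6   # gyromagnetic ratio of 1H, Hz/T
ALPHA_PPM_PER_C = -0.01    # typical PRF temperature coefficient, ppm/degC

def prf_delta_temperature(delta_phase_rad, b0_tesla, te_seconds):
    """Temperature change (degC) from the phase difference between a
    baseline and a heated gradient-echo image at one voxel."""
    alpha = ALPHA_PPM_PER_C * 1e-6  # ppm -> dimensionless
    return delta_phase_rad / (
        2 * math.pi * GAMMA_HZ_PER_T * alpha * b0_tesla * te_seconds
    )
```

Applying this voxel-wise to a reconstructed phase-difference map yields the temperature map; the deep-learning step in the paper accelerates the image reconstruction that precedes this conversion.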

Subclinical atrial fibrillation prediction based on deep learning and strain analysis using echocardiography.

Huang SH, Lin YC, Chen L, Unankard S, Tseng VS, Tsao HM, Tang GJ

PubMed | May 31, 2025
Subclinical atrial fibrillation (SCAF), also known as atrial high-rate episodes (AHREs), refers to asymptomatic heart rate elevations associated with increased risks of atrial fibrillation and cardiovascular events. Although deep learning (DL) models leveraging echocardiographic images are widely used for cardiac function analysis, their application to AHRE prediction remains unexplored. This study introduces a novel DL-based framework for automatic AHRE detection using echocardiograms. The approach encompasses left atrium (LA) segmentation, LA strain feature extraction, and AHRE classification. Data from 117 patients with cardiac implantable electronic devices undergoing echocardiography were analyzed, with 80% allocated to the development set and 20% to the test set. LA segmentation accuracy was quantified using the Dice coefficient, yielding scores of 0.923 for the LA cavity and 0.741 for the LA wall. For AHRE classification, metrics such as area under the curve (AUC), accuracy, sensitivity, and specificity were employed. A transformer-based model integrating patient characteristics demonstrated robust performance, achieving a mean AUC of 0.815, accuracy of 0.809, sensitivity of 0.800, and specificity of 0.783 for a 24-h AHRE duration threshold. This framework represents a reliable tool for AHRE assessment and holds significant potential for early SCAF detection, enhancing clinical decision-making and patient outcomes.
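The Dice coefficient used to score the LA segmentations is a straightforward overlap measure. A minimal sketch with binary masks represented as sets of pixel indices (an illustrative simplification, not the study's pipeline):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks, each given as a set of
    pixel (or voxel) indices. Returns 1.0 for two empty masks."""
    intersection = len(mask_a & mask_b)
    total = len(mask_a) + len(mask_b)
    return 2 * intersection / total if total else 1.0
```

A score of 0.923 (as reported for the LA cavity) means the predicted and reference masks share most of their pixels; thin structures like the LA wall (0.741) score lower because small boundary errors are a large fraction of their area.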

MSLesSeg: baseline and benchmarking of a new Multiple Sclerosis Lesion Segmentation dataset.

Guarnera F, Rondinella A, Crispino E, Russo G, Di Lorenzo C, Maimone D, Pappalardo F, Battiato S

PubMed | May 31, 2025
This paper presents MSLesSeg, a new, publicly accessible MRI dataset designed to advance research in Multiple Sclerosis (MS) lesion segmentation. The dataset comprises 115 scans of 75 patients, including T1, T2 and FLAIR sequences, along with supplementary clinical data collected across different sources. Expert-validated annotations provide high-quality lesion segmentation labels, establishing a reliable human-labeled dataset for benchmarking. Part of the dataset was shared with expert scientists to compare the latest automatic AI-based segmentation solutions against expert manual segmentation. In addition, an AI-based lesion segmentation baseline for MSLesSeg was developed and technically validated against the latest state-of-the-art methods. The dataset, the detailed analysis of researcher contributions, and the baseline results presented here mark a significant milestone for advancing automated MS lesion segmentation research.

NeoPred: dual-phase CT AI forecasts pathologic response to neoadjuvant chemo-immunotherapy in NSCLC.

Zheng J, Yan Z, Wang R, Xiao H, Chen Z, Ge X, Li Z, Liu Z, Yu H, Liu H, Wang G, Yu P, Fu J, Zhang G, Zhang J, Liu B, Huang Y, Deng H, Wang C, Fu W, Zhang Y, Wang R, Jiang Y, Lin Y, Huang L, Yang C, Cui F, He J, Liang H

PubMed | May 31, 2025
Accurate preoperative prediction of major pathological response or pathological complete response after neoadjuvant chemo-immunotherapy remains a critical unmet need in resectable non-small-cell lung cancer (NSCLC). Conventional size-based imaging criteria offer limited reliability, while biopsy confirmation is available only post-surgery. We retrospectively assembled 509 consecutive NSCLC cases from four Chinese thoracic-oncology centers (March 2018 to March 2023) and prospectively enrolled 50 additional patients. Three 3-dimensional convolutional neural networks (pre-treatment CT, pre-surgical CT, dual-phase CT) were developed; the best-performing dual-phase model (NeoPred) optionally integrated clinical variables. Model performance was measured by area under the receiver-operating-characteristic curve (AUC) and compared with nine board-certified radiologists. In an external validation set (n=59), NeoPred achieved an AUC of 0.772 (95% CI: 0.650 to 0.895), sensitivity 0.591, specificity 0.733, and accuracy 0.627; incorporating clinical data increased the AUC to 0.787. In a prospective cohort (n=50), NeoPred reached an AUC of 0.760 (95% CI: 0.628 to 0.891), surpassing the experts' mean AUC of 0.720 (95% CI: 0.574 to 0.865). Model assistance raised the pooled expert AUC to 0.829 (95% CI: 0.707 to 0.951) and accuracy to 0.820. Marked performance persisted within radiological stable-disease subgroups (external AUC 0.742, 95% CI: 0.468 to 1.000; prospective AUC 0.833, 95% CI: 0.497 to 1.000). Combining dual-phase CT and clinical variables, NeoPred reliably and non-invasively predicts pathological response to neoadjuvant chemo-immunotherapy in NSCLC, outperforms unaided expert assessment, and significantly enhances radiologist performance. Further multinational trials are needed to confirm generalizability and support surgical decision-making.
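The 95% confidence intervals quoted for the AUCs can be obtained by percentile bootstrap over cases. The abstract does not state its exact CI procedure, so the following pure-Python sketch shows one common choice:

```python
import random

def auc(labels, scores):
    """AUC as the Mann-Whitney probability that a random positive case
    receives a higher score than a random negative case (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUC: resample cases with
    replacement, recompute the AUC, take the alpha/2 quantiles."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        lb = [labels[i] for i in idx]
        if len(set(lb)) < 2:  # skip resamples missing a class
            continue
        stats.append(auc(lb, [scores[i] for i in idx]))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi
```

Wide intervals such as the stable-disease subgroup's 0.468 to 1.000 are the expected signature of this procedure on very small subgroups.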

Development and validation of a 3-D deep learning system for diabetic macular oedema classification on optical coherence tomography images.

Zhu H, Ji J, Lin JW, Wang J, Zheng Y, Xie P, Liu C, Ng TK, Huang J, Xiong Y, Wu H, Lin L, Zhang M, Zhang G

PubMed | May 31, 2025
To develop and validate an automated diabetic macular oedema (DME) classification system based on the images from different three-dimensional optical coherence tomography (3-D OCT) devices. A multicentre, platform-based development study using retrospective and cross-sectional data. Data were subjected to a two-level grading system by trained graders and a retina specialist, and categorised into three types: no DME, non-centre-involved DME and centre-involved DME (CI-DME). The 3-D convolutional neural networks algorithm was used for DME classification system development. The deep learning (DL) performance was compared with the diabetic retinopathy experts. Data were collected from Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Chaozhou People's Hospital and The Second Affiliated Hospital of Shantou University Medical College from January 2010 to December 2023. 7790 volumes of 7146 eyes from 4254 patients were annotated, of which 6281 images were used as the development set and 1509 images were used as the external validation set, split based on the centres. Accuracy, F1-score, sensitivity, specificity, area under receiver operating characteristic curve (AUROC) and Cohen's kappa were calculated to evaluate the performance of the DL algorithm. In distinguishing DME from non-DME, our model achieved AUROCs of 0.990 (95% CI 0.983 to 0.996) and 0.916 (95% CI 0.902 to 0.930) for the hold-out test dataset and the external validation dataset, respectively. To distinguish CI-DME from non-centre-involved DME, our model achieved AUROCs of 0.859 (95% CI 0.812 to 0.906) and 0.881 (95% CI 0.859 to 0.902), respectively. In addition, our system showed comparable performance (Cohen's κ: 0.85 and 0.75) to the retina experts (Cohen's κ: 0.58-0.92 and 0.70-0.71). Our DL system achieved high accuracy in multiclassification tasks on DME classification with 3-D OCT images, which can be applied to population-based DME screening.
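Cohen's kappa, used above to compare the system with retina experts, corrects raw agreement for the agreement expected by chance. A minimal sketch (illustrative labels, not the study's gradings):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for agreement between two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(ratings_a)
    labels = set(ratings_a) | set(ratings_b)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    expected = sum(
        (ratings_a.count(l) / n) * (ratings_b.count(l) / n) for l in labels
    )
    return (observed - expected) / (1 - expected)
```

Kappa of 1 means perfect agreement; 0 means no better than chance, which is why values like 0.85 against a reference grading are read as strong agreement.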

Study of AI algorithms on mpMRI and PHI for the diagnosis of clinically significant prostate cancer.

Luo Z, Li J, Wang K, Li S, Qian Y, Xie W, Wu P, Wang X, Han J, Zhu W, Wang H, He Y

PubMed | May 31, 2025
To study the feasibility of combining multiple factors to improve the diagnostic accuracy of clinically significant prostate cancer (csPCa). A retrospective study of 131 patients analyzed age, PSA, PHI and pathology. Patients with ISUP > 2 were classified as csPCa; the others as non-csPCa. The mpMRI images were processed by an in-house AI algorithm, yielding positive or negative AI results. Four logistic regression models were fitted, with pathological findings as the dependent variable. The predicted probabilities of the patients were used to test the predictive efficacy of the models. The DeLong test was performed to compare differences in the areas under the receiver operating characteristic (ROC) curves (AUCs) between the models. The study included 131 patients: 62 were diagnosed with csPCa and 69 were non-csPCa. Statistically significant differences were found in age, PSA, PIRADS score, AI results, and PHI values between the 2 groups (all P ≤ 0.001). The conventional model (R<sup>2</sup> = 0.389), the AI model (R<sup>2</sup> = 0.566), and the PHI model (R<sup>2</sup> = 0.515) were compared to the full model (R<sup>2</sup> = 0.626) with ANOVA and showed statistically significant differences (all P < 0.05). The AUC of the full model (0.921 [95% CI: 0.871-0.972]) was significantly higher than that of the conventional model (P = 0.001), the AI model (P < 0.001), and the PHI model (P = 0.014). Combining multiple factors such as age, PSA, PIRADS score and PHI, and adding an AI algorithm based on mpMRI, the diagnostic accuracy of csPCa can be improved.
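The abstract's four models combine predictors such as age, PSA, PHI and the AI result in logistic regressions. A toy gradient-descent sketch of such a model (for illustration only; the authors would typically fit these with a standard statistics package, and the feature values below are hypothetical):

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Minimal stochastic-gradient logistic regression.
    X is a list of feature vectors, y a list of 0/1 outcomes
    (e.g. non-csPCa vs csPCa). w[0] is the intercept."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability
            g = p - yi                       # gradient of log-loss wrt z
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def predict_proba(w, xi):
    """Predicted probability of csPCa for one feature vector."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))
```

The per-patient probabilities from each fitted model are what feed the ROC curves whose AUCs the DeLong test then compares.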

From Guidelines to Intelligence: How AI Refines Thyroid Nodule Biopsy Decisions.

Zeng W, He Y, Xu R, Mai W, Chen Y, Li S, Yi W, Ma L, Xiong R, Liu H

PubMed | May 31, 2025
To evaluate the value of combining the American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI-RADS) with the Demetics ultrasound diagnostic system in reducing the rate of fine-needle aspiration (FNA) biopsies for thyroid nodules. A retrospective study analyzed 548 thyroid nodules from 454 patients, all meeting ACR TI-RADS guidelines (category ≥3 and diameter ≥10 mm) for FNA. Nodules were reclassified using the combined ACR TI-RADS and Demetics system (De TI-RADS), and the biopsy rates were compared. Using ACR TI-RADS alone, the biopsy rate was 70.6% (387/548), with a positive predictive value (PPV) of 52.5% (203/387), an unnecessary biopsy rate of 47.5% (184/387) and a missed diagnosis rate of 11.0% (25/228). Incorporating Demetics reduced the biopsy rate to 48.1% (264/548), the unnecessary biopsy rate to 17.4% (46/264) and the missed diagnosis rate to 4.4% (10/228), while increasing the PPV to 82.6% (218/264). All differences between ACR TI-RADS and De TI-RADS were statistically significant (p < 0.05). The integration of ACR TI-RADS with the Demetics system improves nodule risk assessment by enhancing diagnostic accuracy and efficiency. This approach reduces unnecessary biopsies and missed diagnoses while increasing PPV, offering a more reliable tool for clinicians and patients.
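The rates reported above follow directly from the counts in parentheses. A small helper that reproduces them (a hypothetical function, not part of either system):

```python
def biopsy_metrics(n_total, n_biopsied, n_malignant_biopsied, n_malignant_total):
    """Biopsy-triage rates from raw counts:
    n_total              nodules assessed
    n_biopsied           nodules sent to FNA
    n_malignant_biopsied malignancies among the biopsied nodules
    n_malignant_total    malignancies in the whole cohort"""
    biopsy_rate = n_biopsied / n_total
    ppv = n_malignant_biopsied / n_biopsied
    unnecessary_rate = (n_biopsied - n_malignant_biopsied) / n_biopsied
    missed_rate = (n_malignant_total - n_malignant_biopsied) / n_malignant_total
    return biopsy_rate, ppv, unnecessary_rate, missed_rate

# The abstract's ACR TI-RADS-alone arm: 548 nodules, 387 biopsied,
# 203 malignant among biopsied, 228 malignant overall.
rates = biopsy_metrics(548, 387, 203, 228)
```

Running the helper on the ACR-alone counts reproduces the quoted 70.6% biopsy rate, 52.5% PPV, 47.5% unnecessary-biopsy rate and 11.0% missed-diagnosis rate.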

Diagnostic Accuracy of an Artificial Intelligence-based Platform in Detecting Periapical Radiolucencies on Cone-Beam Computed Tomography Scans of Molars.

Allihaibi M, Koller G, Mannocci F

PubMed | May 31, 2025
This study aimed to evaluate the diagnostic performance of an artificial intelligence (AI)-based platform (Diagnocat) in detecting periapical radiolucencies (PARLs) in cone-beam computed tomography (CBCT) scans of molars. Specifically, we assessed Diagnocat's performance in detecting PARLs in non-root-filled molars and compared its diagnostic performance between preoperative and postoperative scans. This retrospective study analyzed preoperative and postoperative CBCT scans of 134 molars (327 roots). PARLs detected by Diagnocat were compared with assessments independently performed by two experienced endodontists, serving as the reference standard. Diagnostic performance was assessed at both tooth and root levels using sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), F1 score, and the area under the receiver operating characteristic curve (AUC-ROC). In preoperative scans of non-root-filled molars, Diagnocat demonstrated high sensitivity (teeth: 93.9%, roots: 86.2%), moderate specificity (teeth: 65.2%, roots: 79.9%), accuracy (teeth: 79.1%, roots: 82.6%), PPV (teeth: 71.8%, roots: 75.8%), NPV (teeth: 91.8%, roots: 88.8%), and F1 score (teeth: 81.3%, roots: 80.7%) for PARL detection. The AUC was 0.76 at the tooth level and 0.79 at the root level. Postoperative scans showed significantly lower PPV (teeth: 54.2%; roots: 46.9%) and F1 scores (teeth: 67.2%; roots: 59.2%). Diagnocat shows promise in detecting PARLs in CBCT scans of non-root-filled molars, demonstrating high sensitivity but moderate specificity, highlighting the need for human oversight to prevent overdiagnosis. However, diagnostic performance declined significantly in postoperative scans of root-filled molars. Further research is needed to optimize the platform's performance and support its integration into clinical practice. 
AI-based platforms such as Diagnocat can assist clinicians in detecting PARLs in CBCT scans, enhancing diagnostic efficiency and supporting decision-making. However, human expertise remains essential to minimize the risk of overdiagnosis and avoid unnecessary treatment.
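The tooth- and root-level metrics reported for Diagnocat all derive from a 2×2 confusion matrix against the endodontists' reference standard. A minimal sketch (illustrative counts, not the study's):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard detection metrics from confusion-matrix counts:
    tp = true positives, fp = false positives,
    fn = false negatives, tn = true negatives."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "accuracy": accuracy, "f1": f1}
```

The pattern in the abstract — high sensitivity with moderate specificity and PPV — corresponds to a matrix with few false negatives but a non-trivial number of false positives, which is exactly why the authors stress human oversight against overdiagnosis.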

Development and interpretation of a pathomics-based model for the prediction of immune therapy response in colorectal cancer.

Luo Y, Tian Q, Xu L, Zeng D, Zhang H, Zeng T, Tang H, Wang C, Chen Y

PubMed | May 31, 2025
Colorectal cancer (CRC) is the third most common malignancy and the second leading cause of cancer-related deaths worldwide, with a 5-year survival rate below 20%. Immunotherapy, particularly immune checkpoint blockade (ICB)-based therapies, has become an important approach for CRC treatment. However, only specific patient subsets demonstrate significant clinical benefits. Although the TIDE algorithm can predict immunotherapy responses, its reliance on transcriptome sequencing data limits its clinical applicability. Recent advances in artificial intelligence and computational pathology provide new avenues for medical image analysis. In this study, we classified TCGA-CRC samples into immunotherapy responder and non-responder groups using the TIDE algorithm. Further, a pathomics model based on convolutional neural networks was constructed to directly predict immunotherapy responses from histopathological images. Single-cell analysis revealed that fibroblasts may induce immunotherapy resistance in CRC through collagen-CD44 and ITGA1 + ITGB1 signaling axes. The developed pathomics model demonstrated excellent classification performance in the test set, with an AUC of 0.88 at the patch level and 0.85 at the patient level. Moreover, key pathomics features were identified through SHAP analysis. This innovative predictive tool provides a novel method for clinical decision-making in CRC immunotherapy, with potential to optimize treatment strategies and advance precision medicine.
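Pathomics models score many tissue patches per slide and then aggregate to a patient-level prediction; the abstract reports AUCs at both levels but not its aggregation rule. A minimal sketch assuming simple probability averaging, one common choice:

```python
def patient_probability(patch_probs):
    """Aggregate patch-level responder probabilities into one
    patient-level score by averaging (an assumed rule; the paper's
    actual aggregation is not specified)."""
    return sum(patch_probs) / len(patch_probs)

def classify_patient(patch_probs, threshold=0.5):
    """Call a patient a predicted responder when the aggregated
    probability reaches the threshold."""
    return patient_probability(patch_probs) >= threshold
```

Because averaging smooths out noisy individual patches, patient-level and patch-level AUCs (0.85 vs 0.88 here) can legitimately differ under the same underlying model.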