Page 123 of 1521519 results

Subclinical atrial fibrillation prediction based on deep learning and strain analysis using echocardiography.

Huang SH, Lin YC, Chen L, Unankard S, Tseng VS, Tsao HM, Tang GJ

PubMed | May 31, 2025
Subclinical atrial fibrillation (SCAF), also known as atrial high-rate episodes (AHREs), refers to asymptomatic, device-detected episodes of elevated atrial rate associated with increased risks of atrial fibrillation and cardiovascular events. Although deep learning (DL) models leveraging echocardiographic ultrasound images are widely used for cardiac function analysis, their application to AHRE prediction remains unexplored. This study introduces a novel DL-based framework for automatic AHRE detection using echocardiograms. The approach encompasses left atrium (LA) segmentation, LA strain feature extraction, and AHRE classification. Data from 117 patients with cardiac implantable electronic devices undergoing echocardiography were analyzed, with 80% allocated to the development set and 20% to the test set. LA segmentation accuracy was quantified using the Dice coefficient, yielding scores of 0.923 for the LA cavity and 0.741 for the LA wall. For AHRE classification, metrics such as area under the curve (AUC), accuracy, sensitivity, and specificity were employed. A transformer-based model integrating patient characteristics demonstrated robust performance, achieving a mean AUC of 0.815, accuracy of 0.809, sensitivity of 0.800, and specificity of 0.783 for a 24-h AHRE duration threshold. This framework represents a reliable tool for AHRE assessment and holds significant potential for early SCAF detection, enhancing clinical decision-making and patient outcomes.
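The segmentation accuracies above (0.923 for the LA cavity, 0.741 for the LA wall) are Dice overlap scores. As a reminder of the metric, here is a minimal pure-Python sketch over flattened binary masks (the flat-list mask representation is an illustrative assumption, not the paper's implementation):

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks given as flat 0/1 lists.

    Dice = 2*|P ∩ T| / (|P| + |T|); returns 1.0 when both masks are empty.
    """
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

# Toy example: 2 overlapping foreground pixels, 3 foreground in each mask.
print(dice([1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 1, 0]))  # 2*2 / (3+3)
```

In practice the same formula is applied per structure (cavity vs. wall) and averaged over the test set.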

MSLesSeg: baseline and benchmarking of a new Multiple Sclerosis Lesion Segmentation dataset.

Guarnera F, Rondinella A, Crispino E, Russo G, Di Lorenzo C, Maimone D, Pappalardo F, Battiato S

PubMed | May 31, 2025
This paper presents MSLesSeg, a new, publicly accessible MRI dataset designed to advance research in Multiple Sclerosis (MS) lesion segmentation. The dataset comprises 115 scans of 75 patients, including T1, T2, and FLAIR sequences, along with supplementary clinical data collected across different sources. Expert-validated annotations provide high-quality lesion segmentation labels, establishing a reliable human-labeled dataset for benchmarking. Part of the dataset was shared with expert scientists to compare the latest automatic AI-based segmentation solutions with expert manual segmentation. In addition, an AI-based lesion segmentation baseline for MSLesSeg was developed and technically validated against the latest state-of-the-art methods. The dataset, the detailed analysis of researcher contributions, and the baseline results presented here mark a significant milestone for advancing automated MS lesion segmentation research.

NeoPred: dual-phase CT AI forecasts pathologic response to neoadjuvant chemo-immunotherapy in NSCLC.

Zheng J, Yan Z, Wang R, Xiao H, Chen Z, Ge X, Li Z, Liu Z, Yu H, Liu H, Wang G, Yu P, Fu J, Zhang G, Zhang J, Liu B, Huang Y, Deng H, Wang C, Fu W, Zhang Y, Wang R, Jiang Y, Lin Y, Huang L, Yang C, Cui F, He J, Liang H

PubMed | May 31, 2025
Accurate preoperative prediction of major pathological response or pathological complete response after neoadjuvant chemo-immunotherapy remains a critical unmet need in resectable non-small-cell lung cancer (NSCLC). Conventional size-based imaging criteria offer limited reliability, while biopsy confirmation is available only post-surgery. We retrospectively assembled 509 consecutive NSCLC cases from four Chinese thoracic-oncology centers (March 2018 to March 2023) and prospectively enrolled 50 additional patients. Three 3-dimensional convolutional neural networks (pre-treatment CT, pre-surgical CT, dual-phase CT) were developed; the best-performing dual-phase model (NeoPred) optionally integrated clinical variables. Model performance was measured by area under the receiver-operating-characteristic curve (AUC) and compared with nine board-certified radiologists. In an external validation set (n=59), NeoPred achieved an AUC of 0.772 (95% CI: 0.650 to 0.895), sensitivity 0.591, specificity 0.733, and accuracy 0.627; incorporating clinical data increased the AUC to 0.787. In a prospective cohort (n=50), NeoPred reached an AUC of 0.760 (95% CI: 0.628 to 0.891), surpassing the experts' mean AUC of 0.720 (95% CI: 0.574 to 0.865). Model assistance raised the pooled expert AUC to 0.829 (95% CI: 0.707 to 0.951) and accuracy to 0.820. Marked performance persisted within radiological stable-disease subgroups (external AUC 0.742, 95% CI: 0.468 to 1.000; prospective AUC 0.833, 95% CI: 0.497 to 1.000). Combining dual-phase CT and clinical variables, NeoPred reliably and non-invasively predicts pathological response to neoadjuvant chemo-immunotherapy in NSCLC, outperforms unaided expert assessment, and significantly enhances radiologist performance. Further multinational trials are needed to confirm generalizability and support surgical decision-making.

Development and validation of a 3-D deep learning system for diabetic macular oedema classification on optical coherence tomography images.

Zhu H, Ji J, Lin JW, Wang J, Zheng Y, Xie P, Liu C, Ng TK, Huang J, Xiong Y, Wu H, Lin L, Zhang M, Zhang G

PubMed | May 31, 2025
To develop and validate an automated diabetic macular oedema (DME) classification system based on the images from different three-dimensional optical coherence tomography (3-D OCT) devices. A multicentre, platform-based development study using retrospective and cross-sectional data. Data were subjected to a two-level grading system by trained graders and a retina specialist, and categorised into three types: no DME, non-centre-involved DME and centre-involved DME (CI-DME). The 3-D convolutional neural networks algorithm was used for DME classification system development. The deep learning (DL) performance was compared with the diabetic retinopathy experts. Data were collected from Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Chaozhou People's Hospital and The Second Affiliated Hospital of Shantou University Medical College from January 2010 to December 2023. 7790 volumes of 7146 eyes from 4254 patients were annotated, of which 6281 images were used as the development set and 1509 images were used as the external validation set, split based on the centres. Accuracy, F1-score, sensitivity, specificity, area under receiver operating characteristic curve (AUROC) and Cohen's kappa were calculated to evaluate the performance of the DL algorithm. In classifying DME versus non-DME, our model achieved AUROCs of 0.990 (95% CI 0.983 to 0.996) and 0.916 (95% CI 0.902 to 0.930) for the hold-out test set and the external validation set, respectively. To distinguish CI-DME from non-centre-involved DME, our model achieved AUROCs of 0.859 (95% CI 0.812 to 0.906) and 0.881 (95% CI 0.859 to 0.902), respectively. In addition, our system showed comparable performance (Cohen's κ: 0.85 and 0.75) to the retina experts (Cohen's κ: 0.58-0.92 and 0.70-0.71). Our DL system achieved high accuracy in multiclassification tasks on DME classification with 3-D OCT images, which can be applied to population-based DME screening.
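Agreement with the retina experts is reported as Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch of the statistic for two raters' label lists (illustrative only; it does not reproduce the study's grading protocol):

```python
def cohens_kappa(a, b):
    """Cohen's kappa between two raters' label sequences a and b.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is chance agreement from each rater's marginal label frequencies.
    """
    n = len(a)
    p_o = sum(1 for x, y in zip(a, b) if x == y) / n
    cats = set(a) | set(b)
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Toy example: raters agree on 3 of 4 cases.
print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0]))
```

Values near 1 indicate near-perfect agreement; the 0.85 and 0.75 reported above sit in the "almost perfect" and "substantial" bands of the usual Landis-Koch interpretation.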

Study of AI algorithms on mpMRI and PHI for the diagnosis of clinically significant prostate cancer.

Luo Z, Li J, Wang K, Li S, Qian Y, Xie W, Wu P, Wang X, Han J, Zhu W, Wang H, He Y

PubMed | May 31, 2025
To study the feasibility of multiple factors in improving the diagnostic accuracy of clinically significant prostate cancer (csPCa). A retrospective study of 131 patients analyzed age, PSA, PHI and pathology. Patients with ISUP > 2 were classified as csPCa, and the others as non-csPCa. The mpMRI images were processed by a homemade AI algorithm, yielding positive or negative AI results. Four logistic regression models were fitted, with pathological findings as the dependent variable. The predicted probability of the patients was used to test the prediction efficacy of the models. The DeLong test was performed to compare differences in the area under the receiver operating characteristic (ROC) curves (AUCs) between the models. The study included 131 patients: 62 were diagnosed with csPCa and 69 were non-csPCa. Statistically significant differences were found in age, PSA, PIRADS score, AI results, and PHI values between the 2 groups (all P ≤ 0.001). The conventional model (R<sup>2</sup> = 0.389), the AI model (R<sup>2</sup> = 0.566), and the PHI model (R<sup>2</sup> = 0.515) were compared to the full model (R<sup>2</sup> = 0.626) with ANOVA and showed statistically significant differences (all P < 0.05). The AUC of the full model (0.921 [95% CI: 0.871-0.972]) was significantly higher than that of the conventional model (P = 0.001), AI model (P < 0.001), and PHI model (P = 0.014). Combining multiple factors such as age, PSA, PIRADS score and PHI, and adding an mpMRI-based AI algorithm, the diagnostic accuracy of csPCa can be improved.
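The AUCs compared here by the DeLong test are areas under ROC curves; the point estimate itself reduces to the Mann-Whitney rank statistic (the probability that a random positive case scores above a random negative one). A minimal sketch with toy scores, not the study's data:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity.

    Counts, over all positive/negative pairs, how often the positive case
    outranks the negative one (ties count 0.5).
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: a model that ranks half the pairs correctly.
print(auc([0.9, 0.8, 0.3, 0.2], [1, 0, 0, 1]))
```

The DeLong test then compares two such AUCs on the same patients while accounting for their correlation; that variance calculation is beyond this sketch.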

From Guidelines to Intelligence: How AI Refines Thyroid Nodule Biopsy Decisions.

Zeng W, He Y, Xu R, Mai W, Chen Y, Li S, Yi W, Ma L, Xiong R, Liu H

PubMed | May 31, 2025
To evaluate the value of combining the American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI-RADS) with the Demetics ultrasound diagnostic system in reducing the rate of fine-needle aspiration (FNA) biopsies for thyroid nodules. A retrospective study analyzed 548 thyroid nodules from 454 patients, all meeting ACR TI-RADS guidelines (category ≥3 and diameter ≥10 mm) for FNA. Nodules were reclassified using the combined ACR TI-RADS and Demetics system (De TI-RADS), and the biopsy rates were compared. Using ACR TI-RADS alone, the biopsy rate was 70.6% (387/548), with a positive predictive value (PPV) of 52.5% (203/387), an unnecessary biopsy rate of 47.5% (184/387) and a missed diagnosis rate of 11.0% (25/228). Incorporating Demetics reduced the biopsy rate to 48.1% (264/548), the unnecessary biopsy rate to 17.4% (46/265) and the missed diagnosis rate to 4.4% (10/228), while increasing PPV to 82.6% (218/264). All differences between ACR TI-RADS and De TI-RADS were statistically significant (p < 0.05). The integration of ACR TI-RADS with the Demetics system improves nodule risk assessment by enhancing diagnostic accuracy and efficiency. This approach reduces unnecessary biopsies and missed diagnoses while increasing PPV, offering a more reliable tool for clinicians and patients.
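The reported rates follow directly from the raw counts in the abstract. A small sketch that recomputes the ACR TI-RADS-alone figures (the argument names are ours, chosen for readability):

```python
def biopsy_metrics(n_biopsied, n_total, n_malignant_biopsied,
                   n_missed, n_malignant_total):
    """Recompute the rates reported in the abstract from raw counts."""
    return {
        "biopsy_rate": n_biopsied / n_total,
        "ppv": n_malignant_biopsied / n_biopsied,
        "unnecessary_rate": (n_biopsied - n_malignant_biopsied) / n_biopsied,
        "missed_rate": n_missed / n_malignant_total,
    }

# ACR TI-RADS alone: 387/548 biopsied, 203 malignant among biopsied,
# 25 malignancies missed out of 228 total.
m = biopsy_metrics(387, 548, 203, 25, 228)
print({k: round(v, 3) for k, v in m.items()})
```

Running the same function on the De TI-RADS counts (264 biopsied, 218 malignant, 10 missed) reproduces the improved figures quoted above.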

Diagnostic Accuracy of an Artificial Intelligence-based Platform in Detecting Periapical Radiolucencies on Cone-Beam Computed Tomography Scans of Molars.

Allihaibi M, Koller G, Mannocci F

PubMed | May 31, 2025
This study aimed to evaluate the diagnostic performance of an artificial intelligence (AI)-based platform (Diagnocat) in detecting periapical radiolucencies (PARLs) in cone-beam computed tomography (CBCT) scans of molars. Specifically, we assessed Diagnocat's performance in detecting PARLs in non-root-filled molars and compared its diagnostic performance between preoperative and postoperative scans. This retrospective study analyzed preoperative and postoperative CBCT scans of 134 molars (327 roots). PARLs detected by Diagnocat were compared with assessments independently performed by two experienced endodontists, serving as the reference standard. Diagnostic performance was assessed at both tooth and root levels using sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), F1 score, and the area under the receiver operating characteristic curve (AUC-ROC). In preoperative scans of non-root-filled molars, Diagnocat demonstrated high sensitivity (teeth: 93.9%, roots: 86.2%), moderate specificity (teeth: 65.2%, roots: 79.9%), accuracy (teeth: 79.1%, roots: 82.6%), PPV (teeth: 71.8%, roots: 75.8%), NPV (teeth: 91.8%, roots: 88.8%), and F1 score (teeth: 81.3%, roots: 80.7%) for PARL detection. The AUC was 0.76 at the tooth level and 0.79 at the root level. Postoperative scans showed significantly lower PPV (teeth: 54.2%; roots: 46.9%) and F1 scores (teeth: 67.2%; roots: 59.2%). Diagnocat shows promise in detecting PARLs in CBCT scans of non-root-filled molars, demonstrating high sensitivity but moderate specificity, highlighting the need for human oversight to prevent overdiagnosis. However, diagnostic performance declined significantly in postoperative scans of root-filled molars. Further research is needed to optimize the platform's performance and support its integration into clinical practice. 
AI-based platforms such as Diagnocat can assist clinicians in detecting PARLs in CBCT scans, enhancing diagnostic efficiency and supporting decision-making. However, human expertise remains essential to minimize the risk of overdiagnosis and avoid unnecessary treatment.
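The tooth- and root-level figures above are standard confusion-matrix metrics. A minimal sketch with illustrative counts (not the study's data):

```python
def clf_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and F1 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # recall on diseased cases
    specificity = tn / (tn + fp)          # recall on healthy cases
    ppv = tp / (tp + fp)                  # precision
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return sensitivity, specificity, ppv, f1

# Toy counts: 8 true positives, 2 false positives, 2 false negatives, 8 true negatives.
print(clf_metrics(8, 2, 2, 8))
```

The drop in postoperative PPV and F1 reported above corresponds, in these terms, to a rise in false positives once root fillings introduce artefacts into the scans.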

Development and interpretation of a pathomics-based model for the prediction of immune therapy response in colorectal cancer.

Luo Y, Tian Q, Xu L, Zeng D, Zhang H, Zeng T, Tang H, Wang C, Chen Y

PubMed | May 31, 2025
Colorectal cancer (CRC) is the third most common malignancy and the second leading cause of cancer-related deaths worldwide, with a 5-year survival rate below 20%. Immunotherapy, particularly immune checkpoint blockade (ICB)-based therapies, has become an important approach for CRC treatment. However, only specific patient subsets demonstrate significant clinical benefits. Although the TIDE algorithm can predict immunotherapy responses, its reliance on transcriptome sequencing data limits its clinical applicability. Recent advances in artificial intelligence and computational pathology provide new avenues for medical image analysis. In this study, we classified TCGA-CRC samples into immunotherapy responder and non-responder groups using the TIDE algorithm. Further, a pathomics model based on convolutional neural networks was constructed to directly predict immunotherapy responses from histopathological images. Single-cell analysis revealed that fibroblasts may induce immunotherapy resistance in CRC through collagen-CD44 and ITGA1 + ITGB1 signaling axes. The developed pathomics model demonstrated excellent classification performance in the test set, with an AUC of 0.88 at the patch level and 0.85 at the patient level. Moreover, key pathomics features were identified through SHAP analysis. This innovative predictive tool provides a novel method for clinical decision-making in CRC immunotherapy, with potential to optimize treatment strategies and advance precision medicine.
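Reporting both a patch-level AUC (0.88) and a patient-level AUC (0.85) implies some pooling of patch predictions into one score per patient. Mean pooling is a common choice; the sketch below assumes it for illustration (the paper's actual aggregation rule may differ):

```python
from collections import defaultdict

def patient_level(patch_probs):
    """Mean-pool patch probabilities into one score per patient.

    patch_probs: iterable of (patient_id, probability) pairs, one per patch.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for pid, p in patch_probs:
        sums[pid] += p
        counts[pid] += 1
    return {pid: sums[pid] / counts[pid] for pid in sums}

# Toy example: patient "a" has two patches, patient "b" has one.
print(patient_level([("a", 0.2), ("a", 0.6), ("b", 0.9)]))
```

Alternatives such as max pooling or taking the fraction of positive patches change the patient-level operating characteristics, which is why the two AUCs need not coincide.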

CineMA: A Foundation Model for Cine Cardiac MRI

Yunguan Fu, Weixi Yi, Charlotte Manisty, Anish N Bhuva, Thomas A Treibel, James C Moon, Matthew J Clarkson, Rhodri Huw Davies, Yipeng Hu

arXiv | preprint | May 31, 2025
Cardiac magnetic resonance (CMR) is a key investigation in clinical cardiovascular medicine and has been used extensively in population research. However, extracting clinically important measurements such as ejection fraction for diagnosing cardiovascular diseases remains time-consuming and subjective. We developed CineMA, a foundation AI model automating these tasks with limited labels. CineMA is a self-supervised autoencoder model trained on 74,916 cine CMR studies to reconstruct images from masked inputs. After fine-tuning, it was evaluated across eight datasets on 23 tasks from four categories: ventricle and myocardium segmentation, left and right ventricle ejection fraction calculation, disease detection and classification, and landmark localisation. CineMA is the first foundation model for cine CMR to match or outperform convolutional neural networks (CNNs). CineMA demonstrated greater label efficiency than CNNs, achieving comparable or better performance with fewer annotations. This reduces the burden of clinician labelling and supports replacing task-specific training with fine-tuning foundation models in future cardiac imaging applications. Models and code for pre-training and fine-tuning are available at https://github.com/mathpluscode/CineMA, democratising access to high-performance models that otherwise require substantial computational resources, promoting reproducibility and accelerating clinical translation.
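CineMA's pre-training reconstructs images from masked inputs, the masked-autoencoder recipe. The masking step of such a model can be sketched as a random index split; the 75% mask ratio below is a conventional MAE default and an assumption here, not a figure from the abstract:

```python
import random

def mask_patches(n_patches, mask_ratio=0.75, seed=0):
    """Split patch indices into (visible, masked) sets for MAE pre-training.

    The encoder sees only the visible patches; the decoder is trained to
    reconstruct the masked ones.
    """
    rng = random.Random(seed)
    idx = list(range(n_patches))
    rng.shuffle(idx)
    n_masked = int(n_patches * mask_ratio)
    return sorted(idx[n_masked:]), sorted(idx[:n_masked])

visible, masked = mask_patches(16)
print(len(visible), len(masked))  # 4 12
```

Because the encoder only processes the small visible subset, this scheme makes self-supervised pre-training on tens of thousands of cine CMR studies computationally tractable.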

QoQ-Med: Building Multimodal Clinical Foundation Models with Domain-Aware GRPO Training

Wei Dai, Peilin Chen, Chanakya Ekbote, Paul Pu Liang

arXiv | preprint | May 31, 2025
Clinical decision-making routinely demands reasoning over heterogeneous data, yet existing multimodal language models (MLLMs) remain largely vision-centric and fail to generalize across clinical specialties. To bridge this gap, we introduce QoQ-Med-7B/32B, the first open generalist clinical foundation model that jointly reasons across medical images, time-series signals, and text reports. QoQ-Med is trained with Domain-aware Relative Policy Optimization (DRPO), a novel reinforcement-learning objective that hierarchically scales normalized rewards according to domain rarity and modality difficulty, mitigating performance imbalance caused by skewed clinical data distributions. Trained on 2.61 million instruction tuning pairs spanning 9 clinical domains, we show that DRPO training boosts diagnostic performance by 43% in macro-F1 on average across all visual domains as compared to other critic-free training methods like GRPO. Furthermore, with QoQ-Med trained on intensive segmentation data, it is able to highlight salient regions related to the diagnosis, with an IoU 10x higher than open models while reaching the performance of OpenAI o4-mini. To foster reproducibility and downstream research, we release (i) the full model weights, (ii) the modular training pipeline, and (iii) all intermediate reasoning traces at https://github.com/DDVD233/QoQ_Med.
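DRPO "hierarchically scales normalized rewards according to domain rarity." One simple way to realize rarity-based scaling is inverse-frequency weighting normalized to mean 1, sketched below as an illustrative assumption; the paper's actual objective also factors in modality difficulty and is not reproduced here:

```python
def domain_weights(domain_counts):
    """Inverse-frequency reward weights, normalized so the mean weight is 1.

    Rare domains receive weights above 1, common domains below 1, so their
    rewards contribute more equally to the policy gradient.
    """
    total = sum(domain_counts.values())
    inv = {d: total / c for d, c in domain_counts.items()}
    mean = sum(inv.values()) / len(inv)
    return {d: w / mean for d, w in inv.items()}

# Toy example: X-ray data is 9x more common than ECG data.
print(domain_weights({"xray": 90, "ecg": 10}))
```

Under this scheme a reward earned on the rare ECG domain is up-weighted relative to one earned on the abundant X-ray domain, countering the skew the abstract describes.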
