Multi-modal models using fMRI, urine and serum biomarkers for classification and risk prognosis in diabetic kidney disease.

Shao X, Xu H, Chen L, Bai P, Sun H, Yang Q, Chen R, Lin Q, Wang L, Li Y, Lin Y, Yu P

PubMed · Jul 2, 2025
Functional magnetic resonance imaging (fMRI) is a powerful tool for non-invasive evaluation of micro-changes in the kidneys. This study aimed to develop classification and prognostic models based on multi-modal data. A total of 172 participants were included, and high-resolution multi-parameter fMRI was used to acquire T2-weighted imaging (T2WI), blood oxygen level-dependent (BOLD), and diffusion tensor imaging (DTI) sequences. Based on clinical indicators, fMRI markers, and serum and urine biomarkers (CD300LF, CST4, MMRN2, SERPINA1, l-glutamic acid dimethyl ester and phosphatidylcholine), machine learning algorithms were applied to establish and validate diagnostic classification models (Models 1-6) and risk-prognostic models (Models A-E). Accuracy, sensitivity, specificity, precision, area under the curve (AUC) and recall were used to evaluate the predictive performance of the models. Of the six classification models, Model 5 (fMRI + clinical indicators) performed well, with an accuracy of 0.833 (95% confidence interval [CI]: 0.653-0.944), while the multi-modal model incorporating imaging, serum and urine multi-omics and clinical indicators (Model 6) achieved the highest accuracy, 0.923 (95% CI: 0.749-0.991). Five prognostic models were established at 2-year and 3-year follow-up. Model E performed best, achieving AUC values of 0.975 at the 2-year follow-up and 0.932 at the 3-year follow-up, and was able to identify patients at high prognostic risk. In clinical practice, the multi-modal models presented in this study have the potential to enhance clinical decision-making for patient classification and prognosis prediction.
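
The abstract reports accuracies with confidence intervals but does not name the interval method. Below is a minimal sketch of one common choice, the Clopper-Pearson exact binomial interval; the counts in the usage line are hypothetical (25 of 30), though they happen to reproduce the Model 5 interval if its held-out test set contained 30 cases, which the abstract does not state.

```python
import numpy as np
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial CI for a proportion such as accuracy (k correct of n)."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Hypothetical usage: 25 correct predictions out of 30 test cases.
print(clopper_pearson(25, 30))  # ≈ (0.653, 0.944)
```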

Diagnostic performance of artificial intelligence based on contrast-enhanced computed tomography in pancreatic ductal adenocarcinoma: a systematic review and meta-analysis.

Yan G, Chen X, Wang Y

PubMed · Jul 2, 2025
This meta-analysis systematically evaluated the diagnostic performance of artificial intelligence (AI) based on contrast-enhanced computed tomography (CECT) in detecting pancreatic ductal adenocarcinoma (PDAC). Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Diagnostic Test Accuracy (PRISMA-DTA) guidelines, a comprehensive literature search was conducted across PubMed, Embase, and Web of Science from inception to March 2025. Bivariate random-effects models pooled sensitivity, specificity, and area under the curve (AUC). Heterogeneity was quantified via I² statistics, with subgroup analyses examining sources of variability, including AI methodologies, model architectures, sample sizes, geographic distributions, control groups and tumor stages. Nineteen studies involving 5,986 patients in internal validation cohorts and 2,069 patients in external validation cohorts were included. AI models demonstrated robust diagnostic accuracy in internal validation, with pooled sensitivity of 0.94 (95% CI 0.89-0.96), specificity of 0.93 (95% CI 0.90-0.96), and AUC of 0.98 (95% CI 0.96-0.99). External validation revealed moderately reduced sensitivity (0.84; 95% CI 0.78-0.89) and AUC (0.94; 95% CI 0.92-0.96), while specificity remained comparable (0.93; 95% CI 0.87-0.96). Substantial heterogeneity (I² > 85%) was observed, predominantly attributed to methodological variations in AI architectures and disparities in cohort sizes. AI demonstrates excellent diagnostic performance for PDAC on CECT, achieving high sensitivity and specificity across validation scenarios. However, its efficacy varies significantly with clinical context and tumor stage. Therefore, prospective multicenter trials that utilize standardized protocols and diverse cohorts, including early-stage tumors and complex benign conditions, are essential to validate the clinical utility of AI.
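
Bivariate random-effects pooling (e.g., the Reitsma model) is typically fit with dedicated R packages such as mada. As a simplified illustration of the pooling and heterogeneity machinery only, the univariate DerSimonian-Laird sketch below pools one proportion (say, sensitivity) on the logit scale and returns the I² statistic quoted above; it is a stand-in for intuition, not the bivariate model the authors fit.

```python
import numpy as np

def pool_logit_dl(events, totals):
    """DerSimonian-Laird random-effects pooling of per-study proportions
    (e.g., sensitivities) on the logit scale; returns pooled value and I² (%)."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = (events + 0.5) / (totals + 1.0)        # continuity correction
    y = np.log(p / (1 - p))                    # logit-transformed proportions
    v = 1.0 / (totals * p * (1 - p))           # approx. within-study variance
    w = 1.0 / v
    y_fe = (w * y).sum() / w.sum()             # fixed-effect pooled logit
    Q = (w * (y - y_fe) ** 2).sum()            # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (Q - df) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_re = 1.0 / (v + tau2)                    # random-effects weights
    y_re = (w_re * y).sum() / w_re.sum()
    i2 = 100.0 * max(0.0, (Q - df) / Q) if Q > 0 else 0.0
    return 1.0 / (1.0 + np.exp(-y_re)), i2
```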

Multimodal Generative Artificial Intelligence Model for Creating Radiology Reports for Chest Radiographs in Patients Undergoing Tuberculosis Screening.

Hong EK, Kim HW, Song OK, Lee KC, Kim DK, Cho JB, Kim J, Lee S, Bae W, Roh B

PubMed · Jul 2, 2025
Background: Chest radiographs play a crucial role in tuberculosis screening in high-prevalence regions, although widespread radiographic screening requires expertise that may be unavailable in settings with limited medical resources. Objectives: To evaluate a multimodal generative artificial intelligence (AI) model for detecting tuberculosis-associated abnormalities on chest radiography in patients undergoing tuberculosis screening. Methods: This retrospective study evaluated 800 chest radiographs obtained from two public datasets originating from tuberculosis screening programs. A generative AI model was used to create free-text reports for the radiographs. AI-generated reports were classified in terms of the presence versus absence and the laterality of tuberculosis-related abnormalities. Two radiologists independently reviewed the radiographs for tuberculosis presence and laterality in separate sessions, without and with use of AI-generated reports, and recorded whether they would accept each report without modification. Two additional radiologists reviewed radiographs and clinical readings from the datasets to determine the reference standard. Results: By the reference standard, 422 of 800 radiographs were positive for tuberculosis-related abnormalities. For detection of tuberculosis-related abnormalities, sensitivity, specificity, and accuracy were 95.2%, 86.7%, and 90.8% for AI-generated reports; 93.1%, 93.6%, and 93.4% for reader 1 without AI-generated reports; 93.1%, 95.0%, and 94.1% for reader 1 with AI-generated reports; 95.8%, 87.2%, and 91.3% for reader 2 without AI-generated reports; and 95.8%, 91.5%, and 93.5% for reader 2 with AI-generated reports. Accuracy was significantly lower for AI-generated reports than for both readers alone (p<.001) and did not change significantly for either reader with versus without AI-generated reports (reader 1: p=.47; reader 2: p=.47). Localization performance was significantly lower (p<.001) for AI-generated reports (63.3%) than for reader 1 (79.9%) and reader 2 (77.9%) without AI-generated reports and did not significantly change for either reader with AI-generated reports (reader 1: 78.7%, p=.71; reader 2: 81.5%, p=.23). Among normal and abnormal radiographs, reader 1 accepted 91.7% and 52.4%, respectively, of AI-generated reports, while reader 2 accepted 83.2% and 37.0%. Conclusion: While AI-generated reports may augment radiologists' diagnostic assessments, the current model requires human oversight given its inferior standalone performance. Clinical Impact: The generative AI model could have potential application in tuberculosis screening programs in medically underserved regions, although technical improvements are still required.
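
The reader comparisons above call for a test that respects the paired design: each radiograph is read under both conditions. McNemar's test is the usual choice for paired accuracy comparisons, though the abstract does not name the test used; the following is a minimal sketch under that assumption.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def paired_accuracy_pvalue(correct_a, correct_b):
    """McNemar's exact test on paired correctness indicators (same cases,
    two readers/conditions). Inputs are boolean arrays; only the discordant
    pairs drive the test statistic."""
    correct_a, correct_b = np.asarray(correct_a), np.asarray(correct_b)
    b = int(np.sum(correct_a & ~correct_b))   # A right, B wrong
    c = int(np.sum(~correct_a & correct_b))   # A wrong, B right
    return mcnemar([[0, b], [c, 0]], exact=True).pvalue
```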

Artificial Intelligence-Driven Cancer Diagnostics: Enhancing Radiology and Pathology through Reproducibility, Explainability, and Multimodality.

Khosravi P, Fuchs TJ, Ho DJ

PubMed · Jul 2, 2025
The integration of artificial intelligence (AI) in cancer research has significantly advanced radiology, pathology, and multimodal approaches, offering unprecedented capabilities in image analysis, diagnosis, and treatment planning. AI techniques offer standardized assistance in settings where many diagnostic and predictive tasks are still performed manually, a practice that limits reproducibility. These methods can additionally provide explainability that helps clinicians make well-informed decisions for patient care. This review explores state-of-the-art AI methods, focusing on their application in image classification, image segmentation, multiple instance learning, generative models, and self-supervised learning. In radiology, AI enhances tumor detection, diagnosis, and treatment planning through advanced imaging modalities and real-time applications. In pathology, AI-driven image analysis improves cancer detection, biomarker discovery, and diagnostic consistency. Multimodal AI approaches can integrate data from radiology, pathology, and genomics to provide comprehensive diagnostic insights. Emerging trends, challenges, and future directions in AI-driven cancer research are discussed, emphasizing the transformative potential of these technologies in improving patient outcomes and advancing cancer care. This article is part of a special series: Driving Cancer Discoveries with Computational Research, Data Science, and Machine Learning/AI.
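
Among the method families the review covers, multiple instance learning is the one most specific to computational pathology: a single slide-level label supervises a bag of patch embeddings. A generic attention-based MIL pooling head in the style of Ilse et al. (2018) — a sketch for orientation, not code from the review — might look like this:

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-based multiple instance learning pooling (after Ilse et al., 2018):
    one slide-level logit from a bag of patch embeddings. Generic sketch."""
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.head = nn.Linear(dim, 1)

    def forward(self, bag):                        # bag: (n_patches, dim)
        w = torch.softmax(self.attn(bag), dim=0)   # one weight per patch
        slide = (w * bag).sum(dim=0)               # attention-weighted pooling
        return self.head(slide), w.squeeze(-1)     # slide logit + patch weights
```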

Clinical validation of AI-assisted animal ultrasound models for diagnosis of early liver trauma.

Song Q, He X, Wang Y, Gao H, Tan L, Ma J, Kang L, Han P, Luo Y, Wang K

PubMed · Jul 2, 2025
The study aimed to develop an AI-assisted ultrasound model for early liver trauma identification, using data from Bama miniature pigs and patients in Beijing, China. A deep learning model was created and fine-tuned with animal and clinical data, achieving high accuracy. In internal tests, the model outperformed both junior and senior sonographers. External tests confirmed the model's effectiveness, with a Dice Similarity Coefficient of 0.74, a True Positive Rate of 0.80, a Positive Predictive Value of 0.74, and a 95% Hausdorff distance of 14.84. The model's performance was comparable to that of junior sonographers and slightly below that of senior sonographers. This AI model shows promise for liver injury detection, offering a valuable tool with diagnostic capabilities similar to those of less experienced human operators.
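
For readers reproducing segmentation-style metrics like those above, here is a minimal sketch of the Dice Similarity Coefficient and the 95th-percentile Hausdorff distance for boolean masks. The voxel-spacing and border handling are assumptions; dedicated packages (e.g., MedPy, MONAI) offer vetted implementations.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice Similarity Coefficient for boolean masks a, b."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b, spacing=(1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance (assumes non-empty boolean masks)."""
    surf = lambda m: m & ~binary_erosion(m)                 # one-voxel-thick surface
    sa, sb = surf(a), surf(b)
    da = distance_transform_edt(~sb, sampling=spacing)[sa]  # A-surface -> B distances
    db = distance_transform_edt(~sa, sampling=spacing)[sb]  # B-surface -> A distances
    return np.percentile(np.hstack([da, db]), 95)
```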

A deep learning-based computed tomography reading system for the diagnosis of lung cancer associated with cystic airspaces.

Hu Z, Zhang X, Yang J, Zhang B, Chen H, Shen W, Li H, Zhou Y, Zhang J, Qiu K, Xie Z, Xu G, Tan J, Pang C

PubMed · Jul 2, 2025
To propose a deep learning model and explore its performance in the auxiliary diagnosis of lung cancer associated with cystic airspaces (LCCA) on computed tomography (CT) images. This retrospective analysis incorporated a total of 342 CT series, comprising 272 series from patients diagnosed with LCCA and 70 series from patients with pulmonary bulla. A deep learning model named LungSSFNet, built on nnUnet, was used for image recognition and segmentation, with annotations provided by experienced thoracic surgeons. The dataset was divided into a training set (245 series), a validation set (62 series), and a test set (35 series). LungSSFNet was compared with other models such as UNet, M2Snet, TANet, MADGNet, and nnUnet to evaluate its effectiveness in recognizing and segmenting LCCA and pulmonary bulla. LungSSFNet achieved an intersection over union of 81.05% and a Dice similarity coefficient of 75.15% for LCCA, and 93.03% and 92.04%, respectively, for pulmonary bulla, outperforming many existing models in segmentation tasks. Additionally, it attained an accuracy of 96.77%, a precision of 100%, and a sensitivity of 96.15%. LungSSFNet, a new deep learning model, substantially improved the diagnosis of early-stage LCCA and is potentially valuable for auxiliary clinical decision-making. Our LungSSFNet code is available at https://github.com/zx0412/LungSSFNet .

Combining multi-parametric MRI radiomics features with tumor abnormal protein to construct a machine learning-based predictive model for prostate cancer.

Zhang C, Wang Z, Shang P, Zhou Y, Zhu J, Xu L, Chen Z, Yu M, Zang Y

PubMed · Jul 2, 2025
This study aims to investigate the diagnostic value of integrating multi-parametric magnetic resonance imaging (mpMRI) radiomic features with tumor abnormal protein (TAP) and clinical characteristics for diagnosing prostate cancer. A cohort of 109 patients who underwent both mpMRI and TAP assessments prior to prostate biopsy was enrolled. Radiomic features were extracted from T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) maps. Feature selection was performed using t-tests and Least Absolute Shrinkage and Selection Operator (LASSO) regression, followed by model construction using the random forest algorithm. To further enhance accuracy and predictive performance, the study incorporated clinical factors including age, serum prostate-specific antigen (PSA) levels, and prostate volume; by integrating these clinical indicators with radiomic features, a more comprehensive and precise predictive model was developed. Finally, model performance was quantified by calculating accuracy, sensitivity, specificity, precision, recall, F1 score, and the area under the curve (AUC). From the mpMRI sequences T2WI, dADC (b = 100/1000 s/mm²), and dADC (b = 100/2000 s/mm²), 8, 10, and 13 radiomic features, respectively, were identified as significantly correlated with prostate cancer. Random forest models constructed from these three sets of radiomic features achieved AUCs of 0.83, 0.86, and 0.87, respectively. When all three sets were combined into a single random forest model, an AUC of 0.84 was obtained. A random forest model constructed on TAP and clinical characteristics achieved an AUC of 0.85. Notably, combining mpMRI radiomic features with TAP and clinical characteristics, or integrating the dADC (b = 100/2000 s/mm²) sequence with TAP and clinical characteristics, improved the AUCs to 0.91 and 0.92, respectively. The proposed model, which integrates radiomic features, TAP and clinical characteristics using machine learning, demonstrated high predictive efficiency in diagnosing prostate cancer.
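
The described pipeline (univariate t-test filter, then LASSO, then a random forest) maps directly onto scikit-learn. The sketch below is schematic: the p-value threshold, the use of LassoCV rather than a logistic LASSO, the tree count, and the CV folds are all assumptions, not the study's settings.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score

def ttest_lasso_rf(X, y, p_thresh=0.05):
    """t-test filter -> LASSO selection -> random forest AUC (X: features, y: 0/1)."""
    _, p = ttest_ind(X[y == 1], X[y == 0], axis=0)
    X_f = X[:, p < p_thresh]                                 # univariately significant
    keep = np.flatnonzero(LassoCV(cv=5).fit(X_f, y).coef_)   # nonzero LASSO coefficients
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    auc = cross_val_score(rf, X_f[:, keep], y, cv=5, scoring="roc_auc").mean()
    return rf.fit(X_f[:, keep], y), auc
```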

Deep learning strategies for semantic segmentation of pediatric brain tumors in multiparametric MRI.

Cariola A, Sibilano E, Guerriero A, Bevilacqua V, Brunetti A

PubMed · Jul 2, 2025
Automated segmentation of pediatric brain tumors (PBTs) can support precise diagnosis and treatment monitoring, but it is still poorly investigated in the literature. This study proposes two different deep learning approaches for semantic segmentation of tumor regions in PBTs from MRI scans. Two pipelines were developed for segmenting enhancing tumor (ET), tumor core (TC), and whole tumor (WT) in pediatric gliomas from the BraTS-PEDs 2024 dataset. First, a pre-trained SegResNet model was retrained with a transfer learning approach and tested on the pediatric cohort. Then, two novel multi-encoder architectures leveraging the attention mechanism were designed and trained from scratch. To enhance performance on ET regions, an ensemble paradigm and post-processing techniques were implemented. Overall, the 3-encoder model achieved the best Dice scores on TC and WT when trained with Dice loss, and on ET when trained with Generalized Dice Focal loss. SegResNet showed higher recall on TC and WT and higher precision on ET. After post-processing, Dice scores reached 0.843, 0.869, and 0.757 with the pre-trained model and 0.852, 0.876, and 0.764 with the ensemble model for TC, WT, and ET, respectively. Both strategies yielded state-of-the-art performance, with the ensemble demonstrating significantly superior results. Segmentation of the ET region improved after post-processing, which increased test metrics while preserving the integrity of the data.
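
Both pipelines optimize Dice-style objectives. As a reference point, a soft Dice loss for binary segmentation reduces to a few lines of PyTorch; the study's actual Dice and Generalized Dice Focal losses are standard library implementations (e.g., in MONAI), so this is an illustrative simplification.

```python
import torch

def soft_dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss for binary segmentation: logits are unnormalized,
    target is a {0,1} mask of the same shape; loss is averaged over the batch."""
    prob = torch.sigmoid(logits)
    dims = tuple(range(1, target.ndim))        # sum over channel/spatial dims
    inter = (prob * target).sum(dims)
    denom = prob.sum(dims) + target.sum(dims)
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()
```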

A novel neuroimaging-based early detection framework for Alzheimer's disease using deep learning.

Alasiry A, Shinan K, Alsadhan AA, Alhazmi HE, Alanazi F, Ashraf MU, Muhammad T

PubMed · Jul 2, 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that significantly impacts cognitive function, posing a major global health challenge. Despite its rising prevalence, particularly in low- and middle-income countries, early diagnosis remains inadequate: more than 55 million individuals were estimated to be affected as of 2022, a figure projected to triple by 2050. Accurate early detection is critical for effective intervention. This study presents Neuroimaging-based Early Detection of Alzheimer's Disease using Deep Learning (NEDA-DL), a novel computer-aided diagnostic (CAD) framework leveraging a hybrid ResNet-50 and AlexNet architecture optimized with CUDA-based parallel processing. The proposed deep learning model processes MRI and PET neuroimaging data, utilizing depthwise separable convolutions to enhance computational efficiency. Performance evaluation using key metrics, including accuracy, sensitivity, specificity, and F1-score, demonstrates state-of-the-art classification performance, with the Softmax classifier achieving 99.87% accuracy. Comparative analyses further validate the superiority of NEDA-DL over existing methods. By integrating structural and functional neuroimaging insights, this approach enhances diagnostic precision and supports clinical decision-making in Alzheimer's disease detection.
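
The efficiency claim rests on depthwise separable convolutions, which factor a standard k×k convolution into a per-channel (depthwise) pass followed by a 1×1 (pointwise) mixing pass, cutting multiply-adds by roughly a factor of k²·C_out / (k² + C_out). A minimal PyTorch sketch, with the 2D case shown and channel sizes as placeholders:

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (per-channel) conv followed by a 1x1 pointwise conv — the
    factorization the abstract credits for NEDA-DL's efficiency (generic sketch)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```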

Multi-channel fusion diffusion models for brain tumor MRI data augmentation.

Zuo C, Xue J, Yuan C

PubMed · Jul 2, 2025
The early diagnosis of brain tumors is crucial for patient prognosis, and medical imaging techniques such as MRI and CT scans are essential tools for diagnosing them. However, high-quality medical image data for brain tumors is often scarce and difficult to obtain, which hinders the development and application of medical image analysis models. With the advancement of artificial intelligence, particularly deep learning technologies, in medical imaging, new concepts and tools have been introduced for the early diagnosis, treatment planning, and prognosis evaluation of brain tumors. To address the challenge of imbalanced brain tumor datasets, we propose a novel data augmentation technique based on a diffusion model, referred to as the Multi-Channel Fusion Diffusion Model (MCFDiffusion). This method tackles data imbalance by converting healthy brain MRI images into images containing tumors, thereby enabling deep learning models to achieve better performance and assisting physicians in making more accurate diagnoses and treatment plans. In our experiments, we used a publicly available brain tumor dataset and compared the performance of image classification and segmentation tasks between the original data and the data augmented by our method. The results show that the augmented data improved classification accuracy by approximately 3% and the Dice coefficient for segmentation tasks by 1.5%-2.5%. Our research builds upon previous work involving Denoising Diffusion Implicit Models (DDIMs) for image generation and further enhances the applicability of this model in medical imaging by introducing a multi-channel approach and fusing defective areas with healthy images. Future work will explore the application of this model to various types of medical images and further optimize it to improve its generalization capabilities. We release our code at https://github.com/feiyueaaa/MCFDiffusion.
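
MCFDiffusion builds on DDIM sampling; its core primitive, a single deterministic (η = 0) DDIM update, is sketched below. The multi-channel fusion of tumor regions into healthy images is the paper's contribution and is not reproduced here; see the linked repository for the authors' code.

```python
import torch

@torch.no_grad()
def ddim_step(x_t, eps_pred, a_t, a_prev):
    """One deterministic DDIM update (eta = 0). a_t, a_prev are 0-dim tensors
    holding the cumulative noise-schedule values (alpha-bar) at the current
    and previous timesteps; eps_pred is the network's noise prediction."""
    x0_pred = (x_t - (1.0 - a_t).sqrt() * eps_pred) / a_t.sqrt()  # predict clean image
    return a_prev.sqrt() * x0_pred + (1.0 - a_prev).sqrt() * eps_pred
```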