Page 50 of 3993982 results

Utility of an artificial intelligence-based lung CT airway model in the quantitative evaluation of large and small airway lesions in patients with chronic obstructive pulmonary disease.

Liu Z, Li J, Li B, Yi G, Pang S, Zhang R, Li P, Yin Z, Zhang J, Lv B, Yan J, Ma J

PubMed · Aug 1, 2025
Accurate quantification of the extent of bronchial damage across various airway levels in chronic obstructive pulmonary disease (COPD) remains a challenge. In this study, artificial intelligence (AI) was employed to develop an airway segmentation model to investigate the morphological changes of the central and peripheral airways in COPD patients and the effects of these airway changes on pulmonary function classification and acute COPD exacerbations. Clinical data from 340 patients with COPD and 73 healthy volunteers were collected and compiled. An AI-driven airway segmentation model was constructed using Convolutional Neural Regressor (CNR) and Airway Transfer Network (ATN) algorithms. The efficacy of the model was evaluated through support vector machine (SVM) and random forest regression approaches. The area under the receiver operating characteristic (ROC) curve (AUC) of the SVM in evaluating the COPD airway segmentation model was 0.96, with a sensitivity of 97% and a specificity of 92%; however, the AUC fell to 0.81 when the healthy group was replaced by non-COPD outpatients. Compared with the healthy group, patients with COPD showed fewer airway generations and a lower total number of segmented airways, smaller diameters of the right main bronchus and bilateral lobar bronchi, and thinner airway walls (all P < 0.01). In contrast, the diameters of the subsegmental and small-airway bronchi were increased, their airway walls were thickened, and their arc lengths were shorter (all P < 0.01), especially in patients with severe COPD (all P < 0.05). Correlation and regression analysis showed that FEV1%pred was positively correlated with the diameters and airway wall thickness of the main and lobar airways, and with the arc lengths of small-airway bronchi (all P < 0.05). Airway wall thickness of the subsegmental and small airways had the greatest impact on the frequency of COPD exacerbations.
The artificial intelligence lung CT airway segmentation model is a non-invasive quantitative tool for assessing airway changes in COPD. The main changes in COPD patients are that the central airways become narrower and their walls thinner, while the peripheral airways show shorter arc lengths, larger diameters, and thicker walls; these changes are more pronounced in severe disease. Pulmonary function classification and small and medium airway dysfunction are likewise associated with the diameter, wall thickness, and arc length of the large and small airways. Small airway remodeling is more significant in acute exacerbations of COPD.
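The SVM-based evaluation described above can be illustrated with a minimal sketch: a cross-validated SVM scoring airway-morphology features. The feature names, values, sample sizes, and class separations below are invented for illustration and are not taken from the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical airway morphology features (diameter, wall thickness, arc length)
# for COPD patients (label 1) and healthy controls (label 0); the real study
# used measurements produced by the AI airway segmentation model.
n = 200
X_copd = rng.normal([4.0, 1.6, 18.0], [0.6, 0.3, 2.0], size=(n, 3))
X_ctrl = rng.normal([5.0, 1.2, 21.0], [0.6, 0.3, 2.0], size=(n, 3))
X = np.vstack([X_copd, X_ctrl])
y = np.array([1] * n + [0] * n)

# Cross-validated probability scores so the AUC is not optimistically biased.
clf = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
auc = roc_auc_score(y, scores)
print(f"cross-validated AUC: {auc:.2f}")
```

With sensitivity and specificity, the same scores can be thresholded; the study reports 97% and 92% at its operating point.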

Coronary CT angiography evaluation with artificial intelligence for individualized medical treatment of atherosclerosis: a Consensus Statement from the QCI Study Group.

Schulze K, Stantien AM, Williams MC, Vassiliou VS, Giannopoulos AA, Nieman K, Maurovich-Horvat P, Tarkin JM, Vliegenthart R, Weir-McCall J, Mohamed M, Föllmer B, Biavati F, Stahl AC, Knape J, Balogh H, Galea N, Išgum I, Arbab-Zadeh A, Alkadhi H, Manka R, Wood DA, Nicol ED, Nurmohamed NS, Martens FMAC, Dey D, Newby DE, Dewey M

PubMed · Aug 1, 2025
Coronary CT angiography is widely implemented, with an estimated 2.2 million procedures in patients with stable chest pain every year in Europe alone. In parallel, artificial intelligence and machine learning are poised to transform coronary atherosclerotic plaque evaluation by improving reliability and speed. However, little is known about how to use coronary atherosclerosis imaging biomarkers to individualize recommendations for medical treatment. This Consensus Statement from the Quantitative Cardiovascular Imaging (QCI) Study Group outlines key recommendations derived from a three-step Delphi process that took place after the third international QCI Study Group meeting in September 2024. Experts from various fields of cardiovascular imaging agreed on the use of age-adjusted and gender-adjusted percentile curves, based on coronary plaque data from the DISCHARGE and SCOT-HEART trials. Two key issues were addressed: the need to harness the reliability and precision of artificial intelligence and machine learning tools and to tailor treatment on the basis of individualized plaque analysis. The QCI Study Group recommends that the presence of any atherosclerotic plaque should lead to a recommendation of pharmacological treatment, whereas the 70th percentile of total plaque volume warrants high-intensity treatment. The aim of these recommendations is to lay the groundwork for future trials and to unlock the potential of coronary CT angiography to improve patient outcomes globally.
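The percentile-based treatment rule recommended above can be sketched as follows. The cohort volumes and their distribution are hypothetical stand-ins for the age- and gender-adjusted percentile curves derived from the DISCHARGE and SCOT-HEART trials.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical total plaque volumes (mm^3) for one age/sex stratum; the QCI
# recommendation keys treatment intensity to percentile position, not raw volume.
cohort_volumes = rng.gamma(shape=2.0, scale=60.0, size=1000)

def treatment_recommendation(plaque_volume_mm3, reference_volumes):
    """Any plaque -> pharmacological treatment; >=70th percentile -> high intensity."""
    if plaque_volume_mm3 <= 0:
        return "no pharmacological treatment indicated"
    p70 = np.percentile(reference_volumes, 70)
    if plaque_volume_mm3 >= p70:
        return "high-intensity treatment"
    return "standard pharmacological treatment"

print(treatment_recommendation(0.0, cohort_volumes))
print(treatment_recommendation(30.0, cohort_volumes))
print(treatment_recommendation(500.0, cohort_volumes))
```

In practice the reference distribution would be selected by the patient's age and gender before the percentile is computed.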

BEA-CACE: branch-endpoint-aware double-DQN for coronary artery centerline extraction in CT angiography images.

Zhang Y, Luo G, Wang W, Cao S, Dong S, Yu D, Wang X, Wang K

PubMed · Aug 1, 2025
In order to automate the centerline extraction of the coronary tree, three challenges must be addressed: tracking branches automatically, passing through plaques successfully, and detecting endpoints accurately. This study aims to develop a method that solves all three. We propose a branch-endpoint-aware coronary centerline extraction framework consisting of a deep reinforcement learning-based tracker and a 3D dilated CNN-based detector. The tracker predicts the actions of an agent with the objective of tracking the centerline. The detector identifies bifurcation points and endpoints, assisting the tracker in tracking branches and terminating the tracking process automatically; it can also estimate the radius values of the coronary artery. The method achieves state-of-the-art performance in both centerline extraction and radius estimation. Furthermore, it requires minimal user interaction to extract a coronary tree, a feature that surpasses other interactive methods. The method can track branches automatically, pass through plaques successfully, and detect endpoints accurately. Compared with other interactive methods that require multiple seeds, our method needs only one seed to extract the entire coronary tree.
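The double-DQN update at the heart of such a tracker decouples action selection (online network) from action evaluation (target network), which reduces Q-value overestimation. A minimal sketch with toy Q-value arrays; the batch size, action count, and reward values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in Q-values for a batch of next states: in a real tracker these
# would come from two copies of the network scoring candidate tracking actions.
batch, n_actions = 4, 6                       # e.g. 6 candidate step directions
q_online_next = rng.normal(size=(batch, n_actions))
q_target_next = rng.normal(size=(batch, n_actions))
rewards = np.array([1.0, 0.5, 1.0, -1.0])     # e.g. centerline-proximity rewards
done = np.array([False, False, False, True])  # last transition reaches an endpoint
gamma = 0.99

# Double DQN: the online network *selects* the best next action...
best_actions = q_online_next.argmax(axis=1)
# ...while the target network *evaluates* it; terminal states get no bootstrap.
next_values = q_target_next[np.arange(batch), best_actions]
targets = rewards + gamma * next_values * (~done)
print(targets)
```

The endpoint detector plays the role of setting the `done` flag here, which is what terminates tracking automatically.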

Acute lymphoblastic leukemia diagnosis using machine learning techniques based on selected features.

El Houby EMF

PubMed · Aug 1, 2025
Cancer is considered one of the deadliest diseases worldwide. Early detection of cancer can significantly improve patient survival rates. In recent years, computer-aided diagnosis (CAD) systems have been increasingly employed in cancer diagnosis through various medical image modalities. These systems play a critical role in enhancing diagnostic accuracy, reducing physician workload, providing consistent second opinions, and contributing to the efficiency of the medical industry. Acute lymphoblastic leukemia (ALL) is a fast-progressing blood cancer that primarily affects children but can also occur in adults. Early and accurate diagnosis of ALL is crucial for effective treatment and improved outcomes, making it a vital area for CAD system development. In this research, a CAD system for ALL diagnosis has been developed. It consists of four phases: preprocessing, segmentation, feature extraction and selection, and classification of suspicious regions as normal or abnormal. The proposed system was applied to microscopic blood images to classify each case as ALL or normal. Three classifiers, Naïve Bayes (NB), Support Vector Machine (SVM), and K-nearest Neighbor (K-NN), were utilized to classify the images based on selected features. Ant Colony Optimization (ACO) was combined with the classifiers as a feature selection method to identify, among the features extracted from segmented cell parts, the optimal subset that yields the highest classification accuracy. The NB classifier achieved the best performance, with accuracy, sensitivity, and specificity of 96.15%, 97.56%, and 94.59%, respectively.
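The core of any wrapper feature selection, including the ACO variant used here, is scoring candidate feature subsets by cross-validated classifier accuracy. A minimal sketch, using a bundled sklearn dataset as a stand-in for the blood-image features and exhaustive search over small subsets as a stand-in for the ACO search itself:

```python
from itertools import combinations
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Stand-in data: the study used features extracted from segmented blood-cell
# images; here a bundled sklearn dataset illustrates the subset-scoring step.
X, y = load_breast_cancer(return_X_y=True)
X = X[:, :8]  # keep a few features so exhaustive search stays cheap

def subset_accuracy(feature_idx):
    """Cross-validated NB accuracy on one candidate feature subset --
    the fitness value ACO would use to reinforce pheromone trails."""
    return cross_val_score(GaussianNB(), X[:, list(feature_idx)], y, cv=5).mean()

# Exhaustive search over 2- and 3-feature subsets stands in for the ACO search.
best = max((s for k in (2, 3) for s in combinations(range(8), k)),
           key=subset_accuracy)
print("best subset:", best, "accuracy:", round(subset_accuracy(best), 4))
```

ACO replaces the exhaustive loop with a probabilistic construction of subsets guided by pheromone levels, which scales to feature counts where exhaustive search is infeasible.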

Deep learning-based super-resolution US radiomics to differentiate testicular seminoma and non-seminoma: an international multicenter study.

Zhang Y, Lu S, Peng C, Zhou S, Campo I, Bertolotto M, Li Q, Wang Z, Xu D, Wang Y, Xu J, Wu Q, Hu X, Zheng W, Zhou J

PubMed · Aug 1, 2025
Subvariants of testicular germ cell tumor (TGCT) significantly affect therapeutic strategies and patient prognosis. However, preoperatively distinguishing seminoma (SE) from non-seminoma (n-SE) remains a challenge. This study aimed to evaluate the performance of a deep learning-based super-resolution (SR) US radiomics model for SE/n-SE differentiation. This international multicenter retrospective study recruited patients with confirmed TGCT between 2015 and 2023. A pre-trained SR reconstruction algorithm was applied to enhance native resolution (NR) images. NR and SR radiomics models were constructed, and the superior model was then integrated with clinical features to construct clinical-radiomics models. Diagnostic performance was evaluated by ROC analysis (AUC) and compared with radiologists' assessments using the DeLong test. A total of 486 male patients were enrolled in the training (n = 338), domestic (n = 92), and international (n = 59) validation sets. The SR radiomics model achieved AUCs of 0.90, 0.82, and 0.91 in the training, domestic, and international validation sets, respectively, significantly surpassing the NR model (p < 0.001, p = 0.031, and p = 0.001, respectively). The clinical-radiomics model exhibited a significantly higher AUC across both domestic and international validation sets compared with the SR radiomics model alone (0.95 vs 0.82, p = 0.004; 0.97 vs 0.91, p = 0.031). Moreover, the clinical-radiomics model surpassed the performance of experienced radiologists in both domestic (AUC, 0.95 vs 0.85, p = 0.012) and international (AUC, 0.97 vs 0.77, p < 0.001) validation cohorts. This international multicenter study demonstrated that a radiomics model built on deep learning-based SR-reconstructed US images enables effective differentiation between SE and n-SE.
Clinical parameters and radiologists' assessments exhibit limited diagnostic accuracy for SE/n-SE differentiation in TGCT. Based on scrotal US images of TGCT, the SR radiomics models performed better than the NR radiomics models. The SR-based clinical-radiomics model outperforms both the radiomics model and radiologists' assessment, enabling accurate, non-invasive preoperative differentiation between SE and n-SE.
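The study compares AUCs with the DeLong test; a paired bootstrap on the same validation cases is a simpler, commonly used alternative that conveys the same idea of testing whether one model's AUC exceeds another's on paired data. A sketch on synthetic paired scores (all data below are invented):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Synthetic paired scores for one validation set: model A (think SR radiomics)
# separates the classes better than model B (think NR radiomics).
y = rng.integers(0, 2, size=300)
scores_a = y * 1.5 + rng.normal(size=300)
scores_b = y * 0.5 + rng.normal(size=300)

def bootstrap_auc_diff(y, s1, s2, n_boot=2000, seed=0):
    """Paired bootstrap CI for AUC(s1) - AUC(s2) computed on the same cases."""
    rng = np.random.default_rng(seed)
    diffs = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        if len(np.unique(y[idx])) < 2:  # AUC needs both classes present
            continue
        diffs.append(roc_auc_score(y[idx], s1[idx]) - roc_auc_score(y[idx], s2[idx]))
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return lo, hi

lo, hi = bootstrap_auc_diff(y, scores_a, scores_b)
print(f"95% CI for AUC difference: [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero plays the same role as a significant DeLong p-value: evidence that the AUC difference is not due to resampling noise.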

Explainable multimodal deep learning for predicting thyroid cancer lateral lymph node metastasis using ultrasound imaging.

Shen P, Yang Z, Sun J, Wang Y, Qiu C, Wang Y, Ren Y, Liu S, Cai W, Lu H, Yao S

PubMed · Aug 1, 2025
Preoperative prediction of lateral lymph node metastasis is clinically crucial for guiding surgical strategy and prognosis assessment, yet precise prediction methods are lacking. We therefore develop Lateral Lymph Node Metastasis Network (LLNM-Net), a bidirectional-attention deep-learning model that fuses multimodal data (preoperative ultrasound images, radiology reports, pathological findings, and demographics) from 29,615 patients and 9836 surgical cases across seven centers. Integrating nodule morphology and position with clinical text, LLNM-Net achieves an Area Under the Curve (AUC) of 0.944 and 84.7% accuracy in multicenter testing, outperforming human experts (64.3% accuracy) and surpassing previous models by 7.4%. Here we show that tumors within 0.25 cm of the thyroid capsule carry >72% metastasis risk, with the middle and upper lobes as high-risk regions. Leveraging location, shape, echogenicity, margins, demographics, and clinician inputs, LLNM-Net further attains an AUC of 0.983 for identifying high-risk patients. The model is thus a promising tool for preoperative screening and risk stratification.

Transparent brain tumor detection using DenseNet169 and LIME.

Abraham LA, Palanisamy G, Veerapu G

PubMed · Aug 1, 2025
A crucial area of research in the field of medical imaging is brain tumor classification, which greatly aids diagnosis and facilitates treatment planning. This paper proposes DenseNet169-LIME-TumorNet, a deep learning model that combines DenseNet169 with LIME to boost both the performance and the interpretability of brain tumor classification. The model was trained and evaluated on the publicly available Brain Tumor MRI Dataset containing 2,870 images spanning three tumor types. DenseNet169-LIME-TumorNet achieves a classification accuracy of 98.78%, outperforming widely used architectures including Inception V3, ResNet50, MobileNet V2, EfficientNet variants, and other DenseNet configurations. The integration of LIME provides visual explanations that enhance transparency and reliability in clinical decision-making. Furthermore, the model demonstrates minimal computational overhead, enabling faster inference and deployment in resource-constrained clinical environments, thereby highlighting its practical utility for real-time diagnostic support. Future work should focus on improving generalization through multi-modal learning, hybrid deep learning architectures, and real-time application development for AI-assisted diagnosis.
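LIME's core mechanism is to perturb the input, query the black-box model, and fit a locally weighted linear surrogate whose coefficients serve as the explanation. The paper applies this to MRI images via superpixels; the sketch below illustrates the same mechanism on tabular toy data, with an invented black-box stand-in for the CNN:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)

# Black-box stand-in for the DenseNet169 classifier: a forest on toy features.
# Only feature 0 drives the label, so a faithful explanation should rank it first.
X = rng.normal(size=(500, 5))
y = (X[:, 0] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def lime_style_explanation(instance, n_samples=1000, kernel_width=1.0):
    """Fit a locally weighted linear surrogate around one instance (LIME's core idea)."""
    perturbed = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    preds = black_box.predict_proba(perturbed)[:, 1]
    dists = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)  # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_

coefs = lime_style_explanation(np.zeros(5))
print("local feature importances:", np.round(coefs, 3))
```

For images, the perturbation step masks superpixels on and off rather than adding Gaussian noise, and the surrogate coefficients are rendered as the highlighted regions seen in LIME heatmaps.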

Multimodal data curation via interoperability: use cases with the Medical Imaging and Data Resource Center.

Chen W, Whitney HM, Kahaki S, Meyer C, Li H, Sá RC, Lauderdale D, Napel S, Gersing K, Grossman RL, Giger ML

PubMed · Aug 1, 2025
Interoperability (the ability of data or tools from non-cooperating resources to integrate or work together with minimal effort) is particularly important for curation of multimodal datasets from multiple data sources. The Medical Imaging and Data Resource Center (MIDRC), a multi-institutional collaborative initiative to collect, curate, and share medical imaging datasets, has made interoperability with other data commons one of its top priorities. The purpose of this study was to demonstrate the interoperability between MIDRC and two other data repositories, BioData Catalyst (BDC) and National Clinical Cohort Collaborative (N3C). Using the interoperability capabilities of the data repositories, we built two cohorts for example use cases, each containing clinical and imaging data on matched patients. The representativeness of the cohorts was characterized by comparison with CDC population statistics using the Jensen-Shannon distance. The process and methods of interoperability demonstrated in this work can be utilized by MIDRC, BDC, and N3C users to create multimodal datasets for development of artificial intelligence/machine learning models.
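The Jensen-Shannon distance used to characterize cohort representativeness is available directly in SciPy. A minimal sketch with hypothetical distributions (the bin values below are invented; the study compared cohort demographics against CDC population statistics):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical age-group distributions: a curated cohort vs. a reference
# population (e.g. CDC statistics); values are proportions summing to 1.
cohort_dist = np.array([0.10, 0.25, 0.30, 0.20, 0.15])
reference_dist = np.array([0.12, 0.22, 0.28, 0.22, 0.16])

# SciPy returns the JS *distance* (square root of the JS divergence); with
# base=2 it lies in [0, 1], where 0 means identical distributions.
jsd = jensenshannon(cohort_dist, reference_dist, base=2)
print(f"Jensen-Shannon distance: {jsd:.4f}")
```

A distance near zero indicates the cohort closely mirrors the reference population on that variable; the metric is symmetric, unlike the raw KL divergence.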

Keyword-based AI assistance in the generation of radiology reports: A pilot study.

Dong F, Nie S, Chen M, Xu F, Li Q

PubMed · Aug 1, 2025
Radiology reporting is a time-intensive process, and artificial intelligence (AI) shows potential for textual processing in radiology reporting. In this study, we proposed a keyword-based AI-assisted radiology reporting paradigm and evaluated its potential for clinical implementation. Using MRI data from 100 patients with intracranial tumors, two radiology residents independently wrote both a routine complete report (routine report) and a keyword report for each patient. Based on the keyword reports and a designed prompt, AI-assisted reports were generated (AI-generated reports). The results demonstrated median reporting time reduction ratios of 27.1% and 28.8% (mean, 28.0%) for the two residents, with no significant difference in quality scores between AI-generated and routine reports (p > 0.50). AI-generated reports showed primary diagnosis accuracies of 68.0% (Resident 1) and 76.0% (Resident 2) (mean, 72.0%). These findings suggest that the keyword-based AI-assisted reporting paradigm exhibits significant potential for clinical translation.

Development and Validation of a Brain Aging Biomarker in Middle-Aged and Older Adults: Deep Learning Approach.

Li Z, Li J, Li J, Wang M, Xu A, Huang Y, Yu Q, Zhang L, Li Y, Li Z, Wu X, Bu J, Li W

PubMed · Aug 1, 2025
Precise assessment of brain aging is crucial for early detection of neurodegenerative disorders and aiding clinical practice. Existing magnetic resonance imaging (MRI)-based methods excel in this task, but they still have room for improvement in capturing local morphological variations across brain regions and preserving the inherent neurobiological topological structures. We aimed to develop and validate a deep learning framework incorporating both connectivity and complexity for accurate brain aging estimation, facilitating early identification of neurodegenerative diseases. We used 5889 T1-weighted MRI scans from the Alzheimer's Disease Neuroimaging Initiative dataset. We proposed a novel brain vision graph neural network (BVGN), incorporating neurobiologically informed feature extraction modules and global association mechanisms to provide a sensitive deep learning-based imaging biomarker. Model performance was evaluated using mean absolute error (MAE) against benchmark models, while generalization capability was further validated on an external UK Biobank dataset. We calculated the brain age gap across distinct cognitive states and conducted multiple logistic regressions to compare its discriminative capacity against conventional cognition-related variables in distinguishing cognitively normal (CN) and mild cognitive impairment (MCI) states. Longitudinal tracking, Cox regression, and Kaplan-Meier plots were used to investigate the longitudinal performance of the brain age gap. The BVGN model achieved an MAE of 2.39 years, surpassing current state-of-the-art approaches while providing interpretable saliency maps and graph-theoretic analyses supported by medical evidence. Furthermore, its performance was validated on the UK Biobank cohort (N=34,352) with an MAE of 2.49 years.
The brain age gap derived from BVGN differed significantly across cognitive states (CN vs MCI vs Alzheimer disease; P<.001) and showed higher discriminative capacity between CN and MCI than general cognitive assessments, brain volume features, and apolipoprotein E4 carriage (area under the receiver operating characteristic curve [AUC] of 0.885 vs AUCs ranging from 0.646 to 0.815). The brain age gap showed clinical feasibility when combined with the Functional Activities Questionnaire, with improved discriminative capacity in models achieving lower MAEs (AUC of 0.945 vs 0.923 and 0.911; AUC of 0.935 vs 0.900 and 0.881). An increasing brain age gap identified by BVGN may indicate underlying pathological changes in the CN to MCI progression, with each unit increase linked to a 55% (hazard ratio=1.55, 95% CI 1.13-2.13; P=.006) higher risk of cognitive decline in individuals who are CN and a 29% (hazard ratio=1.29, 95% CI 1.09-1.51; P=.002) increase in individuals with MCI. BVGN offers a precise framework for brain aging assessment, demonstrates strong generalization on an external large-scale dataset, and proposes novel interpretability strategies to elucidate multiregional cooperative aging patterns. The brain age gap derived from BVGN is validated as a sensitive biomarker for early identification of MCI and for predicting cognitive decline, offering substantial potential for clinical applications.
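The brain age gap (predicted minus chronological age) can itself be scored as a single-feature classifier for CN vs MCI. A sketch on simulated cohorts; the group means and spreads below are invented, loosely chosen so the resulting AUC lands near the 0.885 the study reports:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)

# Simulated cohorts: the brain age gap is assumed larger on average in MCI
# than in CN, as the study reports. Units are years.
n = 400
gap_cn = rng.normal(loc=0.0, scale=2.4, size=n)    # CN: gap centred near zero
gap_mci = rng.normal(loc=3.0, scale=2.4, size=n)   # MCI: shifted upward
gaps = np.concatenate([gap_cn, gap_mci])
labels = np.array([0] * n + [1] * n)               # 0 = CN, 1 = MCI

# The continuous gap is used directly as the decision score.
auc = roc_auc_score(labels, gaps)
print(f"CN vs MCI discrimination by brain age gap, AUC = {auc:.3f}")
```

In the study the gap was additionally entered into logistic and Cox regressions alongside covariates; the single-feature AUC here is only the simplest version of that analysis.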
