Page 33 of 2182174 results

A Contrast-Enhanced Ultrasound Cine-Based Deep Learning Model for Predicting the Response of Advanced Hepatocellular Carcinoma to Hepatic Arterial Infusion Chemotherapy Combined With Systemic Therapies.

Han X, Peng C, Ruan SM, Li L, He M, Shi M, Huang B, Luo Y, Liu J, Wen H, Wang W, Zhou J, Lu M, Chen X, Zou R, Liu Z

PubMed · Jul 1 2025
Recently, a hepatic arterial infusion chemotherapy (HAIC)-associated combination regimen, comprising HAIC and systemic therapies (molecular targeted therapy plus immunotherapy) and referred to as HAIC combination therapy, has demonstrated promising anticancer effects. Identifying individuals who may benefit from HAIC combination therapy could improve treatment decision-making for patients with advanced hepatocellular carcinoma (HCC). This dual-center study was a retrospective analysis of prospectively collected data from advanced HCC patients who underwent HAIC combination therapy and pretreatment contrast-enhanced ultrasound (CEUS) evaluations from March 2019 to March 2023. Two deep learning models, AE-3DNet and 3DNet, along with a time-intensity curve-based model, were developed to predict therapeutic response from pretreatment CEUS cine images. Diagnostic metrics, including the area under the receiver-operating-characteristic curve (AUC), were calculated to compare the performance of the models. Survival analysis was used to assess the relationship between predicted responses and prognostic outcomes. AE-3DNet was built on top of 3DNet, with spatiotemporal attention modules incorporated to enhance its capacity for dynamic feature extraction. In total, 326 patients were included, 243 of whom formed the internal validation cohort, used for model development and fivefold cross-validation, while the remainder formed the external validation cohort. Objective response (OR) and non-objective response (non-OR) were observed in 63% (206/326) and 37% (120/326) of participants, respectively. Among the three efficacy prediction models, AE-3DNet performed best, with AUC values of 0.84 and 0.85 in the internal and external validation cohorts, respectively. AE-3DNet's predicted response survival curves closely resembled actual clinical outcomes.
The AE-3DNet deep learning model, developed from pretreatment CEUS cine images, performed satisfactorily in predicting the response of advanced HCC to HAIC combination therapy and may serve as a promising tool for guiding combined therapy and individualized treatment strategies. Trial Registration: NCT02973685.
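The time-intensity curve (TIC) baseline mentioned in this abstract reduces a CEUS cine loop to per-frame ROI statistics before any modelling. A minimal sketch of that preprocessing, assuming a (frames, height, width) array and a fixed lesion mask (the function names and descriptor choices are illustrative, not the authors' code):

```python
import numpy as np

def time_intensity_curve(cine: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """Mean echo intensity inside the ROI for each frame of a cine loop.

    cine     : (T, H, W) array of frames
    roi_mask : (H, W) boolean lesion mask
    """
    return cine[:, roi_mask].mean(axis=1)

def tic_features(tic: np.ndarray, dt: float = 1.0) -> dict:
    """A few perfusion descriptors commonly derived from a TIC."""
    peak = float(tic.max())
    t_peak = float(tic.argmax()) * dt                     # time to peak
    auc = float(((tic[1:] + tic[:-1]) / 2.0).sum()) * dt  # trapezoidal AUC
    wash_in = (peak - float(tic[0])) / max(t_peak, dt)    # crude wash-in slope
    return {"peak": peak, "time_to_peak": t_peak,
            "auc": auc, "wash_in_slope": wash_in}
```

The deep models (3DNet, AE-3DNet) instead consume the full spatiotemporal volume rather than this one-dimensional summary, which is what the abstract credits for the performance gap.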

Deep Learning Models for CT Segmentation of Invasive Pulmonary Aspergillosis, Mucormycosis, Bacterial Pneumonia and Tuberculosis: A Multicentre Study.

Li Y, Huang F, Chen D, Zhang Y, Zhang X, Liang L, Pan J, Tan L, Liu S, Lin J, Li Z, Hu G, Chen H, Peng C, Ye F, Zheng J

PubMed · Jul 1 2025
The differential diagnosis of invasive pulmonary aspergillosis (IPA), pulmonary mucormycosis (PM), bacterial pneumonia (BP) and pulmonary tuberculosis (PTB) is challenging due to overlapping clinical and imaging features. Manual CT lesion segmentation is time-consuming; deep-learning (DL)-based segmentation models offer a promising solution, yet disease-specific models for these infections remain underexplored. We aimed to develop and validate dedicated CT segmentation models for IPA, PM, BP and PTB to enhance diagnostic accuracy. Methods: Retrospective multi-centre data (115 IPA, 53 PM, 130 BP, 125 PTB) were used for training/internal validation, with 21 IPA, 8 PM, 30 BP and 31 PTB cases for external validation. Expert-annotated lesions served as ground truth. An improved 3D U-Net architecture was employed for segmentation, with preprocessing steps including normalisation, cropping and data augmentation. Performance was evaluated using Dice coefficients. Results: Internal validation achieved Dice scores of 78.83% (IPA), 93.38% (PM), 80.12% (BP) and 90.47% (PTB). External validation showed slightly reduced but robust performance: 75.09% (IPA), 77.53% (PM), 67.40% (BP) and 80.07% (PTB). The PM model demonstrated exceptional generalisability, scoring 83.41% on IPA data. Cross-validation revealed mutual applicability, with the IPA and PTB models achieving >75% Dice on each other's lesions. BP segmentation showed lower but clinically acceptable performance (>72%), likely due to complex radiological patterns. Disease-specific DL segmentation models exhibited high accuracy, particularly for PM and PTB. While the IPA and BP models require refinement, all demonstrated cross-disease utility, suggesting immediate clinical value for preliminary lesion annotation. Future efforts should enhance datasets and optimise models for intricate cases.
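Segmentation quality in this study is reported as Dice scores. For reference, on binary masks the coefficient is simply twice the overlap divided by the total foreground volume (a standard definition, not code from the paper):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks of any shape.

    Returns 2*|A∩B| / (|A| + |B|); eps guards against two empty masks.
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return float(2.0 * inter / (pred.sum() + truth.sum() + eps))
```

A Dice of 78.83%, for example, means the predicted and expert lesion masks share roughly four-fifths of their combined foreground.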

Synthetic Versus Classic Data Augmentation: Impacts on Breast Ultrasound Image Classification.

Medghalchi Y, Zakariaei N, Rahmim A, Hacihaliloglu I

PubMed · Jul 1 2025
The effectiveness of deep neural networks (DNNs) for ultrasound image analysis depends on the availability and accuracy of the training data. However, large-scale data collection and annotation, particularly in medical fields, is often costly and time-consuming, especially when healthcare professionals are already burdened with their clinical responsibilities. Ensuring that a model remains robust across different imaging conditions, such as variations in ultrasound devices and manual transducer operation, is crucial in ultrasound image analysis. Data augmentation is a widely used solution, as it increases both the size and diversity of datasets, thereby enhancing the generalization performance of DNNs. With the advent of generative networks such as generative adversarial networks (GANs) and diffusion-based models, synthetic data generation has emerged as a promising augmentation technique. However, comprehensive studies comparing classic and generative augmentation methods are lacking, particularly in ultrasound-based breast cancer imaging, where variability in breast density, tumor morphology, and operator skill poses significant challenges. This study aims to compare the effectiveness of classic and generative-network-based data augmentation techniques in improving the performance and robustness of breast ultrasound image classification models. Specifically, we seek to determine whether the computational cost of generative networks is justified for data augmentation. This analysis provides insight into the role and benefits of each technique in enhancing the diagnostic accuracy of DNNs for breast cancer diagnosis. The code for this work will be available at: https://github.com/yasamin-med/SCDA.git.
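For context, the "classic" arm of such a comparison typically composes simple geometric and intensity transforms. A hedged sketch of one such pipeline (the exact transform set used by the study is not specified in the abstract):

```python
import numpy as np

def classic_augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """One randomly composed classic augmentation for a 2D ultrasound frame:
    random flips, a random 90-degree rotation, and mild multiplicative
    intensity jitter (a crude stand-in for gain/speckle variation)."""
    out = img.copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)          # horizontal flip
    if rng.random() < 0.5:
        out = np.flipud(out)          # vertical flip
    out = np.rot90(out, k=int(rng.integers(0, 4)))
    out = out * rng.uniform(0.9, 1.1)  # intensity jitter
    return out
```

Generative augmentation instead trains a GAN or diffusion model to sample entirely new images, which is where the computational-cost question the authors pose comes from.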

A Deep Learning Approach for Nerve Injury Classification in Brachial Plexopathies Using Magnetic Resonance Neurography with Modified Hiking Optimization Algorithm.

Dahou A, Elaziz MA, Khattap MG, Hassan HGEMA

PubMed · Jul 1 2025
Brachial plexopathies (BPs) encompass a complex spectrum of nerve injuries affecting motor and sensory function in the upper extremities. Diagnosis is challenging due to the intricate anatomy and symptom overlap with other neuropathies. Magnetic resonance neurography (MRN) provides advanced imaging but requires specialized interpretation. This study proposes an AI-based framework that combines deep learning (DL) with a modified Hiking Optimization Algorithm (MHOA), enhanced by a comprehensive learning (CL) technique, to improve the classification of nerve injuries (neuropraxia, axonotmesis, neurotmesis) using MRN data. The framework utilizes MobileNetV4 for feature extraction and MHOA for optimized feature selection across different MRI sequences (STIR, T2, T1, and DWI). A dataset of 39 patients diagnosed with BP was used. The framework classifies injuries based on Seddon's criteria, distinguishing normal from abnormal conditions as well as injury severity. The model achieved excellent performance, with an accuracy of 1.0000 in distinguishing normal from abnormal conditions using STIR and T2 sequences. For injury severity classification, accuracy was 0.9820 in STIR, outperforming the original HOA and other metaheuristic algorithms. Additionally, high classification accuracy (0.9667) was observed in DWI. The proposed framework outperformed traditional methods and demonstrated high sensitivity and specificity. The proposed AI-based framework significantly improves the diagnosis of BPs by accurately classifying nerve injury types. By integrating DL and optimization techniques, it reduces diagnostic variability, making it a valuable tool for clinical settings with limited specialized neuroimaging expertise. This framework has the potential to enhance clinical decision-making and optimize patient outcomes through precise and timely diagnoses.
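The abstract does not detail the MHOA update rules, but wrapper-style feature selection of this kind optimizes a binary feature mask against a classification fitness. As an illustration only, the sketch below substitutes simple bit-flip hill climbing and a nearest-centroid fitness for the actual metaheuristic and classification pipeline:

```python
import numpy as np

def fitness(X: np.ndarray, y: np.ndarray, mask: np.ndarray) -> float:
    """Training accuracy of a nearest-centroid classifier on the selected
    features (a toy fitness; the paper optimizes a full pipeline)."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1)
            < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float((pred == y).mean())

def select_features(X: np.ndarray, y: np.ndarray, iters: int = 200, seed: int = 0):
    """Bit-flip hill climbing over a binary feature mask, as a simple
    stand-in for population-based metaheuristics such as the (M)HOA."""
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape[1]) < 0.5   # random initial subset
    best = fitness(X, y, mask)
    for _ in range(iters):
        j = int(rng.integers(X.shape[1]))
        mask[j] = ~mask[j]                # propose flipping one feature
        f = fitness(X, y, mask)
        if f >= best:
            best = f                      # keep the flip
        else:
            mask[j] = ~mask[j]            # revert the flip
    return mask, best
```

Real metaheuristics maintain a population of candidate masks and share information between them; the hill climber above only shows the mask-and-fitness structure of the search.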

Deep Learning-enhanced Opportunistic Osteoporosis Screening in Ultralow-Voltage (80 kV) Chest CT: A Preliminary Study.

Li Y, Liu S, Zhang Y, Zhang M, Jiang C, Ni M, Jin D, Qian Z, Wang J, Pan X, Yuan H

PubMed · Jul 1 2025
To explore the feasibility of deep learning (DL)-enhanced, fully automated bone mineral density (BMD) measurement using ultralow-voltage 80 kV chest CT scans performed for lung cancer screening. This study involved 987 patients who underwent 80 kV chest and 120 kV lumbar CT from January to July 2024. Patients were collected from six CT scanners and divided into training, validation, and test sets 1 and 2 (561:177:112:137). Four convolutional neural networks (CNNs) were employed for automated segmentation (3D VB-Net and SCN), region-of-interest extraction (3D VB-Net), and BMD calculation (DenseNet and ResNet) of the target vertebrae (T12-L2). The BMD values of T12-L2 were obtained using 80 kV and 120 kV quantitative CT (QCT), the latter serving as the reference standard. Linear regression and Bland-Altman analyses were used to compare BMD values between 120 kV QCT and the 80 kV CNNs, and between 120 kV QCT and 80 kV QCT. Receiver operating characteristic curve analysis was used to assess the diagnostic performance of the 80 kV CNNs and 80 kV QCT in distinguishing osteoporosis and low BMD from normal BMD. Linear regression and Bland-Altman analyses revealed a stronger correlation (R<sup>2</sup>=0.991-0.998 vs 0.990-0.991, P<0.001) and better agreement (mean error, -1.36 to 1.62 vs 1.72 to 2.27 mg/cm<sup>3</sup>; 95% limits of agreement, -9.73 to 7.01 vs -5.71 to 10.19 mg/cm<sup>3</sup>) for BMD between 120 kV QCT and the 80 kV CNNs than between 120 kV QCT and 80 kV QCT. The areas under the curve of the 80 kV CNNs and 80 kV QCT in detecting osteoporosis and low BMD were 0.997-1.000 and 0.997-0.998, and 0.998-1.000 and 0.997, respectively. The DL method achieved fully automated BMD calculation for opportunistic osteoporosis screening with high accuracy using ultralow-voltage 80 kV chest CT performed for lung cancer screening.
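The agreement statistics quoted above (mean error and 95% limits of agreement) come from a Bland-Altman analysis, which reduces to the mean and standard deviation of the paired differences between two measurement methods:

```python
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Bias (mean difference) and 95% limits of agreement between two
    paired measurement methods, e.g. CNN-derived vs reference QCT BMD."""
    diff = a - b
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))        # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Narrower limits of agreement, as reported for the 80 kV CNNs against 120 kV QCT, mean the two methods rarely disagree by more than a few mg/cm³ on an individual patient.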

Artificial intelligence image analysis for Hounsfield units in preoperative thoracolumbar CT scans: an automated screening for osteoporosis in patients undergoing spine surgery.

Feng E, Jayasuriya NM, Nathani KR, Katsos K, Machlab LA, Johnson GW, Freedman BA, Bydon M

PubMed · Jul 1 2025
This study aimed to develop an artificial intelligence (AI) model for automatically detecting Hounsfield unit (HU) values at the L1 vertebra in preoperative thoracolumbar CT scans. This model serves as a screening tool for osteoporosis in patients undergoing spine surgery, offering an alternative to traditional bone mineral density measurement methods like dual-energy x-ray absorptiometry. The authors utilized two CT scan datasets, comprising 501 images, which were split into training, validation, and test subsets. The nnU-Net framework was used for segmentation, followed by an algorithm to calculate HU values from the L1 vertebra. The model's performance was validated against manual HU calculations by expert raters on 56 CT scans. Statistical measures included the Dice coefficient, Pearson correlation coefficient, intraclass correlation coefficient (ICC), and Bland-Altman plots to assess the agreement between AI and human-derived HU measurements. The AI model achieved a high Dice coefficient of 0.91 for vertebral segmentation. The Pearson correlation coefficient between AI-derived HU and human-derived HU values was 0.96, indicating strong agreement. ICC values for interrater reliability were 0.95 and 0.94 for raters 1 and 2, respectively. The mean difference between AI and human HU values was 7.0 HU, with limits of agreement ranging from -21.1 to 35.2 HU. A paired t-test showed no significant difference between AI and human measurements (p = 0.21). The AI model demonstrated strong agreement with human experts in measuring HU values, validating its potential as a reliable tool for automated osteoporosis screening in spine surgery patients. This approach can enhance preoperative risk assessment and perioperative bone health optimization. Future research should focus on external validation and inclusion of diverse patient demographics to ensure broader applicability.
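Once the vertebra is segmented, the HU measurement itself is just the mean CT value inside the mask. A minimal sketch (the 110 HU screening cutoff below is illustrative only; published opportunistic-screening thresholds vary, and this abstract does not state one):

```python
import numpy as np

def l1_hounsfield(ct: np.ndarray, l1_mask: np.ndarray) -> float:
    """Mean Hounsfield unit value inside a vertebral segmentation mask.

    ct      : 3D CT volume calibrated in HU
    l1_mask : boolean mask of the L1 region (e.g. from an nnU-Net model)
    """
    return float(ct[l1_mask].mean())

def osteoporosis_flag(mean_hu: float, threshold: float = 110.0) -> bool:
    """Screening flag: values below the cutoff warrant BMD work-up.
    The 110 HU default is an assumed, illustrative threshold."""
    return mean_hu < threshold
```

In the study, the interesting part is upstream of this step: the nnU-Net segmentation (Dice 0.91) is what removes the manual ROI placement that normally makes opportunistic HU screening labor-intensive.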

Artificial Intelligence in CT Angiography for the Detection of Coronary Artery Stenosis and Calcified Plaque: A Systematic Review and Meta-analysis.

Du M, He S, Liu J, Yuan L

PubMed · Jul 1 2025
We aimed to evaluate the diagnostic performance of artificial intelligence (AI) in detecting coronary artery stenosis and calcified plaque on CT angiography (CTA), comparing it with that of radiologists. A thorough literature search was performed using PubMed, Web of Science, and Embase, covering studies published through October 2024. Studies were included if they evaluated AI models for detecting coronary artery stenosis and calcified plaque on CTA. A bivariate random-effects model was employed to determine pooled sensitivity and specificity. Study heterogeneity was assessed using the I<sup>2</sup> statistic. Risk of bias was assessed using the revised Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool, and the evidence level was graded using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system. Of 1071 initially identified studies, 17 studies with 5560 patients and images were included in the final analysis. For coronary artery stenosis ≥50%, AI showed a sensitivity of 0.92 (95% CI: 0.88-0.95), specificity of 0.87 (95% CI: 0.80-0.92), and AUC of 0.96 (95% CI: 0.94-0.97), outperforming radiologists, who had a sensitivity of 0.85 (95% CI: 0.67-0.94), specificity of 0.84 (95% CI: 0.62-0.94), and AUC of 0.91 (95% CI: 0.89-0.93). For stenosis ≥70%, AI achieved a sensitivity of 0.88 (95% CI: 0.70-0.96), specificity of 0.96 (95% CI: 0.90-0.99), and AUC of 0.98 (95% CI: 0.96-0.99). For calcified plaque detection, AI demonstrated a sensitivity of 0.93 (95% CI: 0.84-0.97), specificity of 0.94 (95% CI: 0.88-0.96), and AUC of 0.98 (95% CI: 0.96-0.99). AI-based CTA analysis demonstrated superior diagnostic performance compared with clinicians in identifying ≥50% coronary artery stenosis and showed excellent diagnostic performance in recognizing ≥70% stenosis and calcified plaque. However, limitations include retrospective study designs and heterogeneity in CTA technologies.
Further external validation through prospective, multicenter trials is required to confirm these findings. The original findings of this research are included in the article. For additional inquiries, please contact the corresponding authors.
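Per-study sensitivity and specificity come from 2x2 confusion tables, which the bivariate random-effects model then pools on the logit scale. The sketch below shows the 2x2 step and a deliberately simplified, fixed-weight logit pooling; it is not the bivariate model the review actually fit:

```python
import numpy as np

def sens_spec(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity and specificity from one study's 2x2 confusion table."""
    return tp / (tp + fn), tn / (tn + fp)

def logit_pool(props, ns):
    """Naive pooling of proportions on the logit scale, weighted by sample
    size (a stand-in for the inverse-variance weights and between-study
    variance a real random-effects model would estimate)."""
    props, ns = np.asarray(props, float), np.asarray(ns, float)
    logits = np.log(props / (1.0 - props))
    pooled = (ns * logits).sum() / ns.sum()
    return float(1.0 / (1.0 + np.exp(-pooled)))
```

Pooling on the logit scale keeps the combined estimate inside (0, 1) and down-weights studies near the boundaries, which is why meta-analyses of diagnostic accuracy work in logits rather than raw proportions.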

Machine-Learning-Based Computed Tomography Radiomics Regression Model for Predicting Pulmonary Function.

Wang W, Sun Y, Wu R, Jin L, Shi Z, Tuersun B, Yang S, Li M

PubMed · Jul 1 2025
Chest computed tomography (CT) radiomics can be utilized for categorical predictions; however, models predicting pulmonary function indices directly are lacking. This study aimed to develop machine-learning-based regression models to predict pulmonary function using chest CT radiomics. This retrospective study enrolled patients who underwent chest CT and pulmonary function tests between January 2018 and April 2024. Machine-learning regression models were constructed and validated to predict pulmonary function indices, including forced vital capacity (FVC) and forced expiratory volume in 1 s (FEV<sub>1</sub>). The models incorporated radiomics of the whole lung and clinical features. Model performance was evaluated using mean absolute error, mean squared error, root mean squared error, concordance correlation coefficient (CCC), and R-squared (R<sup>2</sup>) value and compared to spirometry results. Individual explanations of the models' decisions were analyzed using an explainable approach based on SHapley Additive exPlanations. In total, 1585 cases were included in the analysis, with 102 of them being external cases. Across the training, validation, test, and external test sets, the combined model consistently achieved the best performance in the regression task for predicting FVC (e.g. external test set: CCC, 0.745 [95% confidence interval 0.642-0.818]; R<sup>2</sup>, 0.601 [0.453-0.707]) and FEV<sub>1</sub> (e.g. external test set: CCC, 0.744 [0.633-0.824]; R<sup>2</sup>, 0.527 [0.298-0.675]). Age, sex, and emphysema were important factors for both FVC and FEV<sub>1</sub>, while distinct radiomics features contributed to each. Whole-lung-based radiomics features can be used to construct regression models to improve pulmonary function prediction.
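The concordance correlation coefficient (CCC) reported for FVC and FEV1 above penalizes both scatter and systematic bias, unlike Pearson's r, which is why it suits method-agreement questions. Lin's definition can be computed directly:

```python
import numpy as np

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between, e.g., model
    predictions and spirometry. Equals 1 only for perfect agreement;
    a constant offset between x and y lowers it even when r = 1."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return float(2.0 * cov / (vx + vy + (mx - my) ** 2))
```

For example, shifting every prediction by a constant leaves Pearson's r at 1 but drops the CCC, so a CCC of 0.745 implies the model tracks spirometry without a large systematic offset.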

Machine Learning-Based Diagnostic Prediction Model Using T1-Weighted Striatal Magnetic Resonance Imaging for Early-Stage Parkinson's Disease Detection.

Accioly ARM, Menezes VO, Calixto LH, Bispo DPCF, Lachmann M, Mourato FA, Machado MAD, Diniz PRB

PubMed · Jul 1 2025
Diagnosing Parkinson's disease (PD) typically relies on clinical evaluations, often detecting it in advanced stages. Recently, artificial intelligence has increasingly been applied to imaging for neurodegenerative disorders. This study aims to develop a diagnostic prediction model using T1-weighted magnetic resonance imaging (T1-MRI) data from the caudate and putamen in individuals with early-stage PD. This retrospective case-control study included 69 early-stage PD patients and 22 controls, recruited through the Parkinson's Progression Markers Initiative. T1-MRI scans were acquired using a 3-tesla system. In total, 432 radiomic features were extracted automatically from images of the segmented caudate and putamen. Feature selection was performed using Pearson's correlation and recursive feature elimination to identify the most relevant variables. Three machine learning algorithms (random forest (RF), support vector machine, and logistic regression) were evaluated for diagnostic prediction using a cross-validation method. The SHapley Additive exPlanations technique identified the most significant features distinguishing the groups. Performance was evaluated in terms of discrimination, expressed as the area under the ROC curve (AUC), sensitivity, and specificity, and calibration, expressed as accuracy. The RF algorithm showed superior performance, with an average accuracy of 92.85%, precision of 100.00%, sensitivity of 86.66%, specificity of 96.65%, and AUC of 0.93. The three most influential features were contrast, elongation, and gray-level non-uniformity, all from the putamen. Machine learning-based models can differentiate early-stage PD from controls using T1-weighted MRI radiomic features.
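The Pearson-correlation step in the feature selection described above is typically a redundancy filter: one member of each highly correlated feature pair is dropped before recursive feature elimination. A minimal sketch (the 0.9 cutoff is an assumption, not stated in the abstract):

```python
import numpy as np

def pearson_filter(X: np.ndarray, threshold: float = 0.9) -> list:
    """Greedy redundancy filter over feature columns: keep a feature only
    if its absolute Pearson correlation with every already-kept feature
    stays at or below the threshold (earlier columns win ties)."""
    corr = np.corrcoef(X, rowvar=False)
    keep = []
    for j in range(X.shape[1]):
        if all(abs(corr[j, k]) <= threshold for k in keep):
            keep.append(j)
    return keep
```

With 432 radiomic features and only 91 subjects, pruning redundant features this way is what keeps the downstream models from overfitting.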

External Validation of an Artificial Intelligence Algorithm Using Biparametric MRI and Its Simulated Integration with Conventional PI-RADS for Prostate Cancer Detection.

Belue MJ, Mukhtar V, Ram R, Gokden N, Jose J, Massey JL, Biben E, Buddha S, Langford T, Shah S, Harmon SA, Turkbey B, Aydin AM

PubMed · Jul 1 2025
The Prostate Imaging Reporting and Data System (PI-RADS) shows considerable variability in inter-reader performance. Artificial intelligence (AI) algorithms have been suggested to provide performance comparable to PI-RADS for assessing prostate cancer (PCa) risk, albeit tested in highly selected cohorts. This study aimed to assess an AI algorithm for PCa detection in a clinical practice setting and to simulate integration of the AI model with PI-RADS for assessment of indeterminate PI-RADS 3 lesions. This retrospective cohort study externally validated a biparametric MRI-based AI model for PCa detection in a consecutive cohort of patients who underwent prostate MRI and subsequent targeted and systematic prostate biopsy at a urology clinic between January 2022 and March 2024. Radiologist interpretations followed PI-RADS v2.1, and biopsies were conducted per PI-RADS scores. The previously developed AI model provided lesion segmentations and cancer probability maps, which were compared with biopsy results. Additionally, we conducted a simulation adjusting biopsy thresholds for index PI-RADS category 3 studies, in which AI predictions upgraded these studies to PI-RADS category 4. Among 144 patients with a median age of 70 years and PSA density of 0.17 ng/mL/cc, the AI model's sensitivity for detection of PCa (86.6%) and clinically significant PCa (csPCa, 88.4%) was comparable to that of radiologists (85.7%, p=0.84, and 89.5%, p=0.80, respectively). The simulation combining radiologist and AI evaluations improved csPCa sensitivity by 5.8% (p=0.025). The combination of AI, PI-RADS, and PSA density provided the best diagnostic performance for csPCa (area under the curve [AUC]=0.76). The AI algorithm demonstrated PCa detection rates comparable to PI-RADS. Combining AI with radiologist interpretation improved sensitivity and could be instrumental in the assessment of low-risk and indeterminate PI-RADS lesions.
The role of AI in PCa screening remains to be further elucidated.
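The simulation described above reduces to re-scoring which lesions would have been biopsied once the AI probability is allowed to upgrade PI-RADS 3 studies. A toy version (the 0.5 probability threshold and the biopsy rule are illustrative simplifications of the paper's protocol):

```python
def simulate_upgrade(cases, ai_threshold=0.5):
    """csPCa sensitivity before and after AI-driven upgrading of PI-RADS 3.

    cases : list of (pirads, ai_prob, cspca) tuples.
    Baseline rule biopsies PI-RADS >= 4 only; the upgraded rule also
    biopsies PI-RADS 3 studies whose AI probability clears the threshold.
    (In practice many PI-RADS 3 lesions are biopsied anyway; this is a
    simplification to show the mechanics.)
    """
    def sensitivity(biopsy_rule):
        positives = [c for c in cases if c[2]]
        detected = [c for c in positives if biopsy_rule(c)]
        return len(detected) / len(positives)

    base = sensitivity(lambda c: c[0] >= 4)
    upgraded = sensitivity(
        lambda c: c[0] >= 4 or (c[0] == 3 and c[1] >= ai_threshold))
    return base, upgraded
```

The reported 5.8% sensitivity gain corresponds to csPCa cases hiding in PI-RADS 3 studies that the AI probability map flags but the baseline score alone would not.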
