
Tang W, Jin C, Kong Q, Liu C, Chen S, Ding S, Liu B, Feng Z, Li Y, Dai Y, Zhang L, Chen Y, Han X, Liu S, Chen D, Weng Z, Liu W, Wei X, Jiang X, Zhou Q, Mao N, Guo Y

PubMed · Jul 1 2025
The accurate and early evaluation of response to neoadjuvant chemotherapy (NAC) in breast cancer is crucial for optimizing treatment strategies and minimizing unnecessary interventions. While deep learning (DL)-based approaches have shown promise in medical imaging analysis, existing models often fail to comprehensively integrate spatial and temporal tumor dynamics. This study aims to develop and validate a spatiotemporal interaction (STI) model based on longitudinal MRI data to predict pathological complete response (pCR) to NAC in breast cancer patients. This study included retrospective and prospective datasets from five medical centers in China, collected from June 2018 to December 2024. These datasets were assigned to the primary cohort (including training and internal validation sets), external validation cohorts, and a prospective validation cohort. DCE-MRI scans from both pre-NAC (T0) and early-NAC (T1) stages were collected for each patient, along with surgical pathology results. A Siamese network-based STI model was developed, integrating spatial features from tumor segmentation with temporal dependencies using a transformer-based multi-head attention mechanism. This model was designed to simultaneously capture spatial heterogeneity and temporal dynamics, enabling accurate prediction of NAC response. The STI model's performance was evaluated using the area under the ROC curve (AUC) and Precision-Recall curve (AP), accuracy, sensitivity, and specificity. Additionally, the I-SPY1 and I-SPY2 datasets were used for Kaplan-Meier survival analysis and to explore the biological basis of the STI model, respectively. The prospective cohort was registered with Chinese Clinical Trial Registration Centre (ChiCTR2500102170). A total of 1044 patients were included in this study, with the pCR rate ranging from 23.8% to 35.9%. The STI model demonstrated good performance in early prediction of NAC response in breast cancer. 
In the external validation cohorts, the AUC values were 0.923 (95% CI: 0.859-0.987), 0.892 (95% CI: 0.821-0.963), and 0.913 (95% CI: 0.835-0.991), all outperforming the single-timepoint T0 or T1 models, as well as models with spatial information added (all p < 0.05, Delong test). Additionally, the STI model significantly outperformed the clinical model (p < 0.05, Delong test) and radiologists' predictions. In the prospective validation cohort, the STI model identified 90.2% (37/41) of non-pCR and 82.6% (19/23) of pCR patients, reducing misclassification rates by 58.7% and 63.3% compared to radiologists. This indicates that these patients might benefit from treatment adjustment or continued therapy in the early NAC stage. Survival analysis showed a significant correlation between the STI model and both recurrence-free survival (RFS) and overall survival (OS) in breast cancer patients. Further investigation revealed that favorable NAC responses predicted by the STI model were closely linked to upregulated immune-related genes and enhanced immune cell infiltration. Our study established a novel noninvasive STI model that integrates the spatiotemporal evolution of MRI before and during NAC to achieve early and accurate pCR prediction, offering potential guidance for personalized treatment. This study was supported by the National Natural Science Foundation of China (82302314, 62271448, 82171920, 81901711), Basic and Applied Basic Research Foundation of Guangdong Province (2022A1515110792, 2023A1515220097, 2024A1515010653), Medical Scientific Research Foundation of Guangdong Province (A2023073, A2024116), Science and Technology Projects in Guangzhou (2023A04J1275, 2024A03J1030, 2025A03J4163, 2025A03J4162); Guangzhou First People's Hospital Frontier Medical Technology Project (QY-C04).
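The fusion idea at the heart of the STI model (a shared Siamese encoder per timepoint, with transformer-style attention relating T0 and T1 features) can be sketched in miniature. This is an illustrative sketch, not the authors' architecture: the 64-dimensional features, single attention head, and mean pooling are all assumptions.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """softmax(q k^T / sqrt(d)) v over a short token sequence."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # pairwise timepoint scores
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def fuse_timepoints(feat_t0, feat_t1):
    """Treat pre-NAC (T0) and early-NAC (T1) feature vectors as a
    2-token sequence, mix them with self-attention, then pool."""
    x = np.stack([feat_t0, feat_t1])              # (2, d)
    return scaled_dot_product_attention(x, x, x).mean(axis=0)

rng = np.random.default_rng(0)
f0, f1 = rng.normal(size=64), rng.normal(size=64)
fused = fuse_timepoints(f0, f1)
print(fused.shape)  # (64,)
```

In the real model the two feature vectors would come from a shared CNN backbone over the segmented tumor; here they are random stand-ins.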

Han X, Peng C, Ruan SM, Li L, He M, Shi M, Huang B, Luo Y, Liu J, Wen H, Wang W, Zhou J, Lu M, Chen X, Zou R, Liu Z

PubMed · Jul 1 2025
Recently, a hepatic arterial infusion chemotherapy (HAIC)-associated combination therapeutic regimen, comprising HAIC and systemic therapies (molecular targeted therapy plus immunotherapy), referred to as HAIC combination therapy, has demonstrated promising anticancer effects. Identifying individuals who may potentially benefit from HAIC combination therapy could contribute to improved treatment decision-making for patients with advanced hepatocellular carcinoma (HCC). This dual-center study was a retrospective analysis of prospectively collected data from advanced HCC patients who underwent HAIC combination therapy and pretreatment contrast-enhanced ultrasound (CEUS) evaluations from March 2019 to March 2023. Two deep learning models, AE-3DNet and 3DNet, along with a time-intensity curve-based model, were developed for predicting therapeutic responses from pretreatment CEUS cine images. Diagnostic metrics, including the area under the receiver-operating-characteristic curve (AUC), were calculated to compare the performance of the models. Survival analysis was used to assess the relationship between predicted responses and prognostic outcomes. AE-3DNet was built on top of 3DNet, incorporating spatiotemporal attention modules to enhance its capacity for dynamic feature extraction. In total, 326 patients were included, 243 of whom formed the internal validation cohort, which was utilized for model development and fivefold cross-validation, while the rest formed the external validation cohort. Objective response (OR) and non-objective response (non-OR) were observed in 63% (206/326) and 37% (120/326) of the participants, respectively. Among the three efficacy prediction models assessed, AE-3DNet performed superiorly, with AUC values of 0.84 and 0.85 in the internal and external validation cohorts, respectively. AE-3DNet's predicted response survival curves closely resembled actual clinical outcomes.
The AE-3DNet deep learning model, developed from pretreatment CEUS cine images, performed satisfactorily in predicting the responses of advanced HCC to HAIC combination therapy and may serve as a promising tool for guiding combined therapy and individualized treatment strategies. Trial Registration: NCT02973685.
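The baseline compared against the deep models is a time-intensity curve (TIC) approach; a minimal sketch of the kind of TIC summary features such a model might consume is below. The specific features (peak enhancement, time-to-peak, area under the curve) and the toy signal are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

def tic_features(intensity, t):
    """Summarise a CEUS time-intensity curve from an ROI signal:
    peak enhancement, time-to-peak, and trapezoidal area under the curve."""
    intensity = np.asarray(intensity, float)
    t = np.asarray(t, float)
    peak = float(intensity.max())
    ttp = float(t[intensity.argmax()])
    # trapezoidal rule, written out to avoid version-specific numpy helpers
    auc = float(((intensity[1:] + intensity[:-1]) / 2 * np.diff(t)).sum())
    return peak, ttp, auc

# toy wash-in / wash-out curve sampled at 5 timepoints
peak, ttp, auc = tic_features([0, 1, 2, 1, 0], [0, 1, 2, 3, 4])
print(peak, ttp, auc)  # 2.0 2.0 4.0
```

A real TIC model would fit a perfusion curve per pixel or ROI across the full cine loop; this condenses the idea to three scalars.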

Li Y, Huang F, Chen D, Zhang Y, Zhang X, Liang L, Pan J, Tan L, Liu S, Lin J, Li Z, Hu G, Chen H, Peng C, Ye F, Zheng J

PubMed · Jul 1 2025
The differential diagnosis of invasive pulmonary aspergillosis (IPA), pulmonary mucormycosis (PM), bacterial pneumonia (BP) and pulmonary tuberculosis (PTB) is challenging due to overlapping clinical and imaging features. Manual CT lesion segmentation is time-consuming; deep-learning (DL)-based segmentation models offer a promising solution, yet disease-specific models for these infections remain underexplored. We aimed to develop and validate dedicated CT segmentation models for IPA, PM, BP and PTB to enhance diagnostic accuracy. Methods: Retrospective multi-centre data (115 IPA, 53 PM, 130 BP, 125 PTB) were used for training/internal validation, with 21 IPA, 8 PM, 30 BP and 31 PTB cases for external validation. Expert-annotated lesions served as ground truth. An improved 3D U-Net architecture was employed for segmentation, with preprocessing steps including normalisation, cropping and data augmentation. Performance was evaluated using Dice coefficients. Results: Internal validation achieved Dice scores of 78.83% (IPA), 93.38% (PM), 80.12% (BP) and 90.47% (PTB). External validation showed slightly reduced but robust performance: 75.09% (IPA), 77.53% (PM), 67.40% (BP) and 80.07% (PTB). The PM model demonstrated exceptional generalisability, scoring 83.41% on IPA data. Cross-validation revealed mutual applicability, with the IPA/PTB models achieving >75% Dice on each other's lesions. BP segmentation showed lower but clinically acceptable performance (>72%), likely due to complex radiological patterns. Disease-specific DL segmentation models exhibited high accuracy, particularly for PM and PTB. While the IPA and BP models require refinement, all demonstrated cross-disease utility, suggesting immediate clinical value for preliminary lesion annotation. Future efforts should enhance datasets and optimise models for intricate cases.
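The Dice coefficient used throughout this abstract has a short standard definition, sketched below on toy 2D masks (real evaluation would use 3D lesion volumes; the example masks are made up).

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary lesion masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

pred = np.zeros((4, 4)); pred[:2] = 1     # predicted lesion: top two rows
truth = np.zeros((4, 4)); truth[1:3] = 1  # reference lesion: middle two rows
score = dice_coefficient(pred, truth)
print(score)  # ≈ 0.5 (4 shared voxels, 8 + 8 total)
```

The small `eps` guards against an empty union; scores in the abstract are this quantity expressed as a percentage.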

Medghalchi Y, Zakariaei N, Rahmim A, Hacihaliloglu I

PubMed · Jul 1 2025
The effectiveness of deep neural networks (DNNs) for ultrasound image analysis depends on the availability and accuracy of the training data. However, large-scale data collection and annotation, particularly in medical fields, are often costly and time-consuming, especially when healthcare professionals are already burdened with their clinical responsibilities. Ensuring that a model remains robust across different imaging conditions, such as variations in ultrasound devices and manual transducer operation, is crucial in ultrasound image analysis. Data augmentation is a widely used solution, as it increases both the size and diversity of datasets, thereby enhancing the generalization performance of DNNs. With the advent of generative networks such as generative adversarial networks (GANs) and diffusion-based models, synthetic data generation has emerged as a promising augmentation technique. However, comprehensive studies comparing classic and generative augmentation methods are lacking, particularly in ultrasound-based breast cancer imaging, where variability in breast density, tumor morphology, and operator skill poses significant challenges. This study aims to compare the effectiveness of classic and generative network-based data augmentation techniques in improving the performance and robustness of breast ultrasound image classification models. Specifically, we seek to determine whether the computational intensity of generative networks is justified in data augmentation. This analysis will provide valuable insights into the role and benefits of each technique in enhancing the diagnostic accuracy of DNNs for breast cancer diagnosis. The code for this work will be available at: https://github.com/yasamin-med/SCDA.git.
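The "classic" side of the comparison is cheap geometric transforms; a minimal sketch of that baseline family is below. The choice of flip plus 90-degree rotation is an illustrative assumption, not the study's exact augmentation set.

```python
import numpy as np

def classic_augment(img, rng):
    """Classic (non-generative) augmentation: random horizontal flip and
    random 90-degree rotation. This is the cheap baseline family that
    studies like this compare against GAN/diffusion-based synthesis."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    return np.rot90(img, k=int(rng.integers(0, 4)))

img = np.arange(16).reshape(4, 4)  # toy stand-in for an ultrasound frame
aug = classic_augment(img, np.random.default_rng(1))
print(aug.shape)  # (4, 4)
```

Unlike generative synthesis, these transforms only rearrange existing pixels, which is exactly why their computational cost is negligible.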

Dahou A, Elaziz MA, Khattap MG, Hassan HGEMA

PubMed · Jul 1 2025
Brachial plexopathies (BPs) encompass a complex spectrum of nerve injuries affecting motor and sensory function in the upper extremities. Diagnosis is challenging due to the intricate anatomy and symptom overlap with other neuropathies. Magnetic Resonance Neurography (MRN) provides advanced imaging but requires specialized interpretation. This study proposes an AI-based framework that combines deep learning (DL) with the modified Hiking Optimization Algorithm (MHOA) enhanced by a Comprehensive Learning (CL) technique to improve the classification of nerve injuries (neuropraxia, axonotmesis, neurotmesis) using MRN data. The framework utilizes MobileNetV4 for feature extraction and MHOA for optimized feature selection across different MRI sequences (STIR, T2, T1, and DWI). A dataset of 39 patients diagnosed with BP was used. The framework classifies injuries based on Seddon's criteria, distinguishing between normal and abnormal conditions as well as injury severity. The model achieved excellent performance, with 1.0000 accuracy in distinguishing normal from abnormal conditions using STIR and T2 sequences. For injury severity classification, accuracy was 0.9820 in STIR, outperforming the original HOA and other metaheuristic algorithms. Additionally, high classification accuracy (0.9667) was observed in DWI. The proposed framework outperformed traditional methods and demonstrated high sensitivity and specificity. The proposed AI-based framework significantly improves the diagnosis of BP by accurately classifying nerve injury types. By integrating DL and optimization techniques, it reduces diagnostic variability, making it a valuable tool for clinical settings with limited specialized neuroimaging expertise. This framework has the potential to enhance clinical decision-making and optimize patient outcomes through precise and timely diagnoses.
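The framework's feature-selection step is a wrapper search: a metaheuristic proposes feature subsets and scores each against classification performance. The sketch below conveys that loop generically; the hill-climbing search and nearest-centroid fitness are deliberate simplifications standing in for the paper's MHOA and MobileNetV4 components, which are far more elaborate.

```python
import numpy as np

def fitness(mask, X, y):
    """Toy wrapper fitness: nearest-centroid accuracy on the selected
    feature columns (stand-in for the study's classifier)."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return float((pred == y).mean())

def hill_climb_select(X, y, iters=200, seed=0):
    """Greedy bit-flip search over feature subsets, a minimal stand-in
    for metaheuristic (HOA-style) wrapper feature selection."""
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape[1]) < 0.5
    best = fitness(mask, X, y)
    for _ in range(iters):
        j = int(rng.integers(X.shape[1]))
        mask[j] = ~mask[j]          # propose flipping one feature in/out
        f = fitness(mask, X, y)
        if f >= best:
            best = f                # keep the flip
        else:
            mask[j] = ~mask[j]      # revert the flip
    return mask, best

# toy data: feature 0 separates the classes, feature 1 is noise
X = np.array([[0.0, 0.3], [0.2, 1.1], [10.0, 0.4], [9.8, 1.0]])
y = np.array([0, 0, 1, 1])
mask, acc = hill_climb_select(X, y)
```

Real metaheuristics (HOA, MHOA) maintain a population of candidate masks rather than a single one, but the propose-score-accept loop is the same shape.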

Li Y, Liu S, Zhang Y, Zhang M, Jiang C, Ni M, Jin D, Qian Z, Wang J, Pan X, Yuan H

PubMed · Jul 1 2025
To explore the feasibility of deep learning (DL)-enhanced, fully automated bone mineral density (BMD) measurement using ultralow-voltage 80 kV chest CT scans performed for lung cancer screening. This study involved 987 patients who underwent 80 kV chest and 120 kV lumbar CT from January to July 2024. Patients were collected from six CT scanners and divided into the training, validation, and test sets 1 and 2 (561: 177: 112: 137). Four convolutional neural networks (CNNs) were employed for automated segmentation (3D VB-Net and SCN), region of interest extraction (3D VB-Net), and BMD calculation (DenseNet and ResNet) of the target vertebrae (T12-L2). The BMD values of T12-L2 were obtained using 80 and 120 kV quantitative CT (QCT), the latter serving as the standard reference. Linear regression and Bland-Altman analyses were used to compare BMD values between 120 kV QCT and 80 kV CNNs, and between 120 kV QCT and 80 kV QCT. Receiver operating characteristic curve analysis was used to assess the diagnostic performance of the 80 kV CNNs and 80 kV QCT in distinguishing osteoporosis and low BMD from normal BMD. Linear regression and Bland-Altman analyses revealed a stronger correlation (R<sup>2</sup>=0.991-0.998 and 0.990-0.991, P<0.001) and better agreement (mean error, -1.36 to 1.62 and 1.72 to 2.27 mg/cm<sup>3</sup>; 95% limits of agreement, -9.73 to 7.01 and -5.71 to 10.19 mg/cm<sup>3</sup>) for BMD between 120 kV QCT and 80 kV CNNs than between 120 kV QCT and 80 kV QCT. The areas under the curve of the 80 kV CNNs and 80 kV QCT in detecting osteoporosis and low BMD were 0.997-1.000 and 0.997-0.998, and 0.998-1.000 and 0.997, respectively. The DL method could achieve fully automated BMD calculation for opportunistic osteoporosis screening with high accuracy using ultralow-voltage 80 kV chest CT performed for lung cancer screening.
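The agreement statistics quoted here (mean error and 95% limits of agreement) follow the standard Bland-Altman recipe, sketched below; the sample BMD values are illustrative, not study data.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement between two
    measurement methods, e.g. reference QCT vs. CNN-derived BMD."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

qct = np.array([120.0, 95.5, 140.2, 88.7])   # illustrative BMD, mg/cm^3
cnn = qct + np.array([1.0, -0.5, 2.0, 0.5])  # small per-patient differences
bias, (lo, hi) = bland_altman(qct, cnn)
```

Narrow limits of agreement around a near-zero bias, as the 80 kV CNNs showed against 120 kV QCT, are the signature of interchangeable methods.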

Feng E, Jayasuriya NM, Nathani KR, Katsos K, Machlab LA, Johnson GW, Freedman BA, Bydon M

PubMed · Jul 1 2025
This study aimed to develop an artificial intelligence (AI) model for automatically detecting Hounsfield unit (HU) values at the L1 vertebra in preoperative thoracolumbar CT scans. This model serves as a screening tool for osteoporosis in patients undergoing spine surgery, offering an alternative to traditional bone mineral density measurement methods like dual-energy x-ray absorptiometry. The authors utilized two CT scan datasets, comprising 501 images, which were split into training, validation, and test subsets. The nnU-Net framework was used for segmentation, followed by an algorithm to calculate HU values from the L1 vertebra. The model's performance was validated against manual HU calculations by expert raters on 56 CT scans. Statistical measures included the Dice coefficient, Pearson correlation coefficient, intraclass correlation coefficient (ICC), and Bland-Altman plots to assess the agreement between AI and human-derived HU measurements. The AI model achieved a high Dice coefficient of 0.91 for vertebral segmentation. The Pearson correlation coefficient between AI-derived HU and human-derived HU values was 0.96, indicating strong agreement. ICC values for interrater reliability were 0.95 and 0.94 for raters 1 and 2, respectively. The mean difference between AI and human HU values was 7.0 HU, with limits of agreement ranging from -21.1 to 35.2 HU. A paired t-test showed no significant difference between AI and human measurements (p = 0.21). The AI model demonstrated strong agreement with human experts in measuring HU values, validating its potential as a reliable tool for automated osteoporosis screening in spine surgery patients. This approach can enhance preoperative risk assessment and perioperative bone health optimization. Future research should focus on external validation and inclusion of diverse patient demographics to ensure broader applicability.
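Once the vertebra is segmented, the HU calculation itself reduces to averaging CT intensities inside the mask. The sketch below shows that final step on synthetic data; the segmentation (nnU-Net in the study) is assumed given, and the HU values are made up.

```python
import numpy as np

def mean_hu(ct_volume, vertebra_mask):
    """Mean Hounsfield units inside a segmented L1 vertebral body.
    The segmentation mask is assumed to come from an upstream model."""
    return float(ct_volume[vertebra_mask.astype(bool)].mean())

ct = np.full((8, 8, 8), -1000.0)   # air background
ct[2:5, 2:5, 2:5] = 150.0          # synthetic vertebral body, uniform HU
mask = np.zeros_like(ct)
mask[2:5, 2:5, 2:5] = 1
print(mean_hu(ct, mask))  # 150.0
```

In practice the ROI is often restricted to trabecular bone within the vertebral body rather than the full segmentation, which this sketch does not model.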

Du M, He S, Liu J, Yuan L

PubMed · Jul 1 2025
We aimed to evaluate the diagnostic performance of artificial intelligence (AI) in detecting coronary artery stenosis and calcified plaque on CT angiography (CTA), comparing its diagnostic performance with that of radiologists. A thorough search of the literature was performed using PubMed, Web of Science, and Embase, focusing on studies published until October 2024. Studies were included if they evaluated AI models in detecting coronary artery stenosis and calcified plaque on CTA. A bivariate random-effects model was employed to determine combined sensitivity and specificity. Study heterogeneity was assessed using I<sup>2</sup> statistics. The risk of bias was assessed using the revised Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool, and the evidence level was graded using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system. Out of 1071 initially identified studies, 17 studies with 5560 patients and images were ultimately included in the final analysis. For coronary artery stenosis ≥50%, AI showed a sensitivity of 0.92 (95% CI: 0.88-0.95), specificity of 0.87 (95% CI: 0.80-0.92), and AUC of 0.96 (95% CI: 0.94-0.97), outperforming radiologists, who had a sensitivity of 0.85 (95% CI: 0.67-0.94), specificity of 0.84 (95% CI: 0.62-0.94), and AUC of 0.91 (95% CI: 0.89-0.93). For stenosis ≥70%, AI achieved a sensitivity of 0.88 (95% CI: 0.70-0.96), specificity of 0.96 (95% CI: 0.90-0.99), and AUC of 0.98 (95% CI: 0.96-0.99). In calcified plaque detection, AI demonstrated a sensitivity of 0.93 (95% CI: 0.84-0.97), specificity of 0.94 (95% CI: 0.88-0.96), and AUC of 0.98 (95% CI: 0.96-0.99). AI-based CTA analysis demonstrated superior diagnostic performance compared to clinicians in identifying ≥50% stenosis in coronary arteries and showed excellent diagnostic performance in recognizing ≥70% coronary artery stenosis and calcified plaque. However, limitations include retrospective study designs and heterogeneity in CTA technologies.
Further external validation through prospective, multicenter trials is required to confirm these findings. The original findings of this research are included in the article. For additional inquiries, please contact the corresponding authors.
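The meta-analysis pools per-study sensitivity and specificity with a bivariate random-effects model. Purely for intuition about where those two numbers come from, the sketch below does a naive fixed pooling of per-study 2x2 counts; the counts are made up, and this simplification is not the model the review actually used.

```python
def pool_counts(studies):
    """Naively pool (TP, FN, TN, FP) counts across studies into overall
    sensitivity and specificity. NOTE: real diagnostic meta-analyses
    (including this one) use a bivariate random-effects model instead,
    which accounts for between-study variation and the sens/spec trade-off."""
    tp = sum(s[0] for s in studies)
    fn = sum(s[1] for s in studies)
    tn = sum(s[2] for s in studies)
    fp = sum(s[3] for s in studies)
    return tp / (tp + fn), tn / (tn + fp)

# two hypothetical studies: (TP, FN, TN, FP)
sens, spec = pool_counts([(9, 1, 8, 2), (18, 2, 16, 4)])
print(sens, spec)  # 0.9 0.8
```

The bivariate model jointly estimates the logit-sensitivity and logit-specificity distributions across studies, which fixed-count pooling cannot capture.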

Wang W, Sun Y, Wu R, Jin L, Shi Z, Tuersun B, Yang S, Li M

PubMed · Jul 1 2025
Chest computed tomography (CT) radiomics can be utilized for categorical predictions; however, models predicting pulmonary function indices directly are lacking. This study aimed to develop machine-learning-based regression models to predict pulmonary function using chest CT radiomics. This retrospective study enrolled patients who underwent chest CT and pulmonary function tests between January 2018 and April 2024. Machine-learning regression models were constructed and validated to predict pulmonary function indices, including forced vital capacity (FVC) and forced expiratory volume in 1 s (FEV<sub>1</sub>). The models incorporated radiomics of the whole lung and clinical features. Model performance was evaluated using mean absolute error, mean squared error, root mean squared error, concordance correlation coefficient (CCC), and R-squared (R<sup>2</sup>) values, and compared with spirometry results. Individual explanations of the models' decisions were analyzed using an explainable approach based on SHapley Additive exPlanations. In total, 1585 cases were included in the analysis, 102 of which were external cases. Across the training, validation, test, and external test sets, the combined model consistently achieved the best performance in the regression task for predicting FVC (e.g. external test set: CCC, 0.745 [95% confidence interval 0.642-0.818]; R<sup>2</sup>, 0.601 [0.453-0.707]) and FEV<sub>1</sub> (e.g. external test set: CCC, 0.744 [0.633-0.824]; R<sup>2</sup>, 0.527 [0.298-0.675]). Age, sex, and emphysema were important factors for both FVC and FEV<sub>1</sub>, while distinct radiomics features contributed to each. Whole-lung-based radiomics features can be used to construct regression models to improve pulmonary function prediction.
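The CCC reported here measures both correlation and absolute agreement with spirometry; its standard (Lin's) form is short enough to sketch. The FVC values below are illustrative, not study data.

```python
import numpy as np

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient between measured
    spirometry values and model predictions: penalises both scatter
    and systematic offset, unlike Pearson's r."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    mt, mp = y_true.mean(), y_pred.mean()
    cov = ((y_true - mt) * (y_pred - mp)).mean()
    return 2 * cov / (y_true.var() + y_pred.var() + (mt - mp) ** 2)

fvc_true = np.array([3.2, 4.1, 2.8, 3.6])  # illustrative FVC, litres
fvc_pred = np.array([3.0, 4.3, 2.9, 3.5])
print(round(ccc(fvc_true, fvc_true), 3))  # 1.0
```

A model that correlates perfectly but is biased high would score r = 1 yet CCC < 1, which is why CCC suits regression against a physical reference.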

Accioly ARM, Menezes VO, Calixto LH, Bispo DPCF, Lachmann M, Mourato FA, Machado MAD, Diniz PRB

PubMed · Jul 1 2025
Diagnosing Parkinson's disease (PD) typically relies on clinical evaluations, often detecting it in advanced stages. Recently, artificial intelligence has increasingly been applied to imaging for neurodegenerative disorders. This study aims to develop a diagnostic prediction model using T1-weighted magnetic resonance imaging (T1-MRI) data from the caudate and putamen in individuals with early-stage PD. This retrospective case-control study included 69 early-stage PD patients and 22 controls, recruited through the Parkinson's Progression Markers Initiative. T1-MRI scans were acquired using a 3-tesla system. In total, 432 radiomic features were extracted from images of the segmented caudate and putamen in an automated way. Feature selection was performed using Pearson's correlation and recursive feature elimination to identify the most relevant variables. Three machine learning algorithms (random forest (RF), support vector machine, and logistic regression) were evaluated for diagnostic prediction effectiveness using a cross-validation method. The Shapley Additive Explanations technique identified the most significant features distinguishing between the groups. The metrics used to evaluate performance were discrimination, expressed as area under the ROC curve (AUC), sensitivity, and specificity; and calibration, expressed as accuracy. The RF algorithm showed superior performance, with an average accuracy of 92.85%, precision of 100.00%, sensitivity of 86.66%, specificity of 96.65%, and AUC of 0.93. The three most influential features were contrast, elongation, and gray-level non-uniformity, all from the putamen. Machine learning-based models can differentiate early-stage PD from controls using T1-weighted MRI radiomic features.
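The AUC reported for the RF classifier has a useful rank-statistic interpretation: the probability that a randomly chosen PD case scores higher than a randomly chosen control. The sketch below computes AUC that way; the classifier scores are hypothetical, not the study's outputs.

```python
import numpy as np

def auc_rank(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg) + 0.5 * P(tie): the Mann-Whitney
    interpretation of the area under the ROC curve."""
    sp = np.asarray(scores_pos, float)[:, None]
    sn = np.asarray(scores_neg, float)[None, :]
    return float((sp > sn).mean() + 0.5 * (sp == sn).mean())

pd_scores = [0.95, 0.81, 0.77]  # hypothetical scores, PD group
hc_scores = [0.30, 0.45, 0.77]  # hypothetical scores, control group
auc = auc_rank(pd_scores, hc_scores)
print(auc)  # ≈ 0.944
```

Because it only depends on score order, this AUC is unchanged by any monotone rescaling of the classifier outputs.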
