
Development and validation of ultrasound-based radiomics deep learning model to identify bone erosion in rheumatoid arthritis.

Yan L, Xu J, Ye X, Lin M, Gong Y, Fang Y, Chen S

pubmed logopapers · May 19, 2025
To develop and validate a deep learning radiomics fusion model (DLR) based on ultrasound (US) images to identify bone erosion in rheumatoid arthritis (RA) patients. A total of 432 patients with RA at two institutions were included. Three hundred twelve patients from center 1 were randomly divided into a training set (N = 218) and an internal test set (N = 94) in a 7:3 ratio, while 124 patients from center 2 served as an external test set. Radiomics (Rad) and deep learning (DL) features were extracted using hand-crafted radiomics and deep transfer learning networks. Least absolute shrinkage and selection operator (LASSO) regression was employed to establish the DLR fusion feature set from the Rad and DL features. Subsequently, 10 machine learning algorithms were used to construct models, and the final optimal model was selected. Model performance was evaluated using receiver operating characteristic (ROC) and decision curve analysis (DCA). The diagnostic efficacy of sonographers was compared with and without the assistance of the optimal model. Logistic regression (LR) was chosen as the optimal algorithm for model construction on account of its superior performance (Rad/DL/DLR: area under the curve [AUC] = 0.906/0.974/0.979) in the training set. In the internal test set, DLR_LR, the final model, had the highest AUC (AUC = 0.966), which was also validated in the external test set (AUC = 0.932). With the aid of the DLR_LR model, the overall performance of both junior and senior sonographers improved significantly (P < 0.05), and there was no significant difference between the junior sonographer with DLR_LR model assistance and the senior sonographer without assistance (P > 0.05). The DLR model based on US images is the best performer and is expected to become an important tool for identifying bone erosion in RA patients. Key Points • The DLR model based on US images is the best performer in identifying BE in RA patients. • The DLR model may assist sonographers in improving the accuracy of BE evaluations.
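
As a rough illustration of the workflow this abstract describes (LASSO-based fusion of hand-crafted radiomics and deep-learning features, followed by a logistic-regression classifier scored by AUC), the sketch below uses scikit-learn on placeholder arrays; the feature matrices, labels, and hyperparameters are assumptions, not the authors' data or code.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_rad = rng.normal(size=(218, 100))   # hand-crafted radiomics features (training set)
X_dl = rng.normal(size=(218, 512))    # deep transfer-learning features
# Synthetic labels with a weak signal so LASSO has something to find (placeholder)
y = (X_rad[:, 0] + X_dl[:, 0] + rng.normal(scale=0.5, size=218) > 0).astype(int)

# Standardize the concatenated Rad + DL feature matrix
X = StandardScaler().fit_transform(np.hstack([X_rad, X_dl]))

# LASSO keeps features with non-zero coefficients: the fused "DLR" signature
coef = LassoCV(cv=5, random_state=0).fit(X, y).coef_
selected = np.flatnonzero(coef)
if selected.size == 0:                # fall back to all features if LASSO drops everything
    selected = np.arange(X.shape[1])

# Logistic regression (LR) on the selected features, scored by AUC
clf = LogisticRegression(max_iter=1000).fit(X[:, selected], y)
print("training AUC:", roc_auc_score(y, clf.predict_proba(X[:, selected])[:, 1]))
```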

Semiautomated segmentation of breast tumor on automatic breast ultrasound image using a large-scale model with customized modules.

Zhou Y, Ye M, Ye H, Zeng S, Shu X, Pan Y, Wu A, Liu P, Zhang G, Cai S, Chen S

pubmed logopapers · May 19, 2025
To verify the capability of the Segment Anything Model for medical images in 3D (SAM-Med3D), tailored with low-rank adaptation (LoRA) strategies, in segmenting breast tumors in Automated Breast Ultrasound (ABUS) images. This retrospective study collected data from 329 patients diagnosed with breast cancer (average age 54 years). The dataset was randomly divided into training (n = 204), validation (n = 29), and test sets (n = 59). Two experienced radiologists manually annotated the regions of interest of each sample in the dataset, which served as ground truth for training and evaluating the SAM-Med3D model with additional customized modules. For semi-automatic tumor segmentation, points were randomly sampled within the lesion areas to simulate radiologists' clicks in real-world scenarios. Segmentation performance was evaluated using the Dice coefficient. A total of 492 cases (200 from the "Tumor Detection, Segmentation, and Classification Challenge on Automated 3D Breast Ultrasound (TDSC-ABUS) 2023 challenge") were subjected to semi-automatic segmentation inference. The average Dice Similarity Coefficient (DSC) scores for the training, validation, and test sets of the Lishui dataset were 0.75, 0.78, and 0.75, respectively. The Breast Imaging Reporting and Data System (BI-RADS) categories of all samples ranged from BI-RADS 3 to 6, yielding average DSCs between 0.73 and 0.77. When the samples (lesion volumes ranging from 1.64 to 100.03 cm<sup>3</sup>) were categorized by lesion size, the average DSC fell between 0.72 and 0.77. The overall average DSC for the TDSC-ABUS 2023 challenge dataset was 0.79, with the test set achieving a state-of-the-art score of 0.79. The SAM-Med3D model with additional customized modules demonstrates good performance in semi-automatic 3D ABUS breast cancer tumor segmentation, indicating its feasibility for application in computer-aided diagnosis systems.
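
As a minimal sketch of the two ingredients the abstract names, the snippet below shows a generic low-rank adaptation (LoRA) wrapper around a frozen PyTorch linear layer and a Dice similarity coefficient, under assumed shapes and ranks; it is not the SAM-Med3D codebase or the study's customized modules.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update (generic LoRA)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.empty(rank, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init -> no change at start
        nn.init.normal_(self.A, std=0.01)
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

layer = LoRALinear(nn.Linear(256, 256))           # assumed feature width
out = layer(torch.randn(2, 256))
print(out.shape, dice_coefficient(torch.ones(4, 4), torch.ones(4, 4)))
```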

Non-invasive CT based multiregional radiomics for predicting pathologic complete response to preoperative neoadjuvant chemoimmunotherapy in non-small cell lung cancer.

Fan S, Xie J, Zheng S, Wang J, Zhang B, Zhang Z, Wang S, Cui Y, Liu J, Zheng X, Ye Z, Cui X, Yue D

pubmed logopapers · May 19, 2025
This study aims to develop and validate a multiregional radiomics model to predict pathological complete response (pCR) to neoadjuvant chemoimmunotherapy in non-small cell lung cancer (NSCLC), and to evaluate the performance of the model in specific subgroups (N2 stage and anti-PD-1/PD-L1). A total of 216 patients with NSCLC who underwent neoadjuvant chemoimmunotherapy followed by surgical intervention were included and randomly assigned to training and validation sets. From pre-treatment baseline CT, one intratumoral (T) and two peritumoral regions (P<sub>3</sub>: 0-3 mm; P<sub>6</sub>: 0-6 mm) were extracted. Five radiomics models were developed using machine learning algorithms to predict pCR, utilizing selected features from the intratumoral (T), peritumoral (P<sub>3</sub>, P<sub>6</sub>), and combined intra- and peritumoral regions (T + P<sub>3</sub>, T + P<sub>6</sub>). Additionally, the predictive efficacy of the optimal model was assessed separately in the N2 stage and anti-PD-1/PD-L1 subgroups. A total of 51.4% (111/216) of patients exhibited pCR following neoadjuvant chemoimmunotherapy. Multivariable analysis identified the T + P<sub>3</sub> radiomics signature as the only independent predictor of pCR (P < 0.001). The multiregional radiomics model (T + P<sub>3</sub>) exhibited superior predictive performance for pCR, achieving an area under the curve (AUC) of 0.75 in the validation cohort. Furthermore, this multiregional model maintained robust predictive accuracy in both the N2 stage and anti-PD-1/PD-L1 subgroups, with AUCs of 0.829 and 0.833, respectively. The proposed multiregional radiomics model showed potential for predicting pCR in NSCLC after neoadjuvant chemoimmunotherapy and demonstrated good predictive performance in specific subgroups. This capability may assist clinicians in identifying suitable candidates for neoadjuvant chemoimmunotherapy and advance precision therapy.
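
To make the peritumoral definitions concrete, here is a hedged sketch of how 0-3 mm and 0-6 mm peritumoral shells can be derived from a binary tumor mask with a spacing-aware Euclidean distance transform; the mask, voxel spacing, and margins below are illustrative assumptions, not the study's segmentation pipeline.

```python
import numpy as np
from scipy import ndimage

def peritumoral_ring(tumor_mask: np.ndarray, spacing_mm: tuple, margin_mm: float) -> np.ndarray:
    """Return the shell extending `margin_mm` outward from the tumor surface."""
    # Distance (in mm) of every background voxel to the tumor, respecting voxel spacing
    dist = ndimage.distance_transform_edt(~tumor_mask.astype(bool), sampling=spacing_mm)
    return (dist > 0) & (dist <= margin_mm)

mask = np.zeros((64, 64, 64), dtype=bool)
mask[28:36, 28:36, 28:36] = True                      # toy tumor mask
p3 = peritumoral_ring(mask, (1.0, 1.0, 1.0), 3.0)     # P3; T + P3 region would be mask | p3
p6 = peritumoral_ring(mask, (1.0, 1.0, 1.0), 6.0)     # P6
print(p3.sum(), p6.sum())
```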

Improving Deep Learning-Based Grading of Partial-thickness Supraspinatus Tendon Tears with Guided Diffusion Augmentation.

Ni M, Jiesisibieke D, Zhao Y, Wang Q, Gao L, Tian C, Yuan H

pubmed logopapers · May 19, 2025
To develop and validate a deep learning system with guided diffusion-based data augmentation for grading partial-thickness supraspinatus tendon (SST) tears, and to compare its performance with that of experienced radiologists, including external validation. This retrospective study included 1150 patients with arthroscopically confirmed SST tears, divided into a training set (741 patients), validation set (185 patients), and internal test set (185 patients). An independent external test set of 224 patients was used to assess generalizability. To address data imbalance, MRI images were augmented using a guided diffusion model. A ResNet-34 model was employed for Ellman grading of bursal-sided and articular-sided partial-thickness tears across different MRI sequences (oblique coronal [OCOR], oblique sagittal [OSAG], and combined OCOR+OSAG). Performance was evaluated using AUC and precision-recall curves and compared with three experienced musculoskeletal (MSK) radiologists. The DeLong test was used to compare performance across sequence combinations. A total of 26,020 OCOR images and 26,356 OSAG images were generated using the guided diffusion model. For bursal-sided partial-thickness tears in the internal dataset, the model achieved AUCs of 0.99, 0.98, and 0.97 for OCOR, OSAG, and combined sequences, respectively, while for articular-sided tears, AUCs were 0.99, 0.99, and 0.99. The DeLong test showed no significant differences among sequence combinations (P=0.17, 0.14, 0.07). In the external dataset, AUCs were 0.99, 0.97, and 0.97 for bursal-sided tears and 0.99, 0.95, and 0.95 for articular-sided tears. Radiologists demonstrated an ICC of 0.99, but their grading performance was significantly lower than that of the ResNet-34 model (P<0.001). The deep learning system improved grading consistency and significantly reduced evaluation time, while guided diffusion augmentation enhanced model robustness. The proposed deep learning system provides a reliable and efficient method for grading partial-thickness SST tears, achieving radiologist-level accuracy with greater consistency and faster evaluation.
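
For orientation only, the block below re-heads a torchvision ResNet-34 for Ellman-grade classification and runs one dummy training step; the number of classes, weights, and hyperparameters are assumptions, and the guided-diffusion augmentation itself is not reproduced here (synthetic images would simply be appended to the training set).

```python
import torch
import torch.nn as nn
from torchvision import models

num_grades = 3                                    # assumed number of Ellman grades
# Downloads ImageNet weights (recent torchvision API); weights=None also works offline
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_grades)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(4, 3, 224, 224)                   # dummy batch of MRI slices
y = torch.randint(0, num_grades, (4,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print("dummy loss:", loss.item())
```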

Effect of low-dose colchicine on pericoronary inflammation and coronary plaque composition in chronic coronary disease: a subanalysis of the LoDoCo2 trial.

Fiolet ATL, Lin A, Kwiecinski J, Tutein Nolthenius J, McElhinney P, Grodecki K, Kietselaer B, Opstal TS, Cornel JH, Knol RJ, Schaap J, Aarts RAHM, Tutein Nolthenius AMFA, Nidorf SM, Velthuis BK, Dey D, Mosterd A

pubmed logopapers · May 19, 2025
Low-dose colchicine (0.5 mg once daily) reduces the risk of major cardiovascular events in coronary disease, but its mechanism of action is not yet fully understood. We investigated whether low-dose colchicine is associated with changes in pericoronary inflammation and plaque composition in patients with chronic coronary disease. We performed a cross-sectional, nationwide subanalysis of the Low-Dose Colchicine 2 Trial (LoDoCo2, n=5522). Coronary CT angiography studies were performed in 151 participants randomised to colchicine or placebo after a median treatment duration of 28.2 months. Pericoronary adipose tissue (PCAT) attenuation measurements around proximal coronary artery segments and quantitative plaque analysis for the entire coronary tree were performed using artificial intelligence-enabled plaque analysis software. Median PCAT attenuation was not significantly different between the two groups (-79.5 Hounsfield units (HU) for colchicine versus -78.7 HU for placebo, p=0.236). Participants assigned to colchicine had a higher volume (169.6 mm<sup>3</sup> vs 113.1 mm<sup>3</sup>, p=0.041) and burden (9.6% vs 7.0%, p=0.035) of calcified plaque, and a higher volume of dense calcified plaque (192.8 mm<sup>3</sup> vs 144.3 mm<sup>3</sup>, p=0.048) compared with placebo, independent of statin therapy. Colchicine treatment was associated with a lower burden of low-attenuation plaque in participants on a low-intensity statin, but not in those on a high-intensity statin (p<sub>interaction</sub>=0.037). Pericoronary inflammation did not differ between participants who received low-dose colchicine and those who received placebo. Low-dose colchicine was associated with a higher volume of calcified plaque, particularly dense calcified plaque, which is considered a feature of plaque stability.
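
As a loose illustration of the imaging endpoint (not the AI-enabled plaque software used in the trial), PCAT attenuation is commonly summarized as the median Hounsfield-unit value of adipose-range voxels within a pericoronary mask, after which the two treatment arms can be compared non-parametrically; the HU window, arrays, and group sizes below are assumptions.

```python
import numpy as np
from scipy import stats

def pcat_attenuation(ct_hu: np.ndarray, pericoronary_mask: np.ndarray) -> float:
    """Median HU of adipose-range voxels (about -190 to -30 HU) inside the mask."""
    voxels = ct_hu[pericoronary_mask & (ct_hu >= -190) & (ct_hu <= -30)]
    return float(np.median(voxels))

rng = np.random.default_rng(0)
ct = rng.normal(-60, 60, size=(32, 32, 32))       # toy CT volume in HU
mask = np.zeros_like(ct, dtype=bool)
mask[10:20, 10:20, 10:20] = True                  # toy pericoronary mask
print("PCAT HU:", pcat_attenuation(ct, mask))

# Compare per-participant PCAT values between the two arms (placeholder values)
colchicine = rng.normal(-79.5, 5, 75)
placebo = rng.normal(-78.7, 5, 76)
print(stats.mannwhitneyu(colchicine, placebo))
```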

Accuracy of segment anything model for classification of vascular stenosis in digital subtraction angiography.

Navasardyan V, Katz M, Goertz L, Zohranyan V, Navasardyan H, Shahzadi I, Kröger JR, Borggrefe J

pubmed logopapers · May 19, 2025
This retrospective study evaluates the diagnostic performance of an optimized comprehensive multi-stage framework based on the Segment Anything Model (SAM), which we named Dr-SAM, for detecting and grading vascular stenosis in the abdominal aorta and iliac arteries using digital subtraction angiography (DSA). A total of 100 DSA examinations were conducted on 100 patients. The infrarenal abdominal aorta (AAI), common iliac arteries (CIA), and external iliac arteries (EIA) were independently evaluated by two experienced radiologists using a standardized 5-point grading scale. Dr-SAM analyzed the same DSA images, and its assessments were compared with the average stenosis grading provided by the radiologists. Diagnostic accuracy was evaluated using Cohen's kappa, specificity, sensitivity, and Wilcoxon signed-rank tests. Interobserver agreement between radiologists, which established the reference standard, was strong (Cohen's kappa: CIA right = 0.95, CIA left = 0.94, EIA right = 0.98, EIA left = 0.98, AAI = 0.79). Dr-SAM showed high agreement with radiologist consensus for CIA (κ = 0.93 right, 0.91 left), moderate agreement for EIA (κ = 0.79 right, 0.76 left), and fair agreement for AAI (κ = 0.70). Dr-SAM demonstrated excellent specificity (up to 1.0) and robust sensitivity (0.67-0.83). Wilcoxon tests revealed no significant differences between Dr-SAM and radiologist grading (p > 0.05). Dr-SAM proved to be an accurate and efficient tool for vascular assessment, with the potential to streamline diagnostic workflows and reduce variability in stenosis grading. Its ability to deliver rapid and consistent evaluations may contribute to earlier detection of disease and the optimization of treatment strategies. Further studies are needed to confirm these findings in prospective settings and to enhance its capabilities, particularly in the detection of occlusions.
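
A minimal sketch, with hypothetical grade vectors, of the agreement statistics named in the abstract: Cohen's kappa between model and radiologist-consensus stenosis grades, and a Wilcoxon signed-rank test for systematic grading differences.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
# Hypothetical consensus grades on the 5-point scale for a set of vessel segments
radiologist_consensus = rng.integers(0, 5, size=40)
# Hypothetical model grades that deviate by at most one grade from consensus
dr_sam = np.clip(radiologist_consensus + rng.integers(-1, 2, size=40), 0, 4)

print("Cohen's kappa:", cohen_kappa_score(radiologist_consensus, dr_sam))
print(wilcoxon(radiologist_consensus, dr_sam))  # p > 0.05 would suggest no systematic shift
```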

An overview of artificial intelligence and machine learning in shoulder surgery.

Cho SH, Kim YS

pubmed logopapers · May 19, 2025
Machine learning (ML), a subset of artificial intelligence (AI), uses advanced algorithms to learn patterns from data, enabling accurate predictions and decision-making without explicit programming. In orthopedic surgery, ML is transforming clinical practice, particularly in shoulder arthroplasty and the management of rotator cuff tears (RCTs). This review explores the fundamental paradigms of ML, including supervised, unsupervised, and reinforcement learning, alongside key algorithms such as XGBoost, neural networks, and generative adversarial networks. In shoulder arthroplasty, ML accurately predicts postoperative outcomes, complications, and implant selection, facilitating personalized surgical planning and cost optimization. Predictive models, including ensemble learning methods, achieve over 90% accuracy in forecasting complications, while neural networks enhance surgical precision through AI-assisted navigation. In RCT treatment, ML enhances diagnostic accuracy using deep learning models on magnetic resonance imaging and ultrasound, achieving area under the curve values exceeding 0.90. ML models also predict tear reparability with 85% accuracy, as well as postoperative functional outcomes, including range of motion and patient-reported outcomes. Despite remarkable advancements, challenges such as data variability, model interpretability, and integration into clinical workflows persist. Future directions involve federated learning for robust model generalization and explainable AI to enhance transparency. ML continues to revolutionize orthopedic care by providing data-driven, personalized treatment strategies and optimizing surgical outcomes.

Deep learning feature-based model for predicting lymphovascular invasion in urothelial carcinoma of bladder using CT images.

Xiao B, Lv Y, Peng C, Wei Z, Xv Q, Lv F, Jiang Q, Liu H, Li F, Xv Y, He Q, Xiao M

pubmed logopapers · May 18, 2025
Lymphovascular invasion significantly impacts the prognosis of urothelial carcinoma of the bladder. Traditional lymphovascular invasion detection methods are time-consuming and costly. This study aims to develop a deep learning-based model to preoperatively predict lymphovascular invasion status in urothelial carcinoma of the bladder using CT images. Data and CT images of 577 patients across four medical centers were retrospectively collected. The largest tumor slices from the transverse, coronal, and sagittal planes were selected and used to train CNN models (InceptionV3, DenseNet121, ResNet18, ResNet34, ResNet50, and VGG11). Deep learning features were extracted and visualized using Grad-CAM. Principal component analysis reduced the features to 64. Using the extracted features, Decision Tree, XGBoost, and LightGBM models were trained with 5-fold cross-validation and ensembled in a stacking model. Clinical risk factors were identified through logistic regression analyses and combined with DL scores to enhance lymphovascular invasion prediction accuracy. The ResNet50-based model achieved an AUC of 0.818 in the validation set and 0.708 in the testing set. The combined model showed an AUC of 0.794 in the validation set and 0.767 in the testing set, demonstrating robust performance across diverse data. We developed a robust radiomics model based on deep learning features from CT images to preoperatively predict lymphovascular invasion status in urothelial carcinoma of the bladder. This model offers a non-invasive, cost-effective tool to assist clinicians in personalized treatment planning. We developed a deep learning feature-based stacking model to predict lymphovascular invasion in patients with urothelial carcinoma of the bladder using CT. The largest cross-sections from three planes of the CT image are used to train the CNN model. Comparisons were made across six CNN networks, including ResNet50.
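
A minimal sketch, under assumed data shapes, of the pipeline the abstract outlines: ResNet-50 features from tumor slices, PCA reduction, and a stacked ensemble of tree-based learners. The xgboost and lightgbm packages must be installed; the logistic-regression meta-learner, toy sample size, and reduced PCA dimensionality (32 rather than 64, limited by the toy sample) are assumptions.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.decomposition import PCA
from sklearn.ensemble import StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

# ResNet-50 backbone as a 2048-d feature extractor (downloads ImageNet weights)
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

with torch.no_grad():
    slices = torch.randn(60, 3, 224, 224)          # placeholder largest-tumor slices
    feats = backbone(slices).numpy()

X = PCA(n_components=32).fit_transform(feats)      # the study reduced to 64 components
y = np.array([0, 1] * 30)                          # placeholder LVI labels

stack = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier()),
                ("xgb", XGBClassifier(eval_metric="logloss")),
                ("lgbm", LGBMClassifier())],
    final_estimator=LogisticRegression(),
    cv=5)
stack.fit(X, y)
print("stacked training accuracy:", stack.score(X, y))
```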

ChatGPT-4-Driven Liver Ultrasound Radiomics Analysis: Advantages and Drawbacks Compared to Traditional Techniques.

Sultan L, Venkatakrishna SSB, Anupindi S, Andronikou S, Acord M, Otero H, Darge K, Sehgal C, Holmes J

pubmed logopapers · May 18, 2025
Artificial intelligence (AI) is transforming medical imaging, with large language models such as ChatGPT-4 emerging as potential tools for automated image interpretation. While AI-driven radiomics has shown promise in diagnostic imaging, the efficacy of ChatGPT-4 in liver ultrasound analysis remains largely unexamined. This study evaluates the capability of ChatGPT-4 in liver ultrasound radiomics, specifically its ability to differentiate fibrosis, steatosis, and normal liver tissue, compared to conventional image analysis software. Seventy grayscale ultrasound images from a preclinical liver disease model, including fibrosis (n=31), fatty liver (n=18), and normal liver (n=21), were analyzed. ChatGPT-4 extracted texture features, which were compared to those obtained using Interactive Data Language (IDL), a traditional image analysis software. One-way ANOVA was used to identify statistically significant features differentiating liver conditions, and logistic regression models were employed to assess diagnostic performance. ChatGPT-4 extracted nine key textural features (echo intensity, heterogeneity, skewness, kurtosis, contrast, homogeneity, dissimilarity, angular second moment, and entropy), all of which differed significantly across liver conditions (p < 0.05). Among individual features, echo intensity achieved the highest F1-score (0.85). When the features were combined, ChatGPT-4 attained 76% accuracy and 83% sensitivity in classifying liver disease. ROC analysis demonstrated strong discriminatory performance, with AUC values of 0.75 for fibrosis, 0.87 for normal liver, and 0.97 for steatosis. Compared to IDL, ChatGPT-4 exhibited slightly lower sensitivity (0.83 vs. 0.89) but showed moderate correlation (R = 0.68, p < 0.0001) with IDL-derived features. However, it significantly outperformed IDL in processing efficiency, reducing analysis time by 40%, highlighting its potential for high-throughput radiomic analysis. Despite slightly lower sensitivity than IDL, ChatGPT-4 demonstrated high feasibility for ultrasound radiomics, offering faster processing, high-throughput analysis, and automated multi-image evaluation. These findings support its potential integration into AI-driven imaging workflows, with further refinements needed to enhance feature reproducibility and diagnostic accuracy.
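
For context, the nine texture features listed above are standard first-order and gray-level co-occurrence matrix (GLCM) descriptors; the sketch below computes them with scikit-image and SciPy on a placeholder region of interest. It stands in for neither ChatGPT-4's extraction nor the IDL pipeline, and the ROI, quantization, and GLCM offsets are assumptions.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops

roi = (np.random.rand(128, 128) * 255).astype(np.uint8)   # placeholder liver ultrasound ROI

glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
features = {
    "echo_intensity": roi.mean(),                          # first-order statistics
    "heterogeneity": roi.std(),
    "skewness": skew(roi.ravel()),
    "kurtosis": kurtosis(roi.ravel()),
    "contrast": graycoprops(glcm, "contrast")[0, 0],       # GLCM statistics
    "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
    "dissimilarity": graycoprops(glcm, "dissimilarity")[0, 0],
    "angular_second_moment": graycoprops(glcm, "ASM")[0, 0],
    "entropy": float(-np.sum(glcm * np.log2(glcm + 1e-12))),
}
print(features)
```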

Machine Learning-Based Dose Prediction in [<sup>177</sup>Lu]Lu-PSMA-617 Therapy by Integrating Biomarkers and Radiomic Features from [<sup>68</sup>Ga]Ga-PSMA-11 PET/CT.

Yazdani E, Sadeghi M, Karamzade-Ziarati N, Jabari P, Amini P, Vosoughi H, Akbari MS, Asadi M, Kheradpisheh SR, Geramifar P

pubmed logopapers · May 18, 2025
The study aimed to develop machine learning (ML) models for pretherapy prediction of absorbed doses (ADs) in kidneys and tumoral lesions for metastatic castration-resistant prostate cancer (mCRPC) patients undergoing [<sup>177</sup>Lu]Lu-PSMA-617 (Lu-PSMA) radioligand therapy (RLT). By leveraging radiomic features (RFs) from [<sup>68</sup>Ga]Ga-PSMA-11 (Ga-PSMA) PET/CT scans and clinical biomarkers (CBs), the approach has the potential to improve patient selection and tailor dosimetry-guided therapy. Twenty patients with mCRPC underwent Ga-PSMA PET/CT scans prior to the administration of an initial 6.8±0.4 GBq dose of the first Lu-PSMA RLT cycle. Post-therapy dosimetry involved sequential scintigraphy imaging at approximately 4, 48, and 72 h, along with a SPECT/CT image at around 48 h, to calculate time-integrated activity (TIA) coefficients. Monte Carlo (MC) simulations, leveraging the Geant4 application for tomographic emission (GATE) toolkit, were employed to derive ADs. The ML models were trained using pretherapy RFs from Ga-PSMA PET/CT and CBs as input, while the ADs in kidneys and lesions (n=130), determined using MC simulations from scintigraphy and SPECT imaging, served as the ground truth. Model performance was assessed through leave-one-out cross-validation (LOOCV), with evaluation metrics including R² and root mean squared error (RMSE). The mean delivered ADs were 0.88 ± 0.34 Gy/GBq for kidneys and 2.36 ± 2.10 Gy/GBq for lesions. Combining CBs with the best RFs produced optimal results: the extra trees regressor (ETR) was the best ML model for predicting kidney ADs, achieving an RMSE of 0.11 Gy/GBq and an R² of 0.87. For lesion ADs, the gradient boosting regressor (GBR) performed best, with an RMSE of 1.04 Gy/GBq and an R² of 0.77. Integrating pretherapy Ga-PSMA PET/CT RFs with CBs shows potential in predicting ADs in RLT. To personalize treatment planning and enhance patient stratification, it is crucial to validate these preliminary findings with a larger sample size and an independent cohort.
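
An illustrative sketch, with placeholder data, of the evaluation scheme the abstract describes: an extra-trees regressor predicting kidney absorbed dose from selected PET/CT radiomic features plus clinical biomarkers, scored by leave-one-out cross-validation with RMSE and R². Feature counts, the synthetic dose relationship, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 12))                       # selected RFs + clinical biomarkers (placeholder)
y = 0.9 + 0.3 * X[:, 0] + rng.normal(0, 0.05, 20)   # kidney absorbed dose in Gy/GBq (synthetic)

# Leave-one-out cross-validation, as used for the 20-patient cohort
pred = cross_val_predict(ExtraTreesRegressor(random_state=0), X, y, cv=LeaveOneOut())
print("RMSE:", np.sqrt(mean_squared_error(y, pred)))
print("R2:", r2_score(y, pred))
```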
