
Automated Scoliosis Cobb Angle Classification in Biplanar Radiograph Imaging With Explainable Machine Learning Models.

Yu J, Lahoti YS, McCandless KC, Namiri NK, Miyasaka MS, Ahmed H, Song J, Corvi JJ, Berman DC, Cho SK, Kim JS

PubMed · Jul 1, 2025
Retrospective cohort study. To quantify the pathology of the spine in patients with scoliosis through one-dimensional feature analysis. Biplanar radiograph (EOS) imaging is a low-dose technology offering high-resolution spinal curvature measurement, crucial for assessing scoliosis severity and guiding treatment decisions. Machine learning (ML) algorithms, utilizing one-dimensional image features, can enable automated Cobb angle classification, improving accuracy and efficiency in scoliosis evaluation while reducing the need for manual measurements, thus supporting clinical decision-making. This study used 816 annotated AP EOS spinal images, each with a spine segmentation mask and a 10th-degree polynomial representing curvature. Engineered features included the first and second derivatives, Fourier transform, and curve energy, normalized for robustness. XGBoost selected the top 32 features. The models classified scoliosis into multiple groups based on the degree of curvature, measured by Cobb angle. To address class imbalance, stratified sampling, undersampling, and oversampling techniques were used, with 10-fold stratified K-fold cross-validation for generalization. An automatic grid search was used for hyperparameter optimization, with K-fold cross-validation (K=3). The top-performing model was a Random Forest, achieving an ROC AUC of 91.8%, accuracy of 86.1%, precision of 86.0%, recall of 86.0%, and an F1 score of 85.1%. Of the three techniques used to address class imbalance, stratified sampling produced the best out-of-sample results. SHAP values were generated for the top 20 features, including spine curve length and linear regression error, with the most predictive features ranked at the top, enhancing model explainability. Feature engineering with classical ML methods offers an effective approach for classifying scoliosis severity based on Cobb angle ranges. The high interpretability of features in representing spinal pathology, along with the ease of use of classical ML techniques, makes this an attractive solution for developing automated tools to manage complex spinal measurements.
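As a rough sketch of the protocol this abstract describes (XGBoost feature ranking, top-32 selection, Random Forest with stratified 10-fold cross-validation), something like the following could be assembled with scikit-learn and xgboost; the feature matrix, labels, and hyperparameters below are placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.random((816, 120))            # placeholder engineered curve features
y = rng.integers(0, 3, 816)           # placeholder Cobb-angle severity classes

# Rank features with XGBoost and keep the top 32, as the abstract describes.
ranker = XGBClassifier(n_estimators=200, eval_metric="mlogloss").fit(X, y)
top32 = np.argsort(ranker.feature_importances_)[::-1][:32]

# Random Forest evaluated with stratified 10-fold cross-validation.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(clf, X[:, top32], y, cv=cv, scoring="roc_auc_ovr").mean()
print(f"mean ROC AUC: {auc:.3f}")
```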

Measuring kidney stone volume - practical considerations and current evidence from the EAU endourology section.

Grossmann NC, Panthier F, Afferi L, Kallidonis P, Somani BK

PubMed · Jul 1, 2025
This narrative review provides an overview of the use, differences, and clinical impact of current methods for kidney stone volume assessment. The different approaches to volume measurement are based on noncontrast computed tomography (NCCT). While volume measurement using formulas is sufficient for smaller stones, it tends to overestimate volume for larger or irregularly shaped calculi. In contrast, software-based segmentation significantly improves accuracy and reproducibility, and artificial intelligence-based volumetry additionally shows excellent agreement with reference standards while reducing observer variability and measurement time. Moreover, specific CT preparation protocols may further enhance image quality and thus improve measurement accuracy. Clinically, stone volume has proven to be a superior predictor of stone-related events during follow-up, spontaneous stone passage under conservative management, and stone-free rates after shockwave lithotripsy (SWL) and ureteroscopy (URS) compared to linear measurements. Although manual measurement remains practical, its accuracy diminishes for complex or larger stones. Software-based segmentation and volumetry offer higher precision and efficiency but require established standards and broader access to dedicated software for routine clinical use.
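To make the contrast between formula-based and segmentation-based volumetry concrete, a minimal sketch follows; the scalene-ellipsoid formula (pi/6 * l * w * d) is the standard formula-based estimate, while the voxel spacing in the segmentation variant is an assumed example value.

```python
import numpy as np

def ellipsoid_volume_mm3(length_mm, width_mm, depth_mm):
    """Scalene-ellipsoid formula: adequate for small stones, but tends to
    overestimate volume for large or irregularly shaped calculi."""
    return np.pi / 6.0 * length_mm * width_mm * depth_mm

def segmentation_volume_mm3(mask, spacing_mm=(0.7, 0.7, 1.0)):
    """Voxel counting on a binary NCCT segmentation mask (assumed spacing)."""
    return mask.sum() * float(np.prod(spacing_mm))

# Example: a 10 x 8 x 7 mm stone under the formula-based estimate.
print(f"{ellipsoid_volume_mm3(10, 8, 7):.0f} mm^3")
```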

Prediction of adverse pathology in prostate cancer using a multimodal deep learning approach based on [<sup>18</sup>F]PSMA-1007 PET/CT and multiparametric MRI.

Lin H, Yao F, Yi X, Yuan Y, Xu J, Chen L, Wang H, Zhuang Y, Lin Q, Xue Y, Yang Y, Pan Z

PubMed · Jul 1, 2025
Accurate prediction of adverse pathology (AP) in prostate cancer (PCa) patients is crucial for formulating effective treatment strategies. This study aims to develop and evaluate a multimodal deep learning model based on [<sup>18</sup>F]PSMA-1007 PET/CT and multiparametric MRI (mpMRI) to predict the presence of AP, and to investigate whether a model integrating [<sup>18</sup>F]PSMA-1007 PET/CT and mpMRI outperforms the individual PET/CT or mpMRI models in predicting AP. 341 PCa patients who underwent radical prostatectomy (RP) with mpMRI and PET/CT scans were retrospectively analyzed. We generated a deep learning signature from mpMRI and PET/CT with a multimodal deep learning model (MPC) based on convolutional neural networks and a transformer, which was subsequently combined with clinical characteristics to construct an integrated model (MPCC). These models were compared with clinical models and single mpMRI or PET/CT models. The MPCC model showed the best performance in predicting AP (AUC, 0.955 [95% CI: 0.932-0.975]), higher than the MPC model (AUC, 0.930 [95% CI: 0.901-0.955]). The performance of the MPC model was better than that of the single PET/CT (AUC, 0.813 [95% CI: 0.780-0.845]) or mpMRI (AUC, 0.865 [95% CI: 0.829-0.901]) models. Additionally, the MPCC model was also effective in predicting single adverse pathological features. The deep learning model that integrates mpMRI and [<sup>18</sup>F]PSMA-1007 PET/CT enhances the predictive capability for the presence of AP in PCa patients. This improvement aids physicians in making informed preoperative decisions, ultimately enhancing patient prognosis.
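The abstract does not detail the fusion architecture, so the sketch below only illustrates the general pattern it names: per-modality CNN encoders, a transformer layer fusing the two modality tokens, and clinical variables concatenated before the classification head. All layer sizes are placeholders, not the published MPCC design.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Toy late-fusion model: CNN encoders per modality + transformer fusion."""
    def __init__(self, img_feat=128, n_clinical=8):
        super().__init__()
        def encoder():  # tiny 3D CNN standing in for a real backbone
            return nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                 nn.Linear(16, img_feat))
        self.pet_enc, self.mri_enc = encoder(), encoder()
        # Transformer layer fuses the two modality tokens.
        self.fuse = nn.TransformerEncoderLayer(d_model=img_feat, nhead=4,
                                               batch_first=True)
        self.head = nn.Linear(2 * img_feat + n_clinical, 1)

    def forward(self, pet, mri, clinical):
        tokens = torch.stack([self.pet_enc(pet), self.mri_enc(mri)], dim=1)
        fused = self.fuse(tokens).flatten(1)
        return torch.sigmoid(self.head(torch.cat([fused, clinical], dim=1)))

model = FusionClassifier()
prob = model(torch.randn(2, 1, 16, 16, 16),   # PET volume (placeholder size)
             torch.randn(2, 1, 16, 16, 16),   # mpMRI volume
             torch.randn(2, 8))               # clinical characteristics
```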

Habitat-Based Radiomics for Revealing Tumor Heterogeneity and Predicting Residual Cancer Burden Classification in Breast Cancer.

Li ZY, Wu SN, Lin P, Jiang MC, Chen C, Lin WJ, Xue ES, Liang RX, Lin ZH

PubMed · Jul 1, 2025
To investigate the feasibility of characterizing tumor heterogeneity in breast cancer ultrasound images using habitat analysis technology and to establish a radiomics machine learning model for predicting response to neoadjuvant chemotherapy (NAC). Ultrasound images from patients with pathologically confirmed breast cancer who underwent neoadjuvant therapy at our institution between July 2021 and December 2023 were retrospectively reviewed. Initially, the region of interest was delineated and segmented into multiple habitat areas using local feature delineation and cluster analysis techniques. Subsequently, radiomics features were extracted from each habitat area to construct 3 machine learning models. Finally, the models' efficacy was assessed through receiver operating characteristic (ROC) curve analysis, decision curve analysis (DCA), and calibration curve evaluation. A total of 945 patients were enrolled, with 333 demonstrating a favorable response to NAC and 612 exhibiting an unfavorable response. Through the application of habitat analysis techniques, 3 distinct habitat regions within the tumor were identified. Subsequently, a predictive model was developed by incorporating 19 radiomics features, and all 3 machine learning models demonstrated excellent performance in predicting treatment outcomes. Notably, extreme gradient boosting (XGBoost) exhibited superior performance with an area under the curve (AUC) of 0.872 in the training cohort and 0.740 in the testing cohort. Additionally, DCA and calibration curves were employed for further evaluation. The habitat analysis technique effectively distinguishes distinct biological subregions of breast cancer, while the established radiomics machine learning model predicts NAC response by forecasting residual cancer burden (RCB) classification.
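A minimal sketch of the habitat idea, assuming per-pixel intensity and local-texture features clustered with k-means inside the delineated ROI; the specific feature pair used here is illustrative, not the paper's exact recipe.

```python
import numpy as np
from scipy.ndimage import generic_filter
from sklearn.cluster import KMeans

def habitat_map(image, roi_mask, k=3):
    """Partition an ROI into k habitat subregions by clustering local features."""
    local_std = generic_filter(image.astype(float), np.std, size=5)
    feats = np.stack([image[roi_mask], local_std[roi_mask]], axis=1)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    out = np.zeros(image.shape, dtype=int)
    out[roi_mask] = labels + 1      # 0 = background, 1..k = habitats
    return out

# Example on a placeholder ultrasound frame with a square ROI.
img = np.random.rand(64, 64)
roi = np.zeros((64, 64), dtype=bool); roi[16:48, 16:48] = True
habitats = habitat_map(img, roi, k=3)
```

Radiomics features would then be extracted per habitat label rather than over the whole tumor, which is what lets the downstream classifiers see intratumoral heterogeneity.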

CT-Based Machine Learning Radiomics Analysis to Diagnose Dysthyroid Optic Neuropathy.

Ma L, Jiang X, Yang X, Wang M, Hou Z, Zhang J, Li D

PubMed · Jul 1, 2025
To develop CT-based machine learning radiomics models for the diagnosis of dysthyroid optic neuropathy (DON). This retrospective study included 57 patients (114 orbits) diagnosed with thyroid-associated ophthalmopathy (TAO) at Beijing Tongren Hospital between December 2019 and June 2023. CT scans, medical history, examination results, and clinical data of the participants were collected. DON was diagnosed based on clinical manifestations and examinations. The DON orbits and non-DON orbits were then divided into a training set and a test set at a ratio of approximately 7:3. The 3D Slicer software was used to identify the volumes of interest (VOI). Radiomics features were extracted using PyRadiomics and selected by t-test and the least absolute shrinkage and selection operator (LASSO) regression algorithm with 10-fold cross-validation. Machine learning models, including a random forest (RF) model, a support vector machine (SVM) model, and a logistic regression (LR) model, were built and validated by receiver operating characteristic (ROC) curves, areas under the curve (AUC), and confusion matrix-related data. The net benefit of the models was shown by decision curve analysis (DCA). We extracted 107 features from the imaging data, representing various image information of the optic nerve and surrounding orbital tissues. Using the LASSO method, we identified the five most informative features. The AUCs ranged from 0.77 to 0.80 in the training set, and the AUCs of the RF, SVM, and LR models based on these features were 0.86, 0.80, and 0.83 in the test set, respectively. The DeLong test showed no significant difference between the three models (RF model vs SVM model: <i>p</i> = .92; RF model vs LR model: <i>p</i> = .94; SVM model vs LR model: <i>p</i> = .98), and the models showed optimal clinical efficacy in DCA. The CT-based machine learning radiomics analysis exhibited excellent ability to diagnose DON and may enhance diagnostic convenience.
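A hedged sketch of the selection-and-modelling pipeline described above (LASSO with 10-fold cross-validation feeding RF, SVM, and LR classifiers); the feature matrix and labels are random placeholders standing in for the 107 PyRadiomics features and the DON/non-DON orbit labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.random((114, 107)))  # 114 orbits
y = rng.integers(0, 2, 114)                                 # DON vs non-DON

# LASSO with 10-fold CV: features with non-zero coefficients survive.
lasso = LassoCV(cv=10, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
if selected.size == 0:                       # random placeholder data may zero all
    selected = np.argsort(np.abs(lasso.coef_))[-5:]

models = {"RF": RandomForestClassifier(random_state=0),
          "SVM": SVC(probability=True, random_state=0),
          "LR": LogisticRegression(max_iter=1000)}
for name, m in models.items():
    auc = cross_val_score(m, X[:, selected], y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.2f}")
```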

Diffusion-driven multi-modality medical image fusion.

Qu J, Huang D, Shi Y, Liu J, Tang W

PubMed · Jul 1, 2025
Multi-modality medical image fusion (MMIF) technology utilizes the complementarity of different modalities to provide more comprehensive diagnostic insights for clinical practice. Existing deep learning-based methods often focus on extracting the primary information from individual modalities while ignoring the correlation of information distribution across modalities, which leads to insufficient fusion of image details and color information. To address this problem, a diffusion-driven MMIF method is proposed to leverage the information distribution relationship among multi-modality images in the latent space. To better preserve the complementary information from different modalities, a local and global network (LAGN) is introduced. Additionally, a loss strategy is designed to establish robust constraints among diffusion-generated images, original images, and fused images. This strategy supervises the training process and prevents information loss in fused images. The experimental results demonstrate that the proposed method surpasses state-of-the-art image fusion methods in terms of unsupervised metrics on three datasets: MRI/CT, MRI/PET, and MRI/SPECT. The proposed method successfully captures rich details and color information. Furthermore, 16 doctors and medical students were invited to evaluate the effectiveness of our method in assisting clinical diagnosis and treatment.
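The paper's loss strategy is not reproduced here; purely as an illustration of how a fusion loss can constrain a fused image against both source modalities, a common intensity-plus-gradient formulation might look like this.

```python
import torch
import torch.nn.functional as F

def fusion_loss(fused, src_a, src_b, w_grad=0.5):
    """Illustrative fusion loss: keep the brighter intensity and the stronger
    edge from either source modality (not the paper's exact formulation)."""
    def grads(x):  # finite-difference image gradients, x: (B, C, H, W)
        return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]

    intensity = F.l1_loss(fused, torch.maximum(src_a, src_b))
    fgx, fgy = grads(fused)
    agx, agy = grads(src_a)
    bgx, bgy = grads(src_b)
    # Target gradient: whichever modality has the larger edge magnitude.
    tgx = torch.where(agx.abs() > bgx.abs(), agx, bgx)
    tgy = torch.where(agy.abs() > bgy.abs(), agy, bgy)
    gradient = F.l1_loss(fgx, tgx) + F.l1_loss(fgy, tgy)
    return intensity + w_grad * gradient
```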

Using deep feature distances for evaluating the perceptual quality of MR image reconstructions.

Adamson PM, Desai AD, Dominic J, Varma M, Bluethgen C, Wood JP, Syed AB, Boutin RD, Stevens KJ, Vasanawala S, Pauly JM, Gunel B, Chaudhari AS

PubMed · Jul 1, 2025
Commonly used MR image quality (IQ) metrics have poor concordance with radiologist-perceived diagnostic IQ. Here, we develop and explore deep feature distances (DFDs), distances computed in a lower-dimensional feature space encoded by a convolutional neural network (CNN), as improved perceptual IQ metrics for MR image reconstruction. We further explore the impact of distribution shifts between images in the DFD CNN encoder training data and the IQ metric evaluation. We compare commonly used IQ metrics (PSNR and SSIM) to two "out-of-domain" DFDs with encoders trained on natural images, an "in-domain" DFD trained on MR images alone, and two domain-adjacent DFDs trained on large medical imaging datasets. We additionally compare these with several state-of-the-art but less commonly reported IQ metrics: visual information fidelity (VIF), the noise quality metric (NQM), and the high-frequency error norm (HFEN). IQ metric performance is assessed via correlations with five expert radiologist reader scores of perceived diagnostic IQ of various accelerated MR image reconstructions. We characterize the behavior of these IQ metrics under common distortions expected during image acquisition, including their sensitivity to acquisition noise. All DFDs and HFEN correlate more strongly with radiologist-perceived diagnostic IQ than SSIM, PSNR, and other state-of-the-art metrics, with correlations being comparable to radiologist inter-reader variability. Surprisingly, out-of-domain DFDs perform comparably to in-domain and domain-adjacent DFDs. A suite of IQ metrics, including DFDs and HFEN, should be used alongside commonly reported IQ metrics for a more holistic evaluation of MR image reconstruction perceptual quality. We also observe that general vision encoders are capable of assessing visual IQ even for MR images.
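A deep feature distance of this kind can be sketched in a few lines: embed both images with a pretrained encoder and measure the distance between normalized feature maps. The VGG16 encoder below is one plausible "out-of-domain" (ImageNet-trained) choice, not necessarily the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# ImageNet-pretrained VGG16 truncated to an intermediate feature block.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()

@torch.no_grad()
def deep_feature_distance(recon, reference):
    """Distance between CNN feature maps of a reconstruction and its reference.
    Inputs: (B, 1, H, W) magnitude images scaled to roughly [0, 1]."""
    f1 = vgg(recon.repeat(1, 3, 1, 1))      # replicate grayscale to 3 channels
    f2 = vgg(reference.repeat(1, 3, 1, 1))
    f1 = F.normalize(f1, dim=1)             # unit-normalize channels, as in
    f2 = F.normalize(f2, dim=1)             # LPIPS-style perceptual metrics
    return (f1 - f2).pow(2).mean().item()
```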

Robust and generalizable artificial intelligence for multi-organ segmentation in ultra-low-dose total-body PET imaging: a multi-center and cross-tracer study.

Wang H, Qiao X, Ding W, Chen G, Miao Y, Guo R, Zhu X, Cheng Z, Xu J, Li B, Huang Q

PubMed · Jul 1, 2025
Positron Emission Tomography (PET) is a powerful molecular imaging tool that visualizes radiotracer distribution to reveal physiological processes. Recent advances in total-body PET have enabled low-dose, CT-free imaging; however, accurate organ segmentation using PET-only data remains challenging. This study develops and validates a deep learning model for multi-organ PET segmentation across varied imaging conditions and tracers, addressing critical needs for fully PET-based quantitative analysis. This retrospective study employed a 3D deep learning-based model for automated multi-organ segmentation on PET images acquired under diverse conditions, including low-dose and non-attenuation-corrected scans. Using a dataset of 798 patients from multiple centers with varied tracers, model robustness and generalizability were evaluated via multi-center and cross-tracer tests. Ground-truth labels for 23 organs were generated from CT images, and segmentation accuracy was assessed using the Dice similarity coefficient (DSC). In the multi-center dataset from four different institutions, our model achieved average DSC values of 0.834, 0.825, 0.819, and 0.816 across varying dose reduction factors and correction conditions for FDG PET images. In the cross-tracer dataset, the model reached average DSC values of 0.737, 0.573, 0.830, 0.661, and 0.708 for DOTATATE, FAPI, FDG, Grazytracer, and PSMA, respectively. The proposed model demonstrated effective, fully PET-based multi-organ segmentation across a range of imaging conditions, centers, and tracers, achieving high robustness and generalizability. These findings underscore the model's potential to enhance clinical diagnostic workflows by supporting ultra-low dose PET imaging. Not applicable. This is a retrospective study based on collected data, which has been approved by the Research Ethics Committee of Ruijin Hospital affiliated to Shanghai Jiao Tong University School of Medicine.
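For reference, the evaluation metric used throughout the study is simple to compute; a minimal sketch of the Dice similarity coefficient between a predicted organ mask and its CT-derived ground-truth label:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```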

Automated quantification of brain PET in PET/CT using deep learning-based CT-to-MR translation: a feasibility study.

Kim D, Choo K, Lee S, Kang S, Yun M, Yang J

PubMed · Jul 1, 2025
Quantitative analysis of PET images in brain PET/CT relies on MRI-derived regions of interest (ROIs). However, pairs of PET/CT and MR images are not always available, and their alignment is challenging if their acquisition times differ considerably. To address these problems, this study proposes a deep learning framework for translating the CT of PET/CT to synthetic MR images (MR<sub>SYN</sub>) and performing automated quantitative regional analysis using MR<sub>SYN</sub>-derived segmentation. In this retrospective study, 139 subjects who underwent brain [<sup>18</sup>F]FBB PET/CT and T1-weighted MRI were included. A U-Net-like model was trained to translate CT images to MR<sub>SYN</sub>; subsequently, a separate model was trained to segment MR<sub>SYN</sub> into 95 regions. Regional and composite standardised uptake value ratios (SUVr) were calculated in [<sup>18</sup>F]FBB PET images using the acquired ROIs. For evaluation of MR<sub>SYN</sub>, quantitative measurements including the structural similarity index measure (SSIM) were employed, while for MR<sub>SYN</sub>-based segmentation evaluation, the Dice similarity coefficient (DSC) was calculated. The Wilcoxon signed-rank test was performed for SUVrs computed using MR<sub>SYN</sub> and ground-truth MR (MR<sub>GT</sub>). Compared to MR<sub>GT</sub>, the mean SSIM of MR<sub>SYN</sub> was 0.974 ± 0.005. The MR<sub>SYN</sub>-based segmentation achieved a mean DSC of 0.733 across 95 regions. No statistically significant difference (P > 0.05) in SUVr was found between ROIs from MR<sub>SYN</sub> and those from MR<sub>GT</sub>, except for the precuneus. We demonstrated a deep learning framework for automated regional brain analysis in PET/CT with MR<sub>SYN</sub>. Our proposed framework can benefit patients who have difficulty undergoing an MRI scan.
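The downstream SUVr computation is straightforward once the MR<sub>SYN</sub>-derived segmentation is available; a minimal sketch, in which the label IDs and the choice of reference region are assumptions rather than the paper's settings:

```python
import numpy as np

def regional_suvr(pet, seg, reference_label):
    """SUVr per region: mean regional uptake normalized to a reference region
    (e.g. cerebellar cortex for amyloid PET; assumed here, not specified)."""
    ref = pet[seg == reference_label].mean()
    return {int(lab): float(pet[seg == lab].mean() / ref)
            for lab in np.unique(seg)
            if lab != 0 and lab != reference_label}   # 0 = background
```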

Integrating multi-scale information and diverse prompts in large model SAM-Med2D for accurate left ventricular ejection fraction estimation.

Wu Y, Zhao T, Hu S, Wu Q, Chen Y, Huang X, Zheng Z

PubMed · Jul 1, 2025
Left ventricular ejection fraction (LVEF) is a critical indicator of cardiac function, aiding in the assessment of heart conditions. Accurate segmentation of the left ventricle (LV) is essential for LVEF calculation. However, current methods are often limited by small datasets and exhibit poor generalization. While leveraging large models can address this issue, many fail to capture multi-scale information and place an additional burden on users by requiring them to generate prompts. To overcome these challenges, we propose LV-SAM, a model based on the large model SAM-Med2D, for accurate LV segmentation. It comprises three key components: an image encoder with a multi-scale adapter (MSAd), a multimodal prompt encoder (MPE), and a multi-scale decoder (MSD). The MSAd extracts multi-scale information at the encoder level and fine-tunes the model, while the MSD employs skip connections to effectively utilize multi-scale information at the decoder level. Additionally, we introduce an automated pipeline for generating self-extracted dense prompts and use a large language model to generate text prompts, reducing the user burden. The MPE processes these prompts, further enhancing model performance. Evaluations on the CAMUS dataset show that LV-SAM outperforms existing state-of-the-art methods in LV segmentation, achieving the lowest MAE of 5.016 in LVEF estimation.
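Once LV masks are available at end-diastole and end-systole, LVEF follows as (EDV - ESV) / EDV * 100. The sketch below uses a simplified area-times-thickness volume model in place of the clinical biplane Simpson method; pixel spacing and slice thickness are assumed example values.

```python
import numpy as np

def lv_volume_ml(mask, pixel_mm=(0.3, 0.3), thickness_mm=8.0):
    """Crude LV volume from a binary mask (assumed spacing), in millilitres."""
    return mask.sum() * pixel_mm[0] * pixel_mm[1] * thickness_mm / 1000.0

def lvef_percent(ed_mask, es_mask):
    """LVEF = (EDV - ESV) / EDV * 100 from end-diastolic/-systolic masks."""
    edv, esv = lv_volume_ml(ed_mask), lv_volume_ml(es_mask)
    return 100.0 * (edv - esv) / edv
```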