
Enhancing Spinal Cord and Canal Segmentation in Degenerative Cervical Myelopathy: The Role of Interactive Learning Models with Manual Clicks.

Han S, Oh JK, Cho W, Kim TJ, Hong N, Park SB

PubMed · Sep 29 2025
We aim to develop an interactive segmentation model that can offer accuracy and reliability for the segmentation of the irregularly shaped spinal cord and canal in degenerative cervical myelopathy (DCM) through manual clicks and model refinement. A dataset of 1444 frames from 294 magnetic resonance imaging records of DCM patients was used, and we developed two different segmentation models for comparison: auto-segmentation and interactive segmentation. The former was based on U-Net and utilized a pretrained ConvNeXt-Tiny as its encoder. For the latter, we employed an interactive segmentation model built on SimpleClick, a large model that uses a vision transformer as its backbone, together with simple fine-tuning. The segmentation performance of the two models was compared in terms of Dice score, mean intersection over union (mIoU), average precision, and Hausdorff distance. The efficiency of the interactive segmentation model was evaluated by the number of clicks required to achieve a target mIoU. Our model achieved better scores across all four evaluation metrics, showing improvements of +6.4%, +1.8%, +3.7%, and -53.0% for canal segmentation, and +11.7%, +6.0%, +18.2%, and -70.9% for cord segmentation with 15 clicks, respectively. The interactive model required 11.71 clicks on average to reach 90% mIoU for spinal canal with cord cases and 11.99 clicks to reach 80% mIoU for spinal cord cases. We found that the interactive segmentation model significantly outperformed the auto-segmentation model. By incorporating simple manual inputs, the interactive model effectively identified regions of interest, particularly in the complex and irregular shapes of the spinal cord, demonstrating both enhanced accuracy and adaptability.
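
For orientation, a minimal sketch (not the authors' code) of the overlap metrics reported here, Dice and intersection over union (the per-case quantity averaged into mIoU), could look like this in NumPy:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |A∩B| / |A∪B|; averaging per-case IoU gives mIoU."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

# Toy example: a ground-truth "cord" mask versus a slightly shifted prediction.
gt = np.zeros((64, 64), dtype=bool)
gt[20:40, 28:36] = True
pred = np.zeros_like(gt)
pred[22:42, 28:36] = True
print(f"Dice = {dice_score(pred, gt):.3f}, IoU = {iou_score(pred, gt):.3f}")
```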

Artificial Intelligence to Detect Developmental Dysplasia of the Hip: A Systematic Review.

Bhavsar S, Gowda BB, Bhavsar M, Patole S, Rao S, Rath C

PubMed · Sep 28 2025
Deep learning (DL), a branch of artificial intelligence (AI), has been applied to diagnose developmental dysplasia of the hip (DDH) on pelvic radiographs and ultrasound (US) images. This technology can potentially assist in early screening, enable timely intervention, and improve cost-effectiveness. We conducted a systematic review to evaluate the diagnostic accuracy of DL algorithms in detecting DDH. PubMed, Medline, EMBASE, EMCARE, ClinicalTrials.gov (clinical trial registry), IEEE Xplore, and Cochrane Library databases were searched in October 2024. Prospective and retrospective cohort studies that included children (< 16 years) at risk of or suspected to have DDH and reported hip US or X-ray images analysed using AI were included. The review was conducted using the guidelines of the Cochrane Collaboration Diagnostic Test Accuracy Working Group. Risk of bias was assessed using the QUADAS-2 tool. Twenty-three studies met the inclusion criteria, with 15 (n = 8315) evaluating DDH on US images and eight (n = 7091) on pelvic radiographs. The area under the curve of the included studies ranged from 0.80 to 0.99 for pelvic radiographs and from 0.90 to 0.99 for US images. Sensitivity and specificity for detecting DDH on radiographs ranged from 92.86% to 100% and from 95.65% to 99.82%, respectively. For US images, sensitivity ranged from 86.54% to 100% and specificity from 62.5% to 100%. AI demonstrated effectiveness comparable to that of physicians in detecting DDH. However, limited evaluation on external datasets restricts its generalisability. Further research incorporating diverse datasets and real-world applications is needed to assess its broader clinical impact on DDH diagnosis.
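
As a quick illustration of the accuracy measures summarized in this review, sensitivity and specificity follow directly from a 2x2 confusion matrix; the counts below are invented for demonstration only:

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: detected DDH cases / all true DDH cases."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: correctly cleared hips / all normal hips."""
    return tn / (tn + fp)

# Invented counts for a hypothetical screening cohort:
tp, fn, tn, fp = 93, 7, 478, 2
print(f"sensitivity = {sensitivity(tp, fn):.2%}, "
      f"specificity = {specificity(tn, fp):.2%}")
```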

Single-step prediction of inferior alveolar nerve injury after mandibular third molar extraction using contrastive learning and a Bayesian auto-tuned deep learning model.

Yoon K, Choi Y, Lee M, Kim J, Kim JY, Kim JW, Choi J, Park W

PubMed · Sep 27 2025
Inferior alveolar nerve (IAN) injury is a critical complication of mandibular third molar extraction. This study aimed to construct and evaluate a deep learning framework that integrates contrastive learning and Bayesian optimization to enhance predictive performance on cone-beam computed tomography (CBCT) and panoramic radiographs. A retrospective dataset of 902 panoramic radiographs and 1,500 CBCT images was used. Five deep learning architectures (MobileNetV2, ResNet101D, Vision Transformer, Twins-SVT, and SSL-ResNet50) were trained with and without contrastive learning and Bayesian optimization. Model performance was evaluated using accuracy, F1-score, and comparison with oral and maxillofacial surgeons (OMFSs). Contrastive learning significantly improved F1-scores across all models (e.g., MobileNetV2: 0.302 to 0.740; ResNet101D: 0.188 to 0.689; Vision Transformer: 0.275 to 0.704; Twins-SVT: 0.370 to 0.719; SSL-ResNet50: 0.109 to 0.576). Bayesian optimization further enhanced the F1-scores for MobileNetV2 (from 0.740 to 0.923), ResNet101D (from 0.689 to 0.857), Vision Transformer (from 0.704 to 0.871), Twins-SVT (from 0.719 to 0.857), and SSL-ResNet50 (from 0.576 to 0.875). The AI model outperformed OMFSs on CBCT cross-sectional images (F1-score: 0.923 vs. 0.667) but underperformed on panoramic radiographs (0.666 vs. 0.730). The proposed single-step deep learning approach effectively predicts IAN injury, with contrastive learning addressing data imbalance and Bayesian optimization maximizing model performance. While artificial intelligence surpasses human performance on CBCT images, panoramic radiograph analysis still benefits from expert interpretation. Future work should focus on multi-center validation and explainable artificial intelligence for broader clinical adoption.
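
The abstract does not name a tuning library; as one plausible reading of "Bayesian auto-tuned", a sketch using Optuna's TPE sampler is shown below, where the search ranges and the train_and_eval() helper are hypothetical stand-ins:

```python
import optuna

def train_and_eval(lr: float, temperature: float, batch_size: int) -> float:
    """Hypothetical helper: contrastive pretraining at `temperature`,
    fine-tuning at `lr`, returning validation F1-score."""
    return 0.0  # placeholder for the real training loop

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    temperature = trial.suggest_float("temperature", 0.05, 0.5)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])
    return train_and_eval(lr, temperature, batch_size)

# TPE is a sequential model-based (Bayesian-style) optimizer.
study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=42))
study.optimize(objective, n_trials=30)
print(study.best_params)
```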

Generation of multimodal realistic computational phantoms as a test-bed for validating deep learning-based cross-modality synthesis techniques.

Camagni F, Nakas A, Parrella G, Vai A, Molinelli S, Vitolo V, Barcellini A, Chalaszczyk A, Imparato S, Pella A, Orlandi E, Baroni G, Riboldi M, Paganelli C

PubMed · Sep 27 2025
The validation of multimodal deep learning models for medical image translation is limited by the lack of high-quality, paired datasets. We propose a novel framework that leverages computational phantoms to generate realistic CT and MRI images, providing reliable ground-truth datasets for robust validation of artificial intelligence (AI) methods that generate synthetic CT (sCT) from MRI, specifically for radiotherapy applications. Two CycleGANs (cycle-consistent generative adversarial networks) were trained to transfer the imaging style of real patients onto CT and MRI phantoms, producing synthetic data with realistic textures and continuous intensity distributions. These data were evaluated through paired assessments with the original phantoms, unpaired comparisons with patient scans, and dosimetric analysis using patient-specific radiotherapy treatment plans. Additional external validation was performed on public CT datasets to assess generalizability to unseen data. The resulting paired CT/MRI phantoms were used to validate a GAN-based model from the literature for sCT generation from abdominal MRI in particle therapy. Results showed strong anatomical consistency with the original phantoms, high histogram correlation with patient images (HistCC = 0.998 ± 0.001 for MRI, HistCC = 0.97 ± 0.04 for CT), and dosimetric accuracy comparable to real data. The novelty of this work lies in using generated phantoms as validation data for deep learning-based cross-modality synthesis techniques.
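
A histogram correlation coefficient like the HistCC reported above can be computed as the Pearson correlation between intensity histograms; the sketch below uses synthetic arrays and an illustrative bin count, and is not the authors' implementation:

```python
import numpy as np
from scipy.stats import pearsonr

def hist_cc(img_a: np.ndarray, img_b: np.ndarray, bins: int = 256) -> float:
    """Pearson correlation between the two images' intensity histograms."""
    lo = min(img_a.min(), img_b.min())
    hi = max(img_a.max(), img_b.max())
    h_a, _ = np.histogram(img_a, bins=bins, range=(lo, hi), density=True)
    h_b, _ = np.histogram(img_b, bins=bins, range=(lo, hi), density=True)
    r, _ = pearsonr(h_a, h_b)
    return r

rng = np.random.default_rng(0)
real = rng.normal(40.0, 10.0, size=(128, 128))       # stand-in patient image
synthetic = rng.normal(41.0, 11.0, size=(128, 128))  # stand-in styled phantom
print(f"HistCC = {hist_cc(real, synthetic):.3f}")
```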

Development of a clinical-CT-radiomics nomogram for predicting endoscopic red color sign in cirrhotic patients with esophageal varices.

Han J, Dong J, Yan C, Zhang J, Wang Y, Gao M, Zhang M, Chen Y, Cai J, Zhao L

PubMed · Sep 27 2025
To evaluate the predictive performance of a clinical-CT-radiomics nomogram based on a radiomics signature and independent clinical-CT predictors for predicting endoscopic red color sign (RC) in cirrhotic patients with esophageal varices (EV). We retrospectively evaluated 215 cirrhotic patients. Among them, 108 and 107 cases were positive and negative for endoscopic RC, respectively. Patients were assigned to a training cohort (n = 150) and a validation cohort (n = 65) at a 7:3 ratio. In the training cohort, univariate and multivariate logistic regression analyses were performed on clinical and CT features to develop a clinical-CT model. Radiomics features were extracted from portal venous phase CT images to generate a radiomics score (Rad-score) and to construct five machine learning models. A combined model was built using the clinical-CT predictors and Rad-score through logistic regression. The performance of the different models was evaluated using receiver operating characteristic (ROC) curves and the area under the curve (AUC). The spleen-to-platelet ratio, liver volume, splenic vein diameter, and superior mesenteric vein diameter were independent predictors. Six radiomics features were selected to construct five machine learning models. The adaptive boosting model showed excellent predictive performance, achieving an AUC of 0.964 in the validation cohort, while the combined model achieved the highest predictive accuracy, with an AUC of 0.985 in the validation cohort. The clinical-CT-radiomics nomogram demonstrates high predictive accuracy for endoscopic RC in cirrhotic patients with EV, which provides a novel tool for non-invasive prediction of esophageal variceal bleeding.
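
As a rough illustration of the combined model, a logistic regression over the four independent clinical-CT predictors plus a Rad-score, evaluated by AUC, might be sketched as follows; the data and coefficients are entirely synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 215  # cohort size from the abstract; the feature values are synthetic
X = np.column_stack([
    rng.normal(size=n),  # spleen-to-platelet ratio (standardized)
    rng.normal(size=n),  # liver volume
    rng.normal(size=n),  # splenic vein diameter
    rng.normal(size=n),  # superior mesenteric vein diameter
    rng.normal(size=n),  # Rad-score from the radiomics signature
])
# Synthetic labels with a plausible linear signal plus noise.
y = (X @ np.array([0.8, -0.5, 0.6, 0.4, 1.2]) + rng.normal(size=n) > 0)
y = y.astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"validation AUC = {auc:.3f}")
```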

Quantifying 3D foot and ankle alignment using an AI-driven framework: a pilot study.

Huysentruyt R, Audenaert E, Van den Borre I, Pižurica A, Duquesne K

PubMed · Sep 27 2025
Accurate assessment of foot and ankle alignment through clinical measurements is essential for diagnosing deformities, treatment planning, and monitoring outcomes. Traditional 2D radiographs fail to fully represent the 3D complexity of the foot and ankle. In contrast, weight-bearing CT (WBCT) provides a 3D view of bone alignment under physiological loading. Nevertheless, manual landmark identification on WBCT remains time-intensive and prone to variability. This study presents a novel AI framework automating foot and ankle alignment assessment via deep learning landmark detection. By training 3D U-Net models to predict 22 anatomical landmarks directly from WBCT images via heatmap predictions, our approach eliminates the need for segmentation and iterative mesh registration methods. A small dataset of 74 orthopedic patients, including foot deformity cases such as pes cavus and planovalgus, was used to develop and evaluate the model in a clinically relevant population. The mean absolute error was assessed for each landmark and each angle using fivefold cross-validation. Mean absolute distance errors ranged from 1.00 mm for the proximal head center of the first phalanx to a maximum of 1.88 mm for the lowest point of the calcaneus. Automated clinical measurements derived from these landmarks achieved mean absolute errors between 0.91° for the hindfoot angle and a maximum of 2.90° for the Böhler angle. The heatmap-based AI approach enables automated foot and ankle alignment assessment from WBCT imaging, achieving accuracies comparable to the manual inter-rater variability reported in previous studies. This novel AI-driven method represents a potentially valuable approach for evaluating foot and ankle morphology. However, this exploratory study requires further evaluation with larger datasets to assess its clinical applicability.
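
The heatmap formulation can be made concrete with a short sketch: each landmark becomes a Gaussian target volume the 3D U-Net regresses, and a prediction is decoded as the heatmap's argmax voxel. The sigma and grid size below are illustrative, not values from the paper:

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=3.0):
    """3D Gaussian blob centered on a landmark (voxel coordinates)."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    d2 = (zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def decode_landmark(heatmap):
    """Recover the predicted landmark as the heatmap's argmax voxel."""
    idx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return tuple(int(c) for c in idx)

target = gaussian_heatmap((64, 64, 64), center=(32, 20, 44))
print(decode_landmark(target))  # recovers (32, 20, 44)
```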

Enhanced diagnostic pipeline for maxillary sinus-maxillary molar relationships: a novel implementation of Detectron2 with Faster R-CNN R50 FPN 3x on CBCT images.

Özemre MÖ, Bektaş J, Yanik H, Baysal L, Karslioğlu H

PubMed · Sep 27 2025
The anatomical relationship between the maxillary sinus and maxillary molars is critical for planning dental procedures such as tooth extraction, implant placement and periodontal surgery. This study presents a novel artificial intelligence-based approach for the detection and classification of these anatomical relationships in cone beam computed tomography (CBCT) images. The model, developed using advanced image recognition technology, can automatically detect the relationship between the maxillary sinus and adjacent molars with high accuracy. The artificial intelligence algorithm used in our study provided faster and more consistent results compared to traditional manual evaluations, reaching 89% accuracy in the classification of anatomical structures. With this technology, clinicians will be able to more accurately assess the risks of sinus perforation, oroantral fistula and other surgical complications in the maxillary posterior region preoperatively. By reducing the workload associated with CBCT analysis, the system accelerates clinicians' diagnostic process, improves treatment planning and increases patient safety. It also has the potential to assist in the early detection of maxillary sinus pathologies and the planning of sinus floor elevation procedures. These findings suggest that the integration of AI-powered image analysis solutions into daily dental practice can improve clinical decision-making in oral and maxillofacial surgery by providing accurate, efficient and reliable diagnostic support.
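
For readers curious about the named architecture, a hedged sketch of configuring Faster R-CNN R50 FPN 3x in Detectron2 follows; the dataset names, class count, and schedule are placeholders, and the datasets would need to be registered with DatasetCatalog before training:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")  # COCO-pretrained weights
cfg.DATASETS.TRAIN = ("cbct_sinus_train",)  # hypothetical registered dataset
cfg.DATASETS.TEST = ("cbct_sinus_val",)     # hypothetical registered dataset
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3  # placeholder count of relationship classes
cfg.SOLVER.MAX_ITER = 5000           # illustrative schedule

# Requires the datasets above to be registered via DatasetCatalog first:
# trainer = DefaultTrainer(cfg)
# trainer.resume_or_load(resume=False)
# trainer.train()
```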

Automated deep learning method for whole-breast segmentation in contrast-free quantitative MRI.

Gao W, Zhang Y, Gao B, Xia Y, Liang W, Yang Q, Shi F, He T, Han G, Li X, Su X, Zhang Y

PubMed · Sep 26 2025
To develop a deep learning segmentation method utilizing the nnU-Net architecture for fully automated whole-breast segmentation based on diffusion-weighted imaging (DWI) and synthetic MRI (SyMRI) images. A total of 98 patients with 196 breasts were evaluated. All patients underwent 3.0T magnetic resonance (MR) examinations, which incorporated DWI and SyMRI techniques. The ground truth for breast segmentation was established through a manual, slice-by-slice approach performed by two experienced radiologists. The U-Net and nnU-Net deep learning algorithms were employed to segment the whole breast. Performance was evaluated using various metrics, including the Dice Similarity Coefficient (DSC), accuracy, and Pearson's correlation coefficient. For DWI and the proton density (PD) images of SyMRI, nnU-Net outperformed U-Net, achieving higher DSCs in both the testing set (DWI, 0.930 ± 0.029 vs. 0.785 ± 0.161; PD, 0.969 ± 0.010 vs. 0.936 ± 0.018) and the independent testing set (DWI, 0.953 ± 0.019 vs. 0.789 ± 0.148; PD, 0.976 ± 0.008 vs. 0.939 ± 0.018). The PD images of SyMRI exhibited better performance than DWI, attaining the highest DSC and accuracy. The correlation coefficients (R²) for nnU-Net ranged from 0.99 to 1.00 for DWI and PD, significantly surpassing the performance of U-Net. nnU-Net exhibited exceptional segmentation performance for fully automated breast segmentation of contrast-free quantitative images. This method serves as an effective tool for processing large-scale clinical datasets and represents a significant advancement toward computer-aided quantitative analysis of breast DWI and SyMRI images.
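
The volumetric agreement implied by the reported R² can be illustrated with a small sketch using synthetic volumes; this is not the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
manual_vol = rng.uniform(300.0, 1200.0, size=40)        # manual volumes, cm^3
auto_vol = manual_vol * rng.normal(1.0, 0.02, size=40)  # near-perfect agreement
r, p = pearsonr(manual_vol, auto_vol)
print(f"Pearson r = {r:.3f} (R^2 = {r ** 2:.3f}), p = {p:.1e}")
```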

Segmental airway volume as a predictive indicator of postoperative extubation timing in patients with oral and maxillofacial space infections: a retrospective analysis.

Liu S, Shen H, Zhu B, Zhang X, Zhang X, Li W

PubMed · Sep 26 2025
The objective of this study was to investigate the significance of segmental airway volume in developing a predictive model to guide the timing of postoperative extubation in patients with oral and maxillofacial space infections (OMSIs). A retrospective cohort study was performed to analyse clinical data from 177 medical records, with a focus on key variables related to disease severity and treatment outcomes. The inclusion criteria of this study were as follows: adherence to the OMSI diagnostic criteria (local tissue inflammation characterized by erythema, oedema, hyperthermia and tenderness); compromised functions such as difficulties opening the mouth, swallowing, or breathing; the presence of purulent material confirmed by puncture or computed tomography (CT); and laboratory examinations indicating an underlying infection process. The data included age, sex, body mass index (BMI), blood test results, smoking history, history of alcohol abuse, the extent of mouth opening, the number of infected spaces, and the source of infection. DICOM files were imported into 3D Slicer for manual segmentation, followed by volume measurement of each segment. We observed statistically significant differences in age, neutrophil count, lymphocyte count, and C4 segment volume among patient subgroups stratified by extubation time. Regression analysis revealed that age and C4 segment volume were significantly correlated with extubation time. Additionally, the machine learning models performed well across the evaluation metrics. Segmental airway volume shows promise as an indicator for predicting extubation time. Predictive models constructed using machine learning algorithms demonstrated good predictive performance and may facilitate clinical decision-making.
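
Measuring a segmental volume from a binary mask, such as one exported from 3D Slicer, reduces to counting voxels and multiplying by the voxel volume; the spacing below is an illustrative assumption:

```python
import numpy as np

def segment_volume_ml(mask: np.ndarray, spacing_mm=(0.5, 0.5, 1.0)) -> float:
    """Volume in millilitres: voxel count x voxel volume (mm^3 -> mL)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

mask = np.zeros((256, 256, 80), dtype=bool)
mask[100:140, 110:150, 20:60] = True  # toy airway segment
print(f"segment volume = {segment_volume_ml(mask):.1f} mL")
```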

Prediction of neoadjuvant chemotherapy efficacy in patients with HER2-low breast cancer based on ultrasound radiomics.

Peng Q, Ji Z, Xu N, Dong Z, Zhang T, Ding M, Qu L, Liu Y, Xie J, Jin F, Chen B, Song J, Zheng A

PubMed · Sep 26 2025
Neoadjuvant chemotherapy (NAC) is a crucial therapeutic approach for treating breast cancer, yet accurately predicting treatment response remains a significant clinical challenge. Conventional ultrasound plays a vital role in assessing tumor morphology but lacks the ability to quantitatively capture intratumoral heterogeneity. Ultrasound radiomics, which extracts high-throughput quantitative imaging features, offers a novel approach to enhance NAC response prediction. This study aims to evaluate the predictive efficacy of ultrasound radiomics models based on pre-treatment, post-treatment, and combined imaging features for assessing the NAC response of patients with HER2-low breast cancer. This retrospective multicenter study included 359 patients with HER2-low breast cancer who underwent NAC between January 1, 2016, and December 31, 2020. A total of 488 radiomic features were extracted from pre- and post-treatment ultrasound images. Feature selection was conducted in two stages: first, Pearson correlation analysis (threshold: 0.65) was applied to remove highly correlated features and reduce redundancy; then, Recursive Feature Elimination with Cross-Validation (RFECV) was employed to identify the optimal feature subset for model construction. The dataset was divided into a training set (244 patients) and an external validation set (115 patients from independent centers). Model performance was assessed via the area under the receiver operating characteristic curve (AUC), accuracy, precision, recall, and F1 score. Three models were initially developed: (1) a pre-treatment model (AUC = 0.716), (2) a post-treatment model (AUC = 0.772), and (3) a combined pre- and post-treatment model (AUC = 0.762). After RFECV feature selection, the optimized models with reduced feature sets achieved AUCs of 0.746 (pre-treatment), 0.712 (post-treatment), and 0.759 (combined). Ultrasound radiomics is a non-invasive and promising approach for predicting response to neoadjuvant chemotherapy in HER2-low breast cancer. The pre-treatment model yielded reliable performance after feature selection. While the combined model did not substantially enhance predictive accuracy, its stable performance suggests that longitudinal ultrasound imaging may help capture treatment-induced phenotypic changes. These findings offer preliminary support for individualized therapeutic decision-making.
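
The two-stage selection described here maps naturally onto pandas and scikit-learn; the sketch below uses synthetic data, and the estimator and CV settings are illustrative assumptions rather than the study's configuration:

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = pd.DataFrame(rng.normal(size=(244, 488)),
                 columns=[f"feat_{i}" for i in range(488)])
y = rng.integers(0, 2, size=244)  # synthetic NAC response labels

# Stage 1: drop one feature of each pair with |Pearson r| > 0.65.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > 0.65).any()]
X_filtered = X.drop(columns=to_drop)

# Stage 2: recursive feature elimination with cross-validation.
selector = RFECV(LogisticRegression(max_iter=1000), step=10, cv=5,
                 scoring="roc_auc")
selector.fit(X_filtered, y)
print(f"kept {selector.n_features_} of {X_filtered.shape[1]} features")
```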
