Page 1 of 3793788 results

Enhancing Spinal Cord and Canal Segmentation in Degenerative Cervical Myelopathy: The Role of Interactive Learning Models with Manual Clicks.

Han S, Oh JK, Cho W, Kim TJ, Hong N, Park SB

PubMed · Sep 29, 2025
We aimed to develop an interactive segmentation model that offers accurate and reliable segmentation of the irregularly shaped spinal cord and canal in degenerative cervical myelopathy (DCM) through manual clicks and model refinement. A dataset of 1444 frames from 294 magnetic resonance imaging records of DCM patients was used, and we developed two segmentation models for comparison: auto-segmentation and interactive segmentation. The former was based on U-Net and utilized a pretrained ConvNeXT-tiny as its encoder. For the latter, we employed an interactive segmentation model structured by SimpleClick, a large model that uses a vision transformer as its backbone, together with simple fine-tuning. The segmentation performance of the two models was compared in terms of Dice score, mean intersection over union (mIoU), average precision, and Hausdorff distance. The efficiency of the interactive segmentation model was evaluated by the number of clicks required to achieve a target mIoU. Our model achieved better scores across all four evaluation metrics, showing improvements of +6.4%, +1.8%, +3.7%, and -53.0% for canal segmentation, and +11.7%, +6.0%, +18.2%, and -70.9% for cord segmentation with 15 clicks, respectively. The interactive segmentation model required 11.71 clicks on average to reach 90% mIoU for spinal canal with cord cases and 11.99 clicks to reach 80% mIoU for spinal cord cases. We found that the interactive segmentation model significantly outperformed the auto-segmentation model. By incorporating simple manual inputs, the interactive model effectively identified regions of interest, particularly the complex and irregular shapes of the spinal cord, demonstrating both enhanced accuracy and adaptability.
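The comparison above hinges on overlap metrics such as the Dice score and mIoU. As a minimal sketch (not the authors' code), both can be computed on binary masks represented as sets of voxel coordinates:

```python
def dice(a, b):
    """Dice coefficient between two binary masks given as sets of voxel coords."""
    inter = len(a & b)
    return 2 * inter / (len(a) + len(b)) if (a or b) else 1.0

def iou(a, b):
    """Intersection over union for the same mask representation."""
    union = len(a | b)
    return len(a & b) / union if union else 1.0

pred = {(0, 0), (0, 1), (1, 1)}   # toy predicted mask
gt   = {(0, 1), (1, 1), (1, 0)}   # toy ground-truth mask
print(dice(pred, gt))  # 2*2/(3+3) = 0.666...
print(iou(pred, gt))   # 2/4 = 0.5
```

Dice counts the intersection twice, so for any pair of masks it is at least as large as IoU, which is why the two metrics in the abstract move together but differ in magnitude.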

Hepatocellular Carcinoma Risk Stratification for Cirrhosis Patients: Integrating Radiomics and Deep Learning Computed Tomography Signatures of the Liver and Spleen into a Clinical Model.

Fan R, Shi YR, Chen L, Wang CX, Qian YS, Gao YH, Wang CY, Fan XT, Liu XL, Bai HL, Zheng D, Jiang GQ, Yu YL, Liang XE, Chen JJ, Xie WF, Du LT, Yan HD, Gao YJ, Wen H, Liu JF, Liang MF, Kong F, Sun J, Ju SH, Wang HY, Hou JL

PubMed · Sep 28, 2025
Given the high burden of hepatocellular carcinoma (HCC), risk stratification in patients with cirrhosis is critical but remains inadequate. In this study, we aimed to develop and validate an HCC prediction model by integrating radiomics and deep learning features from liver and spleen computed tomography (CT) images into the established age-male-ALBI-platelet (aMAP) clinical model. Patients were enrolled between 2018 and 2023 from a Chinese multicenter, prospective, observational cirrhosis cohort, all of whom underwent 3-phase contrast-enhanced abdominal CT scans at enrollment. The aMAP clinical score was calculated, and radiomic (PyRadiomics) and deep learning (ResNet-18) features were extracted from liver and spleen regions of interest. Feature selection was performed using the least absolute shrinkage and selection operator. Among 2,411 patients (median follow-up: 42.7 months [IQR: 32.9-54.1]), 118 developed HCC (three-year cumulative incidence: 3.59%). Chronic hepatitis B virus infection was the main etiology, accounting for 91.5% of cases. The aMAP-CT model, which incorporates CT signatures, significantly outperformed existing models (area under the receiver-operating characteristic curve: 0.809-0.869 in three cohorts). It stratified patients into high-risk (three-year HCC incidence: 26.3%) and low-risk (1.7%) groups. Stepwise application (aMAP → aMAP-CT) further refined stratification (three-year incidences: 1.8% [93.0% of the cohort] vs. 27.2% [7.0%]). The aMAP-CT model improves HCC risk prediction by integrating CT-based liver and spleen signatures, enabling precise identification of high-risk cirrhosis patients. This approach personalizes surveillance strategies, potentially facilitating earlier detection and improved outcomes.
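The reported discrimination (AUC 0.809-0.869) has a concrete probabilistic reading via the Mann-Whitney interpretation: the chance that a randomly chosen patient who developed HCC receives a higher risk score than one who did not. A small illustrative sketch on toy scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outscores a negative one
    (Mann-Whitney U interpretation); ties count as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.8, 0.4]   # toy risk scores of patients who developed HCC
neg = [0.5, 0.3, 0.2]   # toy risk scores of patients who did not
print(auc(pos, neg))    # 8 of 9 pairs ranked correctly -> 0.888...
```

An AUC of 0.87, as the aMAP-CT model approaches, means roughly 87% of such case-control pairs are ranked correctly.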

Artificial Intelligence to Detect Developmental Dysplasia of Hip: A Systematic Review.

Bhavsar S, Gowda BB, Bhavsar M, Patole S, Rao S, Rath C

PubMed · Sep 28, 2025
Deep learning (DL), a branch of artificial intelligence (AI), has been applied to diagnose developmental dysplasia of the hip (DDH) on pelvic radiographs and ultrasound (US) images. This technology can potentially assist in early screening, enable timely intervention, and improve cost-effectiveness. We conducted a systematic review to evaluate the diagnostic accuracy of DL algorithms in detecting DDH. PubMed, Medline, EMBASE, EMCARE, ClinicalTrials.gov (clinical trial registry), IEEE Xplore and Cochrane Library databases were searched in October 2024. Prospective and retrospective cohort studies that included children (< 16 years) at risk of or suspected to have DDH and reported AI analysis of hip US or X-ray images were included. The review was conducted using the guidelines of the Cochrane Collaboration Diagnostic Test Accuracy Working Group. Risk of bias was assessed using the QUADAS-2 tool. Twenty-three studies met inclusion criteria, with 15 (n = 8315) evaluating DDH on US images and eight (n = 7091) on pelvic radiographs. The area under the curve of the included studies ranged from 0.80 to 0.99 for pelvic radiographs and from 0.90 to 0.99 for US images. Sensitivity and specificity for detecting DDH on radiographs ranged from 92.86% to 100% and from 95.65% to 99.82%, respectively. For US images, sensitivity ranged from 86.54% to 100% and specificity from 62.5% to 100%. AI demonstrated effectiveness comparable to physicians in detecting DDH. However, limited evaluation on external datasets restricts its generalisability. Further research incorporating diverse datasets and real-world applications is needed to assess its broader clinical impact on DDH diagnosis.
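The sensitivity and specificity ranges pooled above come straight from the confusion matrix of each study. A minimal sketch with toy labels (not data from the included studies):

```python
def sens_spec(y_true, y_pred):
    """Sensitivity (TP rate) and specificity (TN rate) from binary labels,
    where 1 marks a DDH-positive hip and 0 a normal one."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 0, 0, 0, 0]   # toy ground truth
y_pred = [1, 1, 0, 0, 0, 0, 1]   # toy model output
se, sp = sens_spec(y_true, y_pred)
print(se, sp)  # 2/3 sensitivity, 3/4 specificity
```

The wide specificity range on US (62.5% to 100%) reflects how differently the included models trade false positives against missed cases.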

Beyond tractography in brain connectivity mapping with dMRI morphometry and functional networks.

Wang JT, Lin CP, Liu HM, Pierpaoli C, Lo CZ

PubMed · Sep 27, 2025
Traditional brain connectivity studies have focused mainly on structural connectivity, often relying on tractography with diffusion MRI (dMRI) to reconstruct white matter pathways. In parallel, studies of functional connectivity have examined correlations in brain activity using fMRI. However, emerging methodologies are advancing our understanding of brain networks. Here we explore advanced connectivity approaches beyond conventional tractography, focusing on dMRI morphometry and the integration of structural and functional connectivity analysis. dMRI morphometry enables quantitative assessment of white matter pathway volumes through statistical comparison with normative populations, while functional connectivity reveals network organization that is not restricted to direct anatomical connections. More recently, approaches that combine diffusion tensor imaging (DTI) with functional correlation tensor (FCT) analysis have been introduced, and these complementary methods provide new perspectives on brain structure-function relationships. Together, such approaches have important implications for neurodevelopmental and neurological disorders as well as brain plasticity. The integration of these methods with artificial intelligence techniques has the potential to support both basic neuroscience research and clinical applications.

Quantifying 3D foot and ankle alignment using an AI-driven framework: a pilot study.

Huysentruyt R, Audenaert E, Van den Borre I, Pižurica A, Duquesne K

PubMed · Sep 27, 2025
Accurate assessment of foot and ankle alignment through clinical measurements is essential for diagnosing deformities, planning treatment, and monitoring outcomes. Traditional 2D radiographs fail to fully represent the 3D complexity of the foot and ankle. In contrast, weight-bearing CT (WBCT) provides a 3D view of bone alignment under physiological loading. Nevertheless, manual landmark identification on WBCT remains time-intensive and prone to variability. This study presents a novel AI framework that automates foot and ankle alignment assessment via deep learning landmark detection. By training 3D U-Net models to predict heatmaps for 22 anatomical landmarks directly from WBCT images, our approach eliminates the need for segmentation and iterative mesh registration. A small dataset of 74 orthopedic patients, including foot deformity cases such as pes cavus and planovalgus, was used to develop and evaluate the model in a clinically relevant population. The mean absolute error was assessed for each landmark and each angle using fivefold cross-validation. Mean absolute distance errors ranged from 1.00 mm for the proximal head center of the first phalanx to a maximum of 1.88 mm for the lowest point of the calcaneus. Automated clinical measurements derived from these landmarks achieved mean absolute errors between 0.91° for the hindfoot angle and a maximum of 2.90° for the Böhler angle. The heatmap-based AI approach enables automated foot and ankle alignment assessment from WBCT imaging, achieving accuracy comparable to the manual inter-rater variability reported in previous studies. This novel AI-driven method represents a potentially valuable approach for evaluating foot and ankle morphology. However, this exploratory study requires further evaluation on larger datasets to assess its real clinical applicability.
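Heatmap-based landmark detection ultimately reduces to locating the maximum of each predicted heatmap and scoring the result against ground truth with a mean absolute distance. A simplified sketch, with the heatmap stored as a dict of voxel coordinates (a stand-in for the 3D U-Net's dense output, not the authors' implementation):

```python
import math

def landmark_from_heatmap(heatmap):
    """Pick the voxel with the highest predicted heat as the landmark location.
    `heatmap` maps (x, y, z) voxel coordinates to predicted heat values."""
    return max(heatmap, key=heatmap.get)

def mean_absolute_distance(preds, gts):
    """Mean Euclidean distance between paired predicted and true landmarks."""
    dists = [math.dist(p, g) for p, g in zip(preds, gts)]
    return sum(dists) / len(dists)

hm = {(0, 0, 0): 0.1, (3, 4, 0): 0.9, (1, 1, 1): 0.4}   # toy heatmap
p = landmark_from_heatmap(hm)                           # -> (3, 4, 0)
print(mean_absolute_distance([p], [(0, 0, 0)]))         # 5.0 (3-4-5 triangle)
```

In practice the voxel distance would be scaled by the scan's voxel spacing to obtain the millimetre errors the study reports.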

Single-step prediction of inferior alveolar nerve injury after mandibular third molar extraction using contrastive learning and Bayesian auto-tuned deep learning model.

Yoon K, Choi Y, Lee M, Kim J, Kim JY, Kim JW, Choi J, Park W

PubMed · Sep 27, 2025
Inferior alveolar nerve (IAN) injury is a critical complication of mandibular third molar extraction. This study aimed to construct and evaluate a deep learning framework that integrates contrastive learning and Bayesian optimization to enhance predictive performance on cone-beam computed tomography (CBCT) and panoramic radiographs. A retrospective dataset of 902 panoramic radiographs and 1,500 CBCT images was used. Five deep learning architectures (MobileNetV2, ResNet101D, Vision Transformer, Twins-SVT, and SSL-ResNet50) were trained with and without contrastive learning and Bayesian optimization. Model performance was evaluated using accuracy, F1-score, and comparison with oral and maxillofacial surgeons (OMFSs). Contrastive learning significantly improved the F1-scores across all models (e.g., MobileNetV2: 0.302 to 0.740; ResNet101D: 0.188 to 0.689; Vision Transformer: 0.275 to 0.704; Twins-SVT: 0.370 to 0.719; SSL-ResNet50: 0.109 to 0.576). Bayesian optimization further enhanced the F1-scores for MobileNetV2 (from 0.740 to 0.923), ResNet101D (from 0.689 to 0.857), Vision Transformer (from 0.704 to 0.871), Twins-SVT (from 0.719 to 0.857), and SSL-ResNet50 (from 0.576 to 0.875). The AI model outperformed OMFSs on CBCT cross-sectional images (F1-score: 0.923 vs. 0.667) but underperformed on panoramic radiographs (0.666 vs. 0.730). The proposed single-step deep learning approach effectively predicts IAN injury, with contrastive learning addressing data imbalance and Bayesian optimization tuning model performance. While artificial intelligence surpasses human performance on CBCT images, panoramic radiograph analysis still benefits from expert interpretation. Future work should focus on multi-center validation and explainable artificial intelligence for broader clinical adoption.
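Because IAN injury is a rare outcome, the study leans on F1-scores, which balance precision and recall on the minority class rather than rewarding trivially high accuracy. A minimal sketch with toy labels (not the study's data):

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall on the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 1, 0, 0, 1]   # toy injury labels
y_pred = [1, 0, 0, 1, 1]   # toy predictions
print(f1_score(y_true, y_pred))  # precision 2/3, recall 2/3 -> F1 = 2/3
```

A model that never predicts injury can score high accuracy on imbalanced data yet gets F1 = 0, which is why the low pre-contrastive baselines (e.g. 0.109) are so telling.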

Development of a clinical-CT-radiomics nomogram for predicting endoscopic red color sign in cirrhotic patients with esophageal varices.

Han J, Dong J, Yan C, Zhang J, Wang Y, Gao M, Zhang M, Chen Y, Cai J, Zhao L

PubMed · Sep 27, 2025
To evaluate the predictive performance of a clinical-CT-radiomics nomogram based on a radiomics signature and independent clinical-CT predictors for predicting endoscopic red color sign (RC) in cirrhotic patients with esophageal varices (EV). We retrospectively evaluated 215 cirrhotic patients. Among them, 108 and 107 cases were positive and negative for endoscopic RC, respectively. Patients were assigned to a training cohort (n = 150) and a validation cohort (n = 65) at a 7:3 ratio. In the training cohort, univariate and multivariate logistic regression analyses were performed on clinical and CT features to develop a clinical-CT model. Radiomic features were extracted from portal venous phase CT images to generate a radiomic score (Rad-score) and to construct five machine learning models. A combined model was built from the clinical-CT predictors and Rad-score through logistic regression. The performance of the different models was evaluated using receiver operating characteristic (ROC) curves and the area under the curve (AUC). The spleen-to-platelet ratio, liver volume, splenic vein diameter, and superior mesenteric vein diameter were independent predictors. Six radiomics features were selected to construct five machine learning models. The adaptive boosting model showed excellent predictive performance, achieving an AUC of 0.964 in the validation cohort, while the combined model achieved the highest predictive accuracy with an AUC of 0.985 in the validation cohort. The clinical-CT-radiomics nomogram demonstrates high predictive accuracy for endoscopic RC in cirrhotic patients with EV, providing a novel tool for non-invasive prediction of esophageal variceal bleeding.
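The combined model stacks the Rad-score onto the four independent clinical-CT predictors in a logistic regression. Schematically (coefficient values below are illustrative placeholders, not the fitted model from the paper):

```python
import math

def nomogram_probability(features, coefs, intercept):
    """Logistic combination of predictors, in order: Rad-score,
    spleen-to-platelet ratio, liver volume, splenic vein diameter,
    superior mesenteric vein diameter. Coefficients are placeholders."""
    z = intercept + sum(c * x for c, x in zip(coefs, features))
    return 1 / (1 + math.exp(-z))

# Placeholder coefficients; with a zero linear predictor the output is the
# 50% baseline, and positive contributions push the RC probability up.
coefs = [0.5, 0.2, 0.1, 0.3, 0.3]
print(nomogram_probability([0, 0, 0, 0, 0], coefs, 0.0))  # 0.5
```

A nomogram is simply a graphical rendering of this linear predictor: each feature's contribution is read off a points scale and the total maps to a probability.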

Enhanced diagnostic pipeline for maxillary sinus-maxillary molars relationships: a novel implementation of Detectron2 with Faster R-CNN R50-FPN 3x on CBCT images.

Özemre MÖ, Bektaş J, Yanik H, Baysal L, Karslioğlu H

PubMed · Sep 27, 2025
The anatomical relationship between the maxillary sinus and maxillary molars is critical for planning dental procedures such as tooth extraction, implant placement and periodontal surgery. This study presents a novel artificial intelligence-based approach for the detection and classification of these anatomical relationships in cone beam computed tomography (CBCT) images. The model, developed using advanced image recognition technology, can automatically detect the relationship between the maxillary sinus and adjacent molars with high accuracy. The artificial intelligence algorithm used in our study provided faster and more consistent results compared to traditional manual evaluations, reaching 89% accuracy in the classification of anatomical structures. With this technology, clinicians will be able to more accurately assess the risks of sinus perforation, oroantral fistula and other surgical complications in the maxillary posterior region preoperatively. By reducing the workload associated with CBCT analysis, the system accelerates clinicians' diagnostic process, improves treatment planning and increases patient safety. It also has the potential to assist in the early detection of maxillary sinus pathologies and the planning of sinus floor elevation procedures. These findings suggest that the integration of AI-powered image analysis solutions into daily dental practice can improve clinical decision-making in oral and maxillofacial surgery by providing accurate, efficient and reliable diagnostic support.

[Advances in the application of artificial intelligence for pulmonary function assessment based on chest imaging in thoracic surgery].

Huang LC, Liang HR, Jiang Y, Lin YC, He JX

PubMed · Sep 27, 2025
In recent years, lung function assessment has attracted increasing attention in the perioperative management of thoracic surgery. However, traditional pulmonary function testing methods remain limited in clinical practice due to high equipment requirements and complex procedures. With the rapid development of artificial intelligence (AI) technology, lung function assessment based on multimodal chest imaging (such as X-rays, CT, and MRI) has become a new research focus. Through deep learning algorithms, AI models can accurately extract imaging features of patients and have made significant progress in quantitative analysis of pulmonary ventilation, evaluation of diffusion capacity, measurement of lung volumes, and prediction of lung function decline. Previous studies have demonstrated that AI models perform well in predicting key indicators such as forced expiratory volume in one second (FEV1), diffusing capacity for carbon monoxide (DLCO), and total lung capacity (TLC). Despite these promising prospects, challenges remain in clinical translation, including insufficient data standardization, limited model interpretability, and the lack of prediction models for postoperative complications. In the future, greater emphasis should be placed on multicenter collaboration, the construction of high-quality databases, the promotion of multimodal data integration, and clinical validation to further enhance the application value of AI technology in precision decision-making for thoracic surgery.

Generation of multimodal realistic computational phantoms as a test-bed for validating deep learning-based cross-modality synthesis techniques.

Camagni F, Nakas A, Parrella G, Vai A, Molinelli S, Vitolo V, Barcellini A, Chalaszczyk A, Imparato S, Pella A, Orlandi E, Baroni G, Riboldi M, Paganelli C

PubMed · Sep 27, 2025
The validation of multimodal deep learning models for medical image translation is limited by the lack of high-quality, paired datasets. We propose a novel framework that leverages computational phantoms to generate realistic CT and MRI images, enabling reliable ground-truth datasets for robust validation of artificial intelligence (AI) methods that generate synthetic CT (sCT) from MRI, specifically for radiotherapy applications. Two CycleGANs (cycle-consistent generative adversarial networks) were trained to transfer the imaging style of real patients onto CT and MRI phantoms, producing synthetic data with realistic textures and continuous intensity distributions. These data were evaluated through paired assessments with the original phantoms, unpaired comparisons with patient scans, and dosimetric analysis using patient-specific radiotherapy treatment plans. Additional external validation was performed on public CT datasets to assess generalizability to unseen data. The resulting paired CT/MRI phantoms were used to validate a GAN-based model for sCT generation from abdominal MRI in particle therapy, available in the literature. Results showed strong anatomical consistency with the original phantoms, high histogram correlation with patient images (HistCC = 0.998 ± 0.001 for MRI, HistCC = 0.97 ± 0.04 for CT), and dosimetric accuracy comparable to real data. The novelty of this work lies in using generated phantoms as validation data for deep learning-based cross-modality synthesis techniques.
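The HistCC values above compare intensity distributions rather than voxel-wise values. A plausible sketch (the exact binning scheme is our assumption) computes the Pearson correlation between the two images' intensity histograms:

```python
import math

def hist_cc(img_a, img_b, bins=8, lo=0.0, hi=1.0):
    """Pearson correlation between the intensity histograms of two images,
    given as flat lists of intensities in [lo, hi]; a simple stand-in for
    the paper's HistCC metric."""
    def histogram(vals):
        h = [0] * bins
        for v in vals:
            i = min(int((v - lo) / (hi - lo) * bins), bins - 1)
            h[i] += 1
        return h
    a, b = histogram(img_a), histogram(img_b)
    ma, mb = sum(a) / bins, sum(b) / bins
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

img = [0.1, 0.2, 0.2, 0.9]       # toy normalized intensities
print(hist_cc(img, img))         # identical distributions -> 1.0
```

Because the metric only compares distributions, two images can score HistCC near 1 while being spatially quite different, which is why the study pairs it with anatomical and dosimetric checks.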
