Artificial Intelligence to Detect Developmental Dysplasia of Hip: A Systematic Review.

Bhavsar S, Gowda BB, Bhavsar M, Patole S, Rao S, Rath C

PubMed · Sep 28, 2025
Deep learning (DL), a branch of artificial intelligence (AI), has been applied to diagnose developmental dysplasia of the hip (DDH) on pelvic radiographs and ultrasound (US) images. This technology could assist in early screening, enable timely intervention, and improve cost-effectiveness. We conducted a systematic review to evaluate the diagnostic accuracy of DL algorithms in detecting DDH. PubMed, Medline, EMBASE, EMCARE, ClinicalTrials.gov (clinical trial registry), IEEE Xplore, and the Cochrane Library were searched in October 2024. Prospective and retrospective cohort studies that included children (< 16 years) at risk of, or suspected to have, DDH and that applied AI to hip US or X-ray images were included. The review was conducted following the guidelines of the Cochrane Collaboration Diagnostic Test Accuracy Working Group, and risk of bias was assessed using the QUADAS-2 tool. Twenty-three studies met the inclusion criteria: 15 (n = 8315) evaluating DDH on US images and eight (n = 7091) on pelvic radiographs. The area under the curve of the included studies ranged from 0.80 to 0.99 for pelvic radiographs and from 0.90 to 0.99 for US images. Sensitivity and specificity for detecting DDH on radiographs ranged from 92.86% to 100% and from 95.65% to 99.82%, respectively; for US images, sensitivity ranged from 86.54% to 100% and specificity from 62.5% to 100%. AI demonstrated effectiveness comparable to that of physicians in detecting DDH. However, limited evaluation on external datasets restricts generalisability. Further research incorporating diverse datasets and real-world applications is needed to assess the broader clinical impact on DDH diagnosis.
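
The sensitivity and specificity ranges reported above follow directly from each study's 2 × 2 confusion matrix. As a minimal illustration in Python (the counts below are invented for demonstration, not drawn from any included study):

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard diagnostic-accuracy metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),  # share of DDH-positive hips detected
        "specificity": tn / (tn + fp),  # share of normal hips correctly cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for a radiograph-based DDH classifier:
print(diagnostic_metrics(tp=93, fp=2, fn=1, tn=550))
```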

Advances in ultrasound-based imaging for diagnosis of endometrial cancer.

Tlais M, Hamze H, Hteit A, Haddad K, El Fassih I, Zalzali I, Mahmoud S, Karaki S, Jabbour D

PubMed · Sep 28, 2025
Endometrial cancer (EC) is the most common gynecological malignancy in high-income countries, with incidence rates rising globally. Early and accurate diagnosis is essential for improving outcomes. Transvaginal ultrasound (TVUS) remains a cost-effective first-line tool, and emerging techniques such as three-dimensional (3D) ultrasound (US), contrast-enhanced US (CEUS), elastography, and artificial intelligence (AI)-enhanced imaging may further improve diagnostic performance. We aimed to systematically review recent advances in US-based imaging techniques for the diagnosis and staging of EC and to compare their performance with magnetic resonance imaging (MRI). A systematic search of PubMed, Scopus, Web of Science, and Google Scholar was performed to identify studies published between January 2010 and March 2025. Eligible studies evaluated TVUS, 3D-US, CEUS, elastography, or AI-enhanced US in EC diagnosis and staging. Methodological quality was assessed using the QUADAS-2 tool. Sensitivity, specificity, and area under the curve (AUC) were extracted where available, with narrative synthesis performed owing to heterogeneity. Forty-one studies met the inclusion criteria. TVUS demonstrated high sensitivity (76%-96%) but moderate specificity (61%-86%), while MRI achieved higher specificity (84%-95%) and superior staging accuracy. 3D-US yielded accuracy comparable to MRI in selected early-stage cases. CEUS and elastography enhanced tissue characterization, and AI-enhanced US achieved pooled AUCs of up to 0.91 for risk prediction and lesion segmentation. Performance varied across modalities owing to patient demographics, equipment differences, and operator experience. TVUS remains a highly sensitive initial screening tool, with MRI preferred for definitive staging. 3D-US, CEUS, elastography, and AI-enhanced techniques show promise as complementary or alternative approaches, particularly in low-resource settings. Standardization, multicenter validation, and integration of multi-modal imaging are needed to optimize diagnostic pathways for EC.
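
For context, an AUC such as the pooled 0.91 cited above is computed from a model's continuous risk scores against the binary histopathological ground truth. A minimal sketch using scikit-learn on synthetic data (none of these numbers come from the reviewed studies):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)               # 1 = malignant endometrium (synthetic)
scores = labels * 0.8 + rng.normal(0.2, 0.35, 200)  # model risk scores (synthetic)

print(f"AUC = {roc_auc_score(labels, scores):.2f}")
```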

Artificial intelligence in carotid computed tomography angiography plaque detection: Decade of progress and future perspectives.

Wang DY, Yang T, Zhang CT, Zhan PC, Miao ZX, Li BL, Yang H

PubMed · Sep 28, 2025
The application of artificial intelligence (AI) in carotid atherosclerotic plaque detection via computed tomography angiography (CTA) has significantly advanced over the past decade. This mini-review consolidates recent innovations in deep learning architectures, domain adaptation techniques, and automated plaque characterization methodologies. Hybrid models, such as residual U-Net-Pyramid Scene Parsing Network, exhibit a remarkable precision of 80.49% in plaque segmentation, outperforming radiologists in diagnostic efficiency by reducing analysis time from minutes to mere seconds. Domain-adaptive frameworks, such as Lesion Assessment through Tracklet Evaluation, demonstrate robust performance across heterogeneous imaging datasets, achieving an area under the curve (AUC) greater than 0.88. Furthermore, novel approaches integrating U-Net and Efficient-Net architectures, enhanced by Bayesian optimization, have achieved impressive correlation coefficients (0.89) for plaque quantification. AI-powered CTA also enables high-precision three-dimensional vascular segmentation, with a Dice coefficient of 0.9119, and offers superior cardiovascular risk stratification compared to traditional Agatston scoring, yielding AUC values of 0.816 vs 0.729 at a 15-year follow-up. These breakthroughs address key challenges in plaque motion analysis, with systolic retractive motion biomarkers successfully identifying 80% of vulnerable plaques. Looking ahead, future directions focus on enhancing the interpretability of AI models through explainable AI and leveraging federated learning to mitigate data heterogeneity. This mini-review underscores the transformative potential of AI in carotid plaque assessment, offering substantial implications for stroke prevention and personalized cerebrovascular management strategies.
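
The Dice coefficient quoted above (0.9119 for 3D vascular segmentation) measures voxel-wise overlap between a predicted mask and the ground truth: Dice = 2|A ∩ B| / (|A| + |B|). An illustrative implementation on toy masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy 3D masks with partial overlap:
a = np.zeros((32, 32, 32)); a[8:24, 8:24, 8:24] = 1
b = np.zeros((32, 32, 32)); b[10:24, 8:24, 8:24] = 1
print(f"Dice = {dice_coefficient(a, b):.4f}")
```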

Development of a clinical-CT-radiomics nomogram for predicting endoscopic red color sign in cirrhotic patients with esophageal varices.

Han J, Dong J, Yan C, Zhang J, Wang Y, Gao M, Zhang M, Chen Y, Cai J, Zhao L

PubMed · Sep 27, 2025
This study evaluated the predictive performance of a clinical-CT-radiomics nomogram, based on a radiomics signature and independent clinical-CT predictors, for predicting the endoscopic red color sign (RC) in cirrhotic patients with esophageal varices (EV). We retrospectively evaluated 215 cirrhotic patients, of whom 108 were positive and 107 negative for endoscopic RC. Patients were assigned to a training cohort (n = 150) and a validation cohort (n = 65) at a 7:3 ratio. In the training cohort, univariate and multivariate logistic regression analyses were performed on clinical and CT features to develop a clinical-CT model. Radiomic features were extracted from portal venous phase CT images to generate a radiomics score (Rad-score) and to construct five machine learning models. A combined model was built from the clinical-CT predictors and the Rad-score using logistic regression. The performance of the different models was evaluated using receiver operating characteristic (ROC) curves and the area under the curve (AUC). The spleen-to-platelet ratio, liver volume, splenic vein diameter, and superior mesenteric vein diameter were independent predictors. Six radiomics features were selected to construct the five machine learning models. The adaptive boosting model showed excellent predictive performance, achieving an AUC of 0.964 in the validation cohort, while the combined model achieved the highest accuracy, with an AUC of 0.985 in the validation cohort. The clinical-CT-radiomics nomogram demonstrates high predictive accuracy for endoscopic RC in cirrhotic patients with EV, providing a novel tool for non-invasive prediction of esophageal variceal bleeding.
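
A minimal sketch of the combined-model step described above: the four independent clinical-CT predictors and the Rad-score entered into a single logistic regression, evaluated by AUC on the held-out cohort. Feature names mirror the abstract; all values are synthetic stand-ins, not study data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 215
X = np.column_stack([
    rng.normal(25, 8, n),      # spleen-to-platelet ratio (synthetic)
    rng.normal(1100, 200, n),  # liver volume, mL (synthetic)
    rng.normal(10, 2, n),      # splenic vein diameter, mm (synthetic)
    rng.normal(11, 2, n),      # superior mesenteric vein diameter, mm (synthetic)
    rng.normal(0, 1, n),       # radiomics Rad-score (synthetic)
])
y = (X[:, 4] + rng.normal(0, 1, n) > 0).astype(int)  # synthetic RC labels

# 7:3 split matching the abstract (150 training / 65 validation):
model = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
auc = roc_auc_score(y[150:], model.predict_proba(X[150:])[:, 1])
print(f"validation AUC = {auc:.3f}")
```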

Imaging-Based Mortality Prediction in Patients with Systemic Sclerosis

Alec K. Peltekian, Karolina Senkow, Gorkem Durak, Kevin M. Grudzinski, Bradford C. Bemiss, Jane E. Dematte, Carrie Richardson, Nikolay S. Markov, Mary Carns, Kathleen Aren, Alexandra Soriano, Matthew Dapas, Harris Perlman, Aaron Gundersheimer, Kavitha C. Selvan, John Varga, Monique Hinchcliff, Krishnan Warrior, Catherine A. Gao, Richard G. Wunderink, GR Scott Budinger, Alok N. Choudhary, Anthony J. Esposito, Alexander V. Misharin, Ankit Agrawal, Ulas Bagci

arXiv preprint · Sep 27, 2025
Interstitial lung disease (ILD) is a leading cause of morbidity and mortality in systemic sclerosis (SSc). Chest computed tomography (CT) is the primary imaging modality for diagnosing and monitoring lung complications in SSc patients; however, its role in disease progression and mortality prediction has not yet been fully clarified. This study introduces a novel, large-scale longitudinal chest CT analysis framework that utilizes radiomics and deep learning to predict mortality associated with lung complications of SSc. We collected and analyzed 2,125 CT scans from SSc patients enrolled in the Northwestern Scleroderma Registry, conducting mortality analyses at one, three, and five years using advanced imaging analysis techniques. Death labels were assigned based on recorded deaths over the one-, three-, and five-year intervals, confirmed by expert physicians. In our dataset, 181, 326, and 428 of the 2,125 CT scans were from patients who died within one, three, and five years, respectively. We fine-tuned pre-trained ResNet-18, DenseNet-121, and Swin Transformer models on the 2,125 CT scans of SSc patients. The models achieved AUCs of 0.769, 0.801, and 0.709 for predicting mortality within one, three, and five years, respectively. Our findings highlight the potential of both radiomics and deep learning methods to improve early detection and risk assessment of SSc-related interstitial lung disease, marking a significant advancement in the literature.
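
A minimal sketch, under stated assumptions, of the transfer-learning setup the abstract describes: an ImageNet-pretrained ResNet-18 with its classification head replaced for binary (died vs. survived) prediction. CT preprocessing, data loading, and the full training loop are omitted; the dummy batch is purely illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained backbone, new two-class head (died / survived):
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One illustrative fine-tuning step on a dummy batch (real inputs would be CT-derived):
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```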

Single-step prediction of inferior alveolar nerve injury after mandibular third molar extraction using contrastive learning and bayesian auto-tuned deep learning model.

Yoon K, Choi Y, Lee M, Kim J, Kim JY, Kim JW, Choi J, Park W

PubMed · Sep 27, 2025
Inferior alveolar nerve (IAN) injury is a critical complication of mandibular third molar extraction. This study aimed to construct and evaluate a deep learning framework that integrates contrastive learning and Bayesian optimization to enhance predictive performance on cone-beam computed tomography (CBCT) and panoramic radiographs. A retrospective dataset of 902 panoramic radiographs and 1,500 CBCT images was used. Five deep learning architectures (MobileNetV2, ResNet101D, Vision Transformer, Twins-SVT, and SSL-ResNet50) were trained with and without contrastive learning and Bayesian optimization. Model performance was evaluated using accuracy, F1-score, and comparison with oral and maxillofacial surgeons (OMFSs). Contrastive learning significantly improved the F1-scores across all models (e.g., MobileNetV2: 0.302 to 0.740; ResNet101D: 0.188 to 0.689; Vision Transformer: 0.275 to 0.704; Twins-SVT: 0.370 to 0.719; SSL-ResNet50: 0.109 to 0.576). Bayesian optimization further enhanced the F1-scores: MobileNetV2 from 0.740 to 0.923, ResNet101D from 0.689 to 0.857, Vision Transformer from 0.704 to 0.871, Twins-SVT from 0.719 to 0.857, and SSL-ResNet50 from 0.576 to 0.875. The AI model outperformed OMFSs on CBCT cross-sectional images (F1-score: 0.923 vs. 0.667) but underperformed on panoramic radiographs (0.666 vs. 0.730). The proposed single-step deep learning approach effectively predicts IAN injury, with contrastive learning addressing data imbalance and Bayesian optimization tuning the models toward peak performance. While artificial intelligence surpasses human performance on CBCT images, panoramic radiograph analysis still benefits from expert interpretation. Future work should focus on multi-center validation and explainable artificial intelligence for broader clinical adoption.
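
The abstract does not name the Bayesian optimization tooling, so the sketch below uses Optuna's TPE sampler as one plausible realization; `train_and_eval` is a hypothetical stand-in for fine-tuning a contrastively pre-trained model and returning its validation F1-score:

```python
import optuna

def train_and_eval(lr: float, batch_size: int, temperature: float) -> float:
    """Hypothetical helper: train with a contrastive objective, return validation F1."""
    # Toy response surface standing in for real training:
    return 1.0 / (1.0 + abs(lr - 3e-4) * 1e3 + abs(temperature - 0.1))

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])
    temperature = trial.suggest_float("temperature", 0.05, 0.5)  # contrastive-loss temperature
    return train_and_eval(lr, batch_size, temperature)

study = optuna.create_study(direction="maximize", sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=30)
print(study.best_params)
```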

Generation of multimodal realistic computational phantoms as a test-bed for validating deep learning-based cross-modality synthesis techniques.

Camagni F, Nakas A, Parrella G, Vai A, Molinelli S, Vitolo V, Barcellini A, Chalaszczyk A, Imparato S, Pella A, Orlandi E, Baroni G, Riboldi M, Paganelli C

PubMed · Sep 27, 2025
The validation of multimodal deep learning models for medical image translation is limited by the lack of high-quality, paired datasets. We propose a novel framework that leverages computational phantoms to generate realistic CT and MRI images, enabling reliable ground-truth datasets for robust validation of artificial intelligence (AI) methods that generate synthetic CT (sCT) from MRI, specifically for radiotherapy applications. Two CycleGANs (cycle-consistent generative adversarial networks) were trained to transfer the imaging style of real patients onto CT and MRI phantoms, producing synthetic data with realistic textures and continuous intensity distributions. These data were evaluated through paired assessments with the original phantoms, unpaired comparisons with patient scans, and dosimetric analysis using patient-specific radiotherapy treatment plans. Additional external validation was performed on public CT datasets to assess generalizability to unseen data. The resulting paired CT/MRI phantoms were used to validate a previously published GAN-based model for sCT generation from abdominal MRI in particle therapy. Results showed strong anatomical consistency with the original phantoms, high histogram correlation with patient images (HistCC = 0.998 ± 0.001 for MRI, HistCC = 0.97 ± 0.04 for CT), and dosimetric accuracy comparable to real data. The novelty of this work lies in using generated phantoms as validation data for deep learning-based cross-modality synthesis techniques.
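
The histogram correlation coefficient (HistCC) reported above compares the intensity distributions of synthetic and real images. The paper's exact binning is not stated, so 256 bins over a shared range is an assumption in this sketch:

```python
import numpy as np

def hist_cc(img_a: np.ndarray, img_b: np.ndarray, bins: int = 256) -> float:
    """Pearson correlation between the two images' intensity histograms."""
    lo = min(img_a.min(), img_b.min())
    hi = max(img_a.max(), img_b.max())
    h_a, _ = np.histogram(img_a, bins=bins, range=(lo, hi), density=True)
    h_b, _ = np.histogram(img_b, bins=bins, range=(lo, hi), density=True)
    return float(np.corrcoef(h_a, h_b)[0, 1])

rng = np.random.default_rng(1)
real = rng.normal(40, 15, (128, 128))    # stand-in for a patient CT slice
styled = rng.normal(41, 15, (128, 128))  # stand-in for a style-transferred phantom slice
print(f"HistCC = {hist_cc(real, styled):.3f}")
```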

Enhanced diagnostic pipeline for maxillary sinus-maxillary molars relationships: a novel implementation of Detectron2 with faster R-CNN R50 FPN 3x on CBCT images.

Özemre MÖ, Bektaş J, Yanik H, Baysal L, Karslioğlu H

PubMed · Sep 27, 2025
The anatomical relationship between the maxillary sinus and maxillary molars is critical for planning dental procedures such as tooth extraction, implant placement, and periodontal surgery. This study presents a novel artificial intelligence-based approach for the detection and classification of these anatomical relationships in cone beam computed tomography (CBCT) images. The model, implemented in Detectron2 with a Faster R-CNN R50 FPN 3x backbone, automatically detects the relationship between the maxillary sinus and adjacent molars with high accuracy. The algorithm provided faster and more consistent results than traditional manual evaluations, reaching 89% accuracy in the classification of anatomical structures. With this technology, clinicians will be able to assess the risks of sinus perforation, oroantral fistula, and other surgical complications in the maxillary posterior region more accurately before surgery. By reducing the workload associated with CBCT analysis, the system accelerates the diagnostic process, improves treatment planning, and increases patient safety. It also has the potential to assist in the early detection of maxillary sinus pathologies and in planning sinus floor elevation procedures. These findings suggest that integrating AI-powered image analysis into daily dental practice can improve clinical decision-making in oral and maxillofacial surgery by providing accurate, efficient, and reliable diagnostic support.
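
For readers unfamiliar with the pipeline named in the title, a plausible Detectron2 configuration follows. The dataset names and class count are assumptions (the abstract gives no training details), and the datasets would need to be registered with Detectron2 before this runs:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("cbct_sinus_train",)  # hypothetical registered CBCT dataset
cfg.DATASETS.TEST = ("cbct_sinus_val",)     # hypothetical validation split
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3         # assumed number of sinus-molar relationship classes
cfg.SOLVER.BASE_LR = 2.5e-4
cfg.SOLVER.MAX_ITER = 5000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```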

Quantifying 3D foot and ankle alignment using an AI-driven framework: a pilot study.

Huysentruyt R, Audenaert E, Van den Borre I, Pižurica A, Duquesne K

PubMed · Sep 27, 2025
Accurate assessment of foot and ankle alignment through clinical measurements is essential for diagnosing deformities, treatment planning, and monitoring outcomes. Traditional 2D radiographs fail to fully represent the 3D complexity of the foot and ankle. In contrast, weight-bearing CT (WBCT) provides a 3D view of bone alignment under physiological loading. Nevertheless, manual landmark identification on WBCT remains time-intensive and prone to variability. This study presents a novel AI framework that automates foot and ankle alignment assessment via deep learning landmark detection. By training 3D U-Net models to predict 22 anatomical landmarks directly from WBCT images using heatmap predictions, our approach eliminates the need for segmentation and iterative mesh registration. A small dataset of 74 orthopedic patients, including foot-deformity cases such as pes cavus and planovalgus, was used to develop and evaluate the model in a clinically relevant population. The mean absolute error was assessed for each landmark and each angle using fivefold cross-validation. Mean absolute distance errors ranged from 1.00 mm for the proximal head center of the first phalanx to a maximum of 1.88 mm for the lowest point of the calcaneus. Automated clinical measurements derived from these landmarks achieved mean absolute errors between 0.91° for the hindfoot angle and a maximum of 2.90° for the Böhler angle. The heatmap-based AI approach enables automated foot and ankle alignment assessment from WBCT imaging, achieving accuracies comparable to the manual inter-rater variability reported in previous studies. This novel AI-driven method represents a potentially valuable approach for evaluating foot and ankle morphology. However, this exploratory study requires further evaluation with larger datasets to assess real-world clinical applicability.
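
A sketch of the heatmap scheme described above: each landmark is encoded as a 3D Gaussian target for the U-Net to regress, and coordinates are recovered from the predicted heatmap's peak. Grid size and sigma here are illustrative, not the study's values:

```python
import numpy as np

def gaussian_heatmap_3d(shape, center, sigma=2.0):
    """Target heatmap: a Gaussian blob centered on the landmark voxel."""
    zz, yy, xx = np.meshgrid(*(np.arange(s) for s in shape), indexing="ij")
    d2 = (zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def heatmap_to_landmark(heatmap):
    """Recover the landmark as the voxel with maximum predicted activation."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

target = gaussian_heatmap_3d((64, 64, 64), center=(30, 12, 45))
print(heatmap_to_landmark(target))  # -> (30, 12, 45)
```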

S$^3$F-Net: A Multi-Modal Approach to Medical Image Classification via Spatial-Spectral Summarizer Fusion Network

Md. Saiful Bari Siddiqui, Mohammed Imamul Hassan Bhuiyan

arXiv preprint · Sep 27, 2025
Convolutional Neural Networks have become a cornerstone of medical image analysis due to their proficiency in learning hierarchical spatial features. However, this focus on a single domain is inefficient at capturing global, holistic patterns and fails to explicitly model an image's frequency-domain characteristics. To address these challenges, we propose the Spatial-Spectral Summarizer Fusion Network (S$^3$F-Net), a dual-branch framework that learns from both spatial and spectral representations simultaneously. The S$^3$F-Net performs a fusion of a deep spatial CNN with our proposed shallow spectral encoder, SpectraNet. SpectraNet features the proposed SpectralFilter layer, which leverages the Convolution Theorem by applying a bank of learnable filters directly to an image's full Fourier spectrum via a computation-efficient element-wise multiplication. This allows the SpectralFilter layer to attain a global receptive field instantaneously, with its output being distilled by a lightweight summarizer network. We evaluate S$^3$F-Net across four medical imaging datasets spanning different modalities to validate its efficacy and generalizability. Our framework consistently and significantly outperforms its strong spatial-only baseline in all cases, with accuracy improvements of up to 5.13%. With a powerful Bilinear Fusion, S$^3$F-Net achieves a SOTA-competitive accuracy of 98.76% on the BRISC2025 dataset. Concatenation Fusion performs better on the texture-dominant Chest X-Ray Pneumonia dataset, achieving 93.11% accuracy, surpassing many top-performing, much deeper models. Our explainability analysis also reveals that the S$^3$F-Net learns to dynamically adjust its reliance on each branch based on the input pathology. These results verify that our dual-domain approach is a powerful and generalizable paradigm for medical image analysis.
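
One plausible PyTorch formulation of the SpectralFilter idea described above: learnable per-frequency weights applied to the image's full 2D Fourier spectrum by element-wise multiplication, giving every filter a global receptive field in a single step. The authors' exact parameterization and summarizer network are not specified in the abstract, so this is an assumption-laden sketch:

```python
import torch
import torch.nn as nn

class SpectralFilter(nn.Module):
    """Bank of learnable complex filters applied in the Fourier domain."""
    def __init__(self, height: int, width: int, num_filters: int):
        super().__init__()
        # One complex filter per output channel, stored as (real, imag) pairs:
        self.weights = nn.Parameter(torch.randn(num_filters, height, width, 2) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, 1, H, W)
        spectrum = torch.fft.fft2(x)                      # full Fourier spectrum
        filters = torch.view_as_complex(self.weights)     # (F, H, W), complex
        filtered = spectrum * filters.unsqueeze(0)        # element-wise, broadcast over batch
        return torch.fft.ifft2(filtered).real             # back to the spatial domain

layer = SpectralFilter(height=64, width=64, num_filters=8)
out = layer(torch.randn(2, 1, 64, 64))
print(out.shape)  # torch.Size([2, 8, 64, 64])
```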