Page 2 of 3813802 results

Artificial Intelligence to Detect Developmental Dysplasia of Hip: A Systematic Review.

Bhavsar S, Gowda BB, Bhavsar M, Patole S, Rao S, Rath C

PubMed · Sep 28 2025
Deep learning (DL), a branch of artificial intelligence (AI), has been applied to diagnose developmental dysplasia of the hip (DDH) on pelvic radiographs and ultrasound (US) images. This technology can potentially assist in early screening, enable timely intervention and improve cost-effectiveness. We conducted a systematic review to evaluate the diagnostic accuracy of DL algorithms in detecting DDH. PubMed, Medline, EMBASE, EMCARE, ClinicalTrials.gov, IEEE Xplore and Cochrane Library databases were searched in October 2024. Prospective and retrospective cohort studies that included children (< 16 years) at risk of or suspected to have DDH, and that reported AI-based analysis of hip US or X-ray images, were included. The review followed the guidelines of the Cochrane Collaboration Diagnostic Test Accuracy Working Group. Risk of bias was assessed using the QUADAS-2 tool. Twenty-three studies met the inclusion criteria: 15 (n = 8315) evaluated DDH on US images and eight (n = 7091) on pelvic radiographs. The area under the curve of the included studies ranged from 0.80 to 0.99 for pelvic radiographs and from 0.90 to 0.99 for US images. Sensitivity and specificity for detecting DDH on radiographs ranged from 92.86% to 100% and from 95.65% to 99.82%, respectively. For US images, sensitivity ranged from 86.54% to 100% and specificity from 62.5% to 100%. AI demonstrated effectiveness comparable to that of physicians in detecting DDH. However, limited evaluation on external datasets restricts its generalisability. Further research incorporating diverse datasets and real-world applications is needed to assess its broader clinical impact on DDH diagnosis.
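The sensitivity and specificity ranges quoted above all derive from confusion-matrix counts. A minimal sketch of that calculation (the counts below are invented for illustration, not taken from any included study):

```python
# Illustrative only: how sensitivity/specificity for a DDH classifier
# are computed from confusion-matrix counts.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard diagnostic-accuracy metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # true positive rate (recall)
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sensitivity, "specificity": specificity, "accuracy": accuracy}

# Hypothetical counts for a radiograph test set:
m = diagnostic_metrics(tp=93, fp=2, fn=7, tn=98)
print(f"sensitivity={m['sensitivity']:.2%}, specificity={m['specificity']:.2%}")
```

The same counts also yield the positive and negative predictive values, which shift with DDH prevalence even when sensitivity and specificity stay fixed.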

Beyond tractography in brain connectivity mapping with dMRI morphometry and functional networks.

Wang JT, Lin CP, Liu HM, Pierpaoli C, Lo CZ

PubMed · Sep 27 2025
Traditional brain connectivity studies have focused mainly on structural connectivity, often relying on tractography with diffusion MRI (dMRI) to reconstruct white matter pathways. In parallel, studies of functional connectivity have examined correlations in brain activity using fMRI. However, emerging methodologies are advancing our understanding of brain networks. Here we explore advanced connectivity approaches beyond conventional tractography, focusing on dMRI morphometry and the integration of structural and functional connectivity analysis. dMRI morphometry enables quantitative assessment of white matter pathway volumes through statistical comparison with normative populations, while functional connectivity reveals network organization that is not restricted to direct anatomical connections. More recently, approaches that combine diffusion tensor imaging (DTI) with functional correlation tensor (FCT) analysis have been introduced; these complementary methods offer new perspectives on brain structure-function relationships. Together, such approaches have important implications for neurodevelopmental and neurological disorders as well as brain plasticity. The integration of these methods with artificial intelligence techniques has the potential to support both basic neuroscience research and clinical applications.
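One of the scalar measures that DTI-based analyses like those above build on is fractional anisotropy (FA), computed from the three eigenvalues of the diffusion tensor. A minimal sketch (the eigenvalues below are illustrative, not from the article):

```python
import math

# Fractional anisotropy from diffusion-tensor eigenvalues:
# 0 for isotropic diffusion, approaching 1 for strongly directional diffusion
# such as along coherent white matter pathways.
def fractional_anisotropy(l1: float, l2: float, l3: float) -> float:
    num = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(0.5) * num / den

print(fractional_anisotropy(1.0, 1.0, 1.0))  # isotropic diffusion -> 0.0
print(fractional_anisotropy(1.7, 0.3, 0.3))  # elongated tensor, e.g. white matter
```

FCT analysis applies an analogous tensor decomposition to local fMRI signal correlations rather than water diffusion, which is what makes the two views complementary.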

Quantifying 3D foot and ankle alignment using an AI-driven framework: a pilot study.

Huysentruyt R, Audenaert E, Van den Borre I, Pižurica A, Duquesne K

PubMed · Sep 27 2025
Accurate assessment of foot and ankle alignment through clinical measurements is essential for diagnosing deformities, planning treatment, and monitoring outcomes. Traditional 2D radiographs fail to fully represent the 3D complexity of the foot and ankle. In contrast, weight-bearing CT (WBCT) provides a 3D view of bone alignment under physiological loading. Nevertheless, manual landmark identification on WBCT remains time-intensive and prone to variability. This study presents a novel AI framework that automates foot and ankle alignment assessment via deep learning landmark detection. By training 3D U-Net models to predict heatmaps for 22 anatomical landmarks directly from WBCT images, our approach eliminates the need for segmentation and iterative mesh registration methods. A small dataset of 74 orthopedic patients, including foot deformity cases such as pes cavus and planovalgus, was used to develop and evaluate the model in a clinically relevant population. The mean absolute error was assessed for each landmark and each angle using fivefold cross-validation. Mean absolute distance errors ranged from 1.00 mm for the proximal head center of the first phalanx to a maximum of 1.88 mm for the lowest point of the calcaneus. Automated clinical measurements derived from these landmarks achieved mean absolute errors between 0.91° for the hindfoot angle and a maximum of 2.90° for the Böhler angle. The heatmap-based AI approach enables automated foot and ankle alignment assessment from WBCT imaging, achieving accuracies comparable to the manual inter-rater variability reported in previous studies. This novel AI-driven method represents a potentially valuable approach for evaluating foot and ankle morphology. However, this exploratory study requires further evaluation with larger datasets to assess its clinical applicability.
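The heatmap idea can be sketched in a few lines: each landmark is trained against a 3D Gaussian "blob" centred on the true point, and at inference the landmark is recovered as the voxel of maximum response. Grid size and sigma here are illustrative assumptions, not the study's settings:

```python
import math

# Build a 3D Gaussian heatmap target for one landmark, then decode the
# landmark position back out via the argmax over voxels.
def gaussian_heatmap(shape, center, sigma=2.0):
    zc, yc, xc = center
    hm = {}
    for z in range(shape[0]):
        for y in range(shape[1]):
            for x in range(shape[2]):
                d2 = (z - zc) ** 2 + (y - yc) ** 2 + (x - xc) ** 2
                hm[(z, y, x)] = math.exp(-d2 / (2 * sigma ** 2))
    return hm

heatmap = gaussian_heatmap((16, 16, 16), center=(5, 8, 11))
predicted = max(heatmap, key=heatmap.get)  # argmax decodes the landmark
print(predicted)  # -> (5, 8, 11)
```

Regressing heatmaps rather than raw coordinates gives the network a dense, spatially smooth target, which is one reason this formulation can work with as few as 74 patients.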

Single-step prediction of inferior alveolar nerve injury after mandibular third molar extraction using contrastive learning and bayesian auto-tuned deep learning model.

Yoon K, Choi Y, Lee M, Kim J, Kim JY, Kim JW, Choi J, Park W

PubMed · Sep 27 2025
Inferior alveolar nerve (IAN) injury is a critical complication of mandibular third molar extraction. This study aimed to construct and evaluate a deep learning framework that integrates contrastive learning and Bayesian optimization to enhance predictive performance on cone-beam computed tomography (CBCT) and panoramic radiographs. A retrospective dataset of 902 panoramic radiographs and 1,500 CBCT images was used. Five deep learning architectures (MobileNetV2, ResNet101D, Vision Transformer, Twins-SVT, and SSL-ResNet50) were trained with and without contrastive learning and Bayesian optimization. Model performance was evaluated using accuracy, F1-score, and comparison with oral and maxillofacial surgeons (OMFSs). Contrastive learning significantly improved the F1-scores across all models (e.g., MobileNetV2: 0.302 to 0.740; ResNet101D: 0.188 to 0.689; Vision Transformer: 0.275 to 0.704; Twins-SVT: 0.370 to 0.719; SSL-ResNet50: 0.109 to 0.576). Bayesian optimization further enhanced the F1-scores for MobileNetV2 (from 0.740 to 0.923), ResNet101D (from 0.689 to 0.857), Vision Transformer (from 0.704 to 0.871), Twins-SVT (from 0.719 to 0.857), and SSL-ResNet50 (from 0.576 to 0.875). The AI model outperformed OMFSs on CBCT cross-sectional images (F1-score: 0.923 vs. 0.667) but underperformed on panoramic radiographs (0.666 vs. 0.730). The proposed single-step deep learning approach effectively predicts IAN injury, with contrastive learning addressing data imbalance and Bayesian optimization tuning model performance. While artificial intelligence surpasses human performance on CBCT images, panoramic radiograph analysis still benefits from expert interpretation. Future work should focus on multi-center validation and explainable artificial intelligence for broader clinical adoption.
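The contrastive component can be illustrated with an InfoNCE/NT-Xent-style loss, which pulls an anchor embedding towards its positive and away from negatives; this is a generic sketch of the family of losses involved, not the paper's exact formulation:

```python
import math

# Generic InfoNCE-style contrastive loss over one anchor: low when the anchor
# is most similar to its positive, high when a negative is more similar.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(anchor, positive, negatives, temperature=0.1):
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)                                   # numerical stability shift
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))         # -log softmax of the positive

anchor = [1.0, 0.0]
negatives = [[0.0, 1.0], [-1.0, 0.2]]
loss_aligned = info_nce(anchor, [0.9, 0.1], negatives)     # positive near anchor
loss_misaligned = info_nce(anchor, [0.1, 0.9], negatives)  # positive far from anchor
print(loss_aligned, loss_misaligned)
```

Because the loss is driven by relative similarities rather than class frequencies, it degrades more gracefully than plain cross-entropy when injury-positive cases are rare, which matches the data-imbalance role the abstract attributes to contrastive learning.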

Development of a clinical-CT-radiomics nomogram for predicting endoscopic red color sign in cirrhotic patients with esophageal varices.

Han J, Dong J, Yan C, Zhang J, Wang Y, Gao M, Zhang M, Chen Y, Cai J, Zhao L

PubMed · Sep 27 2025
To evaluate the predictive performance of a clinical-CT-radiomics nomogram based on a radiomics signature and independent clinical-CT predictors for predicting the endoscopic red color sign (RC) in cirrhotic patients with esophageal varices (EV). We retrospectively evaluated 215 cirrhotic patients. Among them, 108 and 107 cases were positive and negative for endoscopic RC, respectively. Patients were assigned to a training cohort (n = 150) and a validation cohort (n = 65) at a 7:3 ratio. In the training cohort, univariate and multivariate logistic regression analyses were performed on clinical and CT features to develop a clinical-CT model. Radiomic features were extracted from portal venous phase CT images to generate a radiomics score (Rad-score) and to construct five machine learning models. A combined model was built from the clinical-CT predictors and the Rad-score through logistic regression. The performance of the different models was evaluated using receiver operating characteristic (ROC) curves and the area under the curve (AUC). The spleen-to-platelet ratio, liver volume, splenic vein diameter, and superior mesenteric vein diameter were independent predictors. Six radiomics features were selected to construct five machine learning models. The adaptive boosting model showed excellent predictive performance, achieving an AUC of 0.964 in the validation cohort, while the combined model achieved the highest predictive accuracy with an AUC of 0.985 in the validation cohort. The clinical-CT-radiomics nomogram demonstrates high predictive accuracy for endoscopic RC in cirrhotic patients with EV, providing a novel tool for non-invasive prediction of esophageal variceal bleeding.
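The AUC values used to rank these models have a simple probabilistic reading: the AUC is the probability that a randomly chosen RC-positive patient receives a higher model score than a randomly chosen RC-negative one (the Mann-Whitney formulation). A minimal sketch with invented scores:

```python
# AUC via the Mann-Whitney rank statistic: the fraction of positive/negative
# pairs in which the positive case outscores the negative one (ties count 0.5).
def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores and RC labels (1 = RC-positive):
scores = [0.95, 0.80, 0.70, 0.40, 0.30, 0.10]
labels = [1, 1, 0, 1, 0, 0]
print(auc(scores, labels))
```

On this toy example one positive/negative pair is mis-ranked, so the AUC is 8/9; an AUC of 0.985, as reported for the combined model, means almost every such pair is ranked correctly.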

Enhanced diagnostic pipeline for maxillary sinus-maxillary molars relationships: a novel implementation of Detectron2 with faster R-CNN R50 FPN 3x on CBCT images.

Özemre MÖ, Bektaş J, Yanik H, Baysal L, Karslioğlu H

PubMed · Sep 27 2025
The anatomical relationship between the maxillary sinus and maxillary molars is critical for planning dental procedures such as tooth extraction, implant placement and periodontal surgery. This study presents a novel artificial intelligence-based approach for the detection and classification of these anatomical relationships in cone beam computed tomography (CBCT) images. The model, developed with the Detectron2 framework using a Faster R-CNN R50 FPN 3x configuration, can automatically detect the relationship between the maxillary sinus and adjacent molars with high accuracy. The artificial intelligence algorithm used in our study provided faster and more consistent results than traditional manual evaluations, reaching 89% accuracy in the classification of anatomical structures. With this technology, clinicians will be able to more accurately assess the risks of sinus perforation, oroantral fistula and other surgical complications in the maxillary posterior region preoperatively. By reducing the workload associated with CBCT analysis, the system accelerates clinicians' diagnostic process, improves treatment planning and increases patient safety. It also has the potential to assist in the early detection of maxillary sinus pathologies and the planning of sinus floor elevation procedures. These findings suggest that integrating AI-powered image analysis solutions into daily dental practice can improve clinical decision-making in oral and maxillofacial surgery by providing accurate, efficient and reliable diagnostic support.
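Detection models like Faster R-CNN are scored by how well predicted boxes overlap ground truth, measured by intersection-over-union (IoU). A minimal sketch of that overlap measure (coordinates are illustrative):

```python
# Intersection-over-union between two axis-aligned boxes (x1, y1, x2, y2),
# the standard overlap criterion for matching detections to ground truth.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # -> 25 / 175 ≈ 0.143
```

A detection typically counts as correct only above an IoU threshold (0.5 is a common default), so the reported 89% classification accuracy rests on boxes that first pass such a match.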

[Advances in the application of artificial intelligence for pulmonary function assessment based on chest imaging in thoracic surgery].

Huang LC, Liang HR, Jiang Y, Lin YC, He JX

PubMed · Sep 27 2025
In recent years, lung function assessment has attracted increasing attention in the perioperative management of thoracic surgery. However, traditional pulmonary function testing methods remain limited in clinical practice due to high equipment requirements and complex procedures. With the rapid development of artificial intelligence (AI) technology, lung function assessment based on multimodal chest imaging (such as X-rays, CT, and MRI) has become a new research focus. Through deep learning algorithms, AI models can accurately extract imaging features of patients and have made significant progress in quantitative analysis of pulmonary ventilation, evaluation of diffusion capacity, measurement of lung volumes, and prediction of lung function decline. Previous studies have demonstrated that AI models perform well in predicting key indicators such as forced expiratory volume in one second (FEV1), diffusing capacity for carbon monoxide (DLCO), and total lung capacity (TLC). Despite these promising prospects, challenges remain in clinical translation, including insufficient data standardization, limited model interpretability, and the lack of prediction models for postoperative complications. In the future, greater emphasis should be placed on multicenter collaboration, the construction of high-quality databases, the promotion of multimodal data integration, and clinical validation to further enhance the application value of AI technology in precision decision-making for thoracic surgery.

Generation of multimodal realistic computational phantoms as a test-bed for validating deep learning-based cross-modality synthesis techniques.

Camagni F, Nakas A, Parrella G, Vai A, Molinelli S, Vitolo V, Barcellini A, Chalaszczyk A, Imparato S, Pella A, Orlandi E, Baroni G, Riboldi M, Paganelli C

PubMed · Sep 27 2025
The validation of multimodal deep learning models for medical image translation is limited by the lack of high-quality, paired datasets. We propose a novel framework that leverages computational phantoms to generate realistic CT and MRI images, enabling reliable ground-truth datasets for robust validation of artificial intelligence (AI) methods that generate synthetic CT (sCT) from MRI, specifically for radiotherapy applications. Two CycleGANs (cycle-consistent generative adversarial networks) were trained to transfer the imaging style of real patients onto CT and MRI phantoms, producing synthetic data with realistic textures and continuous intensity distributions. These data were evaluated through paired assessments with the original phantoms, unpaired comparisons with patient scans, and dosimetric analysis using patient-specific radiotherapy treatment plans. Additional external validation was performed on public CT datasets to assess generalizability to unseen data. The resulting paired CT/MRI phantoms were used to validate a GAN-based model from the literature for sCT generation from abdominal MRI in particle therapy. Results showed strong anatomical consistency with the original phantoms, high histogram correlation with patient images (HistCC = 0.998 ± 0.001 for MRI, HistCC = 0.97 ± 0.04 for CT), and dosimetric accuracy comparable to real data. The novelty of this work lies in using generated phantoms as validation data for deep learning-based cross-modality synthesis techniques.
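The HistCC values quoted above compare intensity distributions rather than individual voxels; one common reading of such a metric is the Pearson correlation between the bin counts of two histograms. A sketch under that assumption (bin counts are invented):

```python
import math

# Histogram correlation: Pearson correlation between the bin counts of two
# intensity histograms. 1.0 means the two distributions have identical shape.
def hist_cc(h1, h2):
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    s1 = math.sqrt(sum((a - m1) ** 2 for a in h1))
    s2 = math.sqrt(sum((b - m2) ** 2 for b in h2))
    return cov / (s1 * s2)

# Hypothetical bin counts for a real and a synthesized image:
real = [5, 20, 60, 90, 40, 10]
synthetic = [4, 22, 58, 91, 38, 12]
print(hist_cc(real, real))       # identical histograms -> 1.0
print(hist_cc(real, synthetic))
```

Because it ignores spatial arrangement, a high HistCC is evidence of realistic texture statistics, which is why the authors pair it with anatomical and dosimetric checks.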

COVID-19 Pneumonia Diagnosis Using Medical Images: Deep Learning-Based Transfer Learning Approach.

Dharmik A

PubMed · Sep 26 2025
SARS-CoV-2, the causative agent of COVID-19, remains a global health concern due to its high transmissibility and evolving variants. Although vaccination efforts and therapeutic advancements have mitigated disease severity, emerging mutations continue to challenge diagnostics and containment strategies. As of mid-February 2025, global test positivity has risen to 11%, marking the highest level in over 6 months, despite widespread immunization efforts. Newer variants demonstrate enhanced host cell binding, increasing both infectivity and diagnostic complexity. This study aimed to evaluate the effectiveness of deep transfer learning in delivering a rapid, accurate, and mutation-resilient COVID-19 diagnosis from medical imaging, with a focus on scalability and accessibility. An automated detection system was developed using state-of-the-art convolutional neural networks, including VGG16 (Visual Geometry Group network-16 layers), ResNet50 (residual network-50 layers), ConvNeXtTiny (convolutional next-tiny), MobileNet (mobile network), NASNetMobile (neural architecture search network-mobile version), and DenseNet121 (densely connected convolutional network-121 layers), to detect COVID-19 from chest X-ray and computed tomography (CT) images. Among all the models evaluated, DenseNet121 emerged as the best-performing architecture for COVID-19 diagnosis using X-ray and CT images. It achieved an impressive accuracy of 98%, with a precision of 96.9%, a recall of 98.9%, an F1-score of 97.9%, and an area under the curve score of 99.8%, indicating a high degree of consistency and reliability in detecting both positive and negative cases. The confusion matrix showed minimal false positives and false negatives, underscoring the model's robustness in real-world diagnostic scenarios. Given its performance, DenseNet121 is a strong candidate for deployment in clinical settings and serves as a benchmark for future improvements in artificial intelligence-assisted diagnostic tools. 
The study results underscore the potential of artificial intelligence-powered diagnostics in supporting early detection and global pandemic response. With careful optimization, deep learning models can address critical gaps in testing, particularly in settings constrained by limited resources or emerging variants.
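The DenseNet121 figures quoted above are internally consistent: the F1-score is the harmonic mean of precision and recall, so 96.9% precision and 98.9% recall should reproduce the reported 97.9%. A quick check:

```python
# F1 as the harmonic mean of precision and recall.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.969, 0.989)
print(f"{f1:.1%}")  # -> 97.9%
```

Because the harmonic mean is dominated by the smaller of the two inputs, a model cannot buy a high F1 by inflating recall at the cost of precision, which makes it a sensible headline metric for an imbalanced diagnostic task.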

Ultra-low-field MRI: a David versus Goliath challenge in modern imaging.

Gagliardo C, Feraco P, Contrino E, D'Angelo C, Geraci L, Salvaggio G, Gagliardo A, La Grutta L, Midiri M, Marrale M

PubMed · Sep 26 2025
Ultra-low-field magnetic resonance imaging (ULF-MRI), operating below 0.2 Tesla, is gaining renewed interest as a re-emerging diagnostic modality in a field dominated by high- and ultra-high-field systems. Recent advances in magnet design, RF coils, pulse sequences, and AI-based reconstruction have significantly enhanced image quality, mitigating traditional limitations such as low signal- and contrast-to-noise ratios and reduced spatial resolution. ULF-MRI offers distinct advantages: reduced susceptibility artifacts, safer imaging in patients with metallic implants, low power consumption, and true portability for point-of-care use. This narrative review synthesizes the physical foundations, technological advances, and emerging clinical applications of ULF-MRI. A focused literature search across PubMed, Scopus, IEEE Xplore, and Google Scholar was conducted up to August 11, 2025, using combined keywords targeting hardware, software, and clinical domains. Inclusion emphasized scientific rigor and thematic relevance. A comparative analysis with other imaging modalities highlights the specific niche ULF-MRI occupies within the broader diagnostic landscape. Future directions and challenges for clinical translation are explored. In a world increasingly polarized between the push for ultra-high-field excellence and the need for accessible imaging, ULF-MRI embodies a modern "David versus Goliath" theme, offering a sustainable, democratizing force capable of expanding MRI access to anyone, anywhere.
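The SNR penalty that ULF systems must overcome can be sketched with a back-of-the-envelope scaling: intrinsic MRI SNR grows roughly as a power of the main field strength B0, with exponents between about 1 and 7/4 quoted in the literature depending on the noise regime. The field strengths and exponent below are illustrative assumptions, not figures from this review:

```python
# Rough relative SNR of a low-field versus high-field system under an assumed
# power-law scaling SNR ~ B0**exponent (exponent is regime-dependent).
def relative_snr(b_low: float, b_high: float, exponent: float = 1.5) -> float:
    return (b_low / b_high) ** exponent

# A hypothetical 64 mT portable scanner versus a 3 T clinical system:
print(f"{relative_snr(0.064, 3.0):.4f}")
```

Under these assumptions the raw SNR deficit is two to three orders of magnitude, which is precisely the gap that the hardware and AI-reconstruction advances described above are working to close.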
