
Yasin P, Yimit Y, Abulimiti A, Luan H, Peng C, Yakufu M, Song X

PubMed | Oct 30 2025
Knee abnormalities, such as meniscus tears and ligament injuries, are common in clinical practice and pose significant diagnostic challenges. Traditional imaging techniques (X-ray, computed tomography [CT], and magnetic resonance imaging [MRI]) are vital for assessment, but X-rays and CT scans often fail to adequately visualize soft tissue injuries, and MRI can be costly and time-consuming. To overcome these limitations, we developed an innovative AI-driven approach that detects soft tissue abnormalities directly from X-ray images, a capability traditionally reserved for MRI or arthroscopy. We conducted a retrospective study of 4,215 patients from two medical centers, using knee X-ray images annotated by orthopedic surgeons. The YOLOv11 model automated knee localization, while five convolutional neural networks (ResNet152, DenseNet121, MobileNetV3, ShuffleNetV2, and VGG19) were adapted for multi-label classification of eight conditions: meniscus tears (MENI), anterior cruciate ligament tears (ACL), posterior cruciate ligament injuries (PCL), medial collateral ligament injuries (MCL), lateral collateral ligament injuries (LCL), joint effusion (EFFU), bone marrow edema or contusion (CONT), and soft tissue injuries (STI). Data preprocessing involved normalization and region-of-interest (ROI) extraction, with training enhanced by spatial augmentations. Performance was assessed using mean average precision (mAP), F1-scores, and area under the curve (AUC). We also developed a Windows-based PyQt application and a Flask web application for clinical integration, incorporating explainable AI techniques (GradCAM, ScoreCAM) for interpretability. The YOLOv11 model achieved precise knee localization with a mAP@0.5 of 0.995. In classification, ResNet152 outperformed the others, recording a mAP of 90.1% in internal testing and AUCs up to 0.863 (EFFU) in external testing.
End-to-end performance on the external set yielded a mAP of 86.1% and F1-scores of 84.0% with ResNet152. The Windows and web applications successfully processed imaging data, aligning with MRI and arthroscopic findings in cases like ACL and meniscus tears. Explainable AI visualizations clarified model decisions, highlighting key regions for complex injuries, such as concurrent ligament and soft tissue damage, enhancing clinical trust. This AI-driven model markedly improved the precision and efficiency of knee abnormality detection through X-ray analysis. By accurately identifying multiple coexisting conditions in a single pass, it offered a scalable tool to enhance diagnostic workflows and patient outcomes, especially in resource-constrained areas.
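The mAP figures quoted above can be illustrated with a minimal sketch of per-condition average precision (AP) and its mean over labels. This is our own pure-Python illustration of the standard metric, not code from the paper; function names and example values are invented.

```python
def average_precision(scores, labels):
    """AP for one condition: `labels` are 0/1 ground truth, `scores` are
    model confidences. Precision is averaged at each true-positive rank."""
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    hits, precisions = 0, []
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(score_matrix, label_matrix):
    """mAP over conditions (e.g. MENI, ACL, ..., STI): one AP per column."""
    n_labels = len(score_matrix[0])
    aps = [average_precision([row[j] for row in score_matrix],
                             [row[j] for row in label_matrix])
           for j in range(n_labels)]
    return sum(aps) / n_labels
```

With perfect score rankings, each per-label AP (and hence the mAP) is 1.0; mixed rankings pull the average down label by label.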

Javadi M, Griffin R, Tsiamyrtzis P, Leiss E, Webb AG, Tsekos NV

PubMed | Oct 30 2025
Deep learning (DL) methods are increasingly applied to address the low signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of low-field MRI (LFMRI). This study evaluates the potential of diffusion models for LFMRI enhancement, comparing Super-resolution via Repeated Refinement (SR3), a generative diffusion model, to traditional architectures such as CycleGAN and UNet for translating LFMRI to high-field MRI (HFMRI). Using synthetic LFMRI (64mT) FLAIR brain images generated from the BraTS 2019 dataset (3T), the models were assessed with traditional metrics, including structural similarity index (SSIM) and normalized root-mean-squared error (nRMSE), alongside specialized structural error measurements such as gradient entropy (gEn), gradient error (GE), and perception-based image quality evaluator (PIQE). SR3 significantly outperformed the other models across all metrics (p < 0.05), achieving SSIM scores over 0.97 and excelling in preserving pathological structures such as necrotic core and edema, with lower gEn and GE values. These findings suggest diffusion models are a robust alternative to conventional DL approaches for LF-to-HF MRI translation. By preserving structural details and enhancing image quality, SR3 could improve the clinical utility of LFMRI systems, making high-quality MRI more accessible. This work demonstrates the potential of diffusion models in advancing medical image enhancement and translation.
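Of the metrics listed above, nRMSE is the simplest to state exactly: the root-mean-squared error between translated and reference images, divided by a normalization factor. A minimal sketch follows, assuming range normalization (dividing by the reference's intensity span); the paper may use a different convention, such as dividing by the mean.

```python
import math

def nrmse(reference, estimate):
    """Normalized RMSE between two equal-length intensity sequences,
    normalized by the reference's intensity range."""
    n = len(reference)
    mse = sum((r - e) ** 2 for r, e in zip(reference, estimate)) / n
    span = max(reference) - min(reference)
    return math.sqrt(mse) / span
```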

Kim JH, Kim HS, Lee JH, Yoon YC, Lee SA, Chalian M, Choi BO

PubMed | Oct 30 2025
We evaluated the potential utility of imaging parameters derived by normalizing muscle signal intensity on T1-weighted lower leg MRIs in Charcot-Marie-Tooth disease type 1A (CMT1A) patients, using a deep learning-based automated muscle segmentation model. We retrospectively analyzed lower leg MRI data of 107 CMT1A patients. An automated deep learning-based muscle segmentation model was employed to extract muscle signal intensities from four compartments (anterior, lateral, deep posterior, and superficial posterior) of the lower leg. Mean normalized signal intensities (MNSI) were calculated by dividing the mean signal intensity of each segmented muscle compartment by the reference signal intensity for each patient. Correlations between MNSIs and clinical parameters (Charcot-Marie-Tooth Neuropathy Score version 2, functional disability scale [FDS] score, 10-m walk test time, and 9-hole peg test time) were assessed using partial correlation analysis adjusting for age and body mass index. The MNSIs of the anterior, lateral, deep posterior, and superficial posterior compartments of the lower legs, as well as the total MNSI, showed significant positive correlations with all clinical measures, suggesting that higher MNSI values are associated with more severe disease (p < 0.05). The strongest correlation was observed between the MNSI of the anterior compartment and the FDS score (r = 0.57). MNSIs of the muscle compartments in lower leg MRI, obtained using an automated segmentation model, demonstrated significant correlations with clinical parameters in CMT1A patients.
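The MNSI definition above (compartment mean divided by a per-patient reference) can be sketched in a few lines. This is an illustration of the stated formula only; the data layout, function names, and example values are ours, not the study's.

```python
def mnsi(signal, mask, reference_signal):
    """Mean normalized signal intensity for one compartment.
    `signal`: flat voxel intensities; `mask`: matching 0/1 membership list."""
    vals = [s for s, m in zip(signal, mask) if m]
    return (sum(vals) / len(vals)) / reference_signal

def compartment_mnsis(signal, masks, reference_signal):
    """One MNSI per compartment (anterior, lateral, deep posterior,
    superficial posterior), given segmentation masks keyed by name."""
    return {name: mnsi(signal, m, reference_signal)
            for name, m in masks.items()}
```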

Li Q, Zhao H, Zhai Y, Li F

PubMed | Oct 30 2025
To improve the precision of medical image segmentation for enhanced clinical diagnosis and treatment, this study focuses on overcoming the limitations of existing models in capturing multi-scale information under resolution constraints while maintaining efficiency without compromising accuracy. We developed a multi-scale dense dilated Transformer (MDFormer), which integrates the multi-scale dense dilated self-attention (MDDSA) module. This module dynamically adjusts the size of the dense dilated matrix based on the resolution characteristics, enabling spatial downsampling at a fixed resolution and cross-scale information aggregation to effectively reduce computational costs. Our model achieved a Dice similarity coefficient (DSC) of 92.7% on the ACDC dataset and 86.88% on the Synapse dataset, with 37.7M parameters and 47.39G FLOPs. Paired t-tests were performed, demonstrating statistically significant improvements in segmentation performance compared to other models. The proposed MDFormer significantly enhances medical image segmentation through the effective leveraging of multi-scale information extraction, demonstrating promising potential in various medical image segmentation tasks and laying a solid foundation for future advancements in transformer-based models.
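The DSC reported above is the standard overlap measure for segmentation masks, DSC = 2|A∩B| / (|A| + |B|). A minimal sketch for flattened binary masks (our own illustration, not MDFormer code):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two 0/1 mask sequences."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Empty-vs-empty is conventionally scored as perfect agreement.
    return 2 * inter / total if total else 1.0
```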

Zheng B, Yu P, Zhu Z, Liang Y, Liu H

PubMed | Oct 30 2025
This study explores the use of radiomic features extracted from preoperative T2-weighted MRI and CT images, combined with machine learning models, to predict the risk of vertebral refracture after percutaneous kyphoplasty (PKP) in postmenopausal women. We retrospectively collected data from 156 postmenopausal women with osteoporotic vertebral compression fractures (OVCFs) who underwent PKP (35 refracture cases, 121 non-refracture controls). All patients had preoperative lumbar T2-weighted MRI and CT scans. We extracted MRI and CT radiomic features and constructed radiomic signatures through feature selection. Key clinical factors (age, body mass index [BMI], vertebral CT Hounsfield unit [HU] values, smoking history, diabetes history, alcohol use, etc.) were used to build clinical prediction models. Various machine learning classifiers (Support Vector Machine [SVM], K-Nearest Neighbors [KNN], Random Forest [RF], ExtraTrees, XGBoost, LightGBM, Multi-layer Perceptron [MLP]) were trained on the radiomic signatures and clinical factors. Model performance was evaluated on an independent test set using area under the ROC curve (AUC) as the primary metric. Accuracy, sensitivity, specificity, and other measures on the test set were compared between radiomic models, clinical models, and a combined model. The refracture group (n = 35, 22.4%) was significantly older (72.09 ± 4.25 vs 70.11 ± 3.31 years, P = 0.002) with lower vertebral bone density (97.00 ± 6.31 vs 102.49 ± 4.68 HU, P < 0.001). Among individual algorithms, the KNN clinical model achieved optimal performance (AUC = 0.74), while the SVM radiomics model demonstrated the best accuracy (AUC = 0.798, accuracy = 0.839, sensitivity = 0.857, specificity = 0.833). The combined model achieved superior performance (AUC = 0.886), significantly outperforming both standalone models. Multi-modal radiomics combined with key clinical factors provides superior prediction of refracture risk after PKP.
This approach offers clinicians an objective tool for individualized risk stratification, representing a meaningful step toward precision medicine in managing osteoporotic fractures.
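The AUC used as the primary metric above has a direct probabilistic reading: the chance that a randomly chosen refracture case is assigned a higher predicted risk than a randomly chosen control, with ties counted as half. A small sketch computing it from that definition (our own illustration; the study would have used a library implementation):

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via pairwise comparison:
    fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise form is O(n²) but makes the metric's meaning explicit; efficient implementations sort once and use ranks.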

Leke AZ, Sop Deffo LL, Wirsiy YS, Aldersley T, Day T, King AP, McAllister P, Maboh MN, Lawrenson J, Tantchou C, Kainz B, Casey F, Bond R, Finlay D, Kelson Tchinda N, Obale A, Mugri FN, Zühlke L, Dolk H

PubMed | Oct 30 2025
Sub-Saharan Africa (SSA) bears the highest global burden of under-5 mortality, with congenital heart disease (CHD) as a major contributor. Despite advancements in high-income countries, CHD-related mortality in SSA remains largely unchanged due to limited diagnostic capacity and centralized health care. While pulse oximetry aids early detection, confirmation typically relies on echocardiography, a procedure constrained by a shortage of specialized personnel. Artificial intelligence (AI) offers a promising solution to bridge this diagnostic gap. This study aims to develop an AI-assisted echocardiography system that enables nonexpert operators, such as nurses, midwives, and medical doctors, to perform basic cardiac ultrasound sweeps on neonates suspected of CHD and extract accurate cardiac images for remote interpretation by a pediatric cardiologist. The study will use a 2-phase approach to develop a deep learning model for real-time cardiac view detection in neonatal echocardiography, utilizing data from St. Padre Pio Hospital in Cameroon and the Red Cross War Memorial Children's Hospital in South Africa to ensure demographic diversity. In phase 1, the model will be pretrained on retrospective data from nearly 500 neonates (0-28 days old). Phase 2 will fine-tune the model using prospective data from 1000 neonates, which include background elements absent in the retrospective dataset, enabling adaptation to local clinical environments. The datasets will consist of short and continuous echocardiographic video clips covering 10 standard cardiac views, as defined by the American Society of Echocardiography. The model architecture will leverage convolutional neural networks and convolutional long short-term memory layers, inspired by the interleaved visual memory framework, which integrates fast and slow feature extractors via a shared temporal memory mechanism. 
Video preprocessing, annotation with predefined cardiac view codes using Labelbox, and training with TensorFlow and PyTorch will be performed. Reinforcement learning will guide the dynamic use of feature extractors during training. Iterative refinement, informed by clinical input, will ensure that the model effectively distinguishes correct from incorrect views in real time, enhancing its usability in resource-limited settings. Retrospective data collection for the project began in September 2024, and to date, data from 308 babies have been collected and labeled. In parallel, the initial model framework has been developed and training initiated using a subset of the labeled data. The project is currently in the intensive execution phase, with all objectives progressing in parallel and final results expected within 10 months. The AI-assisted echocardiography model developed in this project holds promise for improving early CHD diagnosis and care in SSA and other low-resource settings. DERR1-10.2196/75270.

Coffee E, Elshiekh C, Budhu JA

PubMed | Oct 30 2025
Brain tumors are a diverse group of neoplasms that vary widely in treatment and prognosis. Imaging serves as the cornerstone of diagnosis, treatment-response monitoring, and identification of disease progression in neuro-oncologic care. This review outlines current and emerging imaging modalities with a focus on clinical application in glioma, meningioma, and brain metastasis. We cover standard imaging modalities, advanced magnetic resonance techniques such as perfusion and spectroscopic imaging, and nuclear imaging with positron emission tomography (PET), including amino acid PET. We summarize the standardized Response Assessment in Neuro-Oncology (RANO) criteria and explore innovations in radiomics, artificial intelligence, and targeted imaging biomarkers. Finally, we address challenges related to equitable access to advanced imaging. This review provides a practical, clinically focused guide to support neurologists in the imaging-based care of patients with primary or metastatic brain tumors.

Candito A, Blackledge MD, Holbrey R, Porta N, Ribeiro A, Zugni F, D'Erme L, Castagnoli F, Dragan AD, Donners R, Messiou C, Tunariu N, Koh DM

PubMed | Oct 30 2025
Quantitative assessment of treatment response in advanced prostate cancer (APC) with bone metastases remains an unmet clinical need. Whole-body diffusion-weighted MRI (WB-DWI) provides two response biomarkers: total diffusion volume (TDV) and global apparent diffusion coefficient (gADC). However, tracking post-treatment changes of TDV and gADC from manually delineated lesions is cumbersome and increases inter-reader variability. We developed software to automate this process.

Approach: Core technologies include: (i) a weakly supervised residual U-Net model generating a skeleton probability map to isolate bone; (ii) a statistical framework for WB-DWI intensity normalisation, yielding a signal-normalised b=900 s/mm² (b900) image; and (iii) a shallow convolutional neural network that processes outputs from (i) and (ii) to generate a mask of suspected bone lesions, characterised by higher b900 signal intensity due to restricted water diffusion. This mask is applied to the gADC map to extract TDV and gADC statistics. We tested the tool against expert-defined metastatic bone disease delineations on 66 datasets, assessed repeatability of the imaging biomarkers (N=10), and compared software-based response assessment with a construct reference standard, defined as multidisciplinary consensus based on ≥12 months of imaging, clinical, and laboratory follow-up (N=118).

Main results: The average Dice score between manual and automated delineations was 0.6 for lesions within the pelvis and spine, with an average surface distance of 2 mm. Relative differences for log-transformed TDV (log-TDV) and median gADC were 8.8% and 5%, respectively. Repeatability analysis showed coefficients of variation of 4.6% for log-TDV and 3.5% for median gADC, with intraclass correlation coefficients of 0.94 or higher. The software achieved 80.5% accuracy, 84.3% sensitivity, and 85.7% specificity in assessing response to treatment. Average computation time was 90 s per scan.

Significance: Our software enables reproducible TDV and gADC quantification from WB-DWI scans for monitoring metastatic bone disease response, providing potentially useful measurements for clinical decision-making in APC patients.
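The final extraction step described above (applying the lesion mask to obtain TDV and gADC statistics) reduces to counting masked voxels and summarizing their ADC values. A simplified sketch under assumed conventions (TDV as voxel count times voxel volume; gADC summarized by its median); the released software's actual interface may differ.

```python
def tdv_and_median_gadc(adc_map, lesion_mask, voxel_volume_ml):
    """Return (TDV, median gADC) from a flat ADC map and a 0/1 lesion mask.
    TDV = number of lesion voxels x per-voxel volume."""
    vals = sorted(v for v, m in zip(adc_map, lesion_mask) if m)
    tdv = len(vals) * voxel_volume_ml
    mid = len(vals) // 2
    median = vals[mid] if len(vals) % 2 else 0.5 * (vals[mid - 1] + vals[mid])
    return tdv, median
```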

Liu S, Bian Z, Tian J, Yang G, Hui H

PubMed | Oct 30 2025
Magnetic Particle Imaging (MPI) is an emerging imaging technique based on superparamagnetic iron oxide nanoparticles, offering high sensitivity and rapid imaging. However, in measurement-based MPI, image quality is degraded by noise arising during both the system-matrix calibration procedure and the signal acquisition process. This study aims to develop a deep learning-based model for efficient noise suppression to enhance MPI image quality.

Approach: We propose a hybrid encoder-decoder network integrating residual blocks (Res-Blocks) and Swin Transformer modules. The model employs a multi-scale feature extraction strategy to disentangle noise from valid signals, coupled with cross-level feature fusion to optimize frequency-domain recovery.

Main results: Model performance was evaluated on a simulated dataset, the OpenMPI dataset, and datasets acquired from in-house MPI systems. The denoised system matrix achieved an average 12 dB improvement in signal-to-noise ratio (SNR). Reconstructed images showed better visual quality, with a peak signal-to-noise ratio (PSNR) of 29.11 dB and a structural similarity index (SSIM) of 0.93, outperforming the compared approaches.

Significance: This work provides a robust solution for noise suppression in the system matrix to enhance MPI image quality. The noise suppression framework is extensible to other system matrix-based medical imaging modalities.
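The PSNR figure quoted above follows the standard definition PSNR = 10·log10(peak² / MSE). A minimal sketch for intensity sequences with a known peak value (our own illustration of the metric, not the denoising model):

```python
import math

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB between two equal-length sequences."""
    n = len(reference)
    mse = sum((r - e) ** 2 for r, e in zip(reference, estimate)) / n
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10 * math.log10(peak ** 2 / mse)
```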

Yang X, Liu Z, Jiang H, Wang C, Xia X, Han T, Zheng R, Li X, Hao D, Cui J, Miao S

PubMed | Oct 30 2025
This study develops a deep learning model using the You Only Look Once (YOLO) framework for the automated diagnosis of supraspinatus tendon tears (ST) based on multicenter MRI data. In this retrospective study, 1698 patients from five hospitals were included and allocated to training (n=1047), validation (n=299), test (n=154), and external test (n=198) sets. A YOLOv9-based automated model was developed using coronal fat-suppressed T2-weighted images for lesion detection, localization, and classification. Model performance was assessed using Intersection over Union and confusion matrices. Comparisons between model outputs and radiologist interpretations were performed with McNemar's test, and interobserver agreement among radiologists was evaluated using Cohen's kappa. The YOLOv9 model successfully identified the supraspinatus tendon layer in all images across the validation, test, and external test sets, achieving 100% accuracy. For ST tear detection, the model achieved accuracies of 69.0% (755/1094) in the validation set, 73.9% (414/560) in the test set, and 75.64% (559/739) in the external test set. For classification of partial- and full-thickness tears on the test set, the model demonstrated a macro F1 score of 77.7% (95% CI: 67.4-90.5), outperforming all radiologists (all P<0.05). The MRI-based YOLOv9 model excelled in diagnosing supraspinatus tendon tears, surpassing radiologists with varying levels of experience.
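The Intersection over Union used above to assess detection quality compares a predicted box against the ground-truth box. A small sketch for axis-aligned (x1, y1, x2, y2) boxes, as a generic illustration rather than the YOLOv9 evaluation code:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(box):
        return (box[2] - box[0]) * (box[3] - box[1])

    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```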
