Page 334 of 342 (3413 results)

Multimodal MRI radiomics enhances epilepsy prediction in pediatric low-grade glioma patients.

Tang T, Wu Y, Dong X, Zhai X

pubmed logopapers | May 22 2025
Determining whether pediatric patients with low-grade gliomas (pLGGs) have glioma-associated epilepsy (GAE) is a crucial aspect of preoperative evaluation. We therefore propose an innovative machine learning- and deep learning-based framework for the rapid, non-invasive preoperative assessment of GAE in pediatric patients using magnetic resonance imaging (MRI). Specifically, we propose a novel radiomics-based approach that integrates tumor and peritumoral features extracted from preoperative multiparametric MRI scans to accurately and non-invasively predict the occurrence of tumor-related epilepsy in pediatric patients. Our multimodal MRI radiomics model predicted epilepsy in pLGG patients with an AUC of 0.969. The integration of multi-sequence MRI data significantly improved predictive performance, with the Stochastic Gradient Descent (SGD) classifier showing robust results (sensitivity: 0.882, specificity: 0.956). Our model can accurately predict whether pLGG patients have tumor-related epilepsy, which could guide surgical decision-making. Future studies should focus on similarly standardized preoperative evaluations in pediatric epilepsy centers to increase training data and enhance the generalizability of the model.
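The reported metrics (AUC 0.969, sensitivity 0.882, specificity 0.956) are standard binary-classification measures. As a refresher, a minimal pure-Python sketch of how they are computed from labels, hard predictions, and classifier scores; the data here are a synthetic toy example, not the study's:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive scores higher than a random negative
    (ties count half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example (hypothetical labels and scores)
y = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
preds = [1 if s >= 0.5 else 0 for s in scores]
sens, spec = sensitivity_specificity(y, preds)
```

The thresholded predictions drive sensitivity/specificity, while AUC is threshold-free, which is why the abstract reports both.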

An X-ray bone age assessment method for hands and wrists of adolescents in Western China based on feature fusion deep learning models.

Wang YH, Zhou HM, Wan L, Guo YC, Li YZ, Liu TA, Guo JX, Li DY, Chen T

pubmed logopapers | May 22 2025
The epiphyses of the hand and wrist serve as crucial indicators for assessing skeletal maturity in adolescents. This study aimed to develop a deep learning (DL) model for bone age (BA) assessment using hand and wrist X-ray images, addressing the challenge of classifying BA in adolescents. The results of this DL-based classification were then compared and analyzed against those obtained from manual assessment. A retrospective analysis was conducted on 688 hand and wrist X-ray images of adolescents aged 11.00-23.99 years from western China, which were randomly divided into training, validation, and test sets. The BA assessment results were initially analyzed and compared using four DL network models: InceptionV3, InceptionV3 + SE + Sex, InceptionV3 + Bilinear, and InceptionV3 + Bilinear + SE + Sex, to identify the DL model with the best classification performance. Subsequently, the results of the top-performing model were compared with those of manual classification. The study findings revealed that the InceptionV3 + Bilinear + SE + Sex model exhibited the best performance, achieving classification accuracies of 96.15% and 90.48% for the training and test sets, respectively. Furthermore, based on the InceptionV3 + Bilinear + SE + Sex model, classification accuracies were calculated for four age groups (< 14.0 years, 14.0 years ≤ age < 16.0 years, 16.0 years ≤ age < 18.0 years, ≥ 18.0 years), with notable accuracies of 100% for the groups 16.0 years ≤ age < 18.0 years and ≥ 18.0 years. BA classification using the feature fusion DL network model holds significant reference value for determining the age of criminal responsibility of adolescents, particularly at the critical legal age boundaries of 14.0, 16.0, and 18.0 years.

High-resolution deep learning reconstruction to improve the accuracy of CT fractional flow reserve.

Tomizawa N, Fan R, Fujimoto S, Nozaki YO, Kawaguchi YO, Takamura K, Hiki M, Aikawa T, Takahashi N, Okai I, Okazaki S, Kumamaru KK, Minamino T, Aoki S

pubmed logopapers | May 22 2025
This study aimed to compare the diagnostic performance of CT-derived fractional flow reserve (CT-FFR) using model-based iterative reconstruction (MBIR) and high-resolution deep learning reconstruction (HR-DLR) images to detect functionally significant stenosis with invasive FFR as the reference standard. This single-center retrospective study included 79 consecutive patients (mean age, 70 ± 11 [SD] years; 57 male) who underwent coronary CT angiography followed by invasive FFR between February 2022 and March 2024. CT-FFR was calculated using a mesh-free simulation. The cutoff for functionally significant stenosis was defined as FFR ≤ 0.80. The diagnostic performance of CT-FFR derived from MBIR and HR-DLR images was compared using receiver operating characteristic curve analysis. The mean invasive FFR value was 0.81 ± 0.09, and 46 of 98 vessels (47%) had FFR ≤ 0.80. The mean noise of HR-DLR was lower than that of MBIR (14.4 ± 1.7 vs 23.5 ± 3.1, p < 0.001). The area under the receiver operating characteristic curve for the diagnosis of functionally significant stenosis of HR-DLR (0.88; 95% CI: 0.80, 0.95) was higher than that of MBIR (0.76; 95% CI: 0.67, 0.86; p = 0.003). The diagnostic accuracy of HR-DLR (88%; 86 of 98 vessels; 95% CI: 80, 94) was higher than that of MBIR (70%; 69 of 98 vessels; 95% CI: 60, 79; p < 0.001). HR-DLR improves image quality and the diagnostic performance of CT-FFR for the diagnosis of functionally significant stenosis. Question The effect of HR-DLR on the diagnostic performance of CT-FFR has not been investigated. Findings HR-DLR improved the diagnostic performance of CT-FFR over MBIR for the diagnosis of functionally significant stenosis as assessed by invasive FFR. Clinical relevance HR-DLR would further enhance the clinical utility of CT-FFR in diagnosing the functional significance of coronary stenosis.
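The per-reconstruction accuracies above are reported with 95% CIs (e.g. 88%, 86 of 98 vessels, CI 80-94). The abstract does not state which interval method was used; a sketch using the Wilson score interval, one common choice, applied to the reported HR-DLR counts:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a proportion k/n. This is one common
    choice; the authors may have used a different method, e.g. the
    exact Clopper-Pearson interval."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# 86 of 98 vessels correct, as reported for HR-DLR
lo, hi = wilson_ci(86, 98)
```

This yields roughly 0.80 to 0.93 here; an exact binomial interval, which the paper may have used, is typically slightly wider.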

Influence of content-based image retrieval on the accuracy and inter-reader agreement of usual interstitial pneumonia CT pattern classification.

Park S, Hwang HJ, Yun J, Chae EJ, Choe J, Lee SM, Lee HN, Shin SY, Park H, Jeong H, Kim MJ, Lee JH, Jo KW, Baek S, Seo JB

pubmed logopapers | May 22 2025
To investigate whether content-based image retrieval (CBIR) of similar chest CT images can help with usual interstitial pneumonia (UIP) CT pattern classification among readers with varying levels of experience. This retrospective study included patients who underwent high-resolution chest CT between 2013 and 2015 for the initial workup of fibrosing interstitial lung disease. UIP classifications were assigned to CT images by three thoracic radiologists, which served as the ground truth. One hundred patients were selected as queries. The CBIR system retrieved the top three similar CT images with their UIP classifications using a deep learning algorithm. The diagnostic accuracy and inter-reader agreement of nine readers before and after CBIR were evaluated. Of 587 patients (mean age, 63 years; 356 men), 100 query cases (26 UIP patterns, 26 probable UIP patterns, 5 indeterminate for UIP, and 43 alternative diagnoses) were selected. After CBIR, the mean accuracy (61.3% to 67.1%; p = 0.011) and inter-reader agreement (Fleiss kappa, 0.400 to 0.476; p = 0.003) were slightly improved. The accuracy of the radiologist group increased after CBIR for all CT patterns except indeterminate for UIP; however, the increases did not reach statistical significance. The resident and pulmonologist groups demonstrated mixed results: accuracy decreased for the UIP pattern, increased for alternative diagnoses, and varied for the others. CBIR slightly improved diagnostic accuracy and inter-reader agreement in UIP pattern classification. However, its impact varied depending on the readers' level of experience, suggesting that the current CBIR system may be most beneficial when used to complement the interpretations of experienced readers. Question CT pattern classification is important for the standardized assessment and management of idiopathic pulmonary fibrosis, but it requires radiologic expertise and shows inter-reader variability.
Findings CBIR slightly improved overall diagnostic accuracy and inter-reader agreement for UIP CT pattern classification. Clinical relevance The proposed CBIR system may guide consistent work-up and treatment strategies by enhancing accuracy and inter-reader agreement in UIP CT pattern classification for experienced readers, whose expertise can effectively interact with CBIR results.
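At its core, the retrieval step of a CBIR system ranks gallery images by feature similarity to the query and returns the top matches. A minimal sketch using cosine similarity over toy feature vectors; the study's system uses deep CT embeddings and a similarity measure the abstract does not specify:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, gallery, k=3):
    """Indices of the k gallery vectors most similar to the query,
    ranked by cosine similarity in descending order."""
    sims = [(cosine(query, g), i) for i, g in enumerate(gallery)]
    sims.sort(reverse=True)
    return [i for _, i in sims[:k]]

# Toy 3-D feature vectors standing in for deep CT embeddings
gallery = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0],
           [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
query = [1.0, 0.05, 0.0]
nearest = top_k(query, gallery)  # the "top three similar CT images"
```

The retrieved cases' reference labels are then shown to the reader alongside the query, which is what the study evaluates.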

HealthiVert-GAN: A Novel Framework of Pseudo-Healthy Vertebral Image Synthesis for Interpretable Compression Fracture Grading.

Zhang Q, Chuang C, Zhang S, Zhao Z, Wang K, Xu J, Sun J

pubmed logopapers | May 22 2025
Osteoporotic vertebral compression fractures (OVCFs) are prevalent in the elderly population, typically assessed on computed tomography (CT) scans by evaluating vertebral height loss. This assessment helps determine the fracture's impact on spinal stability and the need for surgical intervention. However, the absence of pre-fracture CT scans and standardized vertebral references leads to measurement errors and inter-observer variability, while irregular compression patterns further challenge the precise grading of fracture severity. While deep learning methods have shown promise in aiding OVCFs screening, they often lack interpretability and sufficient sensitivity, limiting their clinical applicability. To address these challenges, we introduce a novel vertebra synthesis-height loss quantification-OVCFs grading framework. Our proposed model, HealthiVert-GAN, utilizes a coarse-to-fine synthesis network designed to generate pseudo-healthy vertebral images that simulate the pre-fracture state of fractured vertebrae. This model integrates three auxiliary modules that leverage the morphology and height information of adjacent healthy vertebrae to ensure anatomical consistency. Additionally, we introduce the Relative Height Loss of Vertebrae (RHLV) as a quantification metric, which divides each vertebra into three sections to measure height loss between pre-fracture and post-fracture states, followed by fracture severity classification using a Support Vector Machine (SVM). Our approach achieves state-of-the-art classification performance on both the Verse2019 dataset and an in-house dataset, and it provides cross-sectional distribution maps of vertebral height loss. This practical tool enhances diagnostic accuracy in clinical settings and assists in surgical decision-making.
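The RHLV metric compares per-section heights of the synthesized pseudo-healthy vertebra against the fractured one. A minimal sketch, assuming a simple relative-difference definition and hypothetical section heights; the paper's exact sectioning scheme and formula may differ:

```python
def rhlv(pre_heights, post_heights):
    """Relative Height Loss of Vertebrae, per section. The paper divides
    each vertebra into three sections; treating them as, e.g., anterior,
    middle, and posterior heights is an assumption here."""
    return [(pre - post) / pre
            for pre, post in zip(pre_heights, post_heights)]

# Hypothetical heights in mm for one vertebra's three sections
pre = [24.0, 25.0, 24.5]    # synthesized pseudo-healthy (pre-fracture)
post = [16.8, 20.0, 22.05]  # measured fractured (post-fracture)
losses = rhlv(pre, post)
```

The resulting three-element loss vector would then be the feature fed to the SVM for severity grading.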

Cross-Scale Texture Supplementation for Reference-based Medical Image Super-Resolution.

Li Y, Hao W, Zeng H, Wang L, Xu J, Routray S, Jhaveri RH, Gadekallu TR

pubmed logopapers | May 22 2025
Magnetic Resonance Imaging (MRI) is a widely used medical imaging technique, but its resolution is often limited by acquisition time constraints, potentially compromising diagnostic accuracy. Reference-based Image Super-Resolution (RefSR) has shown promising performance in addressing such challenges by leveraging external high-resolution (HR) reference images to enhance the quality of low-resolution (LR) images. The core objective of RefSR is to accurately establish correspondences between the reference HR image and the LR images. In pursuit of this objective, this paper develops a Self-rectified Texture Supplementation network for RefSR (STS-SR) to enhance fine details in MRI images and support the expanding role of autonomous AI in healthcare. Our network comprises a texture-specified self-rectified feature transfer module and a cross-scale texture complementary network. The feature transfer module employs high-frequency filtering to help the network concentrate on fine details. To better exploit the information from both the reference and LR images, our cross-scale texture complementary module incorporates All-ViT and Swin Transformer layers to achieve feature aggregation at multiple scales, enabling the high-quality image enhancement that is critical for autonomous AI systems in healthcare to make accurate decisions. Extensive experiments are performed across various benchmark datasets. The results validate the effectiveness of our method and demonstrate that it achieves state-of-the-art performance compared to existing approaches. This advancement enables autonomous AI systems to utilize high-quality MRI images for more accurate diagnostics and reliable predictions.
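High-frequency filtering isolates fine detail by removing the smooth, low-frequency component of a signal. A 1-D illustration using moving-average subtraction; the network operates on 2-D feature maps with its own filters, so this only sketches the underlying idea:

```python
def high_pass_1d(signal, window=3):
    """Crude high-frequency extraction: subtract a moving-average
    (low-pass) version of the signal, leaving edges and fine detail.
    Window edges shrink rather than pad."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        avg = sum(signal[lo:hi]) / (hi - lo)
        out.append(signal[i] - avg)
    return out

# A step edge: the high-pass response is largest at the discontinuity
edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
detail = high_pass_1d(edge)
```

Flat regions map to zero while the step survives, which is why such filtering steers a network toward texture rather than smooth intensity.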

A Deep Learning Vision-Language Model for Diagnosing Pediatric Dental Diseases

Pham, T.

medrxiv logopreprint | May 22 2025
This study proposes a deep learning vision-language model for the automated diagnosis of pediatric dental diseases, with a focus on differentiating between caries and periapical infections. The model integrates visual features extracted from panoramic radiographs using methods of non-linear dynamics and textural encoding with textual descriptions generated by a large language model. These multimodal features are concatenated and used to train a 1D-CNN classifier. Experimental results demonstrate that the proposed model outperforms conventional convolutional neural networks and standalone language-based approaches, achieving high accuracy (90%), sensitivity (92%), precision (92%), and an AUC of 0.96. This work highlights the value of combining structured visual and textual representations in improving diagnostic accuracy and interpretability in dental radiology. The approach offers a promising direction for the development of context-aware, AI-assisted diagnostic tools in pediatric dental care.

Reconsider the Template Mesh in Deep Learning-based Mesh Reconstruction

Fengting Zhang, Boxu Liang, Qinghao Liu, Min Liu, Xiang Chen, Yaonan Wang

arxiv logopreprint | May 21 2025
Mesh reconstruction is a cornerstone process across various applications, including in-silico trials, digital twins, surgical planning, and navigation. Recent advancements in deep learning have notably enhanced mesh reconstruction speeds. Yet, traditional methods predominantly rely on deforming a standardised template mesh for individual subjects, which overlooks the unique anatomical variations between them, and may compromise the fidelity of the reconstructions. In this paper, we propose an adaptive-template-based mesh reconstruction network (ATMRN), which generates adaptive templates from the given images for the subsequent deformation, moving beyond the constraints of a singular, fixed template. Our approach, validated on cortical magnetic resonance (MR) images from the OASIS dataset, sets a new benchmark in voxel-to-cortex mesh reconstruction, achieving an average symmetric surface distance of 0.267 mm across four cortical structures. Our proposed method is generic and can be easily transferred to other image modalities and anatomical structures.
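The benchmark metric above, average symmetric surface distance, averages nearest-neighbor distances in both directions between two surfaces. A simplified point-set sketch; real evaluations compute point-to-surface distances over dense meshes:

```python
import math

def assd(points_a, points_b):
    """Average symmetric surface distance between two point sets:
    the mean, over both directions, of each point's distance to the
    nearest point of the other set."""
    def dist(p, q):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

    def one_way(src, dst):
        return [min(dist(p, q) for q in dst) for p in src]

    d_ab = one_way(points_a, points_b)
    d_ba = one_way(points_b, points_a)
    return (sum(d_ab) + sum(d_ba)) / (len(d_ab) + len(d_ba))

# Two toy "surfaces": b is a shifted by 0.1 mm along x
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
b = [(0.1, 0.0, 0.0), (1.1, 0.0, 0.0), (0.1, 1.0, 0.0)]
d = assd(a, b)
```

A uniform 0.1 mm shift yields an ASSD of 0.1 mm, which gives a feel for what the reported 0.267 mm means geometrically.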

Update on the detection of frailty in older adults: a multicenter cohort machine learning-based study protocol.

Fernández-Carnero S, Martínez-Pozas O, Pecos-Martín D, Pardo-Gómez A, Cuenca-Zaldívar JN, Sánchez-Romero EA

pubmed logopapers | May 21 2025
This study aims to investigate the relationship between muscle activation variables assessed via ultrasound and the comprehensive geriatric assessment, as well as to analyze ultrasound images to determine their correlation with morbimortality factors in frail patients. This multicenter cohort study will be conducted in 500 older adults diagnosed with frailty, recruited from day care centers and nursing homes. Frail older adults will be evaluated with instrumental and functional tests, along with specific ultrasound imaging to study sarcopenia and nutrition, followed by a detailed analysis of the correlation between all collected variables. The study addresses the limitations of previous research by including a large sample of 500 patients and measuring various muscle parameters beyond thickness. Additionally, it aims to analyze ultrasound images to identify markers associated with a higher risk of complications in frail patients. A comprehensive analysis of functional, ultrasound, and nutritional variables will be conducted to understand their correlation with overall health and the risk of complications in frail older patients. The study was approved by the Research Ethics Committee of the Hospital Universitario Puerta de Hierro, Madrid, Spain (Act nº 18/2023). In addition, the study was registered at https://clinicaltrials.gov/ (NCT06218121).

Cardiac Magnetic Resonance Imaging in the German National Cohort: Automated Segmentation of Short-Axis Cine Images and Post-Processing Quality Control

Full, P. M., Schirrmeister, R. T., Hein, M., Russe, M. F., Reisert, M., Ammann, C., Greiser, K. H., Niendorf, T., Pischon, T., Schulz-Menger, J., Maier-Hein, K. H., Bamberg, F., Rospleszcz, S., Schlett, C. L., Schuppert, C.

medrxiv logopreprint | May 21 2025
Purpose: To develop a segmentation and quality control pipeline for short-axis cardiac magnetic resonance (CMR) cine images from the prospective, multi-center German National Cohort (NAKO).
Materials and Methods: A deep learning model for semantic segmentation, based on the nnU-Net architecture, was applied to full-cycle short-axis cine images from 29,908 baseline participants. The primary objective was to derive structural and functional measures for both ventricles (LV, RV), including end-diastolic volumes (EDV), end-systolic volumes (ESV), and LV myocardial mass. Quality control measures included a visual assessment of outliers in morphofunctional parameters, inter- and intra-ventricular phase differences, and LV time-volume curves (TVC). These were adjudicated using a five-point rating scale, ranging from five (excellent) to one (non-diagnostic), with ratings of three or lower subject to exclusion. The predictive value of outlier criteria for inclusion and exclusion was analyzed using receiver operating characteristics.
Results: The segmentation model generated complete data for 29,609 participants (incomplete in 1.0%), and 5,082 cases (17.0%) were visually assessed. Quality assurance yielded a sample of 26,899 participants with excellent or good quality (89.9%; 1,875 participants excluded due to image quality issues and 835 due to segmentation quality issues). TVC was the strongest single discriminator between included and excluded participants (AUC: 0.684). Of the two-category combinations, the pairing of TVC and phases provided the greatest improvement over TVC alone (AUC difference: 0.044; p < 0.001). The best performance was observed when all three categories were combined (AUC: 0.748). Extending the quality-controlled sample to include acceptable quality ratings, a total of 28,413 (95.0%) participants were available.
Conclusion: The implemented pipeline facilitated the automated segmentation of an extensive CMR dataset, integrating quality control measures. This methodology ensures that ensuing quantitative analyses are conducted with a diminished risk of bias.
