HealthiVert-GAN: A Novel Framework of Pseudo-Healthy Vertebral Image Synthesis for Interpretable Compression Fracture Grading.

Zhang Q, Chuang C, Zhang S, Zhao Z, Wang K, Xu J, Sun J

PubMed · May 22, 2025
Osteoporotic vertebral compression fractures (OVCFs) are prevalent in the elderly population and are typically assessed on computed tomography (CT) scans by evaluating vertebral height loss. This assessment helps determine the fracture's impact on spinal stability and the need for surgical intervention. However, the absence of pre-fracture CT scans and standardized vertebral references leads to measurement errors and inter-observer variability, while irregular compression patterns further challenge the precise grading of fracture severity. While deep learning methods have shown promise in aiding OVCF screening, they often lack interpretability and sufficient sensitivity, limiting their clinical applicability. To address these challenges, we introduce a novel vertebra synthesis, height-loss quantification, and OVCF grading framework. Our proposed model, HealthiVert-GAN, utilizes a coarse-to-fine synthesis network designed to generate pseudo-healthy vertebral images that simulate the pre-fracture state of fractured vertebrae. This model integrates three auxiliary modules that leverage the morphology and height information of adjacent healthy vertebrae to ensure anatomical consistency. Additionally, we introduce the Relative Height Loss of Vertebrae (RHLV) as a quantification metric, which divides each vertebra into three sections to measure height loss between pre-fracture and post-fracture states, followed by fracture severity classification using a Support Vector Machine (SVM). Our approach achieves state-of-the-art classification performance on both the Verse2019 dataset and an in-house dataset, and it provides cross-sectional distribution maps of vertebral height loss. This practical tool enhances diagnostic accuracy in clinical settings and assists in surgical decision-making.
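A minimal sketch of the RHLV computation and SVM grading step described above, assuming 2-D mid-sagittal binary masks for the synthesized pre-fracture and the observed post-fracture vertebra; the mask layout, section split, and training data are illustrative placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def rhlv(pre_mask, post_mask, n_sections=3):
    """Relative Height Loss of Vertebrae: split the vertebra into three sections
    along the (assumed) anterior-posterior axis and compare the tallest column of
    the pseudo-healthy pre-fracture mask against the post-fracture mask."""
    losses = []
    for pre_sec, post_sec in zip(np.array_split(pre_mask, n_sections, axis=1),
                                 np.array_split(post_mask, n_sections, axis=1)):
        h_pre = pre_sec.sum(axis=0).max()    # tallest column as section height
        h_post = post_sec.sum(axis=0).max()
        losses.append((h_pre - h_post) / max(h_pre, 1))
    return np.asarray(losses)

# Illustrative severity grading: three RHLV values per vertebra -> fracture grade.
X_train = np.random.rand(40, 3)              # placeholder RHLV features
y_train = np.random.randint(0, 3, size=40)   # placeholder grades 0-2
clf = SVC(kernel="rbf").fit(X_train, y_train)

# Toy masks: a rectangular "pre-fracture" vertebra vs. a wedge-shaped collapse.
pre = np.ones((50, 30))
post = np.tril(np.ones((50, 30)))
print(rhlv(pre, post))                        # per-section height loss
print(clf.predict(rhlv(pre, post)[None, :]))  # predicted severity grade
```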

Influence of content-based image retrieval on the accuracy and inter-reader agreement of usual interstitial pneumonia CT pattern classification.

Park S, Hwang HJ, Yun J, Chae EJ, Choe J, Lee SM, Lee HN, Shin SY, Park H, Jeong H, Kim MJ, Lee JH, Jo KW, Baek S, Seo JB

PubMed · May 22, 2025
To investigate whether content-based image retrieval (CBIR) of similar chest CT images can aid usual interstitial pneumonia (UIP) CT pattern classification among readers with varying levels of experience. This retrospective study included patients who underwent high-resolution chest CT between 2013 and 2015 for the initial workup for fibrosing interstitial lung disease. UIP classifications were assigned to CT images by three thoracic radiologists, which served as the ground truth. One hundred patients were selected as queries. The CBIR retrieved the top three similar CT images with UIP classifications using a deep learning algorithm. The diagnostic accuracies and inter-reader agreement of nine readers before and after CBIR were evaluated. Of 587 patients (mean age, 63 years; 356 men), 100 query cases (26 UIP patterns, 26 probable UIP patterns, 5 indeterminate for UIP, and 43 alternative diagnoses) were selected. After CBIR, the mean accuracy (61.3% to 67.1%; p = 0.011) and inter-reader agreement (Fleiss kappa, 0.400 to 0.476; p = 0.003) were slightly improved. The accuracies of the radiologist group for all CT patterns except indeterminate for UIP increased after CBIR; however, these increases did not reach statistical significance. The resident and pulmonologist groups demonstrated mixed results: accuracy decreased for the UIP pattern, increased for alternative diagnoses, and varied for the others. CBIR slightly improved diagnostic accuracy and inter-reader agreement in UIP pattern classifications. However, its impact varied depending on the readers' level of experience, suggesting that the current CBIR system may be beneficial when used to complement the interpretations of experienced readers. Question CT pattern classification is important for the standardized assessment and management of idiopathic pulmonary fibrosis, but requires radiologic expertise and shows inter-reader variability. Findings CBIR slightly improved diagnostic accuracy and inter-reader agreement for UIP CT pattern classifications overall. Clinical relevance The proposed CBIR system may guide consistent work-up and treatment strategies by enhancing accuracy and inter-reader agreement in UIP CT pattern classifications by experienced readers whose expertise and experience can effectively interact with CBIR results.
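The retrieval step could look like the following sketch, assuming each CT has already been reduced to a deep-feature embedding; the embedding size, database, and labels are placeholders rather than the study's actual algorithm.

```python
import numpy as np

def retrieve_top_k(query_emb, db_embs, k=3):
    """Return indices of the k database cases most similar to the query,
    ranked by cosine similarity between deep-feature embeddings."""
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = db @ q
    return np.argsort(-sims)[:k]

# Placeholder embeddings: 500 archived cases with known UIP classifications.
rng = np.random.default_rng(0)
db_embs = rng.normal(size=(500, 256))
labels = rng.choice(["UIP", "probable UIP", "indeterminate", "alternative"], size=500)
query = rng.normal(size=256)

top3 = retrieve_top_k(query, db_embs)
# The three retrieved cases and their classifications would be shown next to the query CT.
print([(int(i), labels[i]) for i in top3])
```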

High-resolution deep learning reconstruction to improve the accuracy of CT fractional flow reserve.

Tomizawa N, Fan R, Fujimoto S, Nozaki YO, Kawaguchi YO, Takamura K, Hiki M, Aikawa T, Takahashi N, Okai I, Okazaki S, Kumamaru KK, Minamino T, Aoki S

PubMed · May 22, 2025
This study aimed to compare the diagnostic performance of CT-derived fractional flow reserve (CT-FFR) using model-based iterative reconstruction (MBIR) and high-resolution deep learning reconstruction (HR-DLR) images to detect functionally significant stenosis, with invasive FFR as the reference standard. This single-center retrospective study included 79 consecutive patients (mean age, 70 ± 11 [SD] years; 57 men) who underwent coronary CT angiography followed by invasive FFR between February 2022 and March 2024. CT-FFR was calculated using a mesh-free simulation. The cutoff for functionally significant stenosis was defined as FFR ≤ 0.80. CT-FFR derived from MBIR and HR-DLR images was compared using receiver operating characteristic curve analysis. The mean invasive FFR value was 0.81 ± 0.09, and 46 of 98 vessels (47%) had FFR ≤ 0.80. The mean noise of HR-DLR was lower than that of MBIR (14.4 ± 1.7 vs 23.5 ± 3.1, p < 0.001). The area under the receiver operating characteristic curve for the diagnosis of functionally significant stenosis for HR-DLR (0.88; 95% CI: 0.80, 0.95) was higher than that for MBIR (0.76; 95% CI: 0.67, 0.86; p = 0.003). The diagnostic accuracy of HR-DLR (88%; 86 of 98 vessels; 95% CI: 80, 94) was higher than that of MBIR (70%; 69 of 98 vessels; 95% CI: 60, 79; p < 0.001). HR-DLR improves image quality and the diagnostic performance of CT-FFR for the diagnosis of functionally significant stenosis. Question The effect of HR-DLR on the diagnostic performance of CT-FFR has not been investigated. Findings HR-DLR improved the diagnostic performance of CT-FFR over MBIR for the diagnosis of functionally significant stenosis as assessed by invasive FFR. Clinical relevance HR-DLR would further enhance the clinical utility of CT-FFR in diagnosing the functional significance of coronary stenosis.
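A hedged sketch of the per-vessel ROC comparison, using scikit-learn on synthetic data in place of the study's measurements; the FFR ≤ 0.80 labeling and the negated CT-FFR score follow the abstract, while the data values and noise levels are placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
invasive_ffr = rng.uniform(0.6, 1.0, size=98)         # placeholder per-vessel invasive FFR
y_true = (invasive_ffr <= 0.80).astype(int)           # functionally significant stenosis

# Placeholder CT-FFR estimates from the two reconstructions (lower = more ischemic).
ctffr_mbir = invasive_ffr + rng.normal(0, 0.08, 98)
ctffr_hrdlr = invasive_ffr + rng.normal(0, 0.04, 98)

# ROC analysis: negate CT-FFR so that a higher score indicates significant stenosis.
auc_mbir = roc_auc_score(y_true, -ctffr_mbir)
auc_hrdlr = roc_auc_score(y_true, -ctffr_hrdlr)

# Per-vessel diagnostic accuracy at the same 0.80 cutoff.
acc_hrdlr = np.mean((ctffr_hrdlr <= 0.80).astype(int) == y_true)
print(f"AUC MBIR={auc_mbir:.2f}, HR-DLR={auc_hrdlr:.2f}, HR-DLR accuracy={acc_hrdlr:.0%}")
```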

ESR Essentials: a step-by-step guide of segmentation for radiologists-practice recommendations by the European Society of Medical Imaging Informatics.

Chupetlovska K, Akinci D'Antonoli T, Bodalal Z, Abdelatty MA, Erenstein H, Santinha J, Huisman M, Visser JJ, Trebeschi S, Groot Lipman KBW

PubMed · May 22, 2025
High-quality segmentation is important for AI-driven radiological research and clinical practice, with the potential to play an even more prominent role in the future. As medical imaging advances, accurately segmenting anatomical and pathological structures is increasingly used to obtain quantitative data and valuable insights. Segmentation and volumetric analysis could enable more precise diagnosis, treatment planning, and patient monitoring. These guidelines aim to improve segmentation accuracy and consistency, allowing for better decision-making in both research and clinical environments. Practical advice on planning and organization is provided, focusing on quality, precision, and communication among clinical teams. Additionally, tips and strategies for improving segmentation practices in radiology and radiation oncology are discussed, as are potential pitfalls to avoid. KEY POINTS: As AI continues to advance, volumetry will become more integrated into clinical practice, making it essential for radiologists to stay informed about its applications in diagnosis and treatment planning. There is a significant lack of practical guidelines and resources tailored specifically for radiologists on technical topics like segmentation and volumetric analysis. Establishing clear rules and best practices for segmentation can streamline volumetric assessment in clinical settings, making it easier to manage and leading to more accurate decision-making for patient care.
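As a concrete illustration of the volumetric analysis the guideline refers to, the sketch below converts a binary segmentation mask plus voxel spacing into a volume in millilitres; the mask and spacing values are placeholders, not part of the ESR recommendations.

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask in millilitres:
    voxel count x voxel volume (mm^3), divided by 1000."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

# Placeholder: a 40-voxel lesion on a CT with 0.7 x 0.7 x 1.0 mm voxels.
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[30:32, 30:34, 30:35] = 1
print(f"{mask_volume_ml(mask, (0.7, 0.7, 1.0)):.3f} mL")
```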

An X-ray bone age assessment method for hands and wrists of adolescents in Western China based on feature fusion deep learning models.

Wang YH, Zhou HM, Wan L, Guo YC, Li YZ, Liu TA, Guo JX, Li DY, Chen T

PubMed · May 22, 2025
The epiphyses of the hand and wrist serve as crucial indicators for assessing skeletal maturity in adolescents. This study aimed to develop a deep learning (DL) model for bone age (BA) assessment using hand and wrist X-ray images, addressing the challenge of classifying BA in adolescents. The results of this DL-based classification were then compared and analyzed with those obtained from manual assessment. A retrospective analysis was conducted on 688 hand and wrist X-ray images of adolescents aged 11.00-23.99 years from western China, which were randomly divided into training, validation, and test sets. The BA assessment results were initially analyzed and compared using four DL network models: InceptionV3, InceptionV3 + SE + Sex, InceptionV3 + Bilinear, and InceptionV3 + Bilinear + SE + Sex, to identify the DL model with the best classification performance. Subsequently, the results of the top-performing model were compared with those of manual classification. The study findings revealed that the InceptionV3 + Bilinear + SE + Sex model exhibited the best performance, achieving classification accuracies of 96.15% and 90.48% for the training and test sets, respectively. Furthermore, based on the InceptionV3 + Bilinear + SE + Sex model, classification accuracies were calculated for four age groups (< 14.0 years, 14.0 years ≤ age < 16.0 years, 16.0 years ≤ age < 18.0 years, ≥ 18.0 years), with notable accuracies of 100% for the age groups 16.0 years ≤ age < 18.0 years and ≥ 18.0 years. The BA classification, utilizing the feature fusion DL network model, holds significant reference value for determining the age of criminal responsibility of adolescents, particularly at the critical legal age boundaries of 14.0, 16.0, and 18.0 years.
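A simplified PyTorch sketch of the kind of feature-fusion architecture named above: channel attention via an SE block, bilinear pooling of the feature maps, and concatenation of a sex indicator before classification. A tiny CNN stands in for the InceptionV3 backbone, and all layer sizes are illustrative rather than the published model.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels by globally pooled statistics."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # (B, C) channel weights
        return x * w[:, :, None, None]

class BilinearSESexNet(nn.Module):
    """Toy stand-in for an InceptionV3 + Bilinear + SE + Sex fusion model."""
    def __init__(self, n_classes=4, channels=32):
        super().__init__()
        self.backbone = nn.Sequential(               # placeholder for InceptionV3 features
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.se = SEBlock(channels)
        self.head = nn.Linear(channels * channels + 1, n_classes)

    def forward(self, xray, sex):
        f = self.se(self.backbone(xray))                        # (B, C, H, W)
        b, c, h, w = f.shape
        f = f.reshape(b, c, h * w)
        bilinear = torch.bmm(f, f.transpose(1, 2)) / (h * w)    # (B, C, C) bilinear pooling
        bilinear = bilinear.reshape(b, -1)
        bilinear = torch.sign(bilinear) * torch.sqrt(bilinear.abs() + 1e-6)
        bilinear = nn.functional.normalize(bilinear, dim=1)
        fused = torch.cat([bilinear, sex[:, None].float()], dim=1)  # append sex indicator
        return self.head(fused)

model = BilinearSESexNet()
logits = model(torch.randn(2, 1, 128, 128), torch.tensor([0, 1]))
print(logits.shape)  # (2, 4): four age-group classes
```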

Multimodal MRI radiomics enhances epilepsy prediction in pediatric low-grade glioma patients.

Tang T, Wu Y, Dong X, Zhai X

PubMed · May 22, 2025
Determining whether pediatric patients with low-grade gliomas (pLGGs) have tumor-related epilepsy (GAE) is a crucial aspect of preoperative evaluation. Therefore, we aim to propose an innovative, machine learning- and deep learning-based framework for the rapid and non-invasive preoperative assessment of GAE in pediatric patients using magnetic resonance imaging (MRI). In this study, we propose a novel radiomics-based approach that integrates tumor and peritumoral features extracted from preoperative multiparametric MRI scans to accurately and non-invasively predict the occurrence of tumor-related epilepsy in pediatric patients. Our study developed a multimodal MRI radiomics model to predict epilepsy in pLGG patients, achieving an AUC of 0.969. The integration of multi-sequence MRI data significantly improved predictive performance, with a Stochastic Gradient Descent (SGD) classifier showing robust results (sensitivity: 0.882, specificity: 0.956). Our model can accurately predict whether pLGG patients have tumor-related epilepsy, which could guide surgical decision-making. Future studies should focus on similarly standardized preoperative evaluations in pediatric epilepsy centers to increase training data and enhance the generalizability of the model.
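A minimal sketch of the classifier stage, assuming the multi-sequence tumor and peritumoral radiomics features have already been extracted (e.g., with a radiomics toolkit); the feature matrix and labels below are random placeholders, and the SGD pipeline is illustrative rather than the study's exact configuration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder radiomics matrix: rows = patients, columns = tumor + peritumoral features
# pooled across MRI sequences.
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 200))
y = rng.integers(0, 2, size=120)          # 1 = tumor-related epilepsy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SGDClassifier(random_state=0))
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity={tp / (tp + fn):.3f}, specificity={tn / (tn + fp):.3f}")
```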

DP-MDM: detail-preserving MR reconstruction via multiple diffusion models.

Geng M, Zhu J, Hong R, Liu Q, Liang D, Liu Q

PubMed · May 22, 2025
Objective. Magnetic resonance imaging (MRI) is critical in medical diagnosis and treatment by capturing detailed features, such as subtle tissue changes, which help clinicians make precise diagnoses. However, the widely used single diffusion model has limitations in accurately capturing more complex details. This study aims to address these limitations by proposing an efficient method to enhance the reconstruction of detailed features in MRI. Approach. We present a detail-preserving reconstruction method that leverages multiple diffusion models (DP-MDM) to extract structural and detailed features in the k-space domain, which complements the image domain. Since high-frequency information in k-space is more systematically distributed around the periphery compared to the irregular distribution of detailed features in the image domain, this systematic distribution allows for more efficient extraction of detailed features. To further reduce redundancy and enhance model performance, we introduce virtual binary masks with adjustable circular center windows that selectively focus on high-frequency regions. These masks align with the frequency distribution of k-space data, enabling the model to focus more efficiently on high-frequency information. The proposed method employs a cascaded architecture, where the first diffusion model recovers low-frequency structural components, with subsequent models enhancing high-frequency details during the iterative reconstruction stage. Main results. Experimental results demonstrate that DP-MDM achieves superior performance across multiple datasets. On the T1-GE brain dataset with 2D random sampling at R = 15, DP-MDM achieved 35.14 dB peak signal-to-noise ratio (PSNR) and 0.8891 structural similarity (SSIM), outperforming other methods. The proposed method also showed robust performance on the Fast-MRI and Cardiac MR datasets, achieving the highest PSNR and SSIM values. Significance. DP-MDM significantly advances MRI reconstruction by balancing structural integrity and detail preservation. It not only enhances diagnostic accuracy through improved image quality but also offers a versatile framework that can potentially be extended to other imaging modalities, thereby broadening its clinical applicability.
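The virtual binary mask with an adjustable circular center window could be realized as in the NumPy sketch below, which zeroes a central low-frequency disc of (fftshifted) k-space so that a detail-refining stage sees only the high-frequency periphery; the radius and test image are placeholders, not the paper's settings.

```python
import numpy as np

def high_frequency_kspace_mask(shape, center_radius):
    """Binary mask that is 0 inside a circular low-frequency window at the center
    of (fftshifted) k-space and 1 over the high-frequency periphery."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    dist2 = (yy - h // 2) ** 2 + (xx - w // 2) ** 2
    return (dist2 > center_radius ** 2).astype(np.float32)

# Apply to an image's k-space: keep only the periphery for the detail-refining stage.
image = np.random.rand(256, 256)
kspace = np.fft.fftshift(np.fft.fft2(image))
mask = high_frequency_kspace_mask(kspace.shape, center_radius=24)  # radius is adjustable
high_freq_kspace = kspace * mask
detail_image = np.abs(np.fft.ifft2(np.fft.ifftshift(high_freq_kspace)))
print(f"fraction of k-space retained: {mask.mean():.2f}")
```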

On factors that influence deep learning-based dose prediction of head and neck tumors.

Gao R, Mody P, Rao C, Dankers F, Staring M

PubMed · May 22, 2025
Objective. This study investigates key factors influencing deep learning-based dose prediction models for head and neck cancer radiation therapy. The goal is to evaluate model accuracy, robustness, and computational efficiency, and to identify key components necessary for optimal performance. Approach. We systematically analyze the impact of input and dose grid resolution, input type, loss function, model architecture, and noise on model performance. Two datasets are used: a public dataset (OpenKBP) and an in-house clinical dataset. Model performance is primarily evaluated using two metrics: dose score and dose-volume histogram (DVH) score. Main results. High-resolution inputs improve prediction accuracy (dose score and DVH score) by 8.6%-13.5% compared to low resolution. Using a combination of CT, planning target volumes, and organs-at-risk as input significantly enhances accuracy, with improvements of 57.4%-86.8% over using CT alone. Integrating mean absolute error (MAE) loss with value-based and criteria-based DVH loss functions further boosts the DVH score by 7.2%-7.5% compared to MAE loss alone. In the robustness analysis, most models show minimal degradation under Poisson noise (0-0.3 Gy) but are more susceptible to adversarial noise (0.2-7.8 Gy). Notably, certain models, such as SwinUNETR, demonstrate superior robustness against adversarial perturbations. Significance. These findings highlight the importance of optimizing deep learning models and provide valuable guidance for achieving more accurate and reliable radiotherapy dose prediction.
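One way to combine an MAE term with a DVH-style term, shown as a hedged PyTorch sketch: the soft (sigmoid-relaxed) dose-volume histogram below is only one possible variant of the value- and criteria-based DVH losses mentioned, and the dose grids, masks, and weighting are placeholders.

```python
import torch

def soft_dvh(dose, mask, bins, tau=0.5):
    """Differentiable dose-volume histogram: for each dose bin d, the (soft) fraction
    of structure voxels receiving at least d Gy."""
    d = dose[mask > 0]                                    # doses inside the structure
    return torch.sigmoid((d[None, :] - bins[:, None]) / tau).mean(dim=1)

def mae_plus_dvh_loss(pred, target, masks, bins, weight=0.1):
    """Voxel-wise MAE plus a DVH-agreement term averaged over structures."""
    mae = (pred - target).abs().mean()
    dvh_terms = [(soft_dvh(pred, m, bins) - soft_dvh(target, m, bins)).abs().mean()
                 for m in masks]
    return mae + weight * torch.stack(dvh_terms).mean()

# Placeholder tensors: a 3-D dose grid in Gy and two organ masks.
pred = torch.rand(32, 32, 32) * 70
target = torch.rand(32, 32, 32) * 70
masks = [torch.zeros(32, 32, 32), torch.zeros(32, 32, 32)]
masks[0][8:16, 8:16, 8:16] = 1
masks[1][20:28, 20:28, 20:28] = 1
bins = torch.linspace(0, 70, 15)
print(mae_plus_dvh_loss(pred, target, masks, bins))
```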

Predictive value of machine learning for PD-L1 expression in NSCLC: a systematic review and meta-analysis.

Zheng T, Li X, Zhou L, Jin J

PubMed · May 22, 2025
As machine learning (ML) continuously develops in cancer diagnosis and treatment, some researchers have attempted to predict the expression of programmed death ligand-1 (PD-L1) in non-small cell lung cancer (NSCLC) by ML. However, there is a lack of systematic evidence on the effectiveness of ML. We conducted a thorough search across Embase, PubMed, the Cochrane Library, and Web of Science from inception to December 14th, 2023. A systematic review and meta-analysis was conducted to assess the value of ML for predicting PD-L1 expression in NSCLC. In total, 30 studies with 12,898 NSCLC patients were included. The thresholds of PD-L1 expression level were < 1%, 1-49%, and ≥ 50%. In the validation set, in the binary classification for PD-L1 ≥ 1%, the pooled C-index was 0.646 (95%CI: 0.587-0.705), 0.799 (95%CI: 0.782-0.817), 0.806 (95%CI: 0.753-0.858), and 0.800 (95%CI: 0.717-0.883) for the clinical feature-, radiomics-, radiomics + clinical feature-, and pathomics-based ML models, respectively; in the binary classification for PD-L1 ≥ 50%, the pooled C-index was 0.649 (95%CI: 0.553-0.744), 0.771 (95%CI: 0.728-0.814), and 0.826 (95%CI: 0.783-0.869) for the clinical feature-, radiomics-, and radiomics + clinical feature-based ML models, respectively. At present, radiomics- or pathomics-based ML methods are applied for the prediction of PD-L1 expression in NSCLC, and both achieve satisfactory accuracy. In particular, the radiomics-based ML method seems to have wider clinical applicability as a non-invasive diagnostic tool. Both radiomics and pathomics serve as processing methods for medical images. In the future, we expect to develop medical image-based DL methods for intelligently predicting PD-L1 expression.
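A minimal random-effects pooling sketch (DerSimonian-Laird) for study-level C-indices; the per-study values and standard errors below are placeholders, and the published meta-analysis may have used different software and weighting.

```python
import numpy as np

def dersimonian_laird_pool(estimates, std_errors):
    """Random-effects (DerSimonian-Laird) pooled estimate with a 95% CI."""
    est, se = np.asarray(estimates, float), np.asarray(std_errors, float)
    w = 1.0 / se**2
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)                    # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(est) - 1)) / c)             # between-study variance
    w_re = 1.0 / (se**2 + tau2)
    pooled = np.sum(w_re * est) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Placeholder per-study C-indices and standard errors for one model category.
c_index = [0.78, 0.82, 0.80, 0.79, 0.81]
se = [0.03, 0.02, 0.04, 0.03, 0.02]
print(dersimonian_laird_pool(c_index, se))
```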

Leveraging deep learning-based kernel conversion for more precise airway quantification on CT.

Choe J, Yun J, Kim MJ, Oh YJ, Bae S, Yu D, Seo JB, Lee SM, Lee HY

PubMed · May 22, 2025
To evaluate the variability of fully automated airway quantitative CT (QCT) measures caused by different kernels and the effect of kernel conversion. This retrospective study included 96 patients who underwent non-enhanced chest CT at two centers. CT scans were reconstructed using four kernels (medium soft, medium sharp, sharp, very sharp) from three vendors. Kernel conversion targeting the medium soft kernel as reference was applied to sharp kernel images. Fully automated airway quantification was performed before and after conversion. The effects of kernel type and conversion on airway quantification were evaluated using analysis of variance, paired t-tests, and concordance correlation coefficient (CCC). Airway QCT measures (e.g., Pi10, wall thickness, wall area percentage, lumen diameter) decreased with sharper kernels (all, p < 0.001), with varying degrees of variability across variables and vendors. Kernel conversion substantially reduced variability between medium soft and sharp kernel images for vendors A (pooled CCC: 0.59 vs. 0.92) and B (0.40 vs. 0.91) and lung-dedicated sharp kernels of vendor C (0.26 vs. 0.71). However, it was ineffective for non-lung-dedicated sharp kernels of vendor C (0.81 vs. 0.43) and showed limited improvement in variability of QCT measures at the subsegmental level. Consistent airway segmentation and identical anatomic labeling improved subsegmental airway variability in theoretical tests. Deep learning-based kernel conversion reduced the measurement variability of airway QCT across various kernels and vendors but was less effective for non-lung-dedicated kernels and subsegmental airways. Consistent airway segmentation and precise anatomic labeling can further enhance reproducibility for reliable automated quantification. Question How do different CT reconstruction kernels affect the measurement variability of automated airway measurements, and can deep learning-based kernel conversion reduce this variability? Findings Kernel conversion improved measurement consistency across vendors for lung-dedicated kernels, but showed limited effectiveness for non-lung-dedicated kernels and subsegmental airways. Clinical relevance Understanding kernel-related variability in airway quantification and mitigating it through deep learning enables standardized analysis, but further refinements are needed for robust airway segmentation, particularly for improving measurement variability in subsegmental airways and specific kernels.
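Agreement between kernels before and after conversion was summarized with the concordance correlation coefficient; Lin's CCC can be computed as in the sketch below, where the Pi10 vectors are synthetic placeholders rather than the study's measurements.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement sets
    (e.g., Pi10 from medium-soft vs. kernel-converted sharp images)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Placeholder Pi10 values for 96 patients from two kernels of the same vendor.
rng = np.random.default_rng(3)
pi10_soft = rng.normal(3.8, 0.4, 96)
pi10_sharp_raw = pi10_soft - 0.5 + rng.normal(0, 0.15, 96)     # systematic kernel offset
pi10_sharp_converted = pi10_soft + rng.normal(0, 0.10, 96)     # after kernel conversion
print(f"CCC before={lins_ccc(pi10_soft, pi10_sharp_raw):.2f}, "
      f"after={lins_ccc(pi10_soft, pi10_sharp_converted):.2f}")
```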