Artificial intelligence-based diagnosis of hallux valgus interphalangeus using anteroposterior foot radiographs.

Kwolek K, Gądek A, Kwolek K, Lechowska-Liszka A, Malczak M, Liszka H

PubMed · Jun 18 2025
A recently developed method enables automated measurement of the hallux valgus angle (HVA) and the first intermetatarsal angle (IMA) from weight-bearing foot radiographs. This approach employs bone segmentation to identify anatomical landmarks and provides standardized angle measurements based on established guidelines. While effective for HVA and IMA, preoperative radiograph analysis remains complex and requires additional measurements, such as the hallux interphalangeal angle (IPA), which has received limited research attention. This study aimed to expand the previous method, which measured HVA and IMA, by incorporating automatic measurement of the IPA and evaluating its accuracy and clinical relevance. A preexisting database of manually labeled foot radiographs was used to train a U-Net neural network for segmenting bones and identifying the landmarks necessary for IPA measurement. Of the 265 radiographs in the dataset, 161 were selected for training and 20 for validation; the U-Net achieved a high mean Sørensen-Dice index (> 0.97). The remaining 84 radiographs were used to assess the reliability of automated IPA measurements against those taken manually by two orthopedic surgeons (O<sub>A</sub> and O<sub>B</sub>) using computer-based tools. Each measurement was repeated to assess intraobserver (O<sub>A1</sub> and O<sub>A2</sub>) and interobserver (O<sub>A2</sub> and O<sub>B</sub>) reliability. Agreement between automated and manual methods was evaluated using the intraclass correlation coefficient (ICC), and Bland-Altman analysis identified systematic differences. Standard error of measurement (SEM) and Pearson correlation coefficients quantified precision and linearity, and measurement times were recorded to evaluate efficiency. The artificial intelligence (AI)-based system demonstrated excellent reliability, with ICC(3,1) values of 0.92 (AI <i>vs</i> O<sub>A2</sub>) and 0.88 (AI <i>vs</i> O<sub>B</sub>), both statistically significant (<i>P</i> < 0.001). 
For manual measurements, ICC values were 0.95 (O<sub>A2</sub> <i>vs</i> O<sub>A1</sub>) and 0.95 (O<sub>A2</sub> <i>vs</i> O<sub>B</sub>), supporting both intraobserver and interobserver reliability. Bland-Altman analysis revealed minimal biases of: (1) 1.61° (AI <i>vs</i> O<sub>A2</sub>); and (2) 2.54° (AI <i>vs</i> O<sub>B</sub>), with clinically acceptable limits of agreement. The AI system also showed high precision, as evidenced by low SEM values: (1) 1.22° (O<sub>A2</sub> <i>vs</i> O<sub>B</sub>); (2) 1.77° (AI <i>vs</i> O<sub>A2</sub>); and (3) 2.09° (AI <i>vs</i> O<sub>B</sub>). Furthermore, Pearson correlation coefficients confirmed strong linear relationships between automated and manual measurements, with <i>r</i> = 0.85 (AI <i>vs</i> O<sub>A2</sub>) and <i>r</i> = 0.90 (AI <i>vs</i> O<sub>B</sub>). The AI method also improved efficiency substantially, completing all 84 measurements 8 times faster than manual methods and reducing the time required from an average of 36 minutes to 4.5 minutes. The proposed AI-assisted IPA measurement method shows strong clinical potential, corresponding closely with manual measurements. Integrating IPA with HVA and IMA assessments provides a comprehensive tool for automated forefoot deformity analysis, supporting hallux valgus severity classification and preoperative planning, while offering substantial time savings in high-volume clinical settings.
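
The agreement statistics reported above (Bland-Altman bias, limits of agreement, SEM) are straightforward to compute. A minimal sketch in NumPy, using hypothetical angle values rather than the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two raters' angle measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def sem(a, b):
    """Standard error of measurement from paired ratings: SD(diff) / sqrt(2)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    return diff.std(ddof=1) / np.sqrt(2)

# Hypothetical IPA angles (degrees) from an AI system and one observer
ai  = [12.1, 8.4, 15.0, 10.2, 9.8, 13.5]
obs = [11.0, 7.9, 13.8, 9.5, 8.6, 12.0]
bias, lo, hi = bland_altman(ai, obs)
print(f"bias={bias:.2f} deg, LoA=({lo:.2f}, {hi:.2f}), SEM={sem(ai, obs):.2f}")
```

A systematic positive bias like the one here would show up in a Bland-Altman plot as points clustered above zero.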

Development and interpretation of machine learning-based prognostic models for predicting high-risk prognostic pathological components in pulmonary nodules: integrating clinical features, serum tumor marker and imaging features.

Wang D, Qiu J, Li R, Tian H

PubMed · Jun 17 2025
With improvements in imaging, the detection rate of pulmonary nodules (PNs) has increased further, but identifying high-risk prognostic pathological components (HRPPC) remains a major challenge. In this study, we aimed to build a multi-parameter machine learning predictive model to improve the discrimination accuracy of HRPPC. This study included 816 patients with pathologically confirmed pulmonary nodules ≤ 3 cm who underwent pulmonary resection. High-resolution chest CT images and clinicopathological characteristics were collected from the patients. Lasso regression was used to identify key features, and a machine learning prediction model was constructed based on them. The recognition ability of the prediction model was evaluated using receiver operating characteristic (ROC) curves and confusion matrices. Model calibration was evaluated using calibration curves, decision curve analysis (DCA) was used to assess the model's value for clinical application, and SHAP values were used to interpret the predictive model. A total of 816 patients were included in this study, of whom 112 (13.79%) had nodules with HRPPC. Through Lasso recursive feature elimination, we identified 13 key relevant features. The XGB model performed the best, with an area under the ROC curve (AUC) of 0.930 (95% CI: 0.906-0.954) in the training cohort and 0.835 (95% CI: 0.774-0.895) in the validation cohort, indicating excellent predictive performance. In addition, the calibration curves of the XGB model showed good calibration in both cohorts. DCA demonstrated that the predictive model had a positive benefit in general clinical decision-making. The SHAP values identified the top three predictors of HRPPC in PNs as CT value, nodule long diameter, and PRO-GRP. Our prediction model for identifying HRPPC in PNs has excellent discrimination, calibration and clinical utility. 
Thoracic surgeons could thus make relatively reliable predictions of HRPPC in PNs without resorting to invasive testing.
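
The pipeline described (Lasso-based feature selection followed by a boosted-tree classifier evaluated by AUC) can be sketched with scikit-learn. This is an illustrative stand-in: it uses synthetic data with a class imbalance similar to the study's, and scikit-learn's GradientBoostingClassifier in place of XGBoost; none of the parameters come from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a clinical/imaging feature table (~14% positives)
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           weights=[0.86], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# L1-penalised selection keeps features with nonzero coefficients,
# then a boosted-tree classifier is fit on the retained columns
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5, random_state=0)
).fit(X_tr, y_tr)
clf = GradientBoostingClassifier(random_state=0).fit(selector.transform(X_tr), y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(selector.transform(X_te))[:, 1])
print(f"kept {selector.get_support().sum()} features, validation AUC={auc:.3f}")
```

Swapping in `xgboost.XGBClassifier` would follow the same fit/predict pattern.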

DiffM<sup>4</sup>RI: A Latent Diffusion Model with Modality Inpainting for Synthesizing Missing Modalities in MRI Analysis.

Ye W, Guo Z, Ren Y, Tian Y, Shen Y, Chen Z, He J, Ke J, Shen Y

PubMed · Jun 17 2025
Foundation Models (FMs) have shown great promise for multimodal medical image analysis such as Magnetic Resonance Imaging (MRI). However, certain MRI sequences may be unavailable due to various constraints, such as limited scanning time, patient discomfort, or scanner limitations. The absence of certain modalities can hinder the performance of FMs in clinical applications, making effective missing-modality imputation crucial for ensuring their applicability. Previous approaches, including generative adversarial networks (GANs), have been employed to synthesize missing modalities in either a one-to-one or many-to-one manner. However, these methods have limitations: they require training a new model for each missing-modality scenario and are prone to mode collapse, generating limited diversity in the synthesized images. To address these challenges, we propose DiffM<sup>4</sup>RI, a diffusion model for many-to-many missing modality imputation in MRI. DiffM<sup>4</sup>RI innovatively formulates missing modality imputation as a modality-level inpainting task, enabling it to handle arbitrary missing-modality situations without the need for training multiple networks. Experiments on the BraTS datasets demonstrate that DiffM<sup>4</sup>RI achieves an average SSIM improvement of 0.15 over MustGAN, 0.1 over SynDiff, and 0.02 over VQ-VAE-2. These results highlight the potential of DiffM<sup>4</sup>RI in enhancing the reliability of FMs in clinical applications. The code is available at https://github.com/27yw/DiffM4RI.
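
SSIM, the metric used above to compare synthesized modalities, combines luminance, contrast, and structure terms. A simplified single-window implementation (no Gaussian sliding window, unlike the full metric) on synthetic images illustrates the computation:

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Simplified single-window SSIM (no Gaussian weighting), images in [0, L]."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2            # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
real = rng.random((64, 64))                              # stand-in reference slice
synth_good = np.clip(real + rng.normal(0, 0.05, real.shape), 0, 1)
synth_bad = np.clip(real + rng.normal(0, 0.3, real.shape), 0, 1)
print(ssim_global(real, real), ssim_global(real, synth_good), ssim_global(real, synth_bad))
```

Identical images score exactly 1.0, and heavier corruption lowers the score, which is why an SSIM gain of even 0.02 between methods is meaningful.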

Enhancing Ultrasound-Based Diagnosis of Unilateral Diaphragmatic Paralysis with a Visual Transformer-Based Model.

Kalkanis A, Bakalis D, Testelmans D, Buyse B, Simos YV, Tsamis KI, Manis G

PubMed · Jun 17 2025
This paper presents a novel methodology that combines a pre-trained Visual Transformer-Based Deep Model (ViT) with a custom denoising image filter for the diagnosis of Unilateral Diaphragmatic Paralysis (UDP) using Ultrasound (US) images. The ViT is employed to extract complex features from US images of 17 volunteers, capturing intricate patterns and details that are critical for accurate diagnosis. The extracted features are then fed into an ensemble learning model to determine the presence of UDP. The proposed framework achieves an average accuracy of 93.8% on a stratified 5-fold cross-validation, surpassing relevant state-of-the-art (SOTA) image classifiers. This high level of performance underscores the robustness and effectiveness of the framework, highlighting its potential as a prominent diagnostic tool in medical imaging.
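
The evaluation protocol described (features extracted by a pre-trained backbone, fed to an ensemble classifier, scored by stratified 5-fold cross-validation) can be sketched with scikit-learn. Synthetic vectors stand in for the ViT features here, and RandomForestClassifier is an assumed choice of ensemble, not the paper's model:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for ViT feature vectors extracted from ultrasound images
X, y = make_classification(n_samples=120, n_features=64, n_informative=10,
                           random_state=1)

# Stratified folds preserve the class ratio in each split
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(RandomForestClassifier(random_state=1), X, y, cv=cv)
print(f"mean accuracy over 5 folds: {scores.mean():.3f}")
```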

Effects of patient and imaging factors on small bowel motility scores derived from deep learning-based segmentation of cine MRI.

Heo S, Yun J, Kim DW, Park SY, Choi SH, Kim K, Jung KW, Myung SJ, Park SH

PubMed · Jun 17 2025
Small bowel motility can be quantified using cine MRI, but the influence of patient and imaging factors on motility scores remains unclear. This study evaluated whether patient and imaging factors affect motility scores derived from deep learning-based segmentation of cine MRI. Fifty-four patients (mean age 53.6 ± 16.4 years; 34 women) with chronic constipation or suspected colonic pseudo-obstruction who underwent cine MRI covering the entire small bowel between 2022 and 2023 were included. A deep learning algorithm was developed to segment small bowel regions, and motility was quantified with an optical flow-based algorithm, producing a motility score for each slice. Associations of motility scores with patient factors (age, sex, body mass index, symptoms, and bowel distension) and MRI slice-related factors (anatomical location, bowel area, and anteroposterior position) were analyzed using linear mixed models. Deep learning-based small bowel segmentation achieved a mean volumetric Dice similarity coefficient of 75.4 ± 18.9%, with a manual correction time of 26.5 ± 13.5 s. Median motility scores per patient ranged from 26.4 to 64.4, with an interquartile range of 3.1-26.6. Multivariable analysis revealed that MRI slice-related factors, including anatomical location with mixed ileum and jejunum (β = -4.9; p = 0.01, compared with ileum dominant), bowel area (first order β = -0.2, p < 0.001; second order β = 5.7 × 10<sup>-4</sup>, p < 0.001), and anteroposterior position (first order β = -51.5, p < 0.001; second order β = 28.8, p = 0.004), were significantly associated with motility scores. Patient factors showed no association with motility scores. Small bowel motility scores were significantly associated with MRI slice-related factors; estimates of global motility that do not adjust for these factors may therefore be limited. Question: Global small bowel motility can be quantified from cine MRI; however, the confounding factors affecting motility scores remain unclear. 
Findings: Motility scores were significantly influenced by MRI slice-related factors, including anatomical location, bowel area, and anteroposterior position. Clinical relevance: Adjusting for slice-related factors is essential for accurate interpretation of small bowel motility scores on cine MRI.
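
The study's optical flow-based motility quantification is not specified here; a much cruder proxy, mean absolute frame-to-frame intensity change inside the segmented mask, illustrates the general shape of a per-slice motility score computed over a cine sequence:

```python
import numpy as np

def motility_score(frames, mask):
    """Crude motility proxy: mean absolute frame-to-frame intensity change
    inside the segmented small-bowel mask (stand-in for an optical-flow score)."""
    diffs = np.abs(np.diff(frames, axis=0))        # (T-1, H, W) temporal changes
    return float(diffs[:, mask].mean())            # average over time and mask pixels

rng = np.random.default_rng(0)
mask = np.zeros((32, 32), bool)
mask[8:24, 8:24] = True                            # toy segmentation mask
still = np.repeat(rng.random((1, 32, 32)), 10, axis=0)   # static bowel: 10 identical frames
moving = still + rng.normal(0, 0.1, still.shape)         # frames with intensity change
print(motility_score(still, mask), motility_score(moving, mask))
```

A static sequence scores exactly zero, so any dependence of the score on slice geometry (bowel area, position) must come from the imaging, which is the confound the study quantifies.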

Enhancing cerebral infarct classification by automatically extracting relevant fMRI features.

Dobromyslin VI, Zhou W

PubMed · Jun 17 2025
Accurate detection of cortical infarcts is critical for timely treatment and improved patient outcomes. Current brain imaging methods often require invasive procedures that primarily assess blood vessel and structural white matter damage. There is a need for non-invasive approaches, such as functional MRI (fMRI), that better reflect neuronal viability. This study utilized automated machine learning (auto-ML) techniques to identify novel infarct-specific fMRI biomarkers related to chronic cortical infarcts. We analyzed resting-state fMRI data from the multi-center ADNI dataset, which included 20 chronic infarct patients and 30 cognitively normal (CN) controls. Surface-based registration methods were applied to minimize partial-volume effects typically associated with lower-resolution fMRI data. We evaluated the performance of 7 previously known fMRI biomarkers alongside 107 new auto-generated fMRI biomarkers across 33 different classification models. Our analysis identified 6 new fMRI biomarkers that substantially improved infarct detection performance compared to previously established metrics. The best-performing combination of biomarkers and classifiers achieved a cross-validation ROC score of 0.791, closely matching the accuracy of diffusion-weighted imaging methods used in acute stroke detection. Our proposed auto-ML fMRI infarct-detection technique demonstrated robustness across diverse imaging sites and scanner types, highlighting the potential of automated feature extraction to significantly enhance non-invasive infarct detection.
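
The auto-ML search described (many candidate biomarkers crossed with many classifiers, ranked by cross-validated ROC AUC) can be sketched as a small model zoo in scikit-learn. The models, data, and cohort sizes below are illustrative stand-ins, not the study's 33 classifiers or its ADNI features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-subject fMRI biomarker vectors (50 subjects)
X, y = make_classification(n_samples=50, n_features=12, n_informative=4,
                           weights=[0.6], random_state=2)

# Miniature "zoo" of candidate classifiers, ranked by cross-validated ROC AUC
models = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm": SVC(random_state=2),
    "rf": RandomForestClassifier(random_state=2),
}
aucs = {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
        for name, m in models.items()}
best = max(aucs, key=aucs.get)
print(best, round(aucs[best], 3))
```

Real auto-ML frameworks add feature generation and hyperparameter search on top of this ranking loop.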

Transformer-augmented lightweight U-Net (UAAC-Net) for accurate MRI brain tumor segmentation.

Varghese NE, John A, C UDA, Pillai MJ

PubMed · Jun 17 2025
Accurate segmentation of brain tumor images, particularly gliomas in MRI scans, is crucial for early diagnosis, monitoring progression, and evaluating tumor structure and therapeutic response. This work proposes a novel lightweight, transformer-based U-Net model for brain tumor segmentation that integrates attention mechanisms and multi-layer feature extraction via atrous convolution to capture long-range relationships and contextual information across image regions. The model's performance is evaluated on the publicly accessible BraTS 2020 dataset using metrics such as the Dice coefficient, accuracy, mean Intersection over Union (IoU), sensitivity, and specificity. The proposed model outperforms many existing methods, such as MimicNet, Swin Transformer-based UNet, and hybrid multiresolution-based UNet, and is capable of handling a variety of segmentation issues. The experimental results demonstrate that the proposed model achieves an accuracy of 98.23%, a Dice score of 0.9716, and a mean IoU of 0.8242 during training when compared to the current state-of-the-art methods.
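
The Dice coefficient and IoU reported above are related by Dice = 2·IoU/(1+IoU); both reduce to simple set overlaps on binary masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over union: |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Two 4x4 squares offset by one pixel: 16 px each, 3x3 = 9 px overlap
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
gt = np.zeros((8, 8), bool);   gt[3:7, 3:7] = True
print(dice(pred, gt), iou(pred, gt))   # 0.5625 and 9/23 ≈ 0.391
```

Because Dice weights the intersection twice, it is always at least as large as IoU for the same masks, which is why the two numbers in the abstract differ.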

Risk factors and prognostic indicators for progressive fibrosing interstitial lung disease: a deep learning-based CT quantification approach.

Lee K, Lee JH, Koh SY, Park H, Goo JM

PubMed · Jun 17 2025
To investigate the value of deep learning-based quantitative CT (QCT) in predicting progressive fibrosing interstitial lung disease (PF-ILD) and assessing prognosis. This single-center retrospective study included ILD patients with CT examinations between January 2015 and June 2021. Each ILD finding (ground-glass opacity (GGO), reticular opacity (RO), honeycombing) and fibrosis (sum of RO and honeycombing) was quantified from baseline and follow-up CTs. Logistic regression was performed to identify predictors of PF-ILD, defined as radiologic progression along with forced vital capacity (FVC) decline ≥ 5% predicted. Cox proportional hazard regression was used to assess mortality. The added value of incorporating QCT into FVC was evaluated using the C-index. Among 465 ILD patients (median age [IQR], 65 [58-71] years; 238 men), 148 had PF-ILD. After adjusting for clinico-radiological variables, baseline RO (OR: 1.096, 95% CI: 1.042, 1.152, p < 0.001) and fibrosis extent (OR: 1.035, 95% CI: 1.004, 1.067, p = 0.025) were PF-ILD predictors. Baseline RO (HR: 1.063, 95% CI: 1.013, 1.115, p = 0.013), honeycombing (HR: 1.074, 95% CI: 1.034, 1.116, p < 0.001), and fibrosis extent (HR: 1.067, 95% CI: 1.043, 1.093, p < 0.001) predicted poor prognosis. The Cox models combining baseline percent predicted FVC with QCT (each ILD finding, C-index: 0.714, 95% CI: 0.660, 0.764; fibrosis, C-index: 0.703, 95% CI: 0.649, 0.752; both p-values < 0.001) outperformed the model without QCT (C-index: 0.545, 95% CI: 0.500, 0.599). Deep learning-based QCT for ILD findings is useful for predicting PF-ILD and its prognosis. Question: Does deep learning-based CT quantification of interstitial lung disease (ILD) findings have value in predicting progressive fibrosing ILD (PF-ILD) and improving prognostication? Findings: Deep learning-based CT quantification of baseline reticular opacity and fibrosis predicted the development of PF-ILD. 
In addition, CT quantification demonstrated value in predicting all-cause mortality. Clinical relevance: Deep learning-based CT quantification of ILD findings is useful for predicting PF-ILD and its prognosis. Identifying patients at high risk of PF-ILD through CT quantification enables closer monitoring and earlier treatment initiation, which may lead to improved clinical outcomes.
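
The C-index used to compare the Cox models above is the fraction of comparable patient pairs the model ranks correctly (higher risk score, earlier event). A minimal Harrell's C implementation on hypothetical survival data (an O(n²) loop, adequate for illustration):

```python
import numpy as np

def c_index(time, event, risk):
    """Harrell's C: among comparable pairs, the fraction where the
    higher-risk subject experiences the event earlier. Risk ties count 0.5."""
    n_conc, n_pairs = 0.0, 0
    for i in range(len(time)):
        if not event[i]:
            continue                    # pairs are anchored on observed events
        for j in range(len(time)):
            if time[j] > time[i]:       # j outlived subject i's event time
                n_pairs += 1
                if risk[i] > risk[j]:
                    n_conc += 1
                elif risk[i] == risk[j]:
                    n_conc += 0.5
    return n_conc / n_pairs

# Hypothetical follow-up times (months), event indicators, and model risk scores
time = np.array([5, 8, 12, 20, 25])
event = np.array([1, 1, 0, 1, 0])
risk = np.array([0.9, 0.7, 0.4, 0.5, 0.1])
print(c_index(time, event, risk))
```

A C-index of 0.5 corresponds to random ranking, which is why the FVC-only model's 0.545 above is barely better than chance while the QCT models clear 0.7.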

A Robust Residual Three-dimensional Convolutional Neural Networks Model for Prediction of Amyloid-β Positivity by Using FDG-PET.

Ardakani I, Yamada T, Iwano S, Kumar Maurya S, Ishii K

PubMed · Jun 17 2025
2-deoxy-2-18F-FDG PET, widely used in oncology, is more accessible and affordable than amyloid PET, a crucial tool for determining amyloid positivity in the diagnosis of Alzheimer disease (AD). This study aimed to leverage deep learning with residual 3D convolutional neural networks (3DCNN) to develop a robust model that predicts amyloid-β positivity from FDG-PET. A cohort of 187 patients was used for model development, consisting of patients ranging from cognitively normal to those with dementia and other cognitive impairments who underwent T1-weighted MRI, 18F-FDG, and 11C-Pittsburgh compound B (PiB) PET scans. A residual 3DCNN model was configured using nonexhaustive grid search and trained on repeated random splits of our development dataset. We evaluated the performance of our model, and particularly its robustness, using a multisite dataset of 99 patients of different ethnicities with images at different site-harmonization levels. Our model achieved mean AUC scores of 0.815 and 0.840 on images without and with site harmonization, respectively. It achieved higher AUC scores in the cognitively normal (CN) group (0.801 and 0.834) than in the dementia group (0.777 and 0.745). The corresponding mean F1 scores were 0.770 and 0.810 on images without and with site harmonization; here the model achieved lower F1 scores in the CN group (0.580 and 0.658) than in the dementia group (0.907 and 0.931). We demonstrated that residual 3DCNN can learn complex 3D spatial patterns in FDG-PET images and robustly predict amyloid-β positivity with significantly less reliance on site-harmonization preprocessing.
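
The core building block of a residual 3DCNN, a convolutional unit with an identity shortcut, can be sketched in plain NumPy. This is a naive single-channel teaching implementation under assumed shapes; real models use GPU frameworks, normalization layers, and learned multi-channel kernels:

```python
import numpy as np

def conv3d_same(x, k):
    """Naive 3-D convolution with zero padding for a fixed 3x3x3 kernel."""
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            for l in range(x.shape[2]):
                out[i, j, l] = (xp[i:i+3, j:j+3, l:l+3] * k).sum()
    return out

def residual_block(x, k1, k2):
    """Minimal residual unit: y = ReLU(conv(ReLU(conv(x))) + x)."""
    h = np.maximum(conv3d_same(x, k1), 0)
    return np.maximum(conv3d_same(h, k2) + x, 0)   # identity shortcut

rng = np.random.default_rng(0)
vol = rng.random((8, 8, 8))                        # stand-in FDG-PET patch
k = rng.normal(0, 0.1, (3, 3, 3))
out = residual_block(vol, k, k)
print(out.shape)
```

The shortcut means the block only needs to learn a correction to the identity mapping, which is what lets such networks train stably at depth.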