Page 122 of 4003995 results

Gout Diagnosis From Ultrasound Images Using a Patch-Wise Attention Deep Network.

Zhao Y, Xiao L, Liu H, Li Y, Ning C, Liu M

pubmed | Jul 29 2025
The rising global prevalence of gout necessitates advancements in diagnostic methodologies. Ultrasonographic imaging of the foot has become an important diagnostic modality for gout because of its non-invasiveness, cost-effectiveness, and real-time imaging capabilities. This study aims to develop and validate a deep learning-based artificial intelligence (AI) model for automated gout diagnosis using ultrasound images. In this study, ultrasound images were primarily acquired at the first metatarsophalangeal joint (MTP1) from 598 cases at two institutions: 520 from Institution 1 and 78 from Institution 2. From Institution 1's dataset, 66% of cases were randomly allocated for model training, while the remaining 34% constituted the internal test set. The dataset from Institution 2 served as an independent external validation cohort. A novel deep learning model integrating a patch-wise attention mechanism and multi-scale feature extraction was developed to enhance the detection of subtle sonographic features and optimize diagnostic performance. The proposed model demonstrated robust diagnostic efficacy, achieving an accuracy of 87.88%, a sensitivity of 87.85%, a specificity of 87.93%, and an area under the curve (AUC) of 93.43%. Additionally, the model generates interpretable visual heatmaps that localize gout-related pathological features, facilitating interpretation for clinical decision-making. In summary, the deep learning-based AI model developed here for automated gout detection from ultrasound images achieved better performance than competing models. Furthermore, the features highlighted by the model align closely with expert assessments, demonstrating its potential to assist in the ultrasound-based diagnosis of gout.
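The accuracy, sensitivity, and specificity reported above are standard confusion-matrix statistics; a minimal sketch shows how they are derived. The counts below are illustrative only, not taken from the study.

```python
# Minimal sketch: accuracy, sensitivity, and specificity from
# confusion-matrix counts. Illustrative counts, not the study's data.
def classification_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)  # true-positive rate over gout-positive cases
    specificity = tn / (tn + fp)  # true-negative rate over gout-negative cases
    return accuracy, sensitivity, specificity

acc, sens, spec = classification_metrics(tp=94, fp=7, tn=51, fn=13)
```

With these illustrative counts, all three metrics land near 0.88, i.e. the same magnitude as the values reported above.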

Deep sensorless tracking of ultrasound probe orientation during freehand transperineal biopsy with spatial context for symmetry disambiguation.

Soormally C, Beitone C, Troccaz J, Voros S

pubmed | Jul 29 2025
Diagnosis of prostate cancer requires histopathology of tissue samples. Following an MRI to identify suspicious areas, a biopsy is performed under ultrasound (US) guidance. In existing assistance systems, 3D US information is generally available (taken before the biopsy session and/or in between samplings). However, without registration between 2D images and 3D volumes, the urologist must rely on cognitive navigation. This work introduces a deep learning model to track the orientation of real-time US slices relative to a reference 3D US volume using only image and volume data. The dataset comprises 515 3D US volumes collected from 51 patients during routine transperineal biopsy. To generate 2D image streams, volumes are resampled to simulate three-degrees-of-freedom rotational movements around the rectal entrance. The proposed model comprises two ResNet-based sub-modules to address the symmetry ambiguity arising from complex out-of-plane movement of the probe. The first sub-module predicts the unsigned relative orientation between consecutive slices, while the second leverages a custom similarity model and a spatial context volume to determine the sign of this relative orientation. From the sub-modules' predictions, slice orientations along the navigated trajectory can then be derived in real time. Results demonstrate that registration error remains below 2.5 mm in 92% of cases over a 5-second trajectory, and in 80% over a 25-second trajectory. These findings show that accurate, sensorless 2D/3D US registration given a spatial context is achievable with limited drift over extended navigation. This highlights the potential of AI-driven biopsy assistance to increase the accuracy of freehand biopsy.
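The two-sub-module design can be pictured as integrating one signed step per frame: one network predicts the magnitude of the relative orientation, the other its sign. A toy sketch of that integration (not the authors' code; all names and values are hypothetical):

```python
# Toy sketch of sensorless orientation tracking: accumulate signed relative
# angles, where the magnitude and the sign come from two separate predictors
# (mirroring the paper's two sub-modules).
def integrate_orientation(start_angle, magnitudes, signs):
    """magnitudes[i] >= 0 (degrees); signs[i] is +1 or -1."""
    angle = start_angle
    trajectory = [angle]
    for magnitude, sign in zip(magnitudes, signs):
        angle += sign * magnitude  # signed relative orientation for this frame
        trajectory.append(angle)
    return trajectory

path = integrate_orientation(0.0, [1.5, 2.0, 0.5], [+1, +1, -1])
# path: [0.0, 1.5, 3.5, 3.0]
```

Note how a single sign error would propagate through every later orientation, which is why the spatial-context disambiguation of the second sub-module matters.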

Feature Selection in Healthcare Datasets: Towards a Generalizable Solution.

Maruotto I, Ciliberti FK, Gargiulo P, Recenti M

pubmed | Jul 29 2025
The increasing dimensionality of healthcare datasets presents major challenges for clinical data analysis and interpretation. This study introduces a scalable ensemble feature selection (FS) strategy optimized for multi-biometric healthcare datasets, aiming to address the need for dimensionality reduction, identify the most significant features, improve machine learning models' performance, and enhance interpretability in a clinical context. The novel waterfall selection, which sequentially integrates (a) tree-based feature ranking and (b) greedy backward feature elimination, produces several sets of features as output. These subsets are then combined using a specific merging strategy to produce a single set of clinically relevant features. The overall method is applied to two healthcare datasets: the biosignal-based BioVRSea dataset, containing electromyography, electroencephalography, and center-of-pressure data for postural control and motion sickness assessment, and the image-based SinPain dataset, which includes MRI and CT-scan data to study knee osteoarthritis. Our ensemble FS approach demonstrated effective dimensionality reduction, achieving over a 50% decrease in certain feature subsets. The reduced feature set maintained or improved the models' classification metrics when tested with Support Vector Machine and Random Forest models. The proposed ensemble FS method retains selected features essential for distinguishing clinical outcomes, leading to models that are both computationally efficient and clinically interpretable. Furthermore, the adaptability of this method across two heterogeneous healthcare datasets and the scalability of the algorithm indicate its potential as a generalizable tool in healthcare studies. This approach can advance clinical decision support systems, making high-dimensional healthcare datasets more accessible and clinically interpretable.
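A schematic sketch of the waterfall idea, under stated assumptions: `importance` stands in for the tree-based ranking and `score` for the model evaluation. Both are hypothetical stand-ins, not the authors' implementation.

```python
# Sketch of the two-stage "waterfall" selection: (a) rank features by an
# importance function, then (b) greedily drop the least important remaining
# feature for as long as the model score does not degrade.
def waterfall_select(features, importance, score, tol=0.0):
    selected = sorted(features, key=importance, reverse=True)  # (a) ranking
    best = score(selected)
    while len(selected) > 1:                                   # (b) backward elimination
        candidate = selected[:-1]  # drop least important remaining feature
        s = score(candidate)
        if s + tol < best:         # stop once removal hurts the score
            break
        best, selected = max(best, s), candidate
    return selected

# Toy usage: importance = weight magnitude, score = capped sum of kept weights.
weights = {"emg": 0.9, "eeg": 0.7, "cop": 0.1, "noise": 0.0}
kept = waterfall_select(list(weights), weights.get,
                        lambda fs: min(sum(weights[f] for f in fs), 1.6))
# kept: ["emg", "eeg"] -- half of the toy features removed with no score loss.
```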

Time-series X-ray image prediction of dental skeleton treatment progress via neural networks.

Kwon SW, Moon JK, Song SC, Cha JY, Kim YW, Choi YJ, Lee JS

pubmed | Jul 29 2025
Accurate prediction of skeletal changes during orthodontic treatment in growing patients remains challenging due to significant individual variability in craniofacial growth and treatment responses. Conventional methods, such as support vector regression and multilayer perceptrons, require multiple sequential radiographs to achieve acceptable accuracy. However, they are limited by increased radiation exposure, susceptibility to landmark identification errors, and the lack of visually interpretable predictions. To overcome these limitations, this study explored advanced generative approaches, including denoising diffusion probabilistic models (DDPMs), latent diffusion models (LDMs), and ControlNet, to predict future cephalometric radiographs using minimal input data. We evaluated three diffusion-based models, namely a DDPM utilizing three sequential cephalometric images (3-input DDPM), a single-image DDPM (1-input DDPM), and a single-image LDM, as well as a vision-based generative model, ControlNet, conditioned on patient-specific attributes such as age, sex, and orthodontic treatment type. Quantitative evaluations demonstrated that the 3-input DDPM achieved the highest numerical accuracy, whereas the single-image LDM delivered comparable predictive performance with significantly reduced clinical requirements. ControlNet also exhibited competitive accuracy, highlighting its potential effectiveness in clinical scenarios. These findings indicate that the single-image LDM and ControlNet offer practical solutions for personalized orthodontic treatment planning, reducing patient visits and radiation exposure while maintaining robust predictive accuracy.

Radiomics, machine learning, and deep learning for hippocampal sclerosis identification: a systematic review and diagnostic meta-analysis.

Baptista JM, Brenner LO, Koga JV, Ohannesian VA, Ito LA, Nabarro PH, Santos LP, Henrique A, de Oliveira Almeida G, Berbet LU, Paranhos T, Nespoli V, Bertani R

pubmed | Jul 29 2025
Hippocampal sclerosis (HS) is the primary pathological finding in temporal lobe epilepsy (TLE) and a common cause of refractory seizures. Conventional diagnostic methods, such as EEG and MRI, have limitations. Artificial intelligence (AI) and radiomics, utilizing machine learning and deep learning, offer a non-invasive approach to enhance diagnostic accuracy. This study synthesized recent AI and radiomics research to improve HS detection in TLE. PubMed/Medline, Embase, and Web of Science were systematically searched following PRISMA-DTA guidelines until May 2024. Statistical analysis was conducted using Stata 14. A bivariate model was used to pool sensitivity (SEN) and specificity (SPE) for HS detection, with I² assessing heterogeneity. Six studies were included. The pooled sensitivity and specificity of AI-based models for HS detection in medial temporal lobe epilepsy (MTLE) were 0.91 (95% CI: 0.83-0.96; I² = 71.48%) and 0.90 (95% CI: 0.83-0.94; I² = 69.62%), with an AUC of 0.96. AI alone showed higher sensitivity (0.92) and specificity (0.93) than AI combined with radiomics (sensitivity: 0.88; specificity: 0.90). Among algorithms, support vector machine (SVM) had the highest performance (SEN: 0.92; SPE: 0.95), followed by convolutional neural networks (CNN) and logistic regression (LR). AI models, particularly SVM, demonstrate high accuracy in detecting HS, with AI alone outperforming its combination with radiomics. These findings support the integration of AI into non-invasive diagnostic workflows, potentially enabling earlier detection and more personalized clinical decision-making in epilepsy care, ultimately contributing to improved patient outcomes and behavioral management.
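For intuition, proportions such as per-study sensitivities are typically pooled on the logit scale. The sketch below uses simple fixed-effect inverse-variance pooling, which is a simplification of the bivariate random-effects model the review actually applied; the two input studies are illustrative, not from the meta-analysis.

```python
import math

# Fixed-effect inverse-variance pooling of proportions on the logit scale
# (a simplified stand-in for the bivariate model used in the review).
def pool_proportions(props, sample_sizes):
    logits, weights = [], []
    for p, n in zip(props, sample_sizes):
        var = 1 / (p * n) + 1 / ((1 - p) * n)  # approx. variance of logit(p)
        logits.append(math.log(p / (1 - p)))
        weights.append(1 / var)
    pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled_logit))   # back-transform to a proportion

pooled_sens = pool_proportions([0.90, 0.80], [100, 100])  # illustrative studies
```

The pooled value lands between the two study sensitivities, weighted toward the more precise (lower-variance) estimate.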

Neural Autoregressive Modeling of Brain Aging

Ridvan Yesiloglu, Wei Peng, Md Tauhidul Islam, Ehsan Adeli

arxiv preprint | Jul 29 2025
Brain aging synthesis is a critical task with broad applications in clinical and computational neuroscience. The ability to predict the future structural evolution of a subject's brain from an earlier MRI scan provides valuable insights into aging trajectories. Yet the high dimensionality of the data, the subtlety of structural changes across ages, and subject-specific patterns make synthesis of the aging brain challenging. To overcome these challenges, we propose NeuroAR, a novel brain aging simulation model based on generative autoregressive transformers. NeuroAR synthesizes the aging brain by autoregressively estimating the discrete token maps of a future scan from a space of concatenated token embeddings of the previous and future scans. To guide the generation, it concatenates the subject's previous scan into each scale and incorporates the scan's acquisition age and the target age at each block via cross-attention. We evaluate our approach on both an elderly population and adolescent subjects, demonstrating superior performance over state-of-the-art generative models, including latent diffusion models (LDM) and generative adversarial networks, in terms of image fidelity. Furthermore, we employ a pre-trained age predictor to further validate the consistency and realism of the synthesized images with respect to expected aging patterns. NeuroAR significantly outperforms key models, including LDM, demonstrating its ability to model subject-specific brain aging trajectories with high fidelity.
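A toy sketch of the autoregressive principle only: each new token is a function of all previously generated tokens plus conditioning information (here, the scan and target ages). The generation rule below is arbitrary and bears no relation to NeuroAR's transformer; it just makes the token-by-token conditioning concrete.

```python
# Toy autoregressive generation: token i depends on tokens 0..i-1 and on
# conditioning (scan age, target age). The rule is arbitrary, for shape only.
def generate_autoregressive(n_tokens, next_token, conditioning):
    tokens = []
    for _ in range(n_tokens):
        tokens.append(next_token(tokens, conditioning))
    return tokens

toy = generate_autoregressive(
    4,
    lambda prev, cond: (sum(prev) + cond["target_age"] - cond["scan_age"]) % 5,
    {"scan_age": 70, "target_age": 73},
)
# toy: [3, 1, 2, 4]
```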

Evaluation of GPT-4o for multilingual translation of radiology reports across imaging modalities.

Terzis R, Salam B, Nowak S, Mueller PT, Mesropyan N, Oberlinkels L, Efferoth AF, Kravchenko D, Voigt M, Ginzburg D, Pieper CC, Hayawi M, Kuetting D, Afat S, Maintz D, Luetkens JA, Kaya K, Isaak A

pubmed | Jul 29 2025
Large language models (LLMs) like GPT-4o offer multilingual and real-time translation capabilities. This study aims to evaluate GPT-4o's effectiveness in translating radiology reports into different languages. In this experimental two-center study, 100 real-world radiology reports from four imaging modalities (X-ray, ultrasound, CT, MRI) were randomly selected and fully anonymized. Reports were translated using GPT-4o with zero-shot prompting from German into four languages: English, French, Spanish, and Russian (n = 400 translations). Eight bilingual radiologists (two per language) evaluated the translations for general readability, overall quality, and utility for translators using 5-point Likert scales (5 = best score, 1 = worst score). Binary (yes/no) questions were used to evaluate potential harmful errors, completeness, and factual correctness. The average processing time of GPT-4o for translating reports ranged from 9 to 24 s. The overall quality of translations achieved a median of 4.5 (IQR 4-5), with English (5 [4-5]) as well as French and Spanish (each 4.5 [4-5]) significantly outperforming Russian (4 [3.5-4]; each p < 0.05). Utility for translators was rated highest for English (5 [5-5], p < 0.05 against other languages). Readability scores and translation completeness were significantly higher for translations into Spanish, English, and French compared to Russian (each p < 0.05). Factual correctness averaged 79%, with English (84%) and French (83%) outperforming Russian (69%) (each p < 0.05). Potentially harmful errors were identified in 4% of translations, primarily in Russian (9%). GPT-4o demonstrated robust performance in translating radiology reports across multiple languages, with limitations observed in Russian translations.
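The median-and-IQR summaries above are straightforward to reproduce; a sketch with illustrative ratings (not the study's data):

```python
from statistics import median, quantiles

# Median and interquartile range of 5-point Likert ratings.
# Illustrative scores, not the study's data.
ratings = [5, 5, 4, 4, 5, 4, 3, 5]
med = median(ratings)
q1, _, q3 = quantiles(ratings, n=4, method="inclusive")
# med = 4.5 with IQR 4-5, i.e. reported in the style "4.5 [4-5]".
```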

Multi-Faceted Consistency learning with active cross-labeling for barely-supervised 3D medical image segmentation.

Wu X, Xu Z, Tong RK

pubmed | Jul 29 2025
Deep learning-driven 3D medical image segmentation generally necessitates dense voxel-wise annotations, which are expensive and labor-intensive to acquire. Cross-annotation, which labels only a few orthogonal slices per scan, has recently emerged as a cost-effective alternative that better preserves the shape and precise boundaries of the 3D object than traditional weak labeling methods such as bounding boxes and scribbles. However, learning from such sparse labels, referred to as barely-supervised learning (BSL), remains challenging due to less fine-grained object perception, less compact class features and inferior generalizability. To tackle these challenges and foster collaboration between model training and human expertise, we propose a Multi-Faceted ConSistency learning (MF-ConS) framework with a Diversity and Uncertainty Sampling-based Active Learning (DUS-AL) strategy, specifically designed for the active BSL scenario. This framework combines a cross-annotation BSL strategy, where only three orthogonal slices are labeled per scan, with an AL paradigm guided by DUS to direct human-in-the-loop annotation toward the most informative volumes under a fixed budget. Built upon a teacher-student architecture, MF-ConS integrates three complementary consistency regularization modules: (i) neighbor-informed object prediction consistency for advancing fine-grained object perception by encouraging the student model to infer complete segmentation from masked inputs; (ii) prototype-driven consistency, which enhances intra-class compactness and discriminativeness by aligning latent feature and decision spaces using fused prototypes; and (iii) stability constraint that promotes model robustness against input perturbations. Extensive experiments on three benchmark datasets demonstrate that MF-ConS (DUS-AL) consistently outperforms state-of-the-art methods under extremely limited annotation.
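As a minimal illustration of one consistency term in such a teacher-student setup (an assumption-level sketch, not the paper's actual loss): penalize disagreement between student predictions on a masked or perturbed input and teacher predictions on the full input.

```python
# Minimal sketch of a consistency term: mean squared disagreement between
# student predictions (on masked/perturbed input) and teacher predictions
# (on the full input). Toy probability vectors, not the paper's loss.
def consistency_loss(student_probs, teacher_probs):
    n = len(student_probs)
    return sum((s - t) ** 2 for s, t in zip(student_probs, teacher_probs)) / n

loss = consistency_loss([0.9, 0.1, 0.0], [0.8, 0.2, 0.0])
```

Minimizing such a term pushes the student toward teacher-consistent predictions under input perturbations, which is the intuition behind the stability constraint described above.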

segcsvdPVS: A convolutional neural network-based tool for quantification of enlarged perivascular spaces (PVS) on T1-weighted images

Gibson, E., Ramirez, J., Woods, L. A., Berberian, S., Ottoy, J., Scott, C., Yhap, V., Gao, F., Coello, R. D., Valdes-Hernandez, M., Lange, A., Tartaglia, C., Kumar, S., Binns, M. A., Bartha, R., Symons, S., Swartz, R. H., Masellis, M., Singh, N., MacIntosh, B. J., Wardlaw, J. M., Black, S. E., Lim, A. S., Goubran, M.

medrxiv preprint | Jul 29 2025
Introduction: Enlarged perivascular spaces (PVS) are imaging markers of cerebral small vessel disease (CSVD) that are associated with age, disease phenotypes, and overall health. Quantification of PVS is challenging but necessary to expand our understanding of their role in cerebrovascular pathology. Accurate and automated segmentation of PVS on T1-weighted images would be valuable given the widespread use of T1-weighted imaging protocols in multisite clinical and research datasets.
Methods: We introduce segcsvdPVS, a convolutional neural network (CNN)-based tool for automated PVS segmentation on T1-weighted images. segcsvdPVS was developed using a novel hierarchical approach that builds on existing tools and incorporates robust training strategies to enhance the accuracy and consistency of PVS segmentation. Performance was evaluated using a comprehensive strategy that included comparison to existing benchmark methods, ablation-based validation, accuracy validation against manual ground-truth annotations, correlation with age-related PVS burden as a biological benchmark, and extensive robustness testing.
Results: segcsvdPVS achieved strong object-level performance for basal ganglia PVS (DSC = 0.78), exhibiting both high sensitivity (SNS = 0.80) and precision (PRC = 0.78). Although voxel-level precision was lower (PRC = 0.57), manual correction improved this by only ~3%, indicating that the additional voxels reflected primarily boundary- or extent-related differences rather than correctable false-positive error. For non-basal ganglia PVS, segcsvdPVS outperformed benchmark methods, exhibiting higher voxel-level performance across several metrics (DSC = 0.60, SNS = 0.67, PRC = 0.57, NSD = 0.77), despite overall lower performance relative to basal ganglia PVS. Additionally, the associations between age and segmentation-derived measures of PVS burden were consistently stronger and more reliable for segcsvdPVS than for benchmark methods across three cohorts (test6, ADNI, CAHHM), providing further evidence of the accuracy and consistency of its segmentation output.
Conclusions: segcsvdPVS demonstrates robust performance across diverse imaging conditions and improved sensitivity to biologically meaningful associations, supporting its utility as a T1-based PVS segmentation tool.
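The voxel-level DSC, SNS, and PRC values above follow directly from overlap counts between a predicted mask and the ground truth; a minimal sketch with toy flattened binary masks (not the study's data):

```python
# Dice similarity coefficient (DSC), sensitivity (SNS), and precision (PRC)
# from overlap counts between predicted and ground-truth binary masks.
# Toy flattened masks, not the study's data.
def overlap_metrics(pred, truth):
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    return dice, sensitivity, precision

dsc, sns, prc = overlap_metrics([1, 1, 1, 0, 1, 0], [1, 1, 0, 1, 1, 0])
# dsc, sns, prc: (0.75, 0.75, 0.75)
```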

Deep learning aging marker from retinal images unveils sex-specific clinical and genetic signatures

Trofimova, O., Böttger, L., Bors, S., Pan, Y., Liefers, B., Beyeler, M. J., Presby, D. M., Bontempi, D., Hastings, J., Klaver, C. C. W., Bergmann, S.

medrxiv preprint | Jul 29 2025
Retinal fundus images offer a non-invasive window into systemic aging. Here, we fine-tuned a foundation model (RETFound) to predict chronological age from color fundus images in 71,343 participants from the UK Biobank, achieving a mean absolute error of 2.85 years. The resulting retinal age gap (RAG), i.e., the difference between predicted and chronological age, was associated with cardiometabolic traits, inflammation, cognitive performance, mortality, dementia, cancer, and incident cardiovascular disease. Genome-wide analyses identified genes related to longevity, metabolism, neurodegeneration, and age-related eye diseases. Sex-stratified models revealed consistent performance but divergent biological signatures: males had younger-appearing retinas and stronger links to metabolic syndrome, while in females, both model attention and genetic associations pointed to a greater involvement of retinal vasculature. Our study positions retinal aging as a biologically meaningful and sex-sensitive biomarker that can support more personalized approaches to risk assessment and aging-related healthcare.
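The retinal age gap (RAG) and the mean absolute error reported above are simple to state; a sketch with illustrative ages (not the study's data):

```python
# Retinal age gap (RAG) = predicted age - chronological age, and the mean
# absolute error (MAE) of the age predictor. Illustrative ages only.
def retinal_age_gap(predicted, chronological):
    return [p - c for p, c in zip(predicted, chronological)]

def mean_absolute_error(predicted, chronological):
    return sum(abs(p - c) for p, c in zip(predicted, chronological)) / len(predicted)

rag = retinal_age_gap([68.0, 54.5], [65.0, 57.0])      # [3.0, -2.5]
mae = mean_absolute_error([68.0, 54.5], [65.0, 57.0])  # 2.75
```

A positive RAG means the retina appears older than the subject's chronological age, which is the quantity the study associates with cardiometabolic and other outcomes.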