
Maruotto I, Ciliberti FK, Gargiulo P, Recenti M

pubmed · Jul 29 2025
The increasing dimensionality of healthcare datasets presents major challenges for clinical data analysis and interpretation. This study introduces a scalable ensemble feature selection (FS) strategy optimized for multi-biometric healthcare datasets, aiming to address the need for dimensionality reduction, identify the most significant features, improve machine learning model performance, and enhance interpretability in a clinical context. The novel waterfall selection, which sequentially integrates (a) tree-based feature ranking and (b) greedy backward feature elimination, produces several candidate feature sets. These subsets are then combined using a specific merging strategy to produce a single set of clinically relevant features. The overall method is applied to two healthcare datasets: the biosignal-based BioVRSea dataset, containing electromyography, electroencephalography, and center-of-pressure data for postural control and motion sickness assessment, and the image-based SinPain dataset, which includes MRI and CT-scan data to study knee osteoarthritis. Our ensemble FS approach demonstrated effective dimensionality reduction, achieving a decrease of more than 50% in certain feature subsets. The new reduced feature set maintained or improved the model classification metrics when tested with Support Vector Machine and Random Forest models. The proposed ensemble FS method retains selected features essential for distinguishing clinical outcomes, leading to models that are both computationally efficient and clinically interpretable. Furthermore, the adaptability of this method across two heterogeneous healthcare datasets and the scalability of the algorithm indicate its potential as a generalizable tool in healthcare studies. This approach can advance clinical decision support systems, making high-dimensional healthcare datasets more accessible and clinically interpretable.
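As a rough illustration of the waterfall idea (tree-based ranking feeding a greedy backward elimination, repeated and merged), the Python sketch below uses scikit-learn on synthetic data; the number of runs, the cut-off of 15 ranked features, and the union-based merge are assumptions, not the authors' procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

# synthetic stand-in for a multi-biometric feature table
X, y = make_classification(n_samples=300, n_features=50, n_informative=8, random_state=0)

subsets = []
for seed in range(3):  # several runs produce several candidate subsets
    # (a) tree-based feature ranking: keep the top-ranked features
    rf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X, y)
    top = np.argsort(rf.feature_importances_)[::-1][:15]
    # (b) greedy backward elimination on the reduced set
    sfs = SequentialFeatureSelector(SVC(kernel="linear"), n_features_to_select=8,
                                    direction="backward", cv=3)
    sfs.fit(X[:, top], y)
    subsets.append(set(top[sfs.get_support()]))

# merge the candidate subsets (here: a simple union) into one final feature set
selected = sorted(int(i) for i in set.union(*subsets))
print("selected feature indices:", selected)
```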

Kwon SW, Moon JK, Song SC, Cha JY, Kim YW, Choi YJ, Lee JS

pubmed · Jul 29 2025
Accurate prediction of skeletal changes during orthodontic treatment in growing patients remains challenging due to significant individual variability in craniofacial growth and treatment responses. Conventional methods, such as support vector regression and multilayer perceptrons, require multiple sequential radiographs to achieve acceptable accuracy. However, they are limited by increased radiation exposure, susceptibility to landmark identification errors, and the lack of visually interpretable predictions. To overcome these limitations, this study explored advanced generative approaches, including denoising diffusion probabilistic models (DDPMs), latent diffusion models (LDMs), and ControlNet, to predict future cephalometric radiographs using minimal input data. We evaluated three diffusion-based models, namely a DDPM utilizing three sequential cephalometric images (3-input DDPM), a single-image DDPM (1-input DDPM), and a single-image LDM, as well as a vision-based generative model, ControlNet, conditioned on patient-specific attributes such as age, sex, and orthodontic treatment type. Quantitative evaluations demonstrated that the 3-input DDPM achieved the highest numerical accuracy, whereas the single-image LDM delivered comparable predictive performance with significantly reduced clinical requirements. ControlNet also exhibited competitive accuracy, highlighting its potential effectiveness in clinical scenarios. These findings indicate that the single-image LDM and ControlNet offer practical solutions for personalized orthodontic treatment planning, reducing patient visits and radiation exposure while maintaining robust predictive accuracy.
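For readers curious how patient-specific conditioning might be wired in, the PyTorch sketch below shows one generic way to embed age, sex, and treatment type into a conditioning vector for a generative model; the dimensions, category counts, and fusion layer are assumptions and do not reproduce the study's DDPM/LDM/ControlNet conditioning.

```python
import torch
import torch.nn as nn

class ConditionEncoder(nn.Module):
    """Toy encoder turning (age, sex, treatment type) into one conditioning vector."""
    def __init__(self, dim=256, n_sexes=2, n_treatments=4):
        super().__init__()
        self.age = nn.Sequential(nn.Linear(1, dim), nn.SiLU())   # continuous age
        self.sex = nn.Embedding(n_sexes, dim)                    # categorical sex
        self.trt = nn.Embedding(n_treatments, dim)               # categorical treatment type
        self.fuse = nn.Linear(3 * dim, dim)

    def forward(self, age, sex, treatment):
        h = torch.cat([self.age(age.unsqueeze(-1)), self.sex(sex), self.trt(treatment)], dim=-1)
        return self.fuse(h)  # vector fed to the diffusion model / ControlNet as conditioning

enc = ConditionEncoder()
cond = enc(torch.tensor([12.5, 14.0]), torch.tensor([0, 1]), torch.tensor([2, 0]))
print(cond.shape)  # (2, 256)
```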

Baptista JM, Brenner LO, Koga JV, Ohannesian VA, Ito LA, Nabarro PH, Santos LP, Henrique A, de Oliveira Almeida G, Berbet LU, Paranhos T, Nespoli V, Bertani R

pubmed · Jul 29 2025
Hippocampal sclerosis (HS) is the primary pathological finding in temporal lobe epilepsy (TLE) and a common cause of refractory seizures. Conventional diagnostic methods, such as EEG and MRI, have limitations. Artificial intelligence (AI) and radiomics, utilizing machine learning and deep learning, offer a non-invasive approach to enhance diagnostic accuracy. This study synthesized recent AI and radiomics research to improve HS detection in TLE. PubMed/Medline, Embase, and Web of Science were systematically searched following PRISMA-DTA guidelines until May 2024. Statistical analysis was conducted using STATA 14. A bivariate model was used to pool sensitivity (SEN) and specificity (SPE) for HS detection, with I² assessing heterogeneity. Six studies were included. The pooled sensitivity and specificity of AI-based models for HS detection in medial temporal lobe epilepsy (MTLE) were 0.91 (95 % CI: 0.83-0.96; I² = 71.48 %) and 0.90 (95 % CI: 0.83-0.94; I² = 69.62 %), with an AUC of 0.96. AI alone showed higher sensitivity (0.92) and specificity (0.93) than AI combined with radiomics (sensitivity: 0.88; specificity: 0.90). Among algorithms, support vector machine (SVM) had the highest performance (SEN: 0.92; SPE: 0.95), followed by convolutional neural networks (CNN) and logistic regression (LR). AI models, particularly SVM, demonstrate high accuracy in detecting HS, with AI alone outperforming its combination with radiomics. These findings support the integration of AI into non-invasive diagnostic workflows, potentially enabling earlier detection and more personalized clinical decision-making in epilepsy care, ultimately contributing to improved patient outcomes and behavioral management.
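The pooling step refers to the standard bivariate random-effects formulation, in which each study's logit-transformed sensitivity and specificity are modelled jointly; a compact sketch is below (the study's exact model specification may differ).

```latex
\begin{pmatrix} \operatorname{logit}(\mathrm{SEN}_i) \\ \operatorname{logit}(\mathrm{SPE}_i) \end{pmatrix}
\sim \mathcal{N}\!\left( \begin{pmatrix} \mu_{\mathrm{SEN}} \\ \mu_{\mathrm{SPE}} \end{pmatrix}, \Sigma \right),
\qquad
\widehat{\mathrm{SEN}} = \frac{e^{\mu_{\mathrm{SEN}}}}{1 + e^{\mu_{\mathrm{SEN}}}},
\quad
\widehat{\mathrm{SPE}} = \frac{e^{\mu_{\mathrm{SPE}}}}{1 + e^{\mu_{\mathrm{SPE}}}},
\qquad
I^2 = \max\!\left(0,\; \frac{Q - \mathrm{df}}{Q}\right) \times 100\%
```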

Ridvan Yesiloglu, Wei Peng, Md Tauhidul Islam, Ehsan Adeli

arxiv preprint · Jul 29 2025
Brain aging synthesis is a critical task with broad applications in clinical and computational neuroscience. The ability to predict the future structural evolution of a subject's brain from an earlier MRI scan provides valuable insights into aging trajectories. Yet, the high dimensionality of the data, subtle structural changes across ages, and subject-specific patterns make synthesis of the aging brain challenging. To overcome these challenges, we propose NeuroAR, a novel brain aging simulation model based on generative autoregressive transformers. NeuroAR synthesizes the aging brain by autoregressively estimating the discrete token maps of a future scan from a convenient space of concatenated token embeddings of a previous and future scan. To guide the generation, it concatenates the subject's previous scan into each scale and injects the acquisition age and the target age at each block via cross-attention. We evaluate our approach on both the elderly population and adolescent subjects, demonstrating superior performance over state-of-the-art generative models, including latent diffusion models (LDM) and generative adversarial networks, in terms of image fidelity. Furthermore, we employ a pre-trained age predictor to further validate the consistency and realism of the synthesized images with respect to expected aging patterns. NeuroAR significantly outperforms key models, including LDM, demonstrating its ability to model subject-specific brain aging trajectories with high fidelity.
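As a toy PyTorch illustration of the general mechanism (autoregressive prediction of a future scan's token map, with the earlier scan and the two ages supplied through cross-attention), the sketch below uses placeholder dimensions and is not the NeuroAR architecture.

```python
import torch
import torch.nn as nn

class ToyAgingAR(nn.Module):
    """Toy autoregressive token model conditioned on (acquisition age, target age)."""
    def __init__(self, vocab=512, dim=128):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.age = nn.Linear(2, dim)                 # (acquisition age, target age)
        layer = nn.TransformerDecoderLayer(dim, nhead=4, batch_first=True)
        self.dec = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(dim, vocab)

    def forward(self, prev_tokens, future_tokens, ages):
        # memory = previous-scan tokens plus an age embedding (cross-attention target)
        memory = torch.cat([self.tok(prev_tokens), self.age(ages).unsqueeze(1)], dim=1)
        tgt = self.tok(future_tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.dec(tgt, memory, tgt_mask=mask)   # causal self-attention + cross-attention
        return self.head(out)                        # next-token logits

model = ToyAgingAR()
logits = model(torch.randint(0, 512, (2, 64)),       # tokens of the earlier scan
               torch.randint(0, 512, (2, 64)),       # tokens of the future scan (teacher forcing)
               torch.tensor([[70.0, 75.0], [68.0, 72.0]]))
print(logits.shape)  # (2, 64, 512)
```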

Terzis R, Salam B, Nowak S, Mueller PT, Mesropyan N, Oberlinkels L, Efferoth AF, Kravchenko D, Voigt M, Ginzburg D, Pieper CC, Hayawi M, Kuetting D, Afat S, Maintz D, Luetkens JA, Kaya K, Isaak A

pubmed · Jul 29 2025
Large language models (LLMs) like GPT-4o offer multilingual and real-time translation capabilities. This study aims to evaluate GPT-4o's effectiveness in translating radiology reports into different languages. In this experimental two-center study, 100 real-world radiology reports from four imaging modalities (X-ray, ultrasound, CT, MRI) were randomly selected and fully anonymized. Reports were translated using GPT-4o with zero-shot prompting from German into four languages: English, French, Spanish, and Russian (n = 400 translations). Eight bilingual radiologists (two per language) evaluated the translations for general readability, overall quality, and utility for translators using 5-point Likert scales (ranging from 5 [best score] to 1 [worst score]). Binary (yes/no) questions were used to evaluate potentially harmful errors, completeness, and factual correctness. The average processing time of GPT-4o for translating reports ranged from 9 to 24 s. The overall quality of translations achieved a median of 4.5 (IQR 4-5), with English (5 [4-5]), French and Spanish (each 4.5 [4-5]) significantly outperforming Russian (4 [3.5-4]; each p < 0.05). Usefulness for translators was rated highest for English (5 [5-5], p < 0.05 against other languages). Readability scores and translation completeness were significantly higher for translations into Spanish, English and French compared to Russian (each p < 0.05). Factual correctness averaged 79 %, with English (84 %) and French (83 %) outperforming Russian (69 %) (each p < 0.05). Potentially harmful errors were identified in 4 % of translations, primarily in Russian (9 %). GPT-4o demonstrated robust performance in translating radiology reports across multiple languages, with limitations observed in Russian translations.
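A zero-shot translation call of this kind can be issued with the OpenAI Python client as sketched below; the prompt wording and temperature are illustrative and not taken from the study.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def translate_report(report_text: str, target_language: str) -> str:
    """Zero-shot translation of an anonymized German radiology report."""
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[
            {"role": "system",
             "content": ("You are a medical translator. Translate the following German "
                         f"radiology report into {target_language}, preserving all findings, "
                         "measurements, and terminology.")},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content

# usage (placeholder variable): translate_report(anonymized_report, "English")
```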

Wu X, Xu Z, Tong RK

pubmed · Jul 29 2025
Deep learning-driven 3D medical image segmentation generally necessitates dense voxel-wise annotations, which are expensive and labor-intensive to acquire. Cross-annotation, which labels only a few orthogonal slices per scan, has recently emerged as a cost-effective alternative that better preserves the shape and precise boundaries of the 3D object than traditional weak labeling methods such as bounding boxes and scribbles. However, learning from such sparse labels, referred to as barely-supervised learning (BSL), remains challenging due to less fine-grained object perception, less compact class features and inferior generalizability. To tackle these challenges and foster collaboration between model training and human expertise, we propose a Multi-Faceted ConSistency learning (MF-ConS) framework with a Diversity and Uncertainty Sampling-based Active Learning (DUS-AL) strategy, specifically designed for the active BSL scenario. This framework combines a cross-annotation BSL strategy, where only three orthogonal slices are labeled per scan, with an AL paradigm guided by DUS to direct human-in-the-loop annotation toward the most informative volumes under a fixed budget. Built upon a teacher-student architecture, MF-ConS integrates three complementary consistency regularization modules: (i) neighbor-informed object prediction consistency for advancing fine-grained object perception by encouraging the student model to infer complete segmentation from masked inputs; (ii) prototype-driven consistency, which enhances intra-class compactness and discriminativeness by aligning latent feature and decision spaces using fused prototypes; and (iii) stability constraint that promotes model robustness against input perturbations. Extensive experiments on three benchmark datasets demonstrate that MF-ConS (DUS-AL) consistently outperforms state-of-the-art methods under extremely limited annotation.
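One generic way to combine uncertainty and diversity when ranking unlabeled volumes for annotation is sketched below in NumPy; the entropy score, the feature-distance term, and their product are assumptions, not the paper's DUS criterion.

```python
import numpy as np

def volume_uncertainty(prob_maps: np.ndarray) -> np.ndarray:
    """Mean voxel-wise entropy per volume; prob_maps has shape (N, C, D, H, W)."""
    p = np.clip(prob_maps, 1e-8, 1.0)
    voxel_entropy = (-p * np.log(p)).sum(axis=1)            # (N, D, H, W)
    return voxel_entropy.reshape(len(p), -1).mean(axis=1)   # (N,)

def select_volumes(prob_maps: np.ndarray, features: np.ndarray, budget: int) -> list:
    """Greedy selection balancing uncertainty and feature-space diversity."""
    unc = volume_uncertainty(prob_maps)
    chosen = [int(np.argmax(unc))]                          # most uncertain volume first
    while len(chosen) < budget:
        dist = np.min(np.linalg.norm(features[:, None] - features[chosen][None], axis=-1), axis=1)
        score = unc * dist                                  # uncertain AND far from chosen volumes
        score[chosen] = -np.inf
        chosen.append(int(np.argmax(score)))
    return chosen                                           # indices to send for 3-slice annotation
```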

Gibson, E., Ramirez, J., Woods, L. A., Berberian, S., Ottoy, J., Scott, C., Yhap, V., Gao, F., Coello, R. D., Valdes-Hernandez, M., Lange, A., Tartaglia, C., Kumar, S., Binns, M. A., Bartha, R., Symons, S., Swartz, R. H., Masellis, M., Singh, N., MacIntosh, B. J., Wardlaw, J. M., Black, S. E., Lim, A. S., Goubran, M.

medrxiv preprint · Jul 29 2025
Introduction: Enlarged perivascular spaces (PVS) are imaging markers of cerebral small vessel disease (CSVD) that are associated with age, disease phenotypes, and overall health. Quantification of PVS is challenging but necessary to expand an understanding of their role in cerebrovascular pathology. Accurate and automated segmentation of PVS on T1-weighted images would be valuable given the widespread use of T1-weighted imaging protocols in multisite clinical and research datasets. Methods: We introduce segcsvdPVS, a convolutional neural network (CNN)-based tool for automated PVS segmentation on T1-weighted images. segcsvdPVS was developed using a novel hierarchical approach that builds on existing tools and incorporates robust training strategies to enhance the accuracy and consistency of PVS segmentation. Performance was evaluated using a comprehensive evaluation strategy that included comparison to existing benchmark methods, ablation-based validation, accuracy validation against manual ground truth annotations, correlation with age-related PVS burden as a biological benchmark, and extensive robustness testing. Results: segcsvdPVS achieved strong object-level performance for basal ganglia PVS (DSC = 0.78), exhibiting both high sensitivity (SNS = 0.80) and precision (PRC = 0.78). Although voxel-level precision was lower (PRC = 0.57), manual correction improved this by only ~3%, indicating that the additional voxels reflected primarily boundary- or extent-related differences rather than correctable false-positive error. For non-basal ganglia PVS, segcsvdPVS outperformed benchmark methods, exhibiting higher voxel-level performance across several metrics (DSC = 0.60, SNS = 0.67, PRC = 0.57, NSD = 0.77), despite overall lower performance relative to basal ganglia PVS. Additionally, the associations between age and segmentation-derived measures of PVS burden were consistently stronger and more reliable for segcsvdPVS than for benchmark methods across three cohorts (test6, ADNI, CAHHM), providing further evidence of the accuracy and consistency of its segmentation output. Conclusions: segcsvdPVS demonstrates robust performance across diverse imaging conditions and improved sensitivity to biologically meaningful associations, supporting its utility as a T1-based PVS segmentation tool.
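The voxel-level metrics quoted above (DSC, SNS, PRC) follow the usual confusion-matrix definitions; a minimal NumPy sketch for a predicted versus ground-truth binary PVS mask is given below.

```python
import numpy as np

def voxel_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Dice (DSC), sensitivity (SNS), and precision (PRC) between binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dsc = 2 * tp / (2 * tp + fp + fn + eps)
    sns = tp / (tp + fn + eps)
    prc = tp / (tp + fp + eps)
    return dsc, sns, prc

# usage (placeholder masks): voxel_metrics(predicted_pvs_mask, manual_pvs_mask)
```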

Trofimova, O., Böttger, L., Bors, S., Pan, Y., Liefers, B., Beyeler, M. J., Presby, D. M., Bontempi, D., Hastings, J., Klaver, C. C. W., Bergmann, S.

medrxiv preprint · Jul 29 2025
Retinal fundus images offer a non-invasive window into systemic aging. Here, we fine-tuned a foundation model (RETFound) to predict chronological age from color fundus images in 71,343 participants from the UK Biobank, achieving a mean absolute error of 2.85 years. The resulting retinal age gap (RAG), i.e., the difference between predicted and chronological age, was associated with cardiometabolic traits, inflammation, cognitive performance, mortality, dementia, cancer, and incident cardiovascular disease. Genome-wide analyses identified genes related to longevity, metabolism, neurodegeneration, and age-related eye diseases. Sex-stratified models revealed consistent performance but divergent biological signatures: males had younger-appearing retinas and stronger links to metabolic syndrome, while in females, both model attention and genetic associations pointed to a greater involvement of retinal vasculature. Our study positions retinal aging as a biologically meaningful and sex-sensitive biomarker that can support more personalized approaches to risk assessment and aging-related healthcare.
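The retinal age gap itself is a simple derived quantity once per-participant age predictions exist; a minimal sketch with placeholder values is below (in the study, the predictions come from the fine-tuned RETFound regressor).

```python
import numpy as np

# placeholder per-participant values
chronological_age = np.array([63.0, 58.0, 71.0])
predicted_age = np.array([60.5, 61.0, 73.2])       # model output per fundus image

rag = predicted_age - chronological_age             # retinal age gap (RAG)
mae = np.mean(np.abs(predicted_age - chronological_age))  # mean absolute error of the age model
print(rag, mae)
```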

Bhaskara R, Oderinde OM

pubmed · Jul 28 2025
This study introduces a novel approach to improve Cone Beam CT (CBCT) image quality by developing a synthetic CT (sCT) generation method using CycleGAN with a Vision Transformer (ViT) and an Adaptive Fourier Neural Operator (AFNO). Approach: A dataset of 20 prostate cancer patients who received stereotactic body radiation therapy (SBRT) was used, consisting of paired CBCT and planning CT (pCT) images. The dataset was preprocessed by registering pCTs to CBCTs using deformable registration techniques, such as B-spline, followed by resampling to uniform voxel sizes and normalization. The model architecture integrates a CycleGAN with bidirectional generators, where the UNet generator is enhanced with a ViT at the bottleneck. AFNO functions as the attention mechanism for the ViT, operating on the input data in the Fourier domain. AFNO's innovations handle varying resolutions, mesh invariance, and efficient long-range dependency capture. Main Results: Our model improved significantly in preserving anatomical details and capturing complex image dependencies. The AFNO mechanism processed global image information effectively, adapting to interpatient variations for accurate sCT generation. Evaluation metrics, including Mean Absolute Error (MAE), Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Normalized Cross Correlation (NCC), demonstrated the superiority of our method. Specifically, the model achieved an MAE of 9.71, PSNR of 37.08 dB, SSIM of 0.97, and NCC of 0.99, confirming its efficacy. Significance: The integration of AFNO within the CycleGAN UNet framework addresses Cone Beam CT image quality limitations. The model generates synthetic CTs that allow adaptive treatment planning during SBRT, enabling adjustments to the dose based on tumor response, thus reducing radiotoxicity from increased doses. This method's ability to preserve both global and local anatomical features shows potential for improving tumor targeting, adaptive radiotherapy planning, and clinical decision-making.
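The four reported image-quality metrics can be computed between a co-registered synthetic CT and planning CT as sketched below; the HU data range passed to PSNR and SSIM is an assumed value, not taken from the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sct_metrics(sct: np.ndarray, pct: np.ndarray, data_range: float = 2000.0):
    """Image-quality metrics between a synthetic CT and the planning CT (HU arrays)."""
    mae = float(np.mean(np.abs(sct - pct)))                            # Mean Absolute Error
    psnr = peak_signal_noise_ratio(pct, sct, data_range=data_range)    # Peak Signal to Noise Ratio
    ssim = structural_similarity(pct, sct, data_range=data_range)      # Structural Similarity Index
    ncc = float(np.corrcoef(sct.ravel(), pct.ravel())[0, 1])           # Normalized Cross Correlation
    return mae, psnr, ssim, ncc

# usage (placeholder volumes): sct_metrics(sct_volume, pct_volume)
```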

Li L, Ma Q, Oyang C, Paetzold JC, Rueckert D, Kainz B

pubmed · Jul 28 2025
Deep learning-based medical image segmentation techniques have shown promising results when evaluated based on conventional metrics such as the Dice score or Intersection-over-Union. However, these fully automatic methods often fail to meet clinically acceptable accuracy, especially when topological constraints should be observed, e.g., continuous boundaries or closed surfaces. In medical image segmentation, correctness in terms of the required topological genus is sometimes even more important than pixel-wise accuracy. Existing topology-aware approaches commonly estimate and constrain the topological structure via the concept of persistent homology (PH). However, these methods are difficult to implement for high-dimensional data due to their polynomial computational complexity. To overcome this problem, we propose a novel and fast approach for topology-aware segmentation based on the Euler Characteristic (χ). First, we propose a fast formulation for χ computation in both 2D and 3D. The scalar χ error between the prediction and ground truth serves as the topological evaluation metric. Then we estimate the spatial topology correctness of any segmentation network via a so-called topological violation map, i.e., a detailed map that highlights regions with χ errors. Finally, the segmentation results from an arbitrary network are refined based on the topological violation maps by a topology-aware correction network. Our experiments are conducted on both 2D and 3D datasets and show that our method can significantly improve topological correctness while preserving pixel-wise segmentation accuracy.
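The χ error itself is straightforward to compute for binary masks, for example with scikit-image as sketched below (a ring versus a filled square as a toy 2D case); the paper's fast χ formulation and correction network are not reproduced here.

```python
import numpy as np
from skimage.measure import euler_number

def chi_error(pred: np.ndarray, gt: np.ndarray) -> int:
    """Absolute Euler-characteristic difference between two binary masks.
    In 2D, chi = #components - #holes; in 3D, #components - #tunnels + #cavities."""
    return abs(euler_number(pred.astype(bool)) - euler_number(gt.astype(bool)))

gt = np.zeros((64, 64), dtype=bool)
gt[8:56, 8:56] = True
gt[24:40, 24:40] = False            # ground truth: a ring, chi = 0
pred = gt.copy()
pred[24:40, 24:40] = True           # prediction: a filled square, chi = 1
print(chi_error(pred, gt))          # 1
```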