
Three-dimensional pulp chamber volume quantification in first molars using CBCT: Implications for machine learning-assisted age estimation

Ding, Y., Zhong, T., He, Y., Wang, W., Zhang, S., Zhang, X., Shi, W., Jin, B.

medRxiv preprint · Aug 8, 2025
Accurate adult age estimation represents a critical component of forensic individual identification. However, traditional methods relying on skeletal developmental characteristics are susceptible to preservation status and developmental variation. Teeth, owing to their exceptional taphonomic resistance and minimal postmortem alteration, emerge as premier biological samples. Utilizing the high-resolution capabilities of Cone Beam Computed Tomography (CBCT), this study retrospectively analyzed 1,857 right first molars obtained from Han Chinese adults in Sichuan Province (883 males, 974 females; aged 18-65 years). Pulp chamber volume (PCV) was measured using semi-automatic segmentation in Mimics software (v21.0). Statistically significant differences in PCV were observed based on sex and tooth position (maxillary vs. mandibular). Significant negative correlations existed between PCV and age (r = -0.86 to -0.81). The strongest correlation (r = -0.88) was identified in female maxillary first molars. Eleven curvilinear regression models and six machine learning models (Linear Regression, Lasso Regression, Neural Network, Random Forest, Gradient Boosting, and XGBoost) were developed. Among the curvilinear regression models, the cubic model demonstrated the best performance, with the female maxillary-specific model achieving a mean absolute error (MAE) of 4.95 years. Machine learning models demonstrated superior accuracy. Specifically, the sex- and tooth position-specific XGBoost model for female maxillary first molars achieved an MAE of 3.14 years (R² = 0.87). This represents a significant 36.5% reduction in error compared to the optimal cubic regression model. These findings demonstrate that PCV measurements in first molars, combined with machine learning algorithms (specifically XGBoost), effectively overcome the limitations of traditional methods, providing a highly precise and reproducible approach for forensic age estimation.
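
As a rough illustration of the modeling step described above, the sketch below fits a gradient-boosted regressor to pulp chamber volume, sex, and tooth position and reports the mean absolute error in years. The column names and synthetic data are hypothetical stand-ins, not the study's dataset.

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error
    from xgboost import XGBRegressor

    # Hypothetical stand-in data: PCV (mm^3), sex (0/1), jaw (0 = mandibular, 1 = maxillary), age (years)
    rng = np.random.default_rng(0)
    n = 500
    pcv = rng.uniform(10, 40, n)
    df = pd.DataFrame({
        "pcv_mm3": pcv,
        "sex": rng.integers(0, 2, n),
        "maxillary": rng.integers(0, 2, n),
        "age": 70 - 1.2 * pcv + rng.normal(0, 4, n),  # crude negative PCV-age trend
    })

    X_train, X_test, y_train, y_test = train_test_split(
        df[["pcv_mm3", "sex", "maxillary"]], df["age"], test_size=0.2, random_state=0
    )
    model = XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
    model.fit(X_train, y_train)
    print("MAE (years):", mean_absolute_error(y_test, model.predict(X_test)))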

Multivariate Fields of Experts

Stanislas Ducotterd, Michael Unser

arXiv preprint · Aug 8, 2025
We introduce the multivariate fields of experts, a new framework for the learning of image priors. Our model generalizes existing fields of experts methods by incorporating multivariate potential functions constructed via Moreau envelopes of the $\ell_\infty$-norm. We demonstrate the effectiveness of our proposal across a range of inverse problems that include image denoising, deblurring, compressed-sensing magnetic-resonance imaging, and computed tomography. The proposed approach outperforms comparable univariate models and achieves performance close to that of deep-learning-based regularizers while being significantly faster, requiring fewer parameters, and being trained on substantially fewer data. In addition, our model retains a relatively high level of interpretability due to its structured design.
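
For readers unfamiliar with the building block, the Moreau envelope of the ℓ∞-norm can be evaluated numerically through its proximal operator, which by Moreau decomposition reduces to a projection onto the ℓ1 unit ball. The plain-NumPy sketch below illustrates that computation for a single vector; it is a conceptual aid, not the authors' implementation.

    import numpy as np

    def project_l1_ball(v, radius=1.0):
        """Euclidean projection of v onto the l1 ball of the given radius (Duchi et al., 2008)."""
        if np.abs(v).sum() <= radius:
            return v.copy()
        u = np.sort(np.abs(v))[::-1]
        cssv = np.cumsum(u) - radius
        rho = np.nonzero(u * np.arange(1, len(u) + 1) > cssv)[0][-1]
        theta = cssv[rho] / (rho + 1.0)
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def prox_linf(x, mu):
        """prox of mu * ||.||_inf via Moreau decomposition: x - mu * Proj_{||.||_1 <= 1}(x / mu)."""
        return x - mu * project_l1_ball(x / mu)

    def moreau_env_linf(x, mu):
        """Moreau envelope of the l_inf norm: min_y ||y||_inf + ||x - y||^2 / (2 * mu)."""
        p = prox_linf(x, mu)
        return np.max(np.abs(p)) + np.sum((x - p) ** 2) / (2.0 * mu)

    x = np.array([0.5, -2.0, 1.5])
    print(moreau_env_linf(x, mu=1.0))  # smooth surrogate of ||x||_inf = 2.0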

Explainable Cryobiopsy AI Model, CRAI, to Predict Disease Progression for Transbronchial Lung Cryobiopsies with Interstitial Pneumonia

Uegami, W., Okoshi, E. N., Lami, K., Nei, Y., Ozasa, M., Kataoka, K., Kitamura, Y., Kohashi, Y., Cooper, L. A. D., Sakanashi, H., Saito, Y., Kondoh, Y., the study group on CRYOSOLUTION, Fukuoka, J.

medRxiv preprint · Aug 8, 2025
Background: Interstitial lung disease (ILD) encompasses diverse pulmonary disorders with varied prognoses. Current pathological diagnoses suffer from inter-observer variability, necessitating more standardized approaches. We developed CRAI, an explainable ensemble artificial intelligence model that analyzes transbronchial lung cryobiopsy (TBLC) specimens and predicts patient outcomes. Methods: CRAI comprises seven modules for detecting histological features, generating 19 pathologically significant findings. A downstream XGBoost classifier was developed to predict disease progression from these findings. The model's performance was evaluated using respiratory function changes and survival analysis in cross-validation and external test cohorts. Findings: In the internal cross-validation (135 cases), the model predicted 105 cases without disease progression and 30 with disease progression. The annual Δ%FVC was -1.293 in the non-progressive group versus -5.198 in the progressive group, outperforming most pathologists' diagnoses. In the external test cohort (48 cases), the model predicted 38 non-progressive and 10 progressive cases. Survival analysis demonstrated significantly shorter survival times in the progressive group (p=0.034). Interpretation: CRAI provides a comprehensive, interpretable approach to analyzing TBLC specimens, offering potential for standardizing ILD diagnosis and predicting disease progression. The model could facilitate early identification of progressive cases and guide personalized therapeutic interventions. Funding: New Energy and Industrial Technology Development Organization (NEDO) and the Japanese Ministry of Health, Labor, and Welfare.
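
The survival comparison reported above is conventionally performed with a log-rank test between the predicted groups. The snippet below is a generic sketch of that step using the lifelines package on made-up follow-up data, not the study's cohort.

    import numpy as np
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(0)
    # Hypothetical follow-up times (months) and event indicators (1 = event observed)
    t_nonprog = rng.exponential(60, size=38)
    t_prog = rng.exponential(30, size=10)
    e_nonprog = rng.integers(0, 2, size=38)
    e_prog = rng.integers(0, 2, size=10)

    result = logrank_test(t_nonprog, t_prog,
                          event_observed_A=e_nonprog, event_observed_B=e_prog)
    print("log-rank p-value:", result.p_value)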

Application of Artificial Intelligence in Bone Quality and Quantity Assessment for Dental Implant Planning: A Scoping Review.

Qiu S, Yu X, Wu Y

PubMed paper · Aug 8, 2025
To assess how artificial intelligence (AI) models perform in evaluating bone quality and quantity during preoperative planning for dental implants. This review included studies that applied AI-based assessments of bone quality and/or quantity to radiographic images in the preoperative phase. Studies published in English before April 2025 were identified through searches of PubMed/MEDLINE, Embase, Web of Science, Scopus, and the Cochrane Library, supplemented by manual searches. Eleven studies met the inclusion criteria: five focused on bone quality evaluation and six included volumetric assessments using AI models. Performance measures included accuracy, sensitivity, specificity, precision, F1 score, and Dice coefficient, compared against human expert evaluations. AI models demonstrated high accuracy (76.2%-99.84%), sensitivity (78.9%-100%), and specificity (66.2%-99%). AI models show potential for the evaluation of bone quality and quantity, although standardization and external validation studies are lacking. Future studies should employ multicenter datasets, integrate models into clinical workflows, and refine models to better reflect real-life conditions. AI has the potential to provide clinicians with reliable automated evaluations of bone quality and quantity, with the promise of a fully automated implant planning system, and may make evidence-based preoperative decision-making more efficient.
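
Among the metrics listed, the Dice coefficient is the standard overlap measure for volumetric (quantity) assessment. A minimal NumPy version, with synthetic masks standing in for AI and expert segmentations, is shown below.

    import numpy as np

    def dice_coefficient(pred, ref):
        """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        denom = pred.sum() + ref.sum()
        return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom

    # Synthetic 3D masks standing in for an AI and an expert segmentation of an implant site
    rng = np.random.default_rng(0)
    ai_mask = rng.random((64, 64, 64)) > 0.5
    expert_mask = ai_mask.copy()
    expert_mask[:, :, :8] = ~expert_mask[:, :, :8]  # introduce some disagreement
    print("Dice:", round(dice_coefficient(ai_mask, expert_mask), 3))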

Vision-Language Model-Based Semantic-Guided Imaging Biomarker for Lung Nodule Malignancy Prediction.

Zhuang L, Tabatabaei SMH, Salehi-Rad R, Tran LM, Aberle DR, Prosper AE, Hsu W

PubMed paper · Aug 8, 2025
Machine learning models have utilized semantic features, deep features, or both to assess lung nodule malignancy. However, their reliance on manual annotation during inference, limited interpretability, and sensitivity to imaging variations hinder their application in real-world clinical settings. Thus, this research aims to integrate semantic features derived from radiologists' assessments of nodules, guiding the model to learn clinically relevant, robust, and explainable imaging features for predicting lung cancer. We obtained 938 low-dose CT scans from the National Lung Screening Trial (NLST) with 1,246 nodules and semantic features. Additionally, the Lung Image Database Consortium dataset contains 1,018 CT scans, with 2,625 lesions annotated for nodule characteristics. Three external datasets were obtained from UCLA Health, the LUNGx Challenge, and the Duke Lung Cancer Screening. We fine-tuned a pretrained Contrastive Language-Image Pretraining (CLIP) model with a parameter-efficient fine-tuning approach to align imaging and semantic text features and predict the one-year lung cancer diagnosis. Our model outperformed state-of-the-art (SOTA) models in the NLST test set with an AUROC of 0.901 and AUPRC of 0.776. It also showed robust results in external datasets. Using CLIP, we also obtained predictions on semantic features through zero-shot inference, such as nodule margin (AUROC: 0.812), nodule consistency (0.812), and pleural attachment (0.840). Our approach surpasses the SOTA models in predicting lung cancer across datasets collected from diverse clinical settings, providing explainable outputs that aid clinicians in comprehending the underlying meaning of model predictions. It also prevents the model from learning shortcuts and generalizes across clinical settings. The code is available at https://github.com/luotingzhuang/CLIP_nodule.
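
The zero-shot semantic-feature predictions described above follow the standard CLIP prompt-scoring recipe. The sketch below shows that mechanism with the public OpenAI CLIP checkpoint and a dummy image; it is not the authors' fine-tuned model, and the prompts are illustrative.

    import numpy as np
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Dummy image standing in for a nodule-centered CT patch rendered as RGB
    image = Image.fromarray(np.zeros((224, 224, 3), dtype=np.uint8))
    prompts = ["a lung nodule with a smooth margin", "a lung nodule with a spiculated margin"]

    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    probs = outputs.logits_per_image.softmax(dim=-1)  # score each margin description
    print(dict(zip(prompts, probs[0].tolist())))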

SPARSE Data, Rich Results: Few-Shot Semi-Supervised Learning via Class-Conditioned Image Translation

Guido Manni, Clemente Lauretti, Loredana Zollo, Paolo Soda

arXiv preprint · Aug 8, 2025
Deep learning has revolutionized medical imaging, but its effectiveness is severely limited by insufficient labeled training data. This paper introduces a novel GAN-based semi-supervised learning framework specifically designed for low labeled-data regimes, evaluated across settings with 5 to 50 labeled samples per class. Our approach integrates three specialized neural networks -- a generator for class-conditioned image translation, a discriminator for authenticity assessment and classification, and a dedicated classifier -- within a three-phase training framework. The method alternates between supervised training on limited labeled data and unsupervised learning that leverages abundant unlabeled images through image-to-image translation rather than generation from noise. We employ ensemble-based pseudo-labeling that combines confidence-weighted predictions from the discriminator and classifier with temporal consistency through exponential moving averaging, enabling reliable label estimation for unlabeled data. Comprehensive evaluation across eleven MedMNIST datasets demonstrates that our approach achieves statistically significant improvements over six state-of-the-art GAN-based semi-supervised methods, with particularly strong performance in the extreme 5-shot setting where the scarcity of labeled data is most challenging. The framework maintains its superiority across all evaluated settings (5, 10, 20, and 50 shots per class). Our approach offers a practical solution for medical imaging applications where annotation costs are prohibitive, enabling robust classification performance even with minimal labeled data. Code is available at https://github.com/GuidoManni/SPARSE.
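
The pseudo-labeling step described above (confidence-weighted fusion of discriminator and classifier predictions, stabilized with an exponential-moving-average teacher) can be sketched roughly as follows. The weighting scheme and threshold here are illustrative choices; the released repository contains the exact procedure.

    import torch

    @torch.no_grad()
    def ensemble_pseudo_labels(p_cls, p_disc, threshold=0.9):
        """Fuse class probabilities from the classifier and discriminator heads,
        weighting each by its own confidence, and keep only confident pseudo-labels."""
        w_cls = p_cls.max(dim=1, keepdim=True).values
        w_disc = p_disc.max(dim=1, keepdim=True).values
        p_ens = (w_cls * p_cls + w_disc * p_disc) / (w_cls + w_disc)
        conf, pseudo = p_ens.max(dim=1)
        return pseudo, conf >= threshold

    @torch.no_grad()
    def ema_update(teacher, student, alpha=0.99):
        """Exponential moving average of student weights for temporal consistency."""
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)

    # Toy usage with random probabilities for a batch of 4 unlabeled images and 3 classes
    p_cls = torch.softmax(torch.randn(4, 3), dim=1)
    p_disc = torch.softmax(torch.randn(4, 3), dim=1)
    labels, keep = ensemble_pseudo_labels(p_cls, p_disc)
    print(labels, keep)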

Generative Artificial Intelligence in Medical Imaging: Foundations, Progress, and Clinical Translation

Xuanru Zhou, Cheng Li, Shuqiang Wang, Ye Li, Tao Tan, Hairong Zheng, Shanshan Wang

arXiv preprint · Aug 7, 2025
Generative artificial intelligence (AI) is rapidly transforming medical imaging by enabling capabilities such as data synthesis, image enhancement, modality translation, and spatiotemporal modeling. This review presents a comprehensive and forward-looking synthesis of recent advances in generative modeling, including generative adversarial networks (GANs), variational autoencoders (VAEs), diffusion models, and emerging multimodal foundation architectures, and evaluates their expanding roles across the clinical imaging continuum. We systematically examine how generative AI contributes to key stages of the imaging workflow, from acquisition and reconstruction to cross-modality synthesis, diagnostic support, and treatment planning. Emphasis is placed on both retrospective and prospective clinical scenarios, where generative models help address longstanding challenges such as data scarcity, standardization, and integration across modalities. To promote rigorous benchmarking and translational readiness, we propose a three-tiered evaluation framework encompassing pixel-level fidelity, feature-level realism, and task-level clinical relevance. We also identify critical obstacles to real-world deployment, including generalization under domain shift, hallucination risk, data privacy concerns, and regulatory hurdles. Finally, we explore the convergence of generative AI with large-scale foundation models, highlighting how this synergy may enable the next generation of scalable, reliable, and clinically integrated imaging systems. By charting technical progress and translational pathways, this review aims to guide future research and foster interdisciplinary collaboration at the intersection of AI, medicine, and biomedical engineering.
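
At the pixel-fidelity tier of the proposed evaluation framework, standard reference metrics such as PSNR and SSIM apply. The snippet below computes both with scikit-image on a synthetic image pair, purely to illustrate that tier.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    rng = np.random.default_rng(0)
    reference = rng.random((128, 128)).astype(np.float32)  # ground-truth image
    generated = np.clip(reference + 0.05 * rng.normal(size=reference.shape), 0, 1).astype(np.float32)

    print("PSNR:", peak_signal_noise_ratio(reference, generated, data_range=1.0))
    print("SSIM:", structural_similarity(reference, generated, data_range=1.0))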

FedGIN: Federated Learning with Dynamic Global Intensity Non-linear Augmentation for Organ Segmentation using Multi-modal Images

Sachin Dudda Nagaraju, Ashkan Moradi, Bendik Skarre Abrahamsen, Mattijs Elschot

arXiv preprint · Aug 7, 2025
Medical image segmentation plays a crucial role in AI-assisted diagnostics, surgical planning, and treatment monitoring. Accurate and robust segmentation models are essential for enabling reliable, data-driven clinical decision making across diverse imaging modalities. Given the inherent variability in image characteristics across modalities, developing a unified model capable of generalizing effectively to multiple modalities would be highly beneficial. This model could streamline clinical workflows and reduce the need for modality-specific training. However, real-world deployment faces major challenges, including data scarcity, domain shift between modalities (e.g., CT vs. MRI), and privacy restrictions that prevent data sharing. To address these issues, we propose FedGIN, a Federated Learning (FL) framework that enables multimodal organ segmentation without sharing raw patient data. Our method integrates a lightweight Global Intensity Non-linear (GIN) augmentation module that harmonizes modality-specific intensity distributions during local training. We evaluated FedGIN using two types of datasets: an imputed dataset and a complete dataset. In the limited-data scenario, the model was initially trained using only MRI data, and CT data was then added to assess the resulting performance improvement. In the complete-dataset scenario, both MRI and CT data were fully utilized for training on all clients. In the limited-data scenario, FedGIN achieved a 12 to 18% improvement in 3D Dice scores on MRI test cases compared to FL without GIN and consistently outperformed local baselines. In the complete-dataset scenario, FedGIN demonstrated near-centralized performance, with a 30% Dice score improvement over the MRI-only baseline and a 10% improvement over the CT-only baseline, highlighting its strong cross-modality generalization under privacy constraints.
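
As a rough sketch of the two ingredients (federated averaging of client weights plus a random non-linear intensity remapping during local training), consider the following. The gamma-curve augmentation is a simple stand-in for the paper's learned GIN module, and the aggregation is plain equal-weight FedAvg.

    import copy
    import numpy as np
    import torch

    def random_intensity_aug(img):
        """Stand-in for GIN: apply a random monotonic non-linear intensity curve to a [0,1] image."""
        gamma = float(np.random.uniform(0.5, 2.0))
        return img.clamp(0, 1) ** gamma

    def fedavg(client_state_dicts):
        """Average model weights across clients (equal weighting for simplicity)."""
        avg = copy.deepcopy(client_state_dicts[0])
        for key in avg:
            avg[key] = torch.stack([sd[key].float() for sd in client_state_dicts]).mean(dim=0)
        return avg

    # Toy round: two clients with copies of the same tiny model, then server-side averaging
    model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
    clients = [copy.deepcopy(model) for _ in range(2)]
    for client in clients:
        img = random_intensity_aug(torch.rand(1, 1, 32, 32))  # augmented local batch
        loss = client(img).mean()
        loss.backward()  # a local optimizer step would follow here in real training
    model.load_state_dict(fedavg([c.state_dict() for c in clients]))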

HiFi-Mamba: Dual-Stream W-Laplacian Enhanced Mamba for High-Fidelity MRI Reconstruction

Hongli Chen, Pengcheng Fang, Yuxia Chen, Yingxuan Ren, Jing Hao, Fangfang Tang, Xiaohao Cai, Shanshan Shan, Feng Liu

arXiv preprint · Aug 7, 2025
Reconstructing high-fidelity MR images from undersampled k-space data remains a challenging problem in MRI. While Mamba variants for vision tasks offer promising long-range modeling capabilities with linear-time complexity, their direct application to MRI reconstruction inherits two key limitations: (1) insensitivity to high-frequency anatomical details; and (2) reliance on redundant multi-directional scanning. To address these limitations, we introduce High-Fidelity Mamba (HiFi-Mamba), a novel dual-stream Mamba-based architecture comprising stacked W-Laplacian (WL) and HiFi-Mamba blocks. Specifically, the WL block performs fidelity-preserving spectral decoupling, producing complementary low- and high-frequency streams. This separation enables the HiFi-Mamba block to focus on low-frequency structures, enhancing global feature modeling. Concurrently, the HiFi-Mamba block selectively integrates high-frequency features through adaptive state-space modulation, preserving comprehensive spectral details. To eliminate the scanning redundancy, the HiFi-Mamba block adopts a streamlined unidirectional traversal strategy that preserves long-range modeling capability with improved computational efficiency. Extensive experiments on standard MRI reconstruction benchmarks demonstrate that HiFi-Mamba consistently outperforms state-of-the-art CNN-based, Transformer-based, and other Mamba-based models in reconstruction accuracy while maintaining a compact and efficient model design.
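
The fidelity-preserving spectral decoupling can be thought of as a Laplacian-style band split in which the low- and high-frequency streams sum back to the input exactly. The toy NumPy version below illustrates that property; the actual WL block in the paper is a learned module, not a fixed Gaussian.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def spectral_decouple(img, sigma=2.0):
        """Split an image into complementary low- and high-frequency streams."""
        low = gaussian_filter(img, sigma=sigma)   # smooth, global structure
        high = img - low                          # residual edges and fine anatomical detail
        return low, high

    img = np.random.default_rng(0).random((64, 64))
    low, high = spectral_decouple(img)
    assert np.allclose(low + high, img)  # exact reconstruction: no information is lost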

RegionMed-CLIP: A Region-Aware Multimodal Contrastive Learning Pre-trained Model for Medical Image Understanding

Tianchen Fang, Guiru Liu

arXiv preprint · Aug 7, 2025
Medical image understanding plays a crucial role in enabling automated diagnosis and data-driven clinical decision support. However, its progress is impeded by two primary challenges: the limited availability of high-quality annotated medical data and an overreliance on global image features, which often miss subtle but clinically significant pathological regions. To address these issues, we introduce RegionMed-CLIP, a region-aware multimodal contrastive learning framework that explicitly incorporates localized pathological signals along with holistic semantic representations. The core of our method is an innovative region-of-interest (ROI) processor that adaptively integrates fine-grained regional features with the global context, supported by a progressive training strategy that enhances hierarchical multimodal alignment. To enable large-scale region-level representation learning, we construct MedRegion-500k, a comprehensive medical image-text corpus that features extensive regional annotations and multilevel clinical descriptions. Extensive experiments on image-text retrieval, zero-shot classification, and visual question answering tasks demonstrate that RegionMed-CLIP consistently exceeds state-of-the-art vision language models by a wide margin. Our results highlight the critical importance of region-aware contrastive pre-training and position RegionMed-CLIP as a robust foundation for advancing multimodal medical image understanding.
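
A minimal version of combining ROI-level features with global context, using torchvision's roi_align over a backbone feature map, might look like the following. The concatenation is a placeholder for the paper's adaptive ROI processor, and the shapes and box coordinates are arbitrary.

    import torch
    from torchvision.ops import roi_align

    feat = torch.randn(1, 256, 14, 14)                     # backbone feature map (batch of 1)
    boxes = [torch.tensor([[3.0, 3.0, 10.0, 10.0]])]       # one ROI per image, in feature-map coords
    roi_feat = roi_align(feat, boxes, output_size=(4, 4))  # (1, 256, 4, 4) regional features

    global_vec = feat.mean(dim=(2, 3))       # (1, 256) holistic context
    region_vec = roi_feat.mean(dim=(2, 3))   # (1, 256) localized pathological signal
    fused = torch.cat([global_vec, region_vec], dim=1)  # (1, 512) combined representation
    print(fused.shape)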