Automated segmentation of soft X-ray tomography: native cellular structure with sub-micron resolution at high throughput for whole-cell quantitative imaging in yeast.

Chen J, Mirvis M, Ekman A, Vanslembrouck B, Gros ML, Larabell C, Marshall WF

PubMed · Aug 28, 2025
Soft X-ray tomography (SXT) is an invaluable tool for quantitatively analyzing cellular structures at sub-optical isotropic resolution. However, it has traditionally depended on manual segmentation, limiting its scalability for large datasets. Here, we leverage a deep learning-based auto-segmentation pipeline to segment and label cellular structures in hundreds of cells across three Saccharomyces cerevisiae strains. This task-based pipeline employs manual iterative refinement to improve segmentation accuracy for key structures, including the cell body, nucleus, vacuole, and lipid droplets, enabling high-throughput and precise phenotypic analysis. Using this approach, we quantitatively compared the 3D whole-cell morphometric characteristics of wild-type, VPH1-GFP, and vac14 strains, uncovering detailed strain-specific cell and organelle size and shape variations. We show the utility of SXT data for precise 3D curvature analysis of entire organelles and cells and detection of fine morphological features using surface meshes. Our approach facilitates comparative analyses with high spatial precision and statistical throughput, uncovering subtle morphological features at the single-cell and population level. This workflow significantly enhances our ability to characterize cell anatomy and supports scalable studies on the mesoscale, with applications in investigating cellular architecture, organelle biology, and genetic research across diverse biological contexts.
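The mesh-based curvature analysis described above can be prototyped with off-the-shelf tools. Below is a minimal sketch, assuming scikit-image and trimesh and a synthetic binary mask in place of a real SXT segmentation; it illustrates the general approach, not the authors' pipeline.

```python
# Minimal sketch: surface mesh from a binary organelle segmentation via
# marching cubes, then local mean curvature on the mesh. The synthetic
# spherical "organelle" and the radius parameter are illustrative assumptions.
import numpy as np
import trimesh
from skimage.measure import marching_cubes
from trimesh.curvature import discrete_mean_curvature_measure

# Synthetic binary mask standing in for a segmented organelle (a sphere).
zz, yy, xx = np.mgrid[:64, :64, :64]
mask = ((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2) < 20 ** 2

# Extract a surface mesh, then estimate per-vertex mean curvature.
verts, faces, _, _ = marching_cubes(mask.astype(float), level=0.5)
mesh = trimesh.Trimesh(vertices=verts, faces=faces)
curvature = discrete_mean_curvature_measure(mesh, mesh.vertices, radius=2.0)

print(f"surface area: {mesh.area:.1f} voxel^2, "
      f"median local mean curvature: {np.median(curvature):.4f}")
```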

Ontology-Based Concept Distillation for Radiology Report Retrieval and Labeling

Felix Nützel, Mischa Dombrowski, Bernhard Kainz

arXiv preprint · Aug 27, 2025
Retrieval-augmented learning based on radiology reports has emerged as a promising direction to improve performance on long-tail medical imaging tasks, such as rare disease detection in chest X-rays. Most existing methods rely on comparing high-dimensional text embeddings from models like CLIP or CXR-BERT, which are often difficult to interpret, computationally expensive, and not well-aligned with the structured nature of medical knowledge. We propose a novel, ontology-driven alternative for comparing radiology report texts based on clinically grounded concepts from the Unified Medical Language System (UMLS). Our method extracts standardised medical entities from free-text reports using an enhanced pipeline built on RadGraph-XL and SapBERT. These entities are linked to UMLS concepts (CUIs), enabling a transparent, interpretable set-based representation of each report. We then define a task-adaptive similarity measure based on a modified and weighted version of the Tversky Index that accounts for synonymy, negation, and hierarchical relationships between medical entities. This allows efficient and semantically meaningful similarity comparisons between reports. We demonstrate that our approach outperforms state-of-the-art embedding-based retrieval methods in a radiograph classification task on MIMIC-CXR, particularly in long-tail settings. Additionally, we use our pipeline to generate ontology-backed disease labels for MIMIC-CXR, offering a valuable new resource for downstream learning tasks. Our work provides more explainable, reliable, and task-specific retrieval strategies in clinical AI systems, especially when interpretability and domain knowledge integration are essential. Our code is available at https://github.com/Felix-012/ontology-concept-distillation
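For intuition, here is a minimal sketch of a weighted Tversky similarity over CUI sets, assuming illustrative per-concept weights and symmetric alpha/beta; the paper's task-adaptive variant additionally handles synonymy, negation, and hierarchical relationships, which this sketch omits.

```python
# Weighted Tversky index over two sets of UMLS concept identifiers (CUIs).
# Weights, alpha, and beta below are illustrative assumptions.
def weighted_tversky(a: set[str], b: set[str], weights: dict[str, float],
                     alpha: float = 0.5, beta: float = 0.5) -> float:
    w = lambda s: sum(weights.get(c, 1.0) for c in s)  # weighted set size
    common = w(a & b)
    denom = common + alpha * w(a - b) + beta * w(b - a)
    return common / denom if denom else 0.0

# Toy CUI-style identifiers for two reports (meanings illustrative).
report_a = {"C0032285", "C0242184"}
report_b = {"C0032285", "C0034063"}
weights = {"C0032285": 2.0}          # up-weight a rare, decisive finding
print(weighted_tversky(report_a, report_b, weights))
```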

Artificial intelligence system for predicting areal bone mineral density from plain X-rays.

Nguyen HG, Nguyen DT, Tran TS, Ling SH, Ho-Pham LT, Van Nguyen T

PubMed · Aug 27, 2025
Dual-energy X-ray absorptiometry (DXA) is the standard method for assessing areal bone mineral density (aBMD), diagnosing osteoporosis, and predicting fracture risk. However, DXA's availability is limited in resource-poor areas. This study aimed to develop an artificial intelligence (AI) system capable of estimating aBMD from standard radiographs. The study was part of the Vietnam Osteoporosis Study, a prospective population-based research project involving 3783 participants aged 18 years and older. A total of 7060 digital radiographs of the frontal pelvis and lateral spine were taken using the FCR Capsula XLII system (Fujifilm Corp., Tokyo, Japan). aBMD at the femoral neck and lumbar spine was measured with DXA (Hologic Horizon, Hologic Corp., Bedford, MA, USA). An ensemble of seven deep-learning models was used to analyze the X-rays and predict bone mineral density, termed "xBMD". The correlation between xBMD and aBMD was evaluated using Pearson's correlation coefficients. The correlation between xBMD and aBMD at the femoral neck was strong (r = 0.90; 95% CI, 0.88-0.91), and similarly high at the lumbar spine (r = 0.87; 95% CI, 0.85-0.88). This correlation remained consistent across different age groups and genders. The AI system demonstrated excellent performance in identifying individuals at high risk for hip fractures, with area under the ROC curve (AUC) values of 0.96 (95% CI, 0.95-0.98) at the femoral neck and 0.97 (95% CI, 0.96-0.99) at the lumbar spine. These findings indicate that AI can accurately predict aBMD and identify individuals at high risk of fractures. This AI system could provide an efficient alternative to DXA for osteoporosis screening in settings with limited resources and high patient demand. In summary, an AI system developed to predict aBMD from X-rays showed strong correlations with DXA (r = 0.90 at the femoral neck; r = 0.87 at the lumbar spine) and high accuracy in identifying individuals at high risk for fractures (AUC = 0.96 at the femoral neck; AUC = 0.97 at the lumbar spine).
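As a note on the headline statistic, here is a minimal sketch of computing Pearson's r with a 95% CI via the Fisher z-transform, assuming NumPy/SciPy and synthetic stand-in data rather than the study's measurements.

```python
# Pearson correlation between DXA-measured aBMD and predicted xBMD, with a
# 95% CI from the Fisher z-transform. Arrays below are random stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
abmd = rng.normal(0.8, 0.12, size=500)            # ground truth (g/cm^2)
xbmd = abmd + rng.normal(0.0, 0.05, size=500)     # model predictions

r, _ = stats.pearsonr(abmd, xbmd)
z = np.arctanh(r)                                 # Fisher z-transform
se = 1.0 / np.sqrt(len(abmd) - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
print(f"r = {r:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```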

Automatic opportunistic osteoporosis screening using chest X-ray images via deep neural networks.

Tang J, Yin X, Lai J, Luo K, Wu D

PubMed · Aug 27, 2025
Osteoporosis is a bone disease characterized by reduced bone mineral density and quality, which increases the risk of fragility fractures. The current diagnostic gold standard, dual-energy X-ray absorptiometry (DXA), faces limitations such as low equipment penetration, high testing costs, and radiation exposure, restricting its feasibility as a screening tool. To address these limitations, we retrospectively collected data from 1995 patients who visited Daping Hospital in Chongqing from January 2019 to August 2024 and developed an opportunistic screening method using chest X-rays. We designed three deep neural network models using transfer learning: Inception v3, VGG16, and ResNet50. These models were evaluated on their classification performance for osteoporosis from chest X-ray images, with external validation via multi-center data. The ResNet50 model demonstrated superior performance, achieving average accuracies of 87.85% and 90.38% on the internal test dataset across two experiments, with AUC values of 0.945 and 0.957, respectively. These results outperformed traditional convolutional neural networks. In external validation, the ResNet50 model achieved an AUC of 0.904, accuracy of 89%, sensitivity of 90%, and specificity of 88.57%, demonstrating strong generalization ability. The model also remained robust in the presence of concurrent pulmonary pathologies. This study provides an automatic screening method for osteoporosis using chest X-rays, without additional radiation exposure or cost. The ResNet50 model's high performance supports clinicians in the early identification and treatment of osteoporosis patients.
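For readers who want to reproduce the general setup, here is a minimal transfer-learning sketch with a frozen ImageNet-pretrained ResNet50 and a binary head, assuming Keras/TensorFlow; the input size, dropout, and learning rate are illustrative assumptions, not the authors' configuration.

```python
# Transfer learning for binary osteoporosis screening on chest X-rays:
# freeze the pretrained backbone, train a small sigmoid head.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                      # freeze pretrained features first

model = models.Sequential([
    base,
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # osteoporosis vs. normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()
```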

Activating Associative Disease-Aware Vision Token Memory for LLM-Based X-ray Report Generation.

Wang X, Wang F, Wang H, Jiang B, Li C, Wang Y, Tian Y, Tang J

PubMed · Aug 27, 2025
X-ray-based medical report generation has achieved significant progress in recent years with the help of large language models; however, these models have not fully exploited the effective information in visual image regions, resulting in reports that are linguistically sound but insufficient in describing key diseases. In this paper, we propose a novel associative-memory-enhanced X-ray report generation model that effectively mimics the process by which professional doctors write medical reports. It considers both the mining of global and local visual information and associates historical report information to better complete the writing of the current report. Specifically, given an X-ray image, we first utilize a classification model along with its activation maps to mine visual regions highly associated with diseases and to learn disease query tokens. Then, we employ a visual Hopfield network to establish memory associations for disease-related tokens, and a report Hopfield network to retrieve report memory information. This process facilitates the generation of high-quality reports based on a large language model and achieves state-of-the-art performance on multiple benchmark datasets, including IU X-ray, MIMIC-CXR, and CheXpert Plus. The source code and pre-trained models of this work have been released at https://github.com/Event-AHU/Medical_Image_Analysis.
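The retrieval mechanism at the core of this design can be illustrated compactly. Below is a minimal NumPy sketch of one modern (continuous) Hopfield update, where a query token reads out the closest stored memory via a softmax over similarities; dimensions and the inverse temperature beta are illustrative assumptions.

```python
# One modern Hopfield retrieval step: attention-like readout of memories.
import numpy as np

def hopfield_retrieve(query: np.ndarray, memory: np.ndarray,
                      beta: float = 8.0) -> np.ndarray:
    """query: (d,) state, e.g. a disease query token.
    memory: (n, d) stored patterns, e.g. historical report embeddings."""
    scores = beta * memory @ query             # (n,) similarities
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over memories
    return weights @ memory                    # retrieved pattern, (d,)

rng = np.random.default_rng(0)
memory = rng.normal(size=(32, 64))             # 32 stored report tokens
query = memory[3] + 0.1 * rng.normal(size=64)  # noisy cue
retrieved = hopfield_retrieve(query, memory)
print(np.argmax(memory @ retrieved))           # -> 3, the matching memory
```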

Advanced Deep Learning Techniques for Classifying Dental Conditions Using Panoramic X-Ray Images

Alireza Golkarieh, Kiana Kiashemshaki, Sajjad Rezvani Boroujeni

arXiv preprint · Aug 27, 2025
This study investigates deep learning methods for automated classification of dental conditions in panoramic X-ray images. A dataset of 1,512 radiographs with 11,137 expert-verified annotations across four conditions (fillings, cavities, implants, and impacted teeth) was used. After preprocessing and class balancing, three approaches were evaluated: a custom convolutional neural network (CNN), hybrid models combining CNN feature extraction with traditional classifiers, and fine-tuned pre-trained architectures. Experiments employed 5-fold cross-validation with accuracy, precision, recall, and F1-score as evaluation metrics. The hybrid CNN-Random Forest model achieved the highest performance with 85.4% accuracy, surpassing the custom CNN baseline of 74.3%. Among pre-trained models, VGG16 performed best at 82.3% accuracy, followed by Xception and ResNet50. Results show that hybrid models improve discrimination of morphologically similar conditions and provide efficient, reliable performance. These findings suggest that combining CNN-based feature extraction with ensemble classifiers offers a practical path toward automated dental diagnostic support, while also highlighting the need for larger datasets and further clinical validation.
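Here is a minimal sketch of the winning hybrid recipe, assuming a frozen ImageNet-pretrained VGG16 as feature extractor, scikit-learn's Random Forest, and random placeholder data; the actual study used dental radiographs and tuned hyperparameters.

```python
# Hybrid pipeline: frozen CNN features feeding a Random Forest classifier,
# scored with 5-fold cross-validation as in the study.
import numpy as np
from tensorflow.keras.applications import VGG16
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

extractor = VGG16(weights="imagenet", include_top=False,
                  input_shape=(224, 224, 3), pooling="avg")

X_img = np.random.rand(64, 224, 224, 3).astype("float32")  # stand-in X-rays
y = np.random.randint(0, 4, size=64)   # 4 classes: fillings, cavities, ...

features = extractor.predict(X_img, verbose=0)              # (64, 512)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, features, y, cv=5)             # 5-fold CV
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```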

SWiFT: Soft-Mask Weight Fine-tuning for Bias Mitigation

Junyu Yan, Feng Chen, Yuyang Xue, Yuning Du, Konstantinos Vilouras, Sotirios A. Tsaftaris, Steven McDonagh

arXiv preprint · Aug 26, 2025
Recent studies have shown that Machine Learning (ML) models can exhibit bias in real-world scenarios, posing significant challenges in ethically sensitive domains such as healthcare. Such bias can negatively affect model fairness and generalization, and further risks amplifying social discrimination, so there is a need to remove biases from trained models. Existing debiasing approaches often require access to the original training data and extensive model retraining; they also typically exhibit trade-offs between fairness and discriminative performance. To address these challenges, we propose Soft-Mask Weight Fine-Tuning (SWiFT), a debiasing framework that efficiently improves fairness while preserving discriminative performance at much lower debiasing cost. Notably, SWiFT requires only a small external dataset and a few epochs of model fine-tuning. The idea behind SWiFT is to first find the relative, yet distinct, contributions of model parameters to both bias and predictive performance. Then, a two-step fine-tuning process updates each parameter with different gradient flows defined by its contribution. Extensive experiments with three bias-sensitive attributes (gender, skin tone, and age) across four dermatological and two chest X-ray datasets demonstrate that SWiFT consistently reduces model bias while achieving competitive or even superior diagnostic accuracy under common fairness and accuracy metrics, compared to the state-of-the-art. We also demonstrate improved generalization ability, as evidenced by superior performance on several out-of-distribution (OOD) datasets.
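The masked-gradient idea can be sketched in a few lines. Below is a minimal PyTorch illustration of a fine-tuning step where per-parameter soft masks scale the gradients; how the masks are estimated here (random, for demonstration) is a placeholder, not the paper's contribution-scoring procedure.

```python
# One fine-tuning step with soft-masked gradients: weights implicated in
# bias (mask near 1) update freely, performance-critical weights (mask near
# 0) are protected.
import torch

def soft_mask_step(model, loss, masks, optimizer):
    """masks: dict of parameter name -> tensor in [0, 1], same shape."""
    optimizer.zero_grad()
    loss.backward()
    for name, p in model.named_parameters():
        if p.grad is not None and name in masks:
            p.grad.mul_(masks[name])     # attenuate protected weights
    optimizer.step()

model = torch.nn.Linear(16, 2)
masks = {n: torch.rand_like(p) for n, p in model.named_parameters()}
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
loss = torch.nn.functional.cross_entropy(model(x), y)
soft_mask_step(model, loss, masks, opt)
```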

AT-CXR: Uncertainty-Aware Agentic Triage for Chest X-rays

Xueyang Li, Mingze Jiang, Gelei Xu, Jun Xia, Mengzhao Jia, Danny Chen, Yiyu Shi

arXiv preprint · Aug 26, 2025
Agentic AI is advancing rapidly, yet truly autonomous medical-imaging triage, where a system decides when to stop, escalate, or defer under real constraints, remains relatively underexplored. To address this gap, we introduce AT-CXR, an uncertainty-aware agent for chest X-rays. The system estimates per-case confidence and distributional fit, then follows a stepwise policy to issue an automated decision or abstain with a suggested label for human intervention. We evaluate two router designs that share the same inputs and actions: a deterministic rule-based router and an LLM-decided router. In a five-fold evaluation on a balanced subset of the NIH ChestX-ray14 dataset, both variants outperform strong zero-shot vision-language models and state-of-the-art supervised classifiers, achieving higher full-coverage accuracy and superior selective-prediction performance, evidenced by a lower area under the risk-coverage curve (AURC) and a lower error rate at high coverage, while operating with latency low enough to meet practical clinical constraints. The two routers provide complementary operating points, enabling deployments to prioritize either maximal throughput or maximal accuracy. Our code is available at https://github.com/XLIAaron/uncertainty-aware-cxr-agent.
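The deterministic router can be pictured as a simple threshold policy. Here is a minimal sketch, assuming a per-class probability vector and a scalar out-of-distribution score as inputs; both thresholds are illustrative assumptions rather than the paper's tuned values.

```python
# Rule-based triage router: automate when confident and in-distribution,
# otherwise abstain with a suggested label for human review.
import numpy as np

def route(probs: np.ndarray, ood_score: float,
          conf_thresh: float = 0.9, ood_thresh: float = 0.5) -> dict:
    """probs: per-class probabilities; ood_score: higher = less typical."""
    label = int(np.argmax(probs))
    confident = probs[label] >= conf_thresh and ood_score < ood_thresh
    return {"action": "automate" if confident else "defer",
            "suggested_label": label,
            "confidence": float(probs[label])}

print(route(np.array([0.96, 0.04]), ood_score=0.2))   # -> automate
print(route(np.array([0.60, 0.40]), ood_score=0.2))   # -> defer to human
```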

Development and evaluation of a convolutional neural network model for sex prediction using cephalometric radiographs and cranial photographs.

Handayani VW, Margareth Amiatun Ruth MS, Rulaningtyas R, Caesarardhi MR, Yudhantorro BA, Yudianto A

PubMed · Aug 25, 2025
Accurately determining sex from features such as facial bone profiles and teeth is crucial for identifying unknown victims. Lateral cephalometric radiographs effectively depict the lateral cranial structure, aiding the development of computational identification models. This study develops and evaluates a sex prediction model using cephalometric radiographs with several convolutional neural network (CNN) architectures. The primary goal is to evaluate the model's performance on standardized radiographic data and on real-world cranial photographs to simulate forensic applications. Six CNN architectures (VGG16, VGG19, MobileNetV2, ResNet50V2, InceptionV3, and InceptionResNetV2) were employed to train and validate 340 cephalometric images of Indonesian individuals aged 18 to 40 years. The data were divided into training (70%), validation (15%), and testing (15%) subsets, and data augmentation was implemented to mitigate class imbalance. Additionally, a set of 40 cranial images from anatomical specimens was used to evaluate the model's generalizability. Model performance metrics included accuracy, precision, recall, and F1-score. CNN models were trained and evaluated on 340 cephalometric images (255 females and 85 males). VGG19 and ResNet50V2 achieved high F1-scores of 95% (females) and 83% (males), respectively, on cephalometric data, highlighting their strong class-specific performance. Although overall accuracy exceeded 90%, the F1-score better reflected model performance on this imbalanced dataset. In contrast, performance dropped notably on cranial photographs, particularly when classifying female samples: while InceptionResNetV2 achieved the highest F1-score for cranial photographs (62%), misclassification of females remained significant. Confusion matrices and per-class metrics further revealed persistent issues related to data imbalance and generalization across imaging modalities. Basic CNN models perform well on standardized cephalometric images but less effectively on photographic cranial images, indicating a domain shift between image types that limits generalizability. Improving real-world forensic performance will require further optimization and more diverse training data.
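The abstract's point that F1-score is more informative than accuracy on this imbalanced dataset is easy to demonstrate. Here is a minimal sketch with scikit-learn on simulated labels mirroring the 255/85 class split; the predictions are fabricated for illustration, not the study's outputs.

```python
# Why per-class F1 beats accuracy on imbalanced data: a classifier that
# skews toward the majority class can look accurate while failing the
# minority class.
import numpy as np
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
y_true = np.array([0] * 255 + [1] * 85)           # 0 = female, 1 = male
y_pred = y_true.copy()
flip = rng.choice(np.where(y_true == 1)[0], size=30, replace=False)
y_pred[flip] = 0                                  # misclassify 30 males

print(classification_report(y_true, y_pred, target_names=["female", "male"]))
# Accuracy stays near 91% while male recall drops to roughly 65%.
```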

FCR: Investigating Generative AI models for Forensic Craniofacial Reconstruction

Ravi Shankar Prasad, Dinesh Singh

arXiv preprint · Aug 25, 2025
Craniofacial reconstruction in forensics is one of the processes used to identify victims of crime and natural disasters; identifying an individual from their remains plays a crucial role when all other identification methods fail. Traditional methods for this task, such as clay-based craniofacial reconstruction, require expert domain knowledge and are time-consuming. At the same time, other probabilistic generative models, such as the statistical shape model or the Basel face model, fail to capture cross-domain attributes between skull and face. Given these limitations, we propose a generic framework for craniofacial reconstruction from 2D X-ray images. We use various generative models (e.g., CycleGANs and cGANs) and fine-tune the generator and discriminator to produce more realistic images in two distinct domains: the skull and the face of an individual. This is the first time 2D X-rays have been used as a skull representation by generative models for craniofacial reconstruction. We evaluate the quality of the generated faces using FID, IS, and SSIM scores. Finally, we propose a retrieval framework in which the query is the generated face image and the gallery is a database of real faces. Experimental results show that this can be an effective tool for forensic science.
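Of the three reported metrics, SSIM is the simplest to compute directly. Here is a minimal sketch using scikit-image on placeholder images; FID and IS require a pretrained feature network and are omitted here.

```python
# Score a generated face against a reference with SSIM. The random images
# below are placeholders for real and generated faces.
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
real_face = rng.random((256, 256)).astype("float32")        # reference image
generated = np.clip(real_face + 0.05 * rng.standard_normal((256, 256)), 0, 1)

score = ssim(real_face, generated.astype("float32"), data_range=1.0)
print(f"SSIM: {score:.3f}")   # closer to 1.0 = more similar
```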