Page 73 of 142 · 1,416 results

A Multi-Centric Anthropomorphic 3D CT Phantom-Based Benchmark Dataset for Harmonization

Mohammadreza Amirian, Michael Bach, Oscar Jimenez-del-Toro, Christoph Aberle, Roger Schaer, Vincent Andrearczyk, Jean-Félix Maestrati, Maria Martin Asiain, Kyriakos Flouris, Markus Obmann, Clarisse Dromain, Benoît Dufour, Pierre-Alexandre Alois Poletti, Hendrik von Tengg-Kobligk, Rolf Hügli, Martin Kretzschmar, Hatem Alkadhi, Ender Konukoglu, Henning Müller, Bram Stieltjes, Adrien Depeursinge

arXiv preprint · Jul 2, 2025
Artificial intelligence (AI) has introduced numerous opportunities for human assistance and task automation in medicine. However, it suffers from poor generalization in the presence of shifts in the data distribution. In the context of AI-based computed tomography (CT) analysis, significant data distribution shifts can be caused by changes in scanner manufacturer, reconstruction technique or dose. AI harmonization techniques can address this problem by reducing distribution shifts caused by various acquisition settings. This paper presents an open-source benchmark dataset containing CT scans of an anthropomorphic phantom acquired with various scanners and settings, whose purpose is to foster the development of AI harmonization techniques. Using a phantom eliminates inter- and intra-patient variability as a confounding source of variation. The dataset includes 1378 image series acquired with 13 scanners from 4 manufacturers across 8 institutions using a harmonized protocol as well as several acquisition doses. Additionally, we present a methodology, baseline results and open-source code to assess image- and feature-level stability and liver tissue classification, promoting the development of AI harmonization strategies.
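The feature-level stability the benchmark assesses can be quantified, for instance, as the coefficient of variation of a radiomic feature measured across repeated phantom scans on different scanners; a minimal NumPy sketch (the function name and metric choice are illustrative assumptions, and the benchmark's actual stability measures may differ):

```python
import numpy as np

def feature_cv(values: np.ndarray) -> float:
    """Coefficient of variation of one feature measured across repeated
    scans of the same phantom: lower means more stable under changes in
    scanner, reconstruction, or dose."""
    values = np.asarray(values, dtype=float)
    return float(values.std(ddof=1) / values.mean())
```

With a phantom, any residual spread in such a feature is attributable to acquisition settings rather than anatomy, which is what makes this kind of metric usable for harmonization benchmarking.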

PanTS: The Pancreatic Tumor Segmentation Dataset

Wenxuan Li, Xinze Zhou, Qi Chen, Tianyu Lin, Pedro R. A. S. Bassi, Szymon Plotka, Jaroslaw B. Cwikla, Xiaoxi Chen, Chen Ye, Zheren Zhu, Kai Ding, Heng Li, Kang Wang, Yang Yang, Yucheng Tang, Daguang Xu, Alan L. Yuille, Zongwei Zhou

arXiv preprint · Jul 2, 2025
PanTS is a large-scale, multi-institutional dataset curated to advance research in pancreatic CT analysis. It contains 36,390 CT scans from 145 medical centers, with expert-validated, voxel-wise annotations of over 993,000 anatomical structures, covering pancreatic tumors, pancreas head, body, and tail, and 24 surrounding anatomical structures such as vascular/skeletal structures and abdominal/thoracic organs. Each scan includes metadata such as patient age, sex, diagnosis, contrast phase, in-plane spacing, slice thickness, etc. AI models trained on PanTS achieve significantly better performance in pancreatic tumor detection, localization, and segmentation compared to those trained on existing public datasets. Our analysis indicates that these gains are directly attributable to the 16x larger-scale tumor annotations and indirectly supported by the 24 additional surrounding anatomical structures. As the largest and most comprehensive resource of its kind, PanTS offers a new benchmark for developing and evaluating AI models in pancreatic CT analysis.

A deep learning-based computed tomography reading system for the diagnosis of lung cancer associated with cystic airspaces.

Hu Z, Zhang X, Yang J, Zhang B, Chen H, Shen W, Li H, Zhou Y, Zhang J, Qiu K, Xie Z, Xu G, Tan J, Pang C

PubMed · Jul 2, 2025
To propose a deep learning model and explore its performance in the auxiliary diagnosis of lung cancer associated with cystic airspaces (LCCA) in computed tomography (CT) images. This retrospective analysis incorporated a total of 342 CT series, comprising 272 series from patients diagnosed with LCCA and 70 series from patients with pulmonary bulla. A deep learning model named LungSSFNet, developed on the basis of nnUnet, was used for image recognition and segmentation, with reference annotations provided by experienced thoracic surgeons. The dataset was divided into a training set (245 series), a validation set (62 series), and a test set (35 series). The performance of LungSSFNet was compared with that of other models such as UNet, M2Snet, TANet, MADGNet, and nnUnet to evaluate its effectiveness in recognizing and segmenting LCCA and pulmonary bulla. LungSSFNet achieved an intersection over union of 81.05% and a Dice similarity coefficient of 75.15% for LCCA, and 93.03% and 92.04%, respectively, for pulmonary bulla. These outcomes demonstrate that LungSSFNet outperformed many existing models in segmentation tasks. Additionally, it attained an accuracy of 96.77%, a precision of 100%, and a sensitivity of 96.15%. LungSSFNet, a new deep learning model, substantially improved the diagnosis of early-stage LCCA and is potentially valuable for auxiliary clinical decision-making. Our LungSSFNet code is available at https://github.com/zx0412/LungSSFNet.
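For reference, the intersection over union and Dice similarity coefficient reported above can be computed from binary segmentation masks as follows; a minimal NumPy sketch (the function name is illustrative, not taken from the paper's code):

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray) -> tuple[float, float]:
    """Intersection-over-union and Dice coefficient for binary masks.
    Both masks must share the same shape; empty-vs-empty counts as 1.0."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return float(iou), float(dice)
```

Note that Dice is always at least as large as IoU for the same pair of masks (Dice = 2·IoU / (1 + IoU)), which is worth keeping in mind when comparing the two columns of numbers in abstracts like this one.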

Deep learning-based sex estimation of 3D hyoid bone models in a Croatian population using adapted PointNet++ network.

Jerković I, Bašić Ž, Kružić I

PubMed · Jul 2, 2025
This study investigates a deep learning approach for sex estimation using 3D hyoid bone models derived from computed tomography (CT) scans of a Croatian population. We analyzed 202 hyoid samples (101 male, 101 female), converting CT-derived meshes into 2048-point clouds for processing with an adapted PointNet++ network. The model, optimized for small datasets with 1D convolutional layers and global size features, was first applied in an unsupervised framework. Unsupervised clustering achieved 87.10% accuracy, identifying natural sex-based morphological patterns. Subsequently, supervised classification with a support vector machine yielded an accuracy of 88.71% (Matthews Correlation Coefficient, MCC = 0.7746) on a test set (n = 62). Interpretability analysis highlighted key regions influencing classification, with males exhibiting larger, U-shaped hyoids and females showing smaller, more open structures. Despite the modest sample size, the method effectively captured sex differences, providing a data-efficient and interpretable tool. This flexible approach, combining computational efficiency with practical insights, demonstrates potential for aiding sex estimation in cases with limited skeletal remains and may support broader applications in forensic anthropology.
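Converting a mesh into a fixed-size, normalized point cloud, as done here before feeding PointNet++, can be sketched as follows; uniform vertex sampling and unit-sphere normalization are common PointNet-style preprocessing choices, assumed rather than taken from the paper:

```python
import numpy as np

def mesh_to_point_cloud(vertices: np.ndarray, n_points: int = 2048,
                        seed: int = 0) -> np.ndarray:
    """Sample a fixed-size point cloud from mesh vertices and normalize it.
    Samples with replacement only if the mesh has fewer vertices than needed."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(vertices), size=n_points,
                     replace=len(vertices) < n_points)
    cloud = vertices[idx].astype(np.float64)
    cloud -= cloud.mean(axis=0)                  # center at the origin
    scale = np.linalg.norm(cloud, axis=1).max()  # fit inside the unit sphere
    return cloud / scale if scale > 0 else cloud
```

Since the study notes that overall hyoid size itself is sexually dimorphic, the scale factor discarded by this normalization would be reintroduced as the separate "global size feature" the abstract mentions.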

AI-driven genetic algorithm-optimized lung segmentation for precision in early lung cancer diagnosis.

Said Y, Ayachi R, Afif M, Saidani T, Alanezi ST, Saidani O, Algarni AD

PubMed · Jul 2, 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, necessitating accurate and efficient diagnostic tools to improve patient outcomes. Lung segmentation plays a pivotal role in the diagnostic pipeline, directly impacting the accuracy of disease detection and treatment planning. This study presents an advanced AI-driven framework, optimized through genetic algorithms, for precise lung segmentation in early cancer diagnosis. The proposed model builds upon the UNet3+ architecture and integrates multi-scale feature extraction with enhanced optimization strategies to improve segmentation accuracy while significantly reducing computational complexity. By leveraging genetic algorithms, the framework identifies optimal neural network configurations within a defined search space, ensuring high segmentation performance with minimal parameters. Extensive experiments conducted on publicly available lung segmentation datasets demonstrated superior results, achieving a Dice similarity coefficient of 99.17% with only 26% of the parameters required by the baseline UNet3+ model. This substantial reduction in model size and computational cost makes the system highly suitable for resource-constrained environments, including point-of-care diagnostic devices. The proposed approach exemplifies the transformative potential of AI in medical imaging, enabling earlier and more precise lung cancer diagnosis while reducing healthcare disparities in resource-limited settings.
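The genetic search over network configurations described above can be sketched generically; the population size, mutation rate, selection scheme, and toy fitness below are illustrative assumptions, not the paper's settings:

```python
import random

def evolve(fitness, genome_space, pop_size=20, generations=40, seed=0):
    """Minimal genetic algorithm. Each genome is a tuple of choices, one
    per gene, drawn from genome_space (a list of option lists) — e.g.,
    filters per stage, depth, dropout rate. Higher fitness is better."""
    rng = random.Random(seed)
    pop = [tuple(rng.choice(opts) for opts in genome_space)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(genome_space))
            child = list(a[:cut] + b[cut:])         # one-point crossover
            if rng.random() < 0.3:                  # point mutation
                g = rng.randrange(len(genome_space))
                child[g] = rng.choice(genome_space[g])
            children.append(tuple(child))
        pop = parents + children                    # elitist replacement
    return max(pop, key=fitness)
```

In a real neural architecture search, the fitness call would train or partially train a candidate and return, say, validation Dice penalized by parameter count, which is what would drive the accuracy-versus-size trade-off the abstract reports.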

Automatic detection of orthodontically induced external root resorption based on deep convolutional neural networks using CBCT images.

Xu S, Peng H, Yang L, Zhong W, Gao X

PubMed · Jul 2, 2025
Orthodontically-induced external root resorption (OIERR) is among the most common risks in orthodontic treatment. Traditional OIERR diagnosis is limited by subjective judgement and cumbersome manual measurement. This research aims to develop an intelligent detection model for OIERR based on deep convolutional neural networks (CNNs) applied to cone-beam computed tomography (CBCT) images, thus providing auxiliary diagnostic support for orthodontists. Six pretrained CNN architectures were adopted and 1717 CBCT slices were used for training to construct OIERR detection models. The performance of the models was tested on 429 CBCT slices, and the regions activated during decision-making were visualized through heatmaps. The model performance was then compared with that of two orthodontists. The EfficientNet-B1 model, trained through hold-out cross-validation, proved to be the most effective for detecting OIERR. Its accuracy, precision, sensitivity, specificity, and F1-score were 0.97, 0.98, 0.97, 0.98 and 0.98, respectively. These metrics remarkably outperformed those of the orthodontists, whose accuracy, recall and F1-score were 0.86, 0.78, and 0.87 respectively (P < 0.01). The heatmaps suggested that the OIERR detection model primarily relied on root features for decision-making. Automatic detection of OIERR through CNNs and CBCT images is both accurate and efficient. The method outperforms orthodontists and is anticipated to serve as a clinical tool for the rapid screening and diagnosis of OIERR.
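Heatmap visualizations of the kind described are often produced with a class-activation-map-style weighting of the final convolutional feature maps; a minimal NumPy sketch (the abstract does not specify the exact visualization method, so this CAM-style approach is an assumption):

```python
import numpy as np

def class_activation_map(features: np.ndarray,
                         fc_weights: np.ndarray) -> np.ndarray:
    """CAM-style heatmap: weight each spatial feature map (C, H, W) by the
    classifier weight for the target class (C,) and sum over channels,
    then rescale to [0, 1] for overlay on the input slice."""
    cam = np.tensordot(features, fc_weights, axes=([0], [0]))  # (H, W)
    cam -= cam.min()
    return cam / cam.max() if cam.max() > 0 else cam
```

Upsampled to the CBCT slice resolution, such a map shows which regions drove the prediction, which is how the authors could conclude the model relied primarily on root features.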

Developing an innovative lung cancer detection model for accurate diagnosis in AI healthcare systems.

Jian W, Haq AU, Afzal N, Khan S, Alsolai H, Alanazi SM, Zamani AT

PubMed · Jul 2, 2025
Accurate lung cancer (LC) identification is a major challenge in AI-based healthcare systems. Various deep learning-based methods have been proposed for lung cancer diagnosis. In this study, we propose a deep learning-based integrated model (CNN-GRU) for lung cancer detection, in which convolutional neural networks (CNNs) and gated recurrent units (GRUs) are combined into a single intelligent model. The CNN component extracts spatial features from lung CT images through convolutional and pooling layers. The extracted features are passed to the GRU component for the final prediction of LC. The CNN-GRU model was validated on LC data using the hold-out validation technique. Data augmentation techniques such as rotation and brightness adjustment were used to enlarge the dataset for effective training. The optimization techniques Stochastic Gradient Descent (SGD) and Adaptive Moment Estimation (ADAM) were applied during training to optimize the model parameters. Additionally, evaluation metrics were used to test model performance. The experimental results showed that the model achieved 99.77% accuracy, surpassing previous models. The CNN-GRU model is recommended for accurate LC detection in AI-based healthcare systems due to its improved diagnostic accuracy.
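The rotation and brightness augmentations mentioned above can be sketched as a simple NumPy transform; the 90-degree rotation steps and brightness range here are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Random rotation plus brightness scaling for a normalized 2D slice
    with intensities in [0, 1], used to enlarge a training set."""
    k = int(rng.integers(0, 4))            # rotate by 0, 90, 180, or 270 deg
    out = np.rot90(image, k=k)
    gain = rng.uniform(0.8, 1.2)           # brightness factor
    return np.clip(out * gain, 0.0, 1.0)
```

Applying several independently sampled transforms per training image multiplies the effective dataset size without collecting new scans, which is the role augmentation plays in the training pipeline described above.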

Multitask Deep Learning Based on Longitudinal CT Images Facilitates Prediction of Lymph Node Metastasis and Survival in Chemotherapy-Treated Gastric Cancer.

Qiu B, Zheng Y, Liu S, Song R, Wu L, Lu C, Yang X, Wang W, Liu Z, Cui Y

PubMed · Jul 2, 2025
Accurate preoperative assessment of lymph node metastasis (LNM) and overall survival (OS) status is essential for patients with locally advanced gastric cancer receiving neoadjuvant chemotherapy, providing timely guidance for clinical decision-making. However, current approaches to evaluate LNM and OS have limited accuracy. In this study, we used longitudinal CT images from 1,021 patients with locally advanced gastric cancer to develop and validate a multitask deep learning model, named co-attention tri-oriented spatial Mamba (CTSMamba), to simultaneously predict LNM and OS. CTSMamba was trained and validated on 398 patients, and the performance was further validated on 623 patients at two additional centers. Notably, CTSMamba exhibited significantly more robust performance than a clinical model in predicting LNM across all of the cohorts. Additionally, integrating CTSMamba survival scores with clinical predictors further improved personalized OS prediction. These results support the potential of CTSMamba to accurately predict LNM and OS from longitudinal images, potentially providing clinicians with a tool to inform individualized treatment approaches and optimized prognostic strategies. CTSMamba is a multitask deep learning model trained on longitudinal CT images of neoadjuvant chemotherapy-treated locally advanced gastric cancer that accurately predicts lymph node metastasis and overall survival to inform clinical decision-making. This article is part of a special series: Driving Cancer Discoveries with Computational Research, Data Science, and Machine Learning/AI.

A computationally frugal open-source foundation model for thoracic disease detection in lung cancer screening programs

Niccolò McConnell, Pardeep Vasudev, Daisuke Yamada, Daryl Cheng, Mehran Azimbagirad, John McCabe, Shahab Aslani, Ahmed H. Shahin, Yukun Zhou, The SUMMIT Consortium, Andre Altmann, Yipeng Hu, Paul Taylor, Sam M. Janes, Daniel C. Alexander, Joseph Jacob

arXiv preprint · Jul 2, 2025
Low-dose computed tomography (LDCT) imaging employed in lung cancer screening (LCS) programs is increasing in uptake worldwide. LCS programs herald a generational opportunity to simultaneously detect cancer and non-cancer-related early-stage lung disease. Yet these efforts are hampered by a shortage of radiologists to interpret scans at scale. Here, we present TANGERINE, a computationally frugal, open-source vision foundation model for volumetric LDCT analysis. Designed for broad accessibility and rapid adaptation, TANGERINE can be fine-tuned off the shelf for a wide range of disease-specific tasks with limited computational resources and training data. Relative to models trained from scratch, TANGERINE demonstrates fast convergence during fine-tuning, thereby requiring significantly fewer GPU hours, and displays strong label efficiency, achieving comparable or superior performance with a fraction of fine-tuning data. Pretrained using self-supervised learning on over 98,000 thoracic LDCTs, including the UK's largest LCS initiative to date and 27 public datasets, TANGERINE achieves state-of-the-art performance across 14 disease classification tasks, including lung cancer and multiple respiratory diseases, while generalising robustly across diverse clinical centres. By extending a masked autoencoder framework to 3D imaging, TANGERINE offers a scalable solution for LDCT analysis, departing from recent closed, resource-intensive models by combining architectural simplicity, public availability, and modest computational requirements. Its accessible, open-source lightweight design lays the foundation for rapid integration into next-generation medical imaging tools that could transform LCS initiatives, allowing them to pivot from a singular focus on lung cancer detection to comprehensive respiratory disease management in high-risk populations.
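The masked-autoencoder pretraining that TANGERINE extends to 3D starts by hiding a large fraction of patches from the encoder; a minimal sketch of the random patch mask (the 75% ratio is the common MAE default, not a value reported in the abstract):

```python
import numpy as np

def random_patch_mask(n_patches: int, mask_ratio: float = 0.75,
                      seed: int = 0) -> np.ndarray:
    """Boolean mask over flattened patch indices: True = hidden from the
    encoder and reconstructed by the decoder during MAE pretraining.
    For a 3D volume, n_patches is the product of patch counts per axis."""
    rng = np.random.default_rng(seed)
    n_mask = int(n_patches * mask_ratio)
    mask = np.zeros(n_patches, dtype=bool)
    mask[rng.permutation(n_patches)[:n_mask]] = True
    return mask
```

Because the encoder only ever sees the visible quarter of patches, pretraining cost drops sharply relative to processing full volumes, which is consistent with the "computationally frugal" framing of the abstract.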

Calibrated Self-supervised Vision Transformers Improve Intracranial Arterial Calcification Segmentation from Clinical CT Head Scans

Benjamin Jin, Grant Mair, Joanna M. Wardlaw, Maria del C. Valdés Hernández

arXiv preprint · Jul 2, 2025
Vision Transformers (ViTs) have gained significant popularity in the natural image domain but have been less successful in 3D medical image segmentation. Nevertheless, 3D ViTs are particularly interesting for large medical imaging volumes due to their efficient self-supervised training within the masked autoencoder (MAE) framework, which enables the use of imaging data without the need for expensive manual annotations. Intracranial arterial calcification (IAC) is an imaging biomarker visible on routinely acquired CT scans linked to neurovascular diseases such as stroke and dementia, and automated IAC quantification could enable their large-scale risk assessment. We pre-train ViTs with MAE and fine-tune them for IAC segmentation for the first time. To develop our models, we use highly heterogeneous data from a large clinical trial, the third International Stroke Trial (IST-3). We evaluate key aspects of MAE pre-trained ViTs in IAC segmentation, and analyse the clinical implications. We show: 1) our calibrated self-supervised ViT beats a strong supervised nnU-Net baseline by 3.2 Dice points, 2) low patch sizes are crucial for ViTs for IAC segmentation and interpolation upsampling with regular convolutions is preferable to transposed convolutions for ViT-based models, and 3) our ViTs increase robustness to higher slice thicknesses and improve risk group classification in a clinical scenario by 46%. Our code is available online.
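Calibration of the kind referenced in the title is often done post hoc with temperature scaling of the model's logits on held-out data; a minimal grid-search sketch (the paper's exact calibration scheme is not stated in the abstract, so this standard method is an assumption):

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits: np.ndarray, labels: np.ndarray,
                    grid=np.linspace(0.5, 5.0, 91)) -> float:
    """Pick the temperature T that minimizes negative log-likelihood of
    softmax(logits / T) on a held-out set. T > 1 softens overconfident
    predictions; T < 1 sharpens underconfident ones."""
    def nll(T: float) -> float:
        p = softmax(logits / T)
        return float(-np.log(p[np.arange(len(labels)), labels] + 1e-12).mean())
    return float(min(grid, key=nll))
```

Temperature scaling reorders nothing (argmax predictions are unchanged), so it improves the reliability of confidence scores without affecting Dice or classification accuracy.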