Page 27 of 133 · 1328 results

Random forest-based out-of-distribution detection for robust lung cancer segmentation

Aneesh Rangnekar, Harini Veeraraghavan

arXiv preprint · Aug 26, 2025
Accurate detection and segmentation of cancerous lesions from computed tomography (CT) scans are essential for automated treatment planning and cancer treatment response assessment. Transformer-based models with self-supervised pretraining can produce reliably accurate segmentation from in-distribution (ID) data but degrade when applied to out-of-distribution (OOD) datasets. We address this challenge with RF-Deep, a random forest classifier that utilizes deep features from the pretrained transformer encoder of the segmentation model to detect OOD scans and enhance segmentation reliability. The segmentation model comprises a Swin Transformer encoder, pretrained with masked image modeling (SimMIM) on 10,432 unlabeled 3D CT scans covering cancerous and non-cancerous conditions, and a convolutional decoder trained to segment lung cancers in 317 3D scans. Independent testing was performed on 603 3D CT scans from public datasets, comprising one ID dataset and four OOD datasets: chest CTs with pulmonary embolism (PE) and COVID-19, and abdominal CTs with kidney cancers and healthy volunteers. RF-Deep detected OOD cases with an FPR95 of 18.26%, 27.66%, and less than 0.1% on PE, COVID-19, and abdominal CTs, respectively, consistently outperforming established OOD approaches. The RF-Deep classifier provides a simple and effective approach to enhance the reliability of cancer segmentation in ID and OOD scenarios.
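The core idea of RF-Deep, a random forest trained on encoder features and evaluated with FPR95, can be sketched roughly as below; the feature dimensionality, sample counts, and Gaussian stand-ins for pooled Swin encoder features are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Illustrative stand-ins for pooled encoder features: ID scans cluster near the
# origin, OOD scans are shifted (dimension 64 and counts are arbitrary choices).
id_feats = rng.normal(0.0, 1.0, size=(200, 64))
ood_feats = rng.normal(3.0, 1.0, size=(200, 64))

X = np.vstack([id_feats, ood_feats])
y = np.array([0] * 200 + [1] * 200)  # 0 = in-distribution, 1 = out-of-distribution

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def fpr_at_95_tpr(scores_id, scores_ood):
    """FPR95: false-positive rate on ID scans at the score threshold that
    still flags 95% of OOD scans."""
    thresh = np.quantile(scores_ood, 0.05)  # 95% of OOD scores lie above this
    return float(np.mean(scores_id >= thresh))

scores_id = clf.predict_proba(id_feats)[:, 1]   # OOD probability per ID scan
scores_ood = clf.predict_proba(ood_feats)[:, 1]
print(fpr_at_95_tpr(scores_id, scores_ood))
```

In a real setting the classifier would be scored on held-out scans; here the well-separated synthetic clusters simply make the FPR95 mechanics visible.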

Bronchiectasis in patients with chronic obstructive pulmonary disease: AI-based CT quantification using the bronchial tapering ratio.

Park H, Choe J, Lee SM, Lim S, Lee JS, Oh YM, Lee JB, Hwang HJ, Yun J, Bae S, Yu D, Loh LC, Ong CK, Seo JB

PubMed paper · Aug 26, 2025
Although chest CT is the primary tool for evaluating bronchiectasis, accurately measuring its extent poses challenges. This study aimed to automatically quantify bronchiectasis using an artificial intelligence (AI)-based analysis of the bronchial tapering ratio on chest CT and to assess its association with clinical outcomes in patients with chronic obstructive pulmonary disease (COPD). COPD patients from two prospective multicenter cohorts were included. AI-based airway quantification was performed on baseline CT, measuring the tapering ratio for each bronchus in the whole lung. A bronchiectasis score accounting for the extent of bronchi with abnormal tapering (inner-lumen tapering ratio ≥ 1.1, indicating airway dilatation) in the whole lung was calculated. Associations between the bronchiectasis score and all-cause mortality and acute exacerbation (AE) were assessed using multivariable models. The discovery and validation cohorts included 361 patients (mean age, 67 years; 97.5% men) and 112 patients (mean age, 67 years; 93.7% men), respectively. In the discovery cohort, 220 (60.9%) had a history of at least one AE and 59 (16.3%) died during follow-up; 18 (16.1%) died in the validation cohort. The bronchiectasis score was independently associated with increased mortality (discovery: adjusted HR, 1.86 [95% CI: 1.08-3.18]; validation: HR, 5.42 [95% CI: 1.97-14.92]). The score was also associated with risk of any AE, severe AE, and shorter time to first AE (for all, p < 0.05). In patients with COPD, the quantified extent of bronchiectasis using AI-based CT quantification of the bronchial tapering ratio was associated with all-cause mortality and the risk of AE over time.
Question: Can AI-based CT quantification of bronchial tapering reliably assess bronchiectasis relevant to clinical outcomes in patients with COPD?
Findings: Scores from this AI-based method of automatically quantifying the extent of whole-lung bronchiectasis were independently associated with all-cause mortality and risk of AEs in COPD patients.
Clinical relevance: AI-based bronchiectasis analysis on CT may shift clinical research toward more objective, quantitative assessment methods and support risk stratification and management in COPD, highlighting its potential to enhance clinically relevant imaging evaluation.
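The abstract's dilatation criterion (inner-lumen tapering ratio ≥ 1.1) implies a simple per-lung extent score; the equal-weight fraction below is an illustrative sketch, since the study's exact weighting of extent is not given here:

```python
import numpy as np

def bronchiectasis_score(tapering_ratios, threshold=1.1):
    """Fraction of measured bronchi whose inner-lumen tapering ratio meets the
    dilatation criterion (>= 1.1); equal weighting per bronchus is an assumption."""
    ratios = np.asarray(tapering_ratios, dtype=float)
    return float(np.mean(ratios >= threshold))

# Example: 3 of 8 bronchi show abnormal tapering.
print(bronchiectasis_score([0.9, 1.0, 1.2, 1.05, 1.15, 0.95, 1.3, 1.0]))  # -> 0.375
```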

Clinical Evaluation of AI-Based Three-Dimensional Dental Implant Planning: A Multicenter Study.

Che SA, Yang BE, Park SY, On SW, Lim HK, Lee CU, Kim MK, Byun SH

PubMed paper · Aug 26, 2025
Dental implant placement has become more straightforward and convenient with advances in digital technology in dentistry. Implant planning utilizing artificial intelligence (AI) has been attempted, yet its clinical efficacy remains underexplored. We aimed to assess the clinical applicability of AI-based implant planning software as a decision-support tool by comparing AI-planned positions with those of implants placed by clinicians that were clinically appropriate in their three-dimensional positions. Overall, 350 implants from 228 patients treated at four university hospitals were analyzed. The AI algorithm was developed using enhanced deep convolutional neural networks. Implant positions planned by the AI were compared with those placed freehand by clinicians. Three-dimensional deviations were measured and analyzed according to clinical factors, including the presence of opposing or contralateral teeth, jaw, and side. Independent-sample t-tests and two-way ANOVA were employed for statistical analysis. The mean coronal, apical, and angular deviations were 2.99 ± 1.56 mm, 3.66 ± 1.68 mm, and 7.56 ± 4.67°, respectively. Angular deviation was significantly greater in the absence of contralateral teeth (p = 0.039), and apical deviation was significantly greater in the mandible (p < 0.001). The AI-based 3D implant planning tool demonstrated potential as a decision-support system by providing valuable guidance in clinical scenarios. However, discrepancies between AI-generated and actual implant positions indicate that further research and development are needed to enhance its predictive accuracy. AI-based implant planning may serve as a supportive tool under clinician supervision, potentially improving workflow efficiency and contributing to more standardized implant treatment planning as the technology advances.
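The reported coronal, apical, and angular deviations are standard point-distance and axis-angle metrics. A minimal sketch, assuming each implant is summarized by a coronal (platform) point and an apical point in a shared 3D frame; this mirrors common deviation metrics rather than the study's exact measurement protocol:

```python
import numpy as np

def implant_deviations(planned_coronal, planned_apical, placed_coronal, placed_apical):
    """Coronal and apical deviations (mm) as 3D point distances, plus the angular
    deviation (degrees) between the two implant long axes (coronal -> apical)."""
    pc, pa = np.asarray(planned_coronal, float), np.asarray(planned_apical, float)
    qc, qa = np.asarray(placed_coronal, float), np.asarray(placed_apical, float)
    coronal_dev = float(np.linalg.norm(pc - qc))
    apical_dev = float(np.linalg.norm(pa - qa))
    u = (pa - pc) / np.linalg.norm(pa - pc)  # planned axis direction
    v = (qa - qc) / np.linalg.norm(qa - qc)  # placed axis direction
    angle = float(np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))))
    return coronal_dev, apical_dev, angle

# Placed implant shifted 1 mm laterally but parallel to the plan:
print(implant_deviations([0, 0, 0], [0, 0, 10], [1, 0, 0], [1, 0, 10]))
```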

Stress-testing cross-cancer generalizability of 3D nnU-Net for PET-CT tumor segmentation: multi-cohort evaluation with novel oesophageal and lung cancer datasets

Soumen Ghosh, Christine Jestin Hannan, Rajat Vashistha, Parveen Kundu, Sandra Brosda, Lauren G. Aoude, James Lonie, Andrew Nathanson, Jessica Ng, Andrew P. Barbour, Viktor Vegh

arXiv preprint · Aug 26, 2025
Robust generalization is essential for deploying deep learning-based tumor segmentation in clinical PET-CT workflows, where anatomical sites, scanners, and patient populations vary widely. This study presents the first cross-cancer evaluation of nnU-Net on PET-CT, introducing two novel, expert-annotated whole-body datasets: 279 patients with oesophageal cancer (Australian cohort) and 54 with lung cancer (Indian cohort). These cohorts complement the public AutoPET dataset and enable systematic stress-testing of cross-domain performance. We trained and tested 3D nnU-Net models under three paradigms: target only (oesophageal), public only (AutoPET), and combined training. On the test sets, the oesophageal-only model achieved the best in-domain accuracy (mean DSC, 57.8) but failed on the external Indian lung cohort (mean DSC less than 3.4), indicating severe overfitting. The public-only model generalized more broadly (mean DSC, 63.5 on AutoPET and 51.6 on the Indian lung cohort) but underperformed on the oesophageal Australian cohort (mean DSC, 26.7). The combined approach provided the most balanced results (mean DSC: lung, 52.9; oesophageal, 40.7; AutoPET, 60.9), reducing boundary errors and improving robustness across all cohorts. These findings demonstrate that dataset diversity, particularly multi-demographic, multi-center, and multi-cancer integration, outweighs architectural novelty as the key driver of robust generalization. This work presents a demography-based cross-cancer evaluation of deep learning segmentation and highlights dataset diversity, rather than model complexity, as the foundation for clinically robust segmentation.
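The DSC figures quoted above are instances of the Dice similarity coefficient on binary masks, which can be computed as:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient (DSC) between two binary masks of any shape:
    2 * |A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0
    return float(2.0 * np.logical_and(pred, target).sum() / denom)

print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))  # -> 0.5
```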

A Machine Learning Approach to Volumetric Computations of Solid Pulmonary Nodules

Yihan Zhou, Haocheng Huang, Yue Yu, Jianhui Shang

arXiv preprint · Aug 26, 2025
Early detection of lung cancer is crucial for effective treatment and relies on accurate volumetric assessment of pulmonary nodules in CT scans. Traditional methods, such as consolidation-to-tumor ratio (CTR) and spherical approximation, are limited by inconsistent estimates due to variability in nodule shape and density. We propose an advanced framework that combines a multi-scale 3D convolutional neural network (CNN) with subtype-specific bias correction for precise volume estimation. The model was trained and evaluated on a dataset of 364 cases from Shanghai Chest Hospital. Our approach achieved a mean absolute deviation of 8.0 percent compared to manual nonlinear regression, with inference times under 20 seconds per scan. This method outperforms existing deep learning and semi-automated pipelines, which typically have errors of 25 to 30 percent and require over 60 seconds for processing. Our results show a reduction in error by over 17 percentage points and a threefold acceleration in processing speed. These advancements offer a highly accurate, efficient, and scalable tool for clinical lung nodule screening and monitoring, with promising potential for improving early lung cancer detection.

GReAT: leveraging geometric artery data to improve wall shear stress assessment

Julian Suk, Jolanda J. Wentzel, Patryk Rygiel, Joost Daemen, Daniel Rueckert, Jelmer M. Wolterink

arXiv preprint · Aug 26, 2025
Leveraging big data for patient care is promising in many medical fields such as cardiovascular health. For example, hemodynamic biomarkers like wall shear stress could be assessed from patient-specific medical images via machine learning algorithms, bypassing the need for time-intensive computational fluid simulation. However, it is extremely challenging to amass large enough datasets to effectively train such models. We could address this data scarcity by means of self-supervised pre-training and foundation models, given large datasets of geometric artery models. In the context of coronary arteries, leveraging learned representations to improve hemodynamic biomarker assessment has not yet been well studied. In this work, we address this gap by investigating whether a large dataset (8,449 shapes) consisting of geometric models of 3D blood vessels can benefit wall shear stress assessment in coronary artery models from a small-scale clinical trial (49 patients). We create a self-supervised target for the 3D blood vessels by computing the heat kernel signature, a quantity obtained via Laplacian eigenvectors that captures the very essence of the shapes. We show how geometric representations learned from this dataset can boost segmentation of coronary arteries into regions of low, mid, and high (time-averaged) wall shear stress, even when trained on limited data.
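The heat kernel signature used as a self-supervised target follows the standard spectral formula HKS(x, t) = Σᵢ exp(−λᵢ t) φᵢ(x)². A minimal NumPy sketch over a Laplacian eigendecomposition; the paper's actual mesh Laplacian construction and sampling of diffusion times are not reproduced here:

```python
import numpy as np

def heat_kernel_signature(eigenvalues, eigenvectors, times):
    """HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)**2, from the Laplacian
    eigendecomposition of a shape: eigenvalues (k,), eigenvectors (n, k).
    Returns an (n, T) array of per-vertex signatures at T diffusion times."""
    lam = np.asarray(eigenvalues, dtype=float)            # (k,)
    phi_sq = np.asarray(eigenvectors, dtype=float) ** 2   # (n, k)
    decay = np.exp(-np.outer(lam, np.asarray(times, dtype=float)))  # (k, T)
    return phi_sq @ decay                                 # (n, T)

# Tiny example: combinatorial Laplacian of a 3-node path graph standing in for a mesh.
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
lam, phi = np.linalg.eigh(L)
hks = heat_kernel_signature(lam, phi, [0.0, 0.1, 1.0])
print(hks.shape)  # (3, 3): 3 vertices, 3 diffusion times
```

At t = 0 every vertex's signature is 1 (rows of an orthonormal eigenbasis have unit norm), and larger t progressively smooths the signature over the shape.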

Whole-genome sequencing analysis of left ventricular structure and sphericity in 80,000 people

Pirruccello, J.

medRxiv preprint · Aug 26, 2025
Background: Sphericity is a measurement of how closely an object approximates a globe. The sphericity of the blood pool of the left ventricle (LV) is an emerging measure linked to myocardial dysfunction.
Methods: Video-based deep learning models were trained for semantic segmentation (pixel labeling) in cardiac magnetic resonance imaging in 84,327 UK Biobank participants. These labeled pixels were co-oriented in 3D and used to construct surface meshes. LV ejection fraction, mass, volume, surface area, and sphericity were calculated. Epidemiologic and genetic analyses were conducted. Polygenic score validation was performed in All of Us.
Results: 3D LV sphericity was more strongly associated (HR 10.3 per SD, 95% CI 6.1-17.3) with dilated cardiomyopathy (DCM) than LV ejection fraction (HR 2.9 per SD reduction, 95% CI 2.4-3.6). Paired with whole-genome sequencing, these measurements linked LV structure and function to 366 distinct common and low-frequency genetic loci, and 17 genes with rare variant burden, spanning a 25-fold range of effect size. The discoveries included 22 of the 26 loci recently associated with DCM. LV genome-wide polygenic scores equaled or outperformed dedicated hypertrophic cardiomyopathy (HCM) and DCM polygenic scores for disease prediction. In All of Us, those in the polygenic extreme 1% had an estimated 6.6% risk of DCM by age 80, compared to 33% for carriers of rare truncating variants in the gene TTN.
Conclusions: 3D sphericity is a distinct, heritable LV measurement that is intricately linked to risk for HCM and DCM. The genetic findings from this study raise the possibility that the majority of common genetic loci that will be discovered in future large-scale DCM analyses are present in the current results.
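LV sphericity relates the blood-pool volume to its surface area. The classical Wadell definition below (equal to 1.0 for a perfect sphere) is one common formulation; the study's exact 3D LV sphericity definition may differ:

```python
import math

def sphericity(volume, surface_area):
    """Wadell sphericity: psi = pi**(1/3) * (6 * V)**(2/3) / A, i.e. the surface
    area of a sphere with the same volume, divided by the shape's surface area."""
    return math.pi ** (1.0 / 3.0) * (6.0 * volume) ** (2.0 / 3.0) / surface_area

# A sphere of radius 1 has V = 4/3 * pi and A = 4 * pi, giving sphericity 1:
print(sphericity(4.0 * math.pi / 3.0, 4.0 * math.pi))
```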

A Fully Automated 3D CT U-Net Framework for Segmentation and Measurement of the Masseter Muscle, Innovatively Incorporating a Self-Supervised Algorithm to Effectively Reduce Sample Size: A Validation Study in East Asian Populations.

Qiu X, Han W, Wang L, Chai G

PubMed paper · Aug 26, 2025
The segmentation and volume measurement of the masseter muscle play an important role in radiological evaluation. Manual segmentation is considered the gold standard, but it has limited efficiency. This study aims to develop and evaluate a U-Net-based coarse-to-fine learning framework for automated segmentation and volume measurement of the masseter muscle, providing baseline data on muscle characteristics in 840 healthy East Asian volunteers, while introducing a self-supervised algorithm to reduce the sample size required for deep learning. A database of 840 individuals (253 males, 587 females) with negative head CT scans was utilized. Following a G*Power sample size calculation, 15 cases were randomly chosen for clinical validation. Masseter segmentation was conducted manually in the manual group and automatically in the Auto-Seg group. The primary endpoint was the masseter muscle volume, while the secondary endpoints included morphological score and runtime, benchmarked against manual segmentation. Reliability tests and paired t-tests analyzed intra- and inter-group differences. Additionally, automatic volumetric measurements and asymmetry, calculated as (L − R)/(L + R) × 100%, were evaluated, with correlations to clinical parameters analyzed via Pearson's correlation test. The volume accuracy of automatic segmentation matched that of manual delineation (p > 0.05), demonstrating equivalence. Manual segmentation's runtime (937.3 ± 95.9 s) significantly exceeded the algorithm's (<1 s, p < 0.001). Among the 840 participants, masseter asymmetry was 4.6% ± 4.6%, with volumes of 35.5 ± 9.6 cm³ for adult males and 26.6 ± 7.5 cm³ for adult females. The U-Net-based algorithm demonstrates high concordance with manual segmentation in delineating the masseter muscle, establishing it as a reliable and efficient tool for CT-based assessments in healthy East Asian populations.
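The asymmetry index from the abstract, assuming the denominator is the sum L + R, as is standard for such normalized indices:

```python
def masseter_asymmetry(left_cm3, right_cm3):
    """Asymmetry index (L - R) / (L + R) * 100, in percent; positive values
    mean the left masseter is larger."""
    return (left_cm3 - right_cm3) / (left_cm3 + right_cm3) * 100.0

print(masseter_asymmetry(33.0, 27.0))  # -> 10.0
```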

Diffusion-Based Data Augmentation for Medical Image Segmentation

Maham Nazir, Muhammad Aqeel, Francesco Setti

arXiv preprint · Aug 25, 2025
Medical image segmentation models struggle with rare abnormalities due to scarce annotated pathological data. We propose DiffAug, a novel framework that combines text-guided diffusion-based generation with automatic segmentation validation to address this challenge. Our approach uses latent diffusion models conditioned on medical text descriptions and spatial masks to synthesize abnormalities via inpainting on normal images. Generated samples undergo dynamic quality validation through a latent-space segmentation network that ensures accurate localization while enabling single-step inference. The text prompts, derived from medical literature, guide the generation of diverse abnormality types without requiring manual annotation. Our validation mechanism filters synthetic samples based on spatial accuracy, maintaining quality while operating efficiently through direct latent estimation. Evaluated on three medical imaging benchmarks (CVC-ClinicDB, Kvasir-SEG, REFUGE2), our framework achieves state-of-the-art performance with 8-10% Dice improvements over baselines and reduces false negative rates by up to 28% for challenging cases such as small polyps and flat lesions, which are critical for early detection in screening applications.

Efficient 3D Biomedical Image Segmentation by Parallelly Multiscale Transformer-CNN Aggregation Network.

Liu W, He Y, Man T, Zhu F, Chen Q, Huang Y, Feng X, Li B, Wan Y, He J, Deng S

PubMed paper · Aug 25, 2025
Accurate and automated segmentation of 3D biomedical images is imperative in clinical diagnosis, imaging-guided surgery, and prognosis assessment. Although the burgeoning of deep learning technologies has fostered capable segmentation models, capturing global and local features both successively and simultaneously remains challenging, and doing so is essential for exact and efficient image analysis. To this end, a segmentation solution dubbed the mixed parallel shunted transformer (MPSTrans) is developed here, highlighting 3D-MPST blocks in a U-form framework. It enables not only comprehensive feature capture and multiscale slice synchronization but also deep supervision in the decoder to facilitate the fetching of hierarchical representations. On an unpublished colon cancer dataset, this model achieved an impressive increase in dice similarity coefficient (DSC) and a 1.718 mm decrease in Hausdorff distance at 95% (HD95), alongside a substantial 56.7% reduction in computational load in giga floating-point operations (GFLOPs). Meanwhile, MPSTrans outperforms other mainstream methods (Swin UNETR, UNETR, nnU-Net, PHTrans, and 3D U-Net) on three public multiorgan (aorta, gallbladder, kidney, liver, pancreas, spleen, stomach, etc.) and multimodal (CT, PET-CT, and MRI) datasets: medical segmentation decathlon (MSD) brain tumor, multi-atlas labeling beyond the cranial vault (BCV), and the automated cardiac diagnosis challenge (ACDC), accentuating its adaptability. These results reflect the potential of MPSTrans to advance the state of the art in biomedical image analysis, offering a robust tool for enhanced diagnostic capacity.
