Explainable AI for Precision Oncology: A Task-Specific Approach Using Imaging, Multi-omics, and Clinical Data

Park, Y., Park, S., Bae, E.

medRxiv preprint, Jul 14 2025
Despite continued advances in oncology, cancer remains a leading cause of global mortality, highlighting the need for diagnostic and prognostic tools that are both accurate and interpretable. Unimodal approaches often fail to capture the biological and clinical complexity of tumors. In this study, we present a suite of task-specific AI models that leverage CT imaging, multi-omics profiles, and structured clinical data to address distinct challenges in segmentation, classification, and prognosis. We developed three independent models across large public datasets. Task 1 applied a 3D U-Net to segment pancreatic tumors from CT scans, achieving a Dice Similarity Coefficient (DSC) of 0.7062. Task 2 employed a hierarchical ensemble of omics-based classifiers to distinguish tumor from normal tissue and classify six major cancer types with 98.67% accuracy. Task 3 benchmarked classical machine learning models on clinical data for prognosis prediction across three cancers (LIHC, KIRC, STAD), achieving strong performance (e.g., C-index of 0.820 in KIRC, AUC of 0.978 in LIHC). Across all tasks, explainable AI methods such as SHAP and attention-based visualization enabled transparent interpretation of model outputs. These results demonstrate the value of tailored, modality-aware models and underscore the clinical potential of such systems for precision oncology.

Technical Foundations:
- Segmentation (Task 1): A custom 3D U-Net was trained using the Task07_Pancreas dataset from the Medical Segmentation Decathlon (MSD). CT images were preprocessed with MONAI-based pipelines, resampled to (64, 96, 96) voxels, and intensity-windowed to an HU range of -100 to 240 (see the sketch after this abstract).
- Classification (Task 2): Multi-omics data from TCGA (gene expression, methylation, miRNA, CNV, and mutation profiles) were log-transformed and normalized. Five modality-specific LightGBM classifiers generated meta-features for a late-fusion ensemble. Stratified 5-fold cross-validation was used for evaluation.
- Prognosis (Task 3): Clinical variables from TCGA were curated and imputed (median/mode), with high-missing-rate columns removed. Survival models (e.g., Cox-PH, Random Forest, XGBoost) were trained with early stopping. No omics or imaging data were used in this task.
- Interpretability: SHAP values were computed for all tree-based models, and attention-based overlays were used in imaging tasks to visualize salient regions.
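The Task 1 pipeline is described concretely enough to sketch. Below is a minimal, hypothetical MONAI-style version of the stated preprocessing (HU windowing to [-100, 240], resampling to (64, 96, 96) voxels) feeding a generic 3D U-Net; the dictionary keys, channel widths, and file names are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the Task 1 preprocessing and model described above.
# Transform keys, network widths, and paths are assumptions for illustration.
import torch
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd,
    ScaleIntensityRanged, Resized, EnsureTyped,
)
from monai.networks.nets import UNet

preprocess = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    # Intensity window: clip HU to [-100, 240] and rescale to [0, 1]
    ScaleIntensityRanged(keys=["image"], a_min=-100, a_max=240,
                         b_min=0.0, b_max=1.0, clip=True),
    # Resample every volume to a fixed (64, 96, 96) grid
    Resized(keys=["image", "label"], spatial_size=(64, 96, 96),
            mode=("trilinear", "nearest")),
    EnsureTyped(keys=["image", "label"]),
])

# Generic 3D U-Net; 3 output channels = background, pancreas, tumor
model = UNet(
    spatial_dims=3, in_channels=1, out_channels=3,
    channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2),
    num_res_units=2,
).to("cuda" if torch.cuda.is_available() else "cpu")

sample = preprocess({"image": "case_001_ct.nii.gz", "label": "case_001_seg.nii.gz"})
logits = model(sample["image"].unsqueeze(0))  # (1, 3, 64, 96, 96)
```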

The MSA Atrophy Index (MSA-AI): An Imaging Marker for Diagnosis and Clinical Progression in Multiple System Atrophy.

Trujillo P, Hett K, Cooper A, Brown AE, Iregui J, Donahue MJ, Landman ME, Biaggioni I, Bradbury M, Wong C, Stamler D, Claassen DO

PubMed, Jul 14 2025
Reliable biomarkers are essential for tracking disease progression and advancing treatments for multiple system atrophy (MSA). In this study, we propose the MSA Atrophy Index (MSA-AI), a novel composite volumetric measure to distinguish MSA from related disorders and monitor disease progression. Seventeen participants with an initial diagnosis of probable MSA were enrolled in the longitudinal bioMUSE study and underwent 3T MRI, biofluid analysis, and clinical assessments at baseline, 6, and 12 months. Final diagnoses were determined after 12 months using clinical progression, imaging, and fluid biomarkers. Ten participants retained an MSA diagnosis, while five were reclassified as either Parkinson disease (PD, n = 4) or dementia with Lewy bodies (DLB, n = 1). Cross-sectional comparisons included additional MSA cases (n = 26), healthy controls (n = 23), pure autonomic failure (n = 23), PD (n = 56), and DLB (n = 8). Lentiform nucleus, cerebellum, and brainstem volumes were extracted using deep learning-based segmentation. Z-scores were computed using a normative dataset (n = 469) and integrated into the MSA-AI. Group differences were tested with linear regression; longitudinal changes and clinical correlations were assessed using mixed-effects models and Spearman correlations. MSA patients exhibited significantly lower MSA-AI scores compared to all other diagnostic groups (p < 0.001). The MSA-AI effectively distinguished MSA from related synucleinopathies, correlated with baseline clinical severity (ρ = -0.57, p < 0.001), and predicted disease progression (ρ = -0.55, p = 0.03). Longitudinal reductions in MSA-AI were associated with worsening clinical scores over 12 months (ρ = -0.61, p = 0.01). The MSA-AI is a promising imaging biomarker for diagnosis and monitoring disease progression in MSA. These findings require validation in larger, independent cohorts.
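As a reading aid, the index construction described above (regional volumes converted to z-scores against a normative dataset and integrated into a single composite score) can be sketched in a few lines. The normative means/SDs and the equal-weight averaging below are illustrative assumptions; the abstract does not specify the exact combination rule.

```python
# Hypothetical sketch of a composite atrophy index built from regional volume
# z-scores, in the spirit of the MSA-AI described above. The normative values
# and the equal-weight combination are illustrative assumptions.
import numpy as np

# Illustrative normative statistics (cm^3) for each region
NORMATIVE = {
    "lentiform_nucleus": (11.0, 1.2),
    "cerebellum": (130.0, 12.0),
    "brainstem": (28.0, 3.0),
}

def atrophy_index(volumes_cm3: dict) -> float:
    """Average regional z-score; lower values indicate more atrophy."""
    z_scores = []
    for region, (mu, sd) in NORMATIVE.items():
        z_scores.append((volumes_cm3[region] - mu) / sd)
    return float(np.mean(z_scores))

patient = {"lentiform_nucleus": 9.1, "cerebellum": 105.0, "brainstem": 22.5}
print(f"Composite index: {atrophy_index(patient):.2f}")
```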

A generative model uses healthy and diseased image pairs for pixel-level chest X-ray pathology localization.

Dong K, Cheng Y, He K, Suo J

PubMed, Jul 14 2025
Medical artificial intelligence (AI) offers potential for automatic pathological interpretation, but a practicable AI model demands both pixel-level accuracy and high explainability for diagnosis. The construction of such models relies on substantial training data with fine-grained labelling, which is impractical in real applications. To circumvent this barrier, we propose a prompt-driven constrained generative model to produce anatomically aligned healthy and diseased image pairs and learn a pathology localization model in a supervised manner. This paradigm provides high-fidelity labelled data and addresses the lack of chest X-ray images with labelling at fine scales. Benefitting from emerging text-driven generative models and the incorporated constraints, our model presents promising localization accuracy of subtle pathologies, high explainability for clinical decisions, and good transferability to many unseen pathological categories such as new prompts and mixed pathologies. These advantageous features establish our model as a promising solution to assist chest X-ray analysis. In addition, the proposed approach may also inspire solutions for other tasks that lack massive training data and require time-consuming manual labelling.
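The central idea, an anatomically aligned healthy/diseased pair yielding pixel-level supervision for a localization model, can be illustrated with a toy sketch. How the actual method derives its labels from the constrained generator is not reproduced here; the thresholded difference map, the tiny localizer, and the random stand-in images below are all illustrative assumptions.

```python
# Hedged toy sketch: a pixel-level pathology pseudo-label is derived from an
# aligned healthy/diseased pair and used as a dense target for a localizer.
import torch
import torch.nn as nn

def pathology_map(healthy: torch.Tensor, diseased: torch.Tensor, thr: float = 0.1):
    """Pixel-level pseudo-label from an aligned image pair (assumed rule)."""
    return ((diseased - healthy).abs() > thr).float()

localizer = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
criterion = nn.BCEWithLogitsLoss()

healthy = torch.rand(2, 1, 64, 64)                              # stand-in generated healthy CXR
diseased = healthy + 0.3 * (torch.rand_like(healthy) > 0.97)    # stand-in synthetic lesion
target = pathology_map(healthy, diseased)

loss = criterion(localizer(diseased), target)
loss.backward()
```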

Generative AI enables medical image segmentation in ultra low-data regimes.

Zhang L, Jindal B, Alaa A, Weinreb R, Wilson D, Segal E, Zou J, Xie P

PubMed, Jul 14 2025
Semantic segmentation of medical images is pivotal in applications like disease diagnosis and treatment planning. While deep learning automates this task effectively, it struggles in ultra low-data regimes because of the scarcity of annotated segmentation masks. To address this, we propose a generative deep learning framework that produces high-quality image-mask pairs as auxiliary training data. Unlike traditional generative models that separate data generation from model training, ours uses multi-level optimization for end-to-end data generation. This allows segmentation performance to guide the generation process, producing data tailored to improve segmentation outcomes. Our method demonstrates strong generalization across 11 medical image segmentation tasks and 19 datasets, covering various diseases, organs, and modalities. It improves performance by 10-20% (absolute) in both same- and out-of-domain settings and requires 8-20 times less training data than existing approaches. This greatly enhances the feasibility and cost-effectiveness of deep learning in data-limited medical imaging scenarios.
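The core mechanism described above, letting segmentation performance guide the data generator through multi-level optimization, can be illustrated with a heavily simplified bi-level training loop. The sketch below uses the `higher` library to differentiate through a single inner segmenter update; the toy networks, the single inner step, and the omission of the segmenter's own training on real plus synthetic data are all simplifications, not the authors' implementation.

```python
# Toy bi-level sketch: the generator is updated so that a segmenter trained on
# its synthetic image-mask pairs performs better on real validation data.
import torch
import torch.nn as nn
import higher  # pip install higher

gen = nn.Linear(16, 2 * 32 * 32)                      # noise -> image + soft mask
seg = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1))
gen_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
seg_opt = torch.optim.SGD(seg.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

# Stand-ins for a small annotated validation batch
val_img = torch.rand(4, 1, 32, 32)
val_mask = (torch.rand(4, 1, 32, 32) > 0.5).float()

for step in range(10):
    z = torch.randn(4, 16)
    out = gen(z).view(4, 2, 32, 32)
    synth_img, synth_mask = out[:, :1], torch.sigmoid(out[:, 1:])

    gen_opt.zero_grad()
    # Inner level: one differentiable segmenter update on the generated pair
    with higher.innerloop_ctx(seg, seg_opt, copy_initial_weights=True) as (fseg, diffopt):
        diffopt.step(bce(fseg(synth_img), synth_mask))
        # Outer level: post-update validation loss guides the generator
        outer_loss = bce(fseg(val_img), val_mask)
        outer_loss.backward()  # gradients flow through the inner update to `gen`
    gen_opt.step()
    # (The segmenter's own training step on real + synthetic data is omitted.)
```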

A Brain Tumor Segmentation Method Based on CLIP and 3D U-Net with Cross-Modal Semantic Guidance and Multi-Level Feature Fusion

Mingda Zhang

arXiv preprint, Jul 14 2025
Precise segmentation of brain tumors from magnetic resonance imaging (MRI) is essential for neuro-oncology diagnosis and treatment planning. Despite advances in deep learning methods, automatic segmentation remains challenging due to tumor morphological heterogeneity and complex three-dimensional spatial relationships. Current techniques primarily rely on visual features extracted from MRI sequences while underutilizing semantic knowledge embedded in medical reports. This research presents a multi-level fusion architecture that integrates pixel-level, feature-level, and semantic-level information, facilitating comprehensive processing from low-level data to high-level concepts. The semantic-level fusion pathway combines the semantic understanding capabilities of Contrastive Language-Image Pre-training (CLIP) models with the spatial feature extraction advantages of 3D U-Net through three mechanisms: 3D-2D semantic bridging, cross-modal semantic guidance, and semantic-based attention mechanisms. Experimental validation on the BraTS 2020 dataset demonstrates that the proposed model achieves an overall Dice coefficient of 0.8567, representing a 4.8% improvement compared to traditional 3D U-Net, with a 7.3% Dice coefficient increase in the clinically important enhancing tumor (ET) region.
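The "cross-modal semantic guidance" idea, conditioning 3D U-Net features on a CLIP text embedding, can be sketched with a FiLM-style modulation layer. The FiLM formulation, the chosen CLIP checkpoint, and the prompt are illustrative assumptions; the paper's actual 3D-2D bridging and attention mechanisms are not reproduced here.

```python
# Hedged sketch: a CLIP text embedding modulates 3D feature maps channel-wise
# (FiLM-style), one simple way to inject report semantics into a 3D U-Net.
import torch
import torch.nn as nn
from transformers import CLIPTokenizer, CLIPTextModel

class SemanticFiLM(nn.Module):
    def __init__(self, text_dim: int = 512, channels: int = 64):
        super().__init__()
        self.to_scale = nn.Linear(text_dim, channels)
        self.to_shift = nn.Linear(text_dim, channels)

    def forward(self, feat: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, D, H, W); text_emb: (B, text_dim)
        scale = self.to_scale(text_emb)[:, :, None, None, None]
        shift = self.to_shift(text_emb)[:, :, None, None, None]
        return feat * (1 + scale) + shift

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
enc = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
prompt = tok(["enhancing tumor with surrounding edema on MRI"], return_tensors="pt")
text_emb = enc(**prompt).pooler_output               # (1, 512)

feat = torch.rand(1, 64, 16, 16, 16)                 # a 3D U-Net encoder feature map
guided = SemanticFiLM(text_dim=512, channels=64)(feat, text_emb)
print(guided.shape)                                  # torch.Size([1, 64, 16, 16, 16])
```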

Automated multiclass segmentation of liver vessel structures in CT images using deep learning approaches: a liver surgery pre-planning tool.

Sarkar S, Rahmani M, Farnia P, Ahmadian A, Mozayani N

PubMed, Jul 14 2025
Accurate liver vessel segmentation is essential for effective liver surgery pre-planning and for reducing surgical risks, since it enables precise localization and extensive assessment of complex vessel structures. Manual liver vessel segmentation is a time-intensive process reliant on operator expertise and skill. The complex, tree-like architecture of hepatic and portal veins, which are interwoven and anatomically variable, further complicates this challenge. This study addresses these challenges by proposing the UNETR (U-Net Transformers) architecture for the multi-class segmentation of portal and hepatic veins in liver CT images. UNETR leverages a transformer-based encoder to effectively capture long-range dependencies, overcoming the limitations of convolutional neural networks (CNNs) in handling complex anatomical structures. The proposed method was evaluated on contrast-enhanced CT images from the IRCAD dataset as well as a local dataset collected at a hospital. On the local dataset, the UNETR model achieved Dice coefficients of 49.71% for portal veins, 69.39% for hepatic veins, and 76.74% for overall vessel segmentation, while reaching a Dice coefficient of 62.54% for overall vessel segmentation on the IRCAD dataset. These results highlight the method's effectiveness in identifying complex vessel structures across diverse datasets. These findings underscore the critical role of advanced architectures and precise annotations in improving segmentation accuracy. This work provides a foundation for future advancements in automated liver surgery pre-planning, with the potential to enhance clinical outcomes significantly. The implementation code is available on GitHub: https://github.com/saharsarkar/Multiclass-Vessel-Segmentation.
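For readers unfamiliar with UNETR, a minimal instantiation for this three-class problem (background, portal vein, hepatic vein) might look like the sketch below, using MONAI's implementation; the patch size and transformer hyperparameters are assumptions, not the authors' configuration.

```python
# Hypothetical UNETR instantiation for 3-class liver vessel segmentation.
import torch
from monai.networks.nets import UNETR

model = UNETR(
    in_channels=1,
    out_channels=3,            # background, portal vein, hepatic vein
    img_size=(96, 96, 96),     # assumed patch size
    feature_size=16,
    hidden_size=768,           # transformer encoder width
    mlp_dim=3072,
    num_heads=12,
)

ct_patch = torch.rand(1, 1, 96, 96, 96)  # one contrast-enhanced CT patch
logits = model(ct_patch)                 # (1, 3, 96, 96, 96)
```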

Region Uncertainty Estimation for Medical Image Segmentation with Noisy Labels.

Han K, Wang S, Chen J, Qian C, Lyu C, Ma S, Qiu C, Sheng VS, Huang Q, Liu Z

PubMed, Jul 14 2025
The success of deep learning in 3D medical image segmentation hinges on training with a large dataset of fully annotated 3D volumes, which are difficult and time-consuming to acquire. Although recent foundation models (e.g., the Segment Anything Model, SAM) can utilize sparse annotations to reduce annotation costs, segmentation tasks involving organs and tissues with blurred boundaries remain challenging. To address this issue, we propose a region uncertainty estimation framework for Computed Tomography (CT) image segmentation using noisy labels. Specifically, we propose a sample-stratified training strategy that stratifies samples according to their varying label quality, prioritizing confident and fine-grained information at each training stage. This sample-to-voxel level processing enables more reliable supervision information to propagate to noisy label data, thus effectively mitigating the impact of noisy annotations. Moreover, we further design a boundary-guided regional uncertainty estimation module that adapts sample hierarchical training to assist in evaluating sample confidence. Experiments conducted across multiple CT datasets demonstrate the superiority of our proposed method over several competitive approaches under various noise conditions. Our proposed reliable label propagation strategy not only significantly reduces the cost of medical image annotation and of robust model training but also improves the segmentation performance in scenarios with imperfect annotations, thus paving the way towards the application of medical segmentation foundation models in low-resource and remote settings. Code will be available at https://github.com/KHan-UJS/NoisyLabel.
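The sample-stratified idea, ranking volumes by an estimate of label quality and letting the most trustworthy ones drive early training stages, can be sketched as follows. The quality score (agreement between a warm-up model and the noisy label) and the thresholds are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch: stratify training volumes by an estimated label-quality score,
# then train in stages from the most to the least reliable stratum.
import numpy as np

def dice(pred: np.ndarray, label: np.ndarray, eps: float = 1e-6) -> float:
    inter = np.logical_and(pred, label).sum()
    return (2 * inter + eps) / (pred.sum() + label.sum() + eps)

def stratify(samples, warmup_predict, thresholds=(0.85, 0.6)):
    """Split (image, noisy_label) pairs into high/medium/low quality strata."""
    strata = {"high": [], "medium": [], "low": []}
    for image, noisy_label in samples:
        score = dice(warmup_predict(image), noisy_label)
        if score >= thresholds[0]:
            strata["high"].append((image, noisy_label))
        elif score >= thresholds[1]:
            strata["medium"].append((image, noisy_label))
        else:
            strata["low"].append((image, noisy_label))
    return strata

# Curriculum: stage 1 uses only the "high" stratum; later stages gradually add
# "medium" and "low" samples once supervision becomes more reliable.
rng = np.random.default_rng(0)
toy = [(rng.random((8, 8, 8)), rng.random((8, 8, 8)) > 0.5) for _ in range(4)]
print({k: len(v) for k, v in stratify(toy, lambda im: im > 0.5).items()})
```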

Associations of Computerized Tomography-Based Body Composition and Food Insecurity in Bariatric Surgery Patients.

Sizemore JA, Magudia K, He H, Landa K, Bartholomew AJ, Howell TC, Michaels AD, Fong P, Greenberg JA, Wilson L, Palakshappa D, Seymour KA

PubMed, Jul 14 2025
Food insecurity (FI) is associated with increased adiposity and obesity-related medical conditions, and body composition can affect metabolic risk. Bariatric surgery effectively treats obesity and metabolic diseases. The association of FI with baseline computerized tomography (CT)-based body composition and bariatric surgery outcomes was investigated in this exploratory study. Fifty-four retrospectively identified adults underwent bariatric surgery with a preoperative CT scan between 2017 and 2019, completed a six-item food security survey, and had body composition measured by bioelectrical impedance analysis (BIA). Skeletal muscle, visceral fat, and subcutaneous fat areas were determined from abdominal CT and normalized to published age, sex, and race reference values. Anthropometric data, related medical conditions, and medications were collected preoperatively and at 6 and 12 months postoperatively. Patients were stratified into food security (FS) or FI based on survey responses. Fourteen (26%) patients were categorized as FI. Patients with FI had lower skeletal muscle area and higher subcutaneous fat area than patients with FS on baseline CT exam (p < 0.05). There was no difference in baseline BIA between patients with FS and FI. The two groups had similar weight loss, reduction in obesity-related medications, and healthcare utilization following bariatric surgery at 6 and 12 months postoperatively. Patients with FI had higher subcutaneous fat and lower skeletal muscle than patients with FS by baseline CT exam, findings which were not detected by BIA. CT analysis enabled by an artificial intelligence workflow offers more precise and detailed body composition data.
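The normalization step described above, referencing CT-derived areas to published age, sex, and race values, amounts to a z-score lookup. The sketch below illustrates the idea; the reference table and the age-band keys are made-up placeholders, not the study's data.

```python
# Hedged sketch of normalizing CT body-composition areas (cm^2 at a given
# vertebral level) to published reference values via z-scores.
REFERENCE = {
    # (sex, age_band): {measure: (mean_cm2, sd_cm2)}  -- illustrative values only
    ("female", "40-49"): {
        "skeletal_muscle": (110.0, 18.0),
        "visceral_fat": (120.0, 55.0),
        "subcutaneous_fat": (230.0, 85.0),
    },
}

def normalized_body_composition(measures_cm2, sex, age_band):
    """Return each measure as a z-score relative to the reference population."""
    ref = REFERENCE[(sex, age_band)]
    return {k: (v - ref[k][0]) / ref[k][1] for k, v in measures_cm2.items()}

patient = {"skeletal_muscle": 92.0, "visceral_fat": 180.0, "subcutaneous_fat": 320.0}
print(normalized_body_composition(patient, "female", "40-49"))
```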

Graph-based Multi-Modal Interaction Lightweight Network for Brain Tumor Segmentation (GMLN-BTS) in Edge Iterative MRI Lesion Localization System (EdgeIMLocSys)

Guohao Huo, Ruiting Dai, Hao Tang

arXiv preprint, Jul 14 2025
Brain tumor segmentation plays a critical role in clinical diagnosis and treatment planning, yet the variability in imaging quality across different MRI scanners presents significant challenges to model generalization. To address this, we propose the Edge Iterative MRI Lesion Localization System (EdgeIMLocSys), which integrates Continuous Learning from Human Feedback to adaptively fine-tune segmentation models based on clinician feedback, thereby enhancing robustness to scanner-specific imaging characteristics. Central to this system is the Graph-based Multi-Modal Interaction Lightweight Network for Brain Tumor Segmentation (GMLN-BTS), which employs a Modality-Aware Adaptive Encoder (M2AE) to extract multi-scale semantic features efficiently, and a Graph-based Multi-Modal Collaborative Interaction Module (G2MCIM) to model complementary cross-modal relationships via graph structures. Additionally, we introduce a novel Voxel Refinement UpSampling Module (VRUM) that synergistically combines linear interpolation and multi-scale transposed convolutions to suppress artifacts while preserving high-frequency details, improving segmentation boundary accuracy. Our proposed GMLN-BTS model achieves a Dice score of 85.1% on the BraTS2017 dataset with only 4.58 million parameters, representing a 98% reduction compared to mainstream 3D Transformer models, and significantly outperforms existing lightweight approaches. This work demonstrates a synergistic breakthrough in achieving high-accuracy, resource-efficient brain tumor segmentation suitable for deployment in resource-constrained clinical environments.
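The VRUM's stated recipe, combining linear interpolation with transposed convolutions to suppress artifacts while keeping high-frequency detail, can be illustrated with a simplified two-branch upsampling block. The summation fusion, kernel sizes, and channel counts below are assumptions rather than the published module.

```python
# Hedged sketch of a two-branch upsampling block in the spirit of the VRUM.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InterpTransposedUpsample(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.reduce = nn.Conv3d(in_ch, out_ch, kernel_size=1)
        self.transposed = nn.ConvTranspose3d(in_ch, out_ch, kernel_size=2, stride=2)
        self.fuse = nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        # Branch 1: smooth, artifact-suppressing trilinear interpolation
        interp = F.interpolate(self.reduce(x), scale_factor=2, mode="trilinear",
                               align_corners=False)
        # Branch 2: learned transposed convolution preserving high-frequency detail
        learned = self.transposed(x)
        # Fuse the two upsampling paths (summation is an assumed choice)
        return self.fuse(interp + learned)

feat = torch.rand(1, 64, 16, 16, 16)
up = InterpTransposedUpsample(64, 32)
print(up(feat).shape)  # torch.Size([1, 32, 32, 32, 32])
```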

Feasibility study of fully automatic measurement of adenoid size on lateral neck and head radiographs using deep learning.

Hao D, Tang L, Li D, Miao S, Dong C, Cui J, Gao C, Li J

PubMed, Jul 14 2025
The objective and reliable quantification of adenoid size is pivotal for precise clinical diagnosis and the formulation of effective treatment strategies. Conventional manual measurement techniques, however, are often labor-intensive and time-consuming. This study aimed to develop and validate a fully automated system for measuring adenoid size on lateral head and neck radiographs using deep learning (DL). In this retrospective study, we analyzed 711 lateral head and neck radiographs collected from two centers between February and July 2023. A DL-based adenoid size measurement system was developed, utilizing Fujioka's method. The system employed the RTMDet and RTMPose networks for accurate landmark detection, and mathematical formulas were applied to determine adenoid size. To evaluate the consistency and reliability of the system, we employed the intra-class correlation coefficient (ICC), mean absolute difference (MAD), and Bland-Altman plots as key assessment metrics. The DL-based system exhibited high reliability in the prediction of adenoid, nasopharynx, and adenoid-nasopharyngeal ratio measurements, showcasing strong agreement with the reference standard. The results indicated an ICC for adenoid measurements of 0.902 [95%CI, 0.872-0.925], with a MAD of 1.189 and a root mean square (RMS) of 1.974. For nasopharynx measurements, the ICC was 0.868 [95%CI, 0.828-0.899], with a MAD of 1.671 and an RMS of 1.916. Additionally, the adenoid-nasopharyngeal ratio measurements yielded an ICC of 0.911 [95%CI, 0.883-0.932], a MAD of 0.054, and an RMS of 0.076. The developed DL-based system effectively automates the measurement of the adenoid-nasopharyngeal ratio, adenoid, and nasopharynx on lateral neck or head radiographs, showcasing high reliability.
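Since the system follows Fujioka's method, the final measurement reduces to simple geometry on detected landmarks: a perpendicular distance for adenoid thickness (A), a point-to-point distance for nasopharyngeal depth (N), and their ratio. The sketch below illustrates that computation; the landmark names and coordinates are hypothetical.

```python
# Hedged sketch of computing the adenoid-nasopharyngeal (A/N) ratio from
# detected landmarks following the geometry of Fujioka's method.
import numpy as np

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    p, a, b = map(np.asarray, (p, a, b))
    d = b - a
    return abs(d[0] * (p - a)[1] - d[1] * (p - a)[0]) / np.linalg.norm(d)

def an_ratio(landmarks):
    # A: perpendicular distance from the adenoid's point of maximal convexity
    #    to the line along the anterior margin of the basiocciput
    A = point_to_line_distance(landmarks["adenoid_convexity"],
                               landmarks["basiocciput_upper"],
                               landmarks["basiocciput_lower"])
    # N: distance from the posterosuperior hard palate to the anteroinferior
    #    edge of the sphenobasioccipital synchondrosis
    N = np.linalg.norm(np.asarray(landmarks["hard_palate_posterior"]) -
                       np.asarray(landmarks["synchondrosis_anteroinferior"]))
    return A, N, A / N

example = {                                  # hypothetical landmark coordinates (mm)
    "adenoid_convexity": (112.0, 85.0),
    "basiocciput_upper": (130.0, 60.0),
    "basiocciput_lower": (118.0, 95.0),
    "hard_palate_posterior": (95.0, 110.0),
    "synchondrosis_anteroinferior": (120.0, 78.0),
}
A, N, ratio = an_ratio(example)
print(f"A = {A:.1f} mm, N = {N:.1f} mm, A/N = {ratio:.2f}")
```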