Page 177 of 3593587 results

Associations of Computerized Tomography-Based Body Composition and Food Insecurity in Bariatric Surgery Patients.

Sizemore JA, Magudia K, He H, Landa K, Bartholomew AJ, Howell TC, Michaels AD, Fong P, Greenberg JA, Wilson L, Palakshappa D, Seymour KA

pubmed · Jul 14 2025
Food insecurity (FI) is associated with increased adiposity and obesity-related medical conditions, and body composition can affect metabolic risk. Bariatric surgery effectively treats obesity and metabolic diseases. This exploratory study investigated the association of FI with baseline computerized tomography (CT)-based body composition and with bariatric surgery outcomes. Fifty-four retrospectively identified adults underwent bariatric surgery with a preoperative CT scan between 2017 and 2019, completed a six-item food security survey, and had body composition measured by bioelectrical impedance analysis (BIA). Skeletal muscle, visceral fat, and subcutaneous fat areas were determined from abdominal CT and normalized to published age, sex, and race reference values. Anthropometric data, related medical conditions, and medications were collected preoperatively and at 6 and 12 months postoperatively. Patients were stratified into food-secure (FS) and FI groups based on survey responses. Fourteen patients (26%) were categorized as FI. On baseline CT, patients with FI had lower skeletal muscle area and higher subcutaneous fat area than patients with FS (p < 0.05), whereas baseline BIA showed no difference between the groups. The two groups had similar weight loss, reduction in obesity-related medications, and healthcare utilization at 6 and 12 months after bariatric surgery. Baseline CT detected higher subcutaneous fat and lower skeletal muscle in patients with FI, findings that BIA did not capture. CT analysis enabled by an artificial intelligence workflow offers more precise and detailed body composition data.
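The normalization step described in this abstract (scaling CT-derived areas against published age-, sex-, and race-specific reference values) amounts to a z-score computation. A minimal sketch follows; the reference means, standard deviations, and patient values are illustrative placeholders, not data from the study:

```python
def z_score(value, ref_mean, ref_sd):
    """Normalize a raw body-composition area against a reference population."""
    return (value - ref_mean) / ref_sd

# Hypothetical reference values (mean, SD) in cm^2 for one demographic stratum.
REFERENCE = {
    "skeletal_muscle": (150.0, 25.0),
    "visceral_fat": (120.0, 60.0),
    "subcutaneous_fat": (200.0, 90.0),
}

def normalize_patient(areas):
    """Return a z-score per compartment for a dict of measured areas."""
    return {k: z_score(v, *REFERENCE[k]) for k, v in areas.items()}

# Illustrative patient: low muscle, high fat relative to the reference.
scores = normalize_patient({"skeletal_muscle": 125.0,
                            "visceral_fat": 180.0,
                            "subcutaneous_fat": 290.0})
```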

Automated multiclass segmentation of liver vessel structures in CT images using deep learning approaches: a liver surgery pre-planning tool.

Sarkar S, Rahmani M, Farnia P, Ahmadian A, Mozayani N

pubmed · Jul 14 2025
Accurate liver vessel segmentation is essential for effective liver surgery pre-planning and for reducing surgical risks, since it enables precise localization and thorough assessment of complex vessel structures. Manual liver vessel segmentation is a time-intensive process reliant on operator expertise and skill. The complex, tree-like architecture of the hepatic and portal veins, which are interwoven and anatomically variable, further complicates the task. This study addresses these challenges by applying the UNETR (U-Net Transformers) architecture to the multi-class segmentation of portal and hepatic veins in liver CT images. UNETR leverages a transformer-based encoder to capture long-range dependencies, overcoming the limitations of convolutional neural networks (CNNs) in handling complex anatomical structures. The proposed method was evaluated on contrast-enhanced CT images from the IRCAD dataset as well as a local dataset developed from hospital data. On the local dataset, the UNETR model achieved Dice coefficients of 49.71% for portal veins, 69.39% for hepatic veins, and 76.74% for overall vessel segmentation, and it reached a Dice coefficient of 62.54% for vessel segmentation on the IRCAD dataset. These results highlight the method's effectiveness in identifying complex vessel structures across diverse datasets and underscore the critical role of advanced architectures and precise annotations in improving segmentation accuracy. This work provides a foundation for future advancements in automated liver surgery pre-planning, with the potential to significantly enhance clinical outcomes. The implementation code is available on GitHub: https://github.com/saharsarkar/Multiclass-Vessel-Segmentation
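The Dice coefficients reported above measure overlap between predicted and reference vessel labels. A minimal per-class implementation over flat label arrays (the class codes and example voxels below are hypothetical, not taken from the IRCAD or local data):

```python
def dice_coefficient(pred, truth, label):
    """Dice overlap for one class between two flat label arrays."""
    pred_mask = [p == label for p in pred]
    truth_mask = [t == label for t in truth]
    intersection = sum(p and t for p, t in zip(pred_mask, truth_mask))
    total = sum(pred_mask) + sum(truth_mask)
    # Convention: if the class is absent from both, count it as perfect overlap.
    return 2.0 * intersection / total if total else 1.0

# Toy voxel labels: 0 = background, 1 = portal vein, 2 = hepatic vein.
pred  = [0, 1, 1, 2, 2, 0, 1, 0]
truth = [0, 1, 2, 2, 2, 0, 1, 0]
```

In practice the same formula is applied per class over full 3D volumes and averaged across cases.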

Feasibility study of fully automatic measurement of adenoid size on lateral neck and head radiographs using deep learning.

Hao D, Tang L, Li D, Miao S, Dong C, Cui J, Gao C, Li J

pubmed · Jul 14 2025
The objective and reliable quantification of adenoid size is pivotal for precise clinical diagnosis and effective treatment planning. Conventional manual measurement techniques, however, are labor-intensive and time-consuming. This study aimed to develop and validate a fully automated system for measuring adenoid size on lateral head and neck radiographs using deep learning (DL). In this retrospective study, we analyzed 711 lateral head and neck radiographs collected from two centers between February and July 2023. A DL-based adenoid size measurement system was developed using Fujioka's method, employing the RTMDet and RTMPose networks for accurate landmark detection and applying mathematical formulas to determine adenoid size. To evaluate the consistency and reliability of the system, we used the intra-class correlation coefficient (ICC), mean absolute difference (MAD), and Bland-Altman plots as key assessment metrics. The DL-based system was highly reliable in predicting adenoid, nasopharynx, and adenoid-nasopharyngeal ratio measurements, showing strong agreement with the reference standard. The ICC for adenoid measurements was 0.902 [95% CI, 0.872-0.925], with a MAD of 1.189 and a root mean square (RMS) of 1.974. For nasopharynx measurements, the ICC was 0.868 [95% CI, 0.828-0.899], with a MAD of 1.671 and an RMS of 1.916. The adenoid-nasopharyngeal ratio measurements yielded an ICC of 0.911 [95% CI, 0.883-0.932], a MAD of 0.054, and an RMS of 0.076. The developed DL-based system effectively automates measurement of the adenoid, nasopharynx, and adenoid-nasopharyngeal ratio on lateral head and neck radiographs with high reliability.
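The MAD and Bland-Altman statistics used here are straightforward to compute from paired automatic and manual measurements. A small sketch with made-up adenoid measurements (mm); the 95% limits of agreement follow the usual bias ± 1.96·SD convention:

```python
from statistics import mean, stdev

def mean_absolute_difference(a, b):
    """MAD between paired automatic and manual measurements."""
    return mean(abs(x - y) for x, y in zip(a, b))

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement (bias ± 1.96 · SD of the differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)
    return bias, bias - spread, bias + spread

# Made-up adenoid thickness measurements (mm), automatic vs. manual.
auto   = [18.2, 21.5, 15.9, 24.1, 19.8]
manual = [17.8, 22.0, 15.5, 23.6, 20.1]

mad = mean_absolute_difference(auto, manual)
bias, lower, upper = bland_altman_limits(auto, manual)
```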

Deep Learning-Based Prediction for Bone Cement Leakage During Percutaneous Kyphoplasty Using Preoperative Computed Tomography: Model Development and Validation.

Chen R, Wang T, Liu X, Xi Y, Liu D, Xie T, Wang A, Fan N, Yuan S, Du P, Jiao S, Zhang Y, Zang L

pubmed · Jul 14 2025
Retrospective study. To develop a deep learning (DL) model that predicts bone cement leakage (BCL) subtypes during percutaneous kyphoplasty (PKP) from preoperative computed tomography (CT), and to evaluate the model's effectiveness and generalizability on multicenter data. DL excels at automatically extracting features from medical images; however, no existing model predicts BCL subtypes from preoperative images. This study included an internal dataset for DL model training, validation, and testing, as well as an external dataset for additional testing. Our model integrated a segment localization module, based on vertebral segmentation via a three-dimensional (3D) U-Net, with a classification module based on 3D ResNet-50. Vertebral level mismatch rates were calculated, and confusion matrices were used to compare the performance of the DL model with that of spine surgeons in predicting BCL subtypes. Furthermore, Cohen's kappa coefficient was used to assess the reliability of the spine surgeons and the DL model against the reference standard. A total of 901 patients comprising 997 eligible segments were included in the internal dataset. The model demonstrated a vertebral segment identification accuracy of 96.9%. It also showed high area under the curve (AUC) values of 0.734-0.831 and sensitivities of 0.649-0.900 for BCL prediction in the internal dataset. Similar favorable AUC values of 0.709-0.818 and sensitivities of 0.706-0.857 were observed in the external dataset, indicating the stability and generalizability of the model. Moreover, the model outperformed nonexpert spine surgeons in predicting BCL subtypes, except for type II. The model achieved satisfactory accuracy, reliability, generalizability, and interpretability in predicting BCL subtypes, outperforming nonexpert spine surgeons.
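Cohen's kappa, used above to assess raters against the reference standard, corrects observed agreement for the agreement expected by chance. A sketch with a hypothetical 3×3 confusion matrix (the counts and subtypes are illustrative, not the study's data):

```python
def cohens_kappa(confusion):
    """Unweighted Cohen's kappa from a square confusion matrix
    (rows: model prediction, columns: reference standard)."""
    n = sum(sum(row) for row in confusion)
    k = len(confusion)
    observed = sum(confusion[i][i] for i in range(k)) / n
    # Chance agreement: product of marginal proportions, summed over classes.
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion) for i in range(k)
    ) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical agreement on three leakage subtypes.
cm = [[40, 5, 5],
      [4, 30, 6],
      [2, 3, 5]]

kappa = cohens_kappa(cm)
```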
This study offers valuable insights for assessing osteoporotic vertebral compression fractures, thereby aiding preoperative surgical decision-making. Level of Evidence: 3.

Pathological omics prediction of early and advanced colon cancer based on artificial intelligence model.

Wang Z, Wu Y, Li Y, Wang Q, Yi H, Shi H, Sun X, Liu C, Wang K

pubmed · Jul 14 2025
Artificial intelligence (AI) models based on pathological slides have great potential to assist pathologists in disease diagnosis and have become an important research direction in medical image analysis. The aim of this study was to develop an AI model based on whole-slide images to predict the stage of colon cancer. A total of 100 pathological slides from colon cancer patients were collected as the training set, and 421 colon cancer slides were downloaded from The Cancer Genome Atlas (TCGA) database as the external validation set. CellProfiler and CLAM tools were used to extract pathological features, and machine learning and deep learning algorithms were used to construct prediction models. The area under the curve (AUC) of the best machine learning model was 0.78 in the internal test set and 0.68 in the external test set. The deep learning model achieved an AUC of 0.889 and an accuracy of 0.854 in the internal test set, and an AUC of 0.700 in the external test set. The prediction model shows potential to generalize as part of a pathological omics diagnostic workflow. Compared with machine learning, deep learning recognizes image features with higher accuracy and yields better overall model performance.
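The AUC values reported above can be computed without tracing a ROC curve, via the rank-sum (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A small pure-Python sketch with made-up slide-level scores:

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney rank formulation: the fraction of
    (positive, negative) pairs ranked correctly; ties get half credit."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up slide-level predictions: 1 = advanced stage, 0 = early stage.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]

auc = roc_auc(scores, labels)
```

Library implementations such as scikit-learn's `roc_auc_score` compute the same quantity; the pairwise form above is just the most transparent.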

Advanced U-Net Architectures with CNN Backbones for Automated Lung Cancer Detection and Segmentation in Chest CT Images

Alireza Golkarieh, Kiana Kiashemshaki, Sajjad Rezvani Boroujeni, Nasibeh Asadi Isakan

arxiv preprint · Jul 14 2025
This study investigates the effectiveness of U-Net architectures integrated with various convolutional neural network (CNN) backbones for automated lung cancer detection and segmentation in chest CT images, addressing the critical need for accurate diagnostic tools in clinical settings. A balanced dataset of 832 chest CT images (416 cancerous and 416 non-cancerous) was preprocessed using Contrast Limited Adaptive Histogram Equalization (CLAHE) and resized to 128x128 pixels. U-Net models were developed with three CNN backbones: ResNet50, VGG16, and Xception, to segment lung regions. After segmentation, CNN-based classifiers and hybrid models combining CNN feature extraction with traditional machine learning classifiers (Support Vector Machine, Random Forest, and Gradient Boosting) were evaluated using 5-fold cross-validation. Metrics included accuracy, precision, recall, F1-score, Dice coefficient, and ROC-AUC. U-Net with ResNet50 achieved the best performance for cancerous lungs (Dice: 0.9495, Accuracy: 0.9735), while U-Net with VGG16 performed best for non-cancerous segmentation (Dice: 0.9532, Accuracy: 0.9513). For classification, the CNN model using U-Net with Xception achieved 99.1 percent accuracy, 99.74 percent recall, and 99.42 percent F1-score. The hybrid CNN-SVM-Xception model achieved 96.7 percent accuracy and 97.88 percent F1-score. Compared to prior methods, our framework consistently outperformed existing models. In conclusion, combining U-Net with advanced CNN backbones provides a powerful method for both segmentation and classification of lung cancer in CT scans, supporting early diagnosis and clinical decision-making.
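The 5-fold cross-validation used for evaluation partitions the dataset so that each fold serves exactly once as the test set. A minimal index-splitting sketch (the shuffling seed and fold construction are illustrative, not the authors' exact protocol):

```python
import random

def k_fold_indices(n_samples, k=5, seed=42):
    """Shuffle sample indices and deal them into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validation_splits(n_samples, k=5):
    """Yield (train_idx, test_idx) pairs, each fold serving once as the test set."""
    folds = k_fold_indices(n_samples, k)
    for held_out, test in enumerate(folds):
        train = [j for i, fold in enumerate(folds) if i != held_out for j in fold]
        yield train, test

# 832 images as in the abstract; five train/test partitions.
splits = list(cross_validation_splits(832, k=5))
```

Metrics (accuracy, Dice, ROC-AUC, etc.) are then averaged over the five test folds.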

Leveraging Swin Transformer for enhanced diagnosis of Alzheimer's disease using multi-shell diffusion MRI

Quentin Dessain, Nicolas Delinte, Bernard Hanseeuw, Laurence Dricot, Benoît Macq

arxiv preprint · Jul 14 2025
Objective: This study aims to support early diagnosis of Alzheimer's disease and detection of amyloid accumulation by leveraging the microstructural information available in multi-shell diffusion MRI (dMRI) data, using a vision transformer-based deep learning framework. Methods: We present a classification pipeline that employs the Swin Transformer, a hierarchical vision transformer model, on multi-shell dMRI data for the classification of Alzheimer's disease and amyloid presence. Key metrics from DTI and NODDI were extracted and projected onto 2D planes to enable transfer learning with ImageNet-pretrained models. To efficiently adapt the transformer to limited labeled neuroimaging data, we integrated Low-Rank Adaptation. We assessed the framework on diagnostic group prediction (cognitively normal, mild cognitive impairment, Alzheimer's disease dementia) and amyloid status classification. Results: The framework achieved competitive classification results within the scope of multi-shell dMRI-based features, with the best balanced accuracy of 95.2% for distinguishing cognitively normal individuals from those with Alzheimer's disease dementia using NODDI metrics. For amyloid detection, it reached 77.2% balanced accuracy in distinguishing amyloid-positive mild cognitive impairment/Alzheimer's disease dementia subjects from amyloid-negative cognitively normal subjects, and 67.9% for identifying amyloid-positive individuals among cognitively normal subjects. Grad-CAM-based explainability analysis identified clinically relevant brain regions, including the parahippocampal gyrus and hippocampus, as key contributors to model predictions. Conclusion: This study demonstrates the promise of diffusion MRI and transformer-based architectures for early detection of Alzheimer's disease and amyloid pathology, supporting biomarker-driven diagnostics in data-limited biomedical settings.
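Low-Rank Adaptation (LoRA), as integrated here, freezes the pretrained weight matrix W and trains only a low-rank update A·B, so the adapted layer computes x·W + x·(A·B) with far fewer trainable parameters. A toy pure-Python sketch with tiny matrices (all values illustrative):

```python
def matmul(A, B):
    """Naive matrix product of nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_forward(x, W, A, B, alpha=1.0):
    """Adapted layer output: x·W (frozen) plus alpha · x·(A·B) (trained, low rank)."""
    base = matmul(x, W)
    update = matmul(x, matmul(A, B))
    return [[b + alpha * u for b, u in zip(br, ur)]
            for br, ur in zip(base, update)]

# Toy example: d_in = d_out = 2, rank r = 1.
x = [[1.0, 2.0]]          # one input vector
W = [[1.0, 0.0],
     [0.0, 1.0]]          # frozen pretrained weight (identity here)
A = [[1.0],
     [0.0]]               # d_in x r down-projection
B = [[0.5, 0.5]]          # r x d_out up-projection

y = lora_forward(x, W, A, B)
```

With rank r much smaller than the layer width, A and B together hold only r·(d_in + d_out) parameters, which is what makes the approach practical on limited labeled neuroimaging data.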

Graph-based Multi-Modal Interaction Lightweight Network for Brain Tumor Segmentation (GMLN-BTS) in Edge Iterative MRI Lesion Localization System (EdgeIMLocSys)

Guohao Huo, Ruiting Dai, Hao Tang

arxiv preprint · Jul 14 2025
Brain tumor segmentation plays a critical role in clinical diagnosis and treatment planning, yet the variability in imaging quality across different MRI scanners presents significant challenges to model generalization. To address this, we propose the Edge Iterative MRI Lesion Localization System (EdgeIMLocSys), which integrates Continuous Learning from Human Feedback to adaptively fine-tune segmentation models based on clinician feedback, thereby enhancing robustness to scanner-specific imaging characteristics. Central to this system is the Graph-based Multi-Modal Interaction Lightweight Network for Brain Tumor Segmentation (GMLN-BTS), which employs a Modality-Aware Adaptive Encoder (M2AE) to extract multi-scale semantic features efficiently, and a Graph-based Multi-Modal Collaborative Interaction Module (G2MCIM) to model complementary cross-modal relationships via graph structures. Additionally, we introduce a novel Voxel Refinement UpSampling Module (VRUM) that synergistically combines linear interpolation and multi-scale transposed convolutions to suppress artifacts while preserving high-frequency details, improving segmentation boundary accuracy. Our proposed GMLN-BTS model achieves a Dice score of 85.1% on the BraTS2017 dataset with only 4.58 million parameters, representing a 98% reduction compared to mainstream 3D Transformer models, and significantly outperforms existing lightweight approaches. This work demonstrates a synergistic breakthrough in achieving high-accuracy, resource-efficient brain tumor segmentation suitable for deployment in resource-constrained clinical environments.

A Brain Tumor Segmentation Method Based on CLIP and 3D U-Net with Cross-Modal Semantic Guidance and Multi-Level Feature Fusion

Mingda Zhang

arxiv preprint · Jul 14 2025
Precise segmentation of brain tumors from magnetic resonance imaging (MRI) is essential for neuro-oncology diagnosis and treatment planning. Despite advances in deep learning methods, automatic segmentation remains challenging due to tumor morphological heterogeneity and complex three-dimensional spatial relationships. Current techniques primarily rely on visual features extracted from MRI sequences while underutilizing semantic knowledge embedded in medical reports. This research presents a multi-level fusion architecture that integrates pixel-level, feature-level, and semantic-level information, facilitating comprehensive processing from low-level data to high-level concepts. The semantic-level fusion pathway combines the semantic understanding capabilities of Contrastive Language-Image Pre-training (CLIP) models with the spatial feature extraction advantages of 3D U-Net through three mechanisms: 3D-2D semantic bridging, cross-modal semantic guidance, and semantic-based attention mechanisms. Experimental validation on the BraTS 2020 dataset demonstrates that the proposed model achieves an overall Dice coefficient of 0.8567, representing a 4.8% improvement compared to traditional 3D U-Net, with a 7.3% Dice coefficient increase in the clinically important enhancing tumor (ET) region.

The MSA Atrophy Index (MSA-AI): An Imaging Marker for Diagnosis and Clinical Progression in Multiple System Atrophy.

Trujillo P, Hett K, Cooper A, Brown AE, Iregui J, Donahue MJ, Landman ME, Biaggioni I, Bradbury M, Wong C, Stamler D, Claassen DO

pubmed · Jul 14 2025
Reliable biomarkers are essential for tracking disease progression and advancing treatments for multiple system atrophy (MSA). In this study, we propose the MSA Atrophy Index (MSA-AI), a novel composite volumetric measure to distinguish MSA from related disorders and monitor disease progression. Seventeen participants with an initial diagnosis of probable MSA were enrolled in the longitudinal bioMUSE study and underwent 3T MRI, biofluid analysis, and clinical assessments at baseline, 6, and 12 months. Final diagnoses were determined after 12 months using clinical progression, imaging, and fluid biomarkers. Ten participants retained an MSA diagnosis, while five were reclassified as either Parkinson disease (PD, n = 4) or dementia with Lewy bodies (DLB, n = 1). Cross-sectional comparisons included additional MSA cases (n = 26), healthy controls (n = 23), pure autonomic failure (n = 23), PD (n = 56), and DLB (n = 8). Lentiform nucleus, cerebellum, and brainstem volumes were extracted using deep learning-based segmentation. Z-scores were computed using a normative dataset (n = 469) and integrated into the MSA-AI. Group differences were tested with linear regression; longitudinal changes and clinical correlations were assessed using mixed-effects models and Spearman correlations. MSA patients exhibited significantly lower MSA-AI scores compared to all other diagnostic groups (p < 0.001). The MSA-AI effectively distinguished MSA from related synucleinopathies, correlated with baseline clinical severity (ρ = -0.57, p < 0.001), and predicted disease progression (ρ = -0.55, p = 0.03). Longitudinal reductions in MSA-AI were associated with worsening clinical scores over 12 months (ρ = -0.61, p = 0.01). The MSA-AI is a promising imaging biomarker for diagnosis and monitoring disease progression in MSA. These findings require validation in larger, independent cohorts.
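A composite atrophy index like the MSA-AI can be built from regional volume z-scores against a normative dataset. The abstract does not specify the weighting, so the sketch below simply averages z-scores for the three regions named; the normative means/SDs and patient volumes are invented for illustration, and lower values indicate more atrophy:

```python
from statistics import mean

# Hypothetical normative means and SDs (mL) for the regions named in the abstract.
NORMATIVE = {
    "lentiform_nucleus": (10.5, 1.2),
    "cerebellum": (130.0, 12.0),
    "brainstem": (28.0, 3.0),
}

def region_z(volume, region):
    """Z-score of one regional volume against the normative distribution."""
    m, sd = NORMATIVE[region]
    return (volume - m) / sd

def atrophy_index(volumes):
    """Composite index as the mean regional z-score; lower = more atrophy.
    A simplification: the actual MSA-AI weighting is not given in the abstract."""
    return mean(region_z(v, r) for r, v in volumes.items())

# Illustrative patient sitting two SDs below the norm in every region.
patient = {"lentiform_nucleus": 8.1, "cerebellum": 106.0, "brainstem": 22.0}
index = atrophy_index(patient)
```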