Page 58 of 1341332 results

A Novel Downsampling Strategy Based on Information Complementarity for Medical Image Segmentation

Wenbo Yue, Chang Li, Guoping Xu

arXiv preprint · Jul 20, 2025
In convolutional neural networks (CNNs), downsampling operations are crucial to model performance. Although traditional downsampling methods (such as max pooling and strided convolution) perform well in feature aggregation, receptive-field expansion, and computational reduction, they may lose key spatial information in semantic segmentation tasks, degrading pixel-wise prediction accuracy. To this end, this study proposes a downsampling method based on information complementarity: Hybrid Pooling Downsampling (HPD). Its core is to replace the traditional operator with MinMaxPooling, which effectively retains the light-dark contrast and detail features of the image by extracting both the maximum and minimum values of each local region. Experiments with various CNN architectures on the ACDC and Synapse datasets show that HPD outperforms traditional methods in segmentation performance, increasing the Dice similarity coefficient (DSC) by 0.5% on average. The results show that the HPD module provides an efficient solution for semantic segmentation tasks.
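The MinMax pooling idea can be sketched as a downsampling step that keeps both extremes of every window, so bright-detail and dark-detail information survive side by side. A minimal pure-Python sketch (the paper's actual HPD module, and how the two maps are fused afterwards, are not specified in the abstract):

```python
def min_max_pool2d(x, k=2):
    """Downsample a 2D grid by taking both the max and the min of each
    non-overlapping k x k window, returning (max_map, min_map).

    Illustrative sketch of the MinMaxPooling idea only; the paper's HPD
    module and its fusion of the two maps are not described in the abstract.
    """
    h, w = len(x), len(x[0])
    max_map, min_map = [], []
    for i in range(0, h - h % k, k):
        max_row, min_row = [], []
        for j in range(0, w - w % k, k):
            window = [x[i + di][j + dj] for di in range(k) for dj in range(k)]
            max_row.append(max(window))
            min_row.append(min(window))
        max_map.append(max_row)
        min_map.append(min_row)
    return max_map, min_map
```

In a CNN the two maps would typically be concatenated along the channel axis before the next convolution; that fusion choice is an assumption here, not something the abstract states.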

Depthwise-Dilated Convolutional Adapters for Medical Object Tracking and Segmentation Using the Segment Anything Model 2

Guoping Xu, Christopher Kabat, You Zhang

arXiv preprint · Jul 19, 2025
Recent advances in medical image segmentation have been driven by deep learning; however, most existing methods remain limited by modality-specific designs and exhibit poor adaptability to dynamic medical imaging scenarios. The Segment Anything Model 2 (SAM2) and its related variants, which introduce a streaming memory mechanism for real-time video segmentation, present new opportunities for prompt-based, generalizable solutions. Nevertheless, adapting these models to medical video scenarios typically requires large-scale datasets for retraining or transfer learning, leading to high computational costs and the risk of catastrophic forgetting. To address these challenges, we propose DD-SAM2, an efficient adaptation framework for SAM2 that incorporates a Depthwise-Dilated Adapter (DD-Adapter) to enhance multi-scale feature extraction with minimal parameter overhead. This design enables effective fine-tuning of SAM2 on medical videos with limited training data. Unlike existing adapter-based methods focused solely on static images, DD-SAM2 fully exploits SAM2's streaming memory for medical video object tracking and segmentation. Comprehensive evaluations on TrackRad2025 (tumor segmentation) and EchoNet-Dynamic (left ventricle tracking) datasets demonstrate superior performance, achieving Dice scores of 0.93 and 0.97, respectively. To the best of our knowledge, this work provides an initial attempt at systematically exploring adapter-based SAM2 fine-tuning for medical video segmentation and tracking. Code, datasets, and models will be publicly available at https://github.com/apple1986/DD-SAM2.
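The "dilated" part of the DD-Adapter rests on a standard primitive: a convolution whose kernel taps are spaced `dilation` pixels apart, enlarging the receptive field without adding parameters. A single-channel sketch of that primitive (the adapter's depthwise multi-branch design and its insertion points inside SAM2 are beyond this illustration):

```python
def dilated_conv2d(x, kernel, dilation=1):
    """Valid 2D convolution of a single-channel grid with a dilated kernel.

    A k x k kernel with dilation d covers a span of (k-1)*d + 1 pixels,
    which is how dilation widens the receptive field at constant cost.
    Illustrative only; not the DD-Adapter implementation itself.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1
    h, w = len(x), len(x[0])
    out = []
    for i in range(h - span + 1):
        row = []
        for j in range(w - span + 1):
            s = 0.0
            for ki in range(k):
                for kj in range(k):
                    s += kernel[ki][kj] * x[i + ki * dilation][j + kj * dilation]
            row.append(s)
        out.append(row)
    return out
```

In a depthwise design, one such convolution runs per channel (no cross-channel mixing), which is what keeps the adapter's parameter overhead small.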

Automated Quantitative Evaluation of Age-Related Thymic Involution on Plain Chest CT.

Okamura YT, Endo K, Toriihara A, Fukuda I, Isogai J, Sato Y, Yasuoka K, Kagami SI

PubMed · Jul 19, 2025
The thymus is an important immune organ involved in T-cell generation. Age-related involution of the thymus has been linked to various age-related pathologies in recent studies. However, there has been no method proposed to quantify age-related thymic involution based on a clinical image. The purpose of this study was to establish an objective and automatic method to quantify age-related thymic involution based on plain chest computed tomography (CT) images. We newly defined the thymic region for quantification (TRQ) as the target anatomical region. We manually segmented the TRQ in 135 CT studies, followed by construction of segmentation neural network (NN) models using the data. We developed the estimator of thymic volume (ETV), a quantitative indicator of the thymic tissue volume inside the segmented TRQ, based on simple mathematical modeling. The Hounsfield unit (HU) value and volume of the NN-segmented TRQ were measured, and the ETV was calculated in each CT study from 853 healthy subjects. We investigated how these measures were related to age and sex using quantile additive regression models. A significant correlation between the NN-segmented and manually segmented TRQ was seen for both the HU value and volume (r = 0.996 and r = 0.986, respectively). ETV declined exponentially with age (p < 0.001), consistent with age-related decline in the thymic tissue volume. In conclusion, our method enabled robust quantification of age-related thymic involution. Our method may aid in the prediction and risk classification of pathologies related to thymic involution.
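The abstract describes the ETV only as "simple mathematical modeling" over the HU value and volume of the segmented region, without giving the formula. One common two-compartment approach, shown here purely as an assumed illustration, treats each voxel as a linear fat/soft-tissue mixture and interpolates the tissue fraction from the region's mean HU:

```python
def estimate_tissue_volume(mean_hu, region_volume_ml, hu_fat=-100.0, hu_tissue=50.0):
    """Estimate soft-tissue volume inside a region whose voxels mix fat and
    tissue, by linear interpolation of the region's mean Hounsfield value
    between assumed pure-fat and pure-tissue reference values.

    This two-compartment model and the reference HU values are assumptions
    for illustration; the paper's actual ETV formula is not in the abstract.
    """
    frac = (mean_hu - hu_fat) / (hu_tissue - hu_fat)
    frac = min(1.0, max(0.0, frac))  # clamp the tissue fraction to [0, 1]
    return frac * region_volume_ml
```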

Artificial intelligence-based models for quantification of intra-pancreatic fat deposition and their clinical relevance: a systematic review of imaging studies.

Joshi T, Virostko J, Petrov MS

PubMed · Jul 19, 2025
High intra-pancreatic fat deposition (IPFD) plays an important role in diseases of the pancreas. The intricate anatomy of the pancreas and the surrounding structures has historically made IPFD a challenging measurement to make accurately on radiological images. To take on the challenge, automated IPFD quantification methods using artificial intelligence (AI) have recently been deployed. The aim was to benchmark the current knowledge on the use of AI-based models to measure IPFD automatically. The search was conducted in the MEDLINE, Embase, Scopus, and IEEE Xplore databases. Studies were eligible if they used AI for both segmentation of the pancreas and quantification of IPFD. The ground truth was manual segmentation by radiologists. When possible, data were pooled statistically using a random-effects model. A total of 12 studies (10 cross-sectional and 2 longitudinal) encompassing more than 50,000 people were included. Eight of the 12 studies used MRI, whereas four employed CT. The U-Net and nnU-Net models were the most frequently used AI-based models. The pooled Dice similarity coefficient of AI-based models in quantifying IPFD was 82.3% (95% confidence interval, 73.5 to 91.1%). The clinical application of AI-based models showed the relevance of high IPFD to acute pancreatitis, pancreatic cancer, and type 2 diabetes mellitus. Current AI-based models for IPFD quantification are suboptimal, as the dissimilarity between AI-based and manual quantification of IPFD is not negligible. Future advancements in fully automated measurement of IPFD will accelerate the accumulation of robust, large-scale evidence on the role of high IPFD in pancreatic diseases.
KEY POINTS:
Question: What is the current evidence on the performance and clinical applicability of artificial intelligence-based models for automated quantification of intra-pancreatic fat deposition?
Findings: The nnU-Net model achieved the highest Dice similarity coefficient among MRI-based studies, whereas the nnTransfer model demonstrated the highest Dice similarity coefficient in CT-based studies.
Clinical relevance: Standardisation of reporting on artificial intelligence-based models for the quantification of intra-pancreatic fat deposition will be essential to enhancing the clinical applicability and reliability of artificial intelligence in imaging patients with diseases of the pancreas.
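The pooled Dice coefficient above comes from a random-effects model. A standard choice for such pooling is the DerSimonian-Laird estimator; the sketch below implements that estimator (whether the review used this exact variant is an assumption):

```python
def dersimonian_laird(effects, variances):
    """Random-effects meta-analytic pooling (DerSimonian-Laird).

    Returns (pooled_effect, ci_lower, ci_upper) with a 95% confidence
    interval. `effects` are per-study estimates (e.g. Dice coefficients)
    and `variances` their within-study variances.
    """
    w = [1.0 / v for v in variances]                     # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]       # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se
```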

Accuracy and Time Efficiency of Artificial Intelligence-Driven Tooth Segmentation on CBCT Images: A Validation Study Using Two Implant Planning Software Programs.

Ntovas P, Sirirattanagool P, Asavanamuang P, Jain S, Tavelli L, Revilla-León M, Galarraga-Vinueza ME

PubMed · Jul 18, 2025
To assess the accuracy and time efficiency of manual versus artificial intelligence (AI)-driven tooth segmentation on cone-beam computed tomography (CBCT) images, using AI tools integrated within implant planning software, and to evaluate the impact of artifacts, dental arch, tooth type, and region. Fourteen patients who underwent CBCT scans were randomly selected for this study. Using the acquired datasets, 67 extracted teeth were segmented using one manual and two AI-driven tools. The segmentation time for each method was recorded. The extracted teeth were scanned with an intraoral scanner to serve as the reference. The virtual models generated by each segmentation method were superimposed with the surface scan models to calculate volumetric discrepancies. The discrepancy between the evaluated AI-driven and manual segmentation methods ranged from 0.10 to 0.98 mm, with a mean RMS of 0.27 (0.11) mm. Manual segmentation resulted in lower RMS deviation than both AI-driven methods (CDX; BSB) (p < 0.05). Significant differences were observed between all investigated segmentation methods, both for the overall tooth area and for each region, with the apical portion of the root showing the lowest accuracy (p < 0.05). Tooth type did not have a significant effect on segmentation (p > 0.05). Both AI-driven segmentation methods reduced segmentation time compared to manual segmentation (p < 0.05). AI-driven segmentation can generate reliable virtual 3D tooth models, with accuracy comparable to that of manual segmentation performed by experienced clinicians, while also significantly improving time efficiency. To further enhance accuracy in cases involving restoration artifacts, continued development and optimization of AI-driven tooth segmentation models are necessary.
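The accuracy metric here is an RMS deviation between superimposed surface models. Reduced to its core, that is the root-mean-square of nearest-surface distances; a brute-force point-set sketch (the study's actual superimposition and mesh-distance pipeline is more involved than this):

```python
import math

def rms_deviation(points_a, points_b):
    """Root-mean-square of nearest-neighbour distances from each 3D point in
    points_a to the point set points_b.

    A simple stand-in for mesh-to-mesh deviation after superimposition;
    real pipelines use point-to-triangle distances on registered meshes.
    """
    sq = []
    for ax, ay, az in points_a:
        d2 = min((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                 for bx, by, bz in points_b)
        sq.append(d2)
    return math.sqrt(sum(sq) / len(sq))
```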

Multi-Centre Validation of a Deep Learning Model for Scoliosis Assessment

Šimon Kubov, Simon Klíčník, Jakub Dandár, Zdeněk Straka, Karolína Kvaková, Daniel Kvak

arXiv preprint · Jul 18, 2025
Scoliosis affects roughly 2 to 4 percent of adolescents, and treatment decisions depend on precise Cobb angle measurement. Manual assessment is time-consuming and subject to inter-observer variation. We conducted a retrospective, multi-centre evaluation of a fully automated deep learning software (Carebot AI Bones, Spine Measurement functionality; Carebot s.r.o.) on 103 standing anteroposterior whole-spine radiographs collected from ten hospitals. Two musculoskeletal radiologists independently measured each study and served as reference readers. Agreement between the AI and each radiologist was assessed with Bland-Altman analysis, mean absolute error (MAE), root mean squared error (RMSE), Pearson correlation coefficient, and Cohen's kappa for four-grade severity classification. Against Radiologist 1, the AI achieved an MAE of 3.89° (RMSE 4.77°) with a bias of 0.70° and limits of agreement from −8.59° to +9.99°. Against Radiologist 2, the AI achieved an MAE of 3.90° (RMSE 5.68°) with a bias of 2.14° and limits from −8.23° to +12.50°. Pearson correlations were r = 0.906 and r = 0.880 (inter-reader r = 0.928), while Cohen's kappa for severity grading reached 0.51 and 0.64 (inter-reader kappa 0.59). These results demonstrate that the proposed software reproduces expert-level Cobb angle measurements and categorical grading across multiple centres, suggesting its utility for streamlining scoliosis reporting and triage in clinical workflows.
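Bland-Altman agreement between two readers reduces to the mean of the paired differences (the bias) and bias ± 1.96 SD of the differences (the limits of agreement). A minimal sketch of that computation:

```python
import math

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two paired
    measurement series (e.g. AI vs. radiologist Cobb angles, in degrees).

    Returns (bias, lower_limit, upper_limit); SD uses the n-1 denominator.
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```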

Artificial Intelligence for Tumor [<sup>18</sup>F]FDG PET Imaging: Advancements and Future Trends - Part II.

Safarian A, Mirshahvalad SA, Farbod A, Jung T, Nasrollahi H, Schweighofer-Zwink G, Rendl G, Pirich C, Vali R, Beheshti M

PubMed · Jul 18, 2025
The integration of artificial intelligence (AI) into [<sup>18</sup>F]FDG PET/CT imaging continues to expand, offering new opportunities for more precise, consistent, and personalized oncologic evaluations. Building on the foundation established in Part I, this second part explores AI-driven innovations across a broader range of malignancies, including hematological, genitourinary, melanoma, and central nervous system tumors, as well as applications of AI in pediatric oncology. Radiomics and machine learning algorithms are being explored for their ability to enhance diagnostic accuracy, reduce interobserver variability, and inform complex clinical decision-making, such as identifying patients with refractory lymphoma, assessing pseudoprogression in melanoma, or predicting brain metastases in extracranial malignancies. Additionally, AI-assisted lesion segmentation, quantitative feature extraction, and heterogeneity analysis are contributing to improved prediction of treatment response and long-term survival outcomes. Despite encouraging results, variability in imaging protocols, segmentation methods, and validation strategies across studies continues to challenge reproducibility and remains a barrier to clinical translation. This review evaluates recent advancements of AI and its current clinical applications, and emphasizes the need for robust standardization and prospective validation to ensure the reproducibility and generalizability of AI tools in PET imaging and clinical practice.

Open-access ultrasonic diaphragm dataset and an automatic diaphragm measurement using deep learning network.

Li Z, Mao L, Jia F, Zhang S, Han C, Fu S, Zheng Y, Chu Y, Chen Z, Wang D, Duan H, Zheng Y

PubMed · Jul 18, 2025
The assessment of diaphragm function is crucial for effective clinical management and the prevention of complications associated with diaphragmatic dysfunction. However, current measurement methodologies rely on manual techniques that are susceptible to human error. How does the performance of an automatic diaphragm measurement system based on a segmentation neural network, focusing on diaphragm thickness and excursion, compare with existing methodologies? The proposed system integrates segmentation and parameter measurement, leveraging a newly established ultrasound diaphragm dataset. This dataset comprises B-mode ultrasound images and videos for diaphragm thickness assessment, as well as M-mode images and videos for movement measurement. We introduce a novel deep learning-based segmentation network, the Multi-ratio Dilated U-Net (MDRU-Net), to enable accurate diaphragm measurements. The system additionally incorporates a comprehensive implementation plan for automated measurement. Automatic measurement results were compared against manual assessments conducted by clinicians, revealing an average error of 8.12% in diaphragm thickening-fraction measurements and an average relative error of only 4.3% in diaphragm excursion measurements. The results indicate overall minor discrepancies and enhanced potential for clinical detection of diaphragmatic conditions. Additionally, we designed a user-friendly automatic measurement system for assessing diaphragm parameters and an accompanying method for measuring ultrasound-derived diaphragm parameters. In summary, we constructed a diaphragm ultrasound dataset of thickness and excursion, developed an automatic diaphragm segmentation algorithm based on the U-Net architecture, and designed an automatic parameter measurement scheme, with a comparative error analysis conducted against manual measurements. Overall, the proposed diaphragm ultrasound segmentation algorithm demonstrated high segmentation performance and efficiency. The automatic measurement scheme based on this algorithm exhibited high accuracy, eliminating subjective influence and enhancing the automation of diaphragm ultrasound parameter assessment, thereby providing new possibilities for diaphragm evaluation.
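The two reported quantities follow standard definitions: the thickening fraction is the relative thickness increase from end-expiration to end-inspiration, and the excursion error is a relative error against the clinician's reference measurement. A sketch under those assumed definitions (the paper's exact measurement protocol is not given in the abstract):

```python
def thickening_fraction(thickness_insp, thickness_exp):
    """Diaphragm thickening fraction: relative thickness increase from
    end-expiration to end-inspiration (standard sonographic definition)."""
    return (thickness_insp - thickness_exp) / thickness_exp

def relative_error(measured, reference):
    """Relative error of an automatic measurement against a manual reference,
    the form in which the abstract reports excursion accuracy."""
    return abs(measured - reference) / abs(reference)
```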

AI-Driven segmentation and morphogeometric profiling of epicardial adipose tissue in type 2 diabetes.

Feng F, Hasaballa AI, Long T, Sun X, Fernandez J, Carlhäll CJ, Zhao J

PubMed · Jul 18, 2025
Epicardial adipose tissue (EAT) is associated with cardiometabolic risk in type 2 diabetes (T2D), but its spatial distribution and structural alterations remain understudied. We aim to develop a shape-aware, AI-based method for automated segmentation and morphogeometric analysis of EAT in T2D. A total of 90 participants (45 with T2D and 45 age- and sex-matched controls) underwent cardiac 3D Dixon MRI, enrolled between 2014 and 2018 as part of a sub-study of the Swedish SCAPIS cohort. We developed EAT-Seg, a multi-modal deep learning model incorporating signed distance maps (SDMs) for shape-aware segmentation. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the 95% Hausdorff distance (HD95), and the average symmetric surface distance (ASSD). Statistical shape analysis combined with partial least squares discriminant analysis (PLS-DA) was applied to point cloud representations of EAT to capture latent spatial variations between groups. Morphogeometric features, including volume, 3D local thickness map, elongation, and fragmentation index, were extracted and correlated with PLS-DA latent variables using Pearson correlation. Features with high correlation were identified as key differentiators and evaluated using a Random Forest classifier. EAT-Seg achieved a DSC of 0.881, an HD95 of 3.213 mm, and an ASSD of 0.602 mm. Statistical shape analysis revealed spatial distribution differences in EAT between the T2D and control groups. Morphogeometric feature analysis identified volume and thickness-gradient-related features as key discriminators (r > 0.8, P < 0.05). Random Forest classification achieved an AUC of 0.703. This AI-based framework enables accurate segmentation of structurally complex EAT and reveals key morphogeometric differences associated with T2D, supporting its potential as a biomarker for cardiometabolic risk assessment.
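Of the three segmentation metrics reported above, the Dice similarity coefficient is the simplest: twice the overlap divided by the total foreground of the two masks. A sketch on flat binary masks (HD95 and ASSD require surface-distance computations beyond this illustration):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two flat binary masks:
    2 * |A ∩ B| / (|A| + |B|). Both masks are sequences of 0/1 values of
    equal length (a flattened voxel grid)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks agree perfectly.
    return 2.0 * inter / total if total else 1.0
```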

Software architecture and manual for novel versatile CT image analysis toolbox -- AnatomyArchive

Lei Xu, Torkel B Brismar

arXiv preprint · Jul 18, 2025
We have developed a novel CT image analysis package named AnatomyArchive, built on top of the recent full-body segmentation model TotalSegmentator. It provides automatic target-volume selection and deselection according to user-configured anatomies for volumetric upper and lower bounds. It includes a knowledge-graph-based, time-efficient tool for anatomy segmentation-mask management and medical image database maintenance. AnatomyArchive enables automatic body-volume cropping, as well as automatic arm detection and exclusion, for more precise body composition analysis in both 2D and 3D formats. It provides robust voxel-based radiomic feature extraction, feature visualization, and an integrated toolchain for statistical tests and analysis. A Python-based, GPU-accelerated module for nearly photo-realistic, segmentation-integrated composite cinematic rendering is also included. We present here its software architecture design, illustrate its workflow and the working principles of its algorithms, and provide a few examples of how the software can be used to assist the development of modern machine learning models. Open-source code will be released at https://github.com/lxu-medai/AnatomyArchive for research and educational purposes only.
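Automatic body-volume cropping of the kind described above starts from a primitive like the tight bounding box of a segmentation mask. An illustrative 2D helper, not the AnatomyArchive API (which the abstract does not show):

```python
def mask_bounding_box(mask):
    """Tight bounding box of the nonzero region of a 2D binary mask,
    returned as half-open ((row_start, row_stop), (col_start, col_stop))
    ranges suitable for slicing. Illustrative primitive only; the real
    package operates on 3D volumes with anatomy-aware bounds."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    return (rows[0], rows[-1] + 1), (cols[0], cols[-1] + 1)
```

Cropping the image to this box (plus a margin) before analysis discards empty background, which is what makes downstream body-composition measurements cheaper and more stable.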
