Page 52 of 1341332 results

Quantification of hepatic steatosis on post-contrast computed tomography scans using artificial intelligence tools.

Derstine BA, Holcombe SA, Chen VL, Pai MP, Sullivan JA, Wang SC, Su GL

pubmed logopapers · Jul 26 2025
Early detection of steatotic liver disease (SLD) is critically important. In clinical practice, hepatic steatosis is frequently diagnosed using computed tomography (CT) performed for unrelated clinical indications. An equation for estimating magnetic resonance proton density fat fraction (MR-PDFF) from liver attenuation on non-contrast CT exists, but no equivalent exists for post-contrast CT. We sought to (1) determine whether an automated workflow can accurately measure liver attenuation, (2) validate previously identified optimal thresholds for liver or liver-spleen attenuation in post-contrast studies, and (3) develop a method for estimating MR-PDFF (FF) on post-contrast CT. The fully automated TotalSegmentator 'total' machine learning model was used to segment the 3D liver and spleen from non-contrast and post-contrast CT scans. Mean attenuation was extracted from liver (L) and spleen (S) volumes and from manually placed regions of interest (ROIs) in multi-phase CT scans of two cohorts: derivation (n = 1740) and external validation (n = 1044). Non-linear regression was used to determine the optimal coefficients for three phase-specific (arterial, venous, delayed) increasing exponential decay equations relating post-contrast L to non-contrast L. MR-PDFF was estimated from non-contrast CT and used as the reference standard. Mean attenuations from manual ROIs and automated volumes were nearly perfectly correlated for both liver and spleen (r > .96, p < .001). For moderate-to-severe steatosis (L < 40 HU), liver attenuation (L) alone was a better classifier than either the liver-spleen difference (L-S) or ratio (L/S) on post-contrast CT. Fat fraction calculated using a corrected post-contrast liver attenuation measure agreed with non-contrast FF > 15% in both the derivation and external validation cohorts, with AUROC between 0.92 and 0.97 on the arterial, venous, and delayed phases.
Automated volumetric mean attenuation of the liver and spleen can be used instead of manually placed ROIs for liver fat assessment. Liver attenuation alone in post-contrast phases can be used to assess the presence of moderate-to-severe hepatic steatosis. Correction equations for liver attenuation on post-contrast CT scans enable reasonable quantification of liver steatosis, offering potential opportunities to use clinical scans for large-scale screening or studies of SLD.
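The kind of phase-specific correction the abstract describes can be sketched as a non-linear least-squares fit of an increasing exponential decay relating post-contrast to non-contrast liver attenuation. The functional form, coefficient values, and synthetic HU data below are illustrative assumptions, not the study's published equation:

```python
import numpy as np
from scipy.optimize import curve_fit

def correction(L_post, a, b, c):
    """Increasing exponential decay mapping post-contrast liver
    attenuation (HU) to an estimated non-contrast value. The form and
    coefficients here are illustrative, not those the study derived."""
    return a - b * np.exp(-c * L_post)

# Synthetic single-phase example (hypothetical HU values)
rng = np.random.default_rng(0)
L_post = rng.uniform(60, 140, 200)                       # post-contrast liver HU
L_non = 70 - 55 * np.exp(-0.02 * L_post) + rng.normal(0, 1.0, 200)

# Fit phase-specific coefficients by non-linear least squares
(a, b, c), _ = curve_fit(correction, L_post, L_non, p0=(70, 50, 0.01))
L_corrected = correction(L_post, a, b, c)
print(float(np.mean(np.abs(L_corrected - L_non))))
```

In the study, one such set of coefficients would be fitted per contrast phase (arterial, venous, delayed), with MR-PDFF estimated from the corrected attenuation.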

FaRMamba: Frequency-based learning and Reconstruction aided Mamba for Medical Segmentation

Ze Rong, ZiYue Zhao, Zhaoxin Wang, Lei Ma

arxiv logopreprint · Jul 26 2025
Accurate medical image segmentation remains challenging due to blurred lesion boundaries (LBA), loss of high-frequency details (LHD), and difficulty in modeling long-range anatomical structures (DC-LRSS). Vision Mamba employs one-dimensional causal state-space recurrence to efficiently model global dependencies, thereby substantially mitigating DC-LRSS. However, its patch tokenization and 1D serialization disrupt local pixel adjacency and impose a low-pass filtering effect, resulting in Local High-frequency Information Capture Deficiency (LHICD) and two-dimensional Spatial Structure Degradation (2D-SSD), which in turn exacerbate LBA and LHD. In this work, we propose FaRMamba, a novel extension that explicitly addresses LHICD and 2D-SSD through two complementary modules. A Multi-Scale Frequency Transform Module (MSFM) restores attenuated high-frequency cues by isolating and reconstructing multi-band spectra via wavelet, cosine, and Fourier transforms. A Self-Supervised Reconstruction Auxiliary Encoder (SSRAE) enforces pixel-level reconstruction on the shared Mamba encoder to recover full 2D spatial correlations, enhancing both fine textures and global context. Extensive evaluations on CAMUS echocardiography, MRI-based Mouse-cochlea, and Kvasir-Seg endoscopy demonstrate that FaRMamba consistently outperforms competitive CNN-Transformer hybrids and existing Mamba variants, delivering superior boundary accuracy, detail preservation, and global coherence without prohibitive computational overhead. This work provides a flexible frequency-aware framework for future segmentation models that directly mitigates core challenges in medical imaging.
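The MSFM's core idea of isolating and reconstructing frequency bands can be illustrated with a single radial split in the 2D Fourier domain. This is a minimal stand-in: the paper combines wavelet, cosine, and Fourier transforms across multiple bands, and the cutoff radius here is a hypothetical parameter:

```python
import numpy as np

def frequency_bands(img, cutoff=0.25):
    """Split an image into low- and high-frequency components with a
    radial mask in the 2D Fourier domain. `cutoff` is a hypothetical
    normalized radius, not a value from the paper."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    low = np.fft.ifft2(np.fft.ifftshift(F * (radius <= cutoff))).real
    high = np.fft.ifft2(np.fft.ifftshift(F * (radius > cutoff))).real
    return low, high

# Toy image mixing a slow vertical and a fast horizontal oscillation
img = np.add.outer(np.sin(np.linspace(0, 8, 64)),
                   np.cos(np.linspace(0, 32, 64)))
low, high = frequency_bands(img)
# The two masks partition the spectrum, so the bands sum back to the image
print(float(np.max(np.abs(low + high - img))))
```

A module like MSFM would process each band separately (e.g. to re-amplify attenuated high frequencies) before recombining them.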

T-MPEDNet: Unveiling the Synergy of Transformer-aware Multiscale Progressive Encoder-Decoder Network with Feature Recalibration for Tumor and Liver Segmentation

Chandravardhan Singh Raghaw, Jasmer Singh Sanjotra, Mohammad Zia Ur Rehman, Shubhi Bansal, Shahid Shafi Dar, Nagendra Kumar

arxiv logopreprint · Jul 25 2025
Precise and automated segmentation of the liver and its tumor within CT scans plays a pivotal role in swift diagnosis and the development of optimal treatment plans for individuals with liver diseases and malignancies. However, automated liver and tumor segmentation faces significant hurdles arising from the inherent heterogeneity of tumors and the diverse visual characteristics of livers across a broad spectrum of patients. Aiming to address these challenges, we present a novel Transformer-aware Multiscale Progressive Encoder-Decoder Network (T-MPEDNet) for automated segmentation of tumor and liver. T-MPEDNet leverages a deep adaptive features backbone through a progressive encoder-decoder structure, enhanced by skip connections for recalibrating channel-wise features while preserving spatial integrity. A Transformer-inspired dynamic attention mechanism captures long-range contextual relationships within the spatial domain, further enhanced by multi-scale feature utilization for refined local details, leading to accurate prediction. Morphological boundary refinement is then employed to address indistinct boundaries with neighboring organs, capturing finer details and yielding precise boundary labels. The efficacy of T-MPEDNet is comprehensively assessed on two widely utilized public benchmark datasets, LiTS and 3DIRCADb. Extensive quantitative and qualitative analyses demonstrate the superiority of T-MPEDNet compared to twelve state-of-the-art methods. On LiTS, T-MPEDNet achieves outstanding Dice Similarity Coefficients (DSC) of 97.6% and 89.1% for liver and tumor segmentation, respectively. Similar performance is observed on 3DIRCADb, with DSCs of 98.3% and 83.3% for liver and tumor segmentation, respectively. Our findings prove that T-MPEDNet is an efficacious and reliable framework for automated segmentation of the liver and its tumor in CT scans.
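The morphological boundary refinement step the abstract mentions can be sketched with standard binary morphology: closing to smooth ragged boundaries, then hole filling to remove interior gaps. This is a generic sketch of the idea, not T-MPEDNet's exact procedure:

```python
import numpy as np
from scipy import ndimage

def refine_boundary(mask, iterations=1):
    """Illustrative morphological refinement of a binary segmentation:
    closing smooths the boundary, hole filling removes interior gaps."""
    structure = np.ones((3, 3), dtype=bool)
    closed = ndimage.binary_closing(mask, structure=structure,
                                    iterations=iterations)
    return ndimage.binary_fill_holes(closed)

# Toy mask with a boundary nick and a spurious interior hole
mask = np.zeros((20, 20), dtype=bool)
mask[4:16, 4:16] = True
mask[9, 9] = False          # interior hole
mask[4, 5] = False          # nick on the boundary
refined = refine_boundary(mask)
print(bool(refined[9, 9]), bool(refined[4, 5]))
```

Closing repairs the boundary nick while hole filling handles the interior defect; neither operation grows the object beyond its original extent here.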

Pre- and Post-Treatment Glioma Segmentation with the Medical Imaging Segmentation Toolkit

Adrian Celaya, Tucker Netherton, Dawid Schellingerhout, Caroline Chung, Beatrice Riviere, David Fuentes

arxiv logopreprint · Jul 25 2025
Medical image segmentation continues to advance rapidly, yet rigorous comparison between methods remains challenging due to a lack of standardized and customizable tooling. In this work, we present the current state of the Medical Imaging Segmentation Toolkit (MIST), with a particular focus on its flexible and modular postprocessing framework designed for the BraTS 2025 pre- and post-treatment glioma segmentation challenge. Since its debut in the 2024 BraTS adult glioma post-treatment segmentation challenge, MIST's postprocessing module has been significantly extended to support a wide range of transforms, including removal or replacement of small objects, extraction of the largest connected components, and morphological operations such as hole filling and closing. These transforms can be composed into user-defined strategies, enabling fine-grained control over the final segmentation output. We evaluate three such strategies - ranging from simple small-object removal to more complex, class-specific pipelines - and rank their performance using the BraTS ranking protocol. Our results highlight how MIST facilitates rapid experimentation and targeted refinement, ultimately producing high-quality segmentations for the BraTS 2025 challenge. MIST remains open source and extensible, supporting reproducible and scalable research in medical image segmentation.
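The composable-transform idea described above can be sketched as a list of mask-to-mask functions applied in order. The function names and thresholds below are illustrative, not MIST's actual API:

```python
import numpy as np
from scipy import ndimage

def remove_small_objects(mask, min_size=10):
    """Drop connected components smaller than min_size voxels."""
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, np.arange(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_size]
    return np.isin(labels, keep)

def largest_component(mask):
    """Keep only the largest connected component."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, np.arange(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)

def fill_holes(mask):
    return ndimage.binary_fill_holes(mask)

def apply_strategy(mask, transforms):
    """Compose transforms into a user-defined postprocessing strategy."""
    for t in transforms:
        mask = t(mask)
    return mask

pred = np.zeros((30, 30), dtype=bool)
pred[5:20, 5:20] = True     # main object (15 x 15 with one hole)
pred[10, 10] = False        # hole inside the main object
pred[25:27, 25:27] = True   # small spurious island (4 voxels)
out = apply_strategy(pred, [remove_small_objects, largest_component, fill_holes])
print(int(out.sum()))  # 15 * 15 = 225 after island removal and hole filling
```

In a class-specific pipeline, a different transform list would be applied per label (e.g. aggressive small-object removal for enhancing tumor, hole filling for whole tumor).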

Is Exchangeability better than I.I.D to handle Data Distribution Shifts while Pooling Data for Data-scarce Medical image segmentation?

Ayush Roy, Samin Enam, Jun Xia, Vishnu Suresh Lokhande, Won Hwa Kim

arxiv logopreprint · Jul 25 2025
Data scarcity is a major challenge in medical imaging, particularly for deep learning models. While data pooling (combining datasets from multiple sources) and data addition (adding more data from a new dataset) have been shown to enhance model performance, they are not without complications. Specifically, increasing the size of the training dataset through pooling or addition can induce distributional shifts, negatively affecting downstream model performance, a phenomenon known as the "Data Addition Dilemma". While the traditional i.i.d. assumption may not hold in multi-source contexts, assuming exchangeability across datasets provides a more practical framework for data pooling. In this work, we investigate medical image segmentation under these conditions, drawing insights from causal frameworks to propose a method for controlling foreground-background feature discrepancies across all layers of deep networks. This approach improves feature representations, which are crucial in data-addition scenarios. Our method achieves state-of-the-art segmentation performance on histopathology and ultrasound images across five datasets, including a novel ultrasound dataset that we have curated and contributed. Qualitative results demonstrate more refined and accurate segmentation maps compared to prominent baselines across three model architectures. The code will be available on GitHub.

Clinical application of a deep learning system for automatic mandibular alveolar bone quantity assessment and suggested treatment options using CBCT cross-sections.

Rashid MO, Gaghor S

pubmed logopapers · Jul 25 2025
Assessing dimensions of available bone throughout hundreds of cone-beam computed tomography cross-sectional images of an edentulous area is time-consuming, focus-demanding, and prone to variability and mistakes. This study aims at a clinically applicable artificial intelligence-based automation system for assessing available bone quantity and suggesting possible surgical and nonsurgical treatment options in real time. YOLOv8-seg, a single-stage convolutional neural network detector, was used to segment the mandibular alveolar bone and the inferior alveolar canal from cross-sectional images of a custom dataset. Measurements from the segmented masks of the bone and canal were calculated mathematically and compared with manual measurements from 2 different operators, and the time required for the measurement task was compared. Classification of bone dimensions with 25 treatment options was automatically suggested by the system and validated by a team of specialists. The YOLOv8 model segmented the anatomical structures accurately, with a precision of 0.951, recall of 0.915, mAP50 of 0.952, Intersection over Union of 0.871, and Dice similarity coefficient of 0.911. The AI-based system performed the segmentation approximately 2001 times faster than the human operator. Statistically significant differences between the system and the operators were recorded for height measurements and for measurement time. The system's recommendations matched the clinicians' assessments in 94% of cases (83/88); a Cohen κ of 0.89 indicated near-perfect agreement. The YOLOv8 model is an effective tool, providing high accuracy in segmenting dental structures with balanced computational requirements; even with the challenges presented, the system can be clinically applicable with future improvements, providing less time-consuming and, most importantly, specialist-level accurate implant planning reports.
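The agreement statistic used here (Cohen κ) is simple to compute by hand: observed agreement corrected for the agreement expected by chance. The rater labels below are invented toy data for illustration:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same cases."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    labels = np.union1d(a, b)
    p_o = np.mean(a == b)                              # observed agreement
    p_e = sum(np.mean(a == lab) * np.mean(b == lab)    # chance agreement
              for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two raters assigning 10 cases to 3 treatment options
a = [0, 0, 1, 1, 2, 2, 0, 1, 2, 0]
b = [0, 0, 1, 1, 2, 2, 0, 1, 2, 1]
print(round(cohens_kappa(a, b), 3))  # 9/10 observed agreement -> 0.851
```

Values above roughly 0.8 are conventionally read as near-perfect agreement, which is how the study interprets its κ of 0.89.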

A DCT-UNet-based framework for pulmonary airway segmentation integrating label self-updating and terminal region growing.

Zhao S, Wu Y, Xu J, Li M, Feng J, Xia S, Chen R, Liang Z, Qian W, Qi S

pubmed logopapers · Jul 25 2025
Intrathoracic airway segmentation in computed tomography (CT) is important for quantitative and qualitative analysis of various chronic respiratory diseases and for bronchial surgery navigation. However, the airway tree's morphological complexity, incomplete labels resulting from annotation difficulty, and intra-class imbalance between main and terminal airways limit segmentation performance. Methods: Three methodological improvements are proposed to address these challenges. First, we design a DCT-UNet to better collect information from neighbouring voxels and from voxels within a larger spatial region. Second, an airway label self-updating (ALSU) strategy is proposed to iteratively update the reference labels, overcoming the problem of incomplete labels. Third, a deep learning-based terminal region growing (TRG) is adopted to extract terminal airways. Extensive experiments were conducted on two internal datasets and three public datasets. Results: Compared to the counterparts, the proposed method achieves higher Branch Detected, Tree-length Detected, Branch Ratio, and Tree-length Ratio (ISICDM2021 dataset: 95.19%, 94.89%, 166.45%, and 172.29%; BAS dataset: 96.03%, 95.11%, 129.35%, and 137.00%). Ablation experiments show the effectiveness of the three proposed solutions. Our method was also applied to an in-house Chronic Obstructive Pulmonary Disease (COPD) dataset; the measures of branch count, tree length, endpoint count, airway volume, and airway surface area differ significantly between COPD severity stages. Conclusions: The proposed methods segment more terminal bronchi and a greater length of airway; even some real bronchi missed in the manual annotation can be detected. Potential application significance is demonstrated in characterizing COPD airway lesions and severity stages.
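The region-growing idea behind TRG can be sketched as a breadth-first flood fill from a seed voxel, absorbing 6-connected voxels darker than an air threshold. This toy intensity-based version is a much-simplified stand-in for the paper's deep-learning-guided variant; the HU values and threshold are illustrative:

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, threshold=-900):
    """Grow a region from `seed`, absorbing 6-connected voxels whose
    intensity (HU) is below `threshold`. Purely illustrative."""
    grown = np.zeros(volume.shape, dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and not grown[nz, ny, nx]
                    and volume[nz, ny, nx] < threshold):
                grown[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return grown

# Toy CT volume: parenchyma ~ -800 HU, one air-filled terminal branch ~ -950 HU
vol = np.full((5, 10, 10), -800.0)
vol[2, 3, 2:8] = -950.0
grown = region_grow(vol, seed=(2, 3, 4))
print(int(grown.sum()))  # the 6 voxels of the branch
```

In the paper, a network rather than a fixed threshold decides which candidate voxels near the segmented tree's endpoints belong to the airway.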

CT-free kidney single-photon emission computed tomography for glomerular filtration rate.

Kwon K, Oh D, Kim JH, Yoo J, Lee WW

pubmed logopapers · Jul 25 2025
This study explores an artificial intelligence-based approach to perform CT-free quantitative SPECT for kidney imaging using Tc-99m DTPA, aiming to estimate glomerular filtration rate (GFR) without relying on CT. A total of 1000 SPECT/CT scans were used to train and test a deep-learning model that segments kidneys automatically based on synthetic attenuation maps (µ-maps) derived from SPECT alone. The model employed a residual U-Net with edge attention and was optimized using windowing-maximum normalization and a generalized Dice similarity loss function. Performance evaluation showed strong agreement with manual CT-based segmentation, achieving a Dice score of 0.818 ± 0.056 and minimal volume differences of 17.9 ± 43.6 mL (mean ± standard deviation). An additional set of 50 scans confirmed that GFR calculated from the AI-based CT-free SPECT (109.3 ± 17.3 mL/min) was nearly identical to the conventional SPECT/CT method (109.2 ± 18.4 mL/min, p = 0.9396). This CT-free method reduced radiation exposure by up to 78.8% and shortened segmentation time from 40 min to under 1 min. The findings suggest that AI can effectively replace CT in kidney SPECT imaging, maintaining quantitative accuracy while improving safety and efficiency.
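The generalized Dice loss mentioned above weights each class's Dice term by the inverse squared class volume, so small structures (kidneys) are not swamped by background. Below is a generic numpy sketch of that loss, not the paper's exact implementation; shapes and the toy labels are assumptions:

```python
import numpy as np

def generalized_dice_loss(pred, target, eps=1e-6):
    """Generalized Dice loss over arrays of shape (C, N): softmax
    probabilities in `pred`, one-hot labels in `target`. Per-class terms
    are weighted by 1 / (class volume)^2 (Sudre et al. style)."""
    w = 1.0 / (target.sum(axis=1) ** 2 + eps)          # per-class weights
    intersect = (w * (pred * target).sum(axis=1)).sum()
    denom = (w * (pred.sum(axis=1) + target.sum(axis=1))).sum()
    return 1.0 - 2.0 * intersect / (denom + eps)

# Toy 2-class example flattened to (C, N)
target = np.array([[1, 1, 0, 0],     # class 0 (background)
                   [0, 0, 1, 1]],    # class 1 (kidney)
                  dtype=float)
perfect = target.copy()
wrong = 1.0 - target
print(float(generalized_dice_loss(perfect, target)),
      float(generalized_dice_loss(wrong, target)))
```

A perfect prediction drives the loss to (nearly) zero, while a fully wrong one drives it toward one, regardless of how imbalanced the classes are.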

Extreme Cardiac MRI Analysis under Respiratory Motion: Results of the CMRxMotion Challenge

Kang Wang, Chen Qin, Zhang Shi, Haoran Wang, Xiwen Zhang, Chen Chen, Cheng Ouyang, Chengliang Dai, Yuanhan Mo, Chenchen Dai, Xutong Kuang, Ruizhe Li, Xin Chen, Xiuzheng Yue, Song Tian, Alejandro Mora-Rubio, Kumaradevan Punithakumar, Shizhan Gong, Qi Dou, Sina Amirrajab, Yasmina Al Khalil, Cian M. Scannell, Lexiaozi Fan, Huili Yang, Xiaowu Sun, Rob van der Geest, Tewodros Weldebirhan Arega, Fabrice Meriaudeau, Caner Özer, Amin Ranem, John Kalkhof, İlkay Öksüz, Anirban Mukhopadhyay, Abdul Qayyum, Moona Mazher, Steven A Niederer, Carles Garcia-Cabrera, Eric Arazo, Michal K. Grzeszczyk, Szymon Płotka, Wanqin Ma, Xiaomeng Li, Rongjun Ge, Yongqing Kou, Xinrong Chen, He Wang, Chengyan Wang, Wenjia Bai, Shuo Wang

arxiv logopreprint · Jul 25 2025
Deep learning models have achieved state-of-the-art performance in automated Cardiac Magnetic Resonance (CMR) analysis. However, the efficacy of these models is highly dependent on the availability of high-quality, artifact-free images. In clinical practice, CMR acquisitions are frequently degraded by respiratory motion, yet the robustness of deep learning models against such artifacts remains an underexplored problem. To promote research in this domain, we organized the MICCAI CMRxMotion challenge. We curated and publicly released a dataset of 320 CMR cine series from 40 healthy volunteers who performed specific breathing protocols to induce a controlled spectrum of motion artifacts. The challenge comprised two tasks: 1) automated image quality assessment to classify images based on motion severity, and 2) robust myocardial segmentation in the presence of motion artifacts. A total of 22 algorithms were submitted and evaluated on the two designated tasks. This paper presents a comprehensive overview of the challenge design and dataset, reports the evaluation results for the top-performing methods, and further investigates the impact of motion artifacts on five clinically relevant biomarkers. All resources and code are publicly available at: https://github.com/CMRxMotion

SAM2-Aug: Prior knowledge-based Augmentation for Target Volume Auto-Segmentation in Adaptive Radiation Therapy Using Segment Anything Model 2

Guoping Xu, Yan Dai, Hengrui Zhao, Ying Zhang, Jie Deng, Weiguo Lu, You Zhang

arxiv logopreprint · Jul 25 2025
Purpose: Accurate tumor segmentation is vital for adaptive radiation therapy (ART) but remains time-consuming and user-dependent. Segment Anything Model 2 (SAM2) shows promise for prompt-based segmentation but struggles with tumor accuracy. We propose prior knowledge-based augmentation strategies to enhance SAM2 for ART. Methods: Two strategies were introduced to improve SAM2: (1) using prior MR images and annotations as contextual inputs, and (2) improving prompt robustness via random bounding box expansion and mask erosion/dilation. The resulting model, SAM2-Aug, was fine-tuned and tested on the One-Seq-Liver dataset (115 MRIs from 31 liver cancer patients), and evaluated without retraining on Mix-Seq-Abdomen (88 MRIs, 28 patients) and Mix-Seq-Brain (86 MRIs, 37 patients). Results: SAM2-Aug outperformed convolutional, transformer-based, and prompt-driven models across all datasets, achieving Dice scores of 0.86(liver), 0.89(abdomen), and 0.90(brain). It demonstrated strong generalization across tumor types and imaging sequences, with improved performance in boundary-sensitive metrics. Conclusions: Incorporating prior images and enhancing prompt diversity significantly boosts segmentation accuracy and generalizability. SAM2-Aug offers a robust, efficient solution for tumor segmentation in ART. Code and models will be released at https://github.com/apple1986/SAM2-Aug.
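The second strategy, prompt-robustness augmentation, can be sketched as random outward jitter of the bounding box plus random erosion or dilation of the mask prompt. Parameter ranges and function names here are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def augment_prompts(mask, max_expand=5, max_morph_iter=2):
    """Sketch of SAM2-Aug-style prompt augmentation: expand the tight
    bounding box by random per-side margins, and randomly erode or
    dilate the mask prompt."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max()
    x0, x1 = xs.min(), xs.max()
    e = rng.integers(0, max_expand + 1, size=4)        # random margins
    box = (max(y0 - e[0], 0), min(y1 + e[1], mask.shape[0] - 1),
           max(x0 - e[2], 0), min(x1 + e[3], mask.shape[1] - 1))
    it = int(rng.integers(1, max_morph_iter + 1))
    if rng.random() < 0.5:
        mask_aug = ndimage.binary_erosion(mask, iterations=it)
    else:
        mask_aug = ndimage.binary_dilation(mask, iterations=it)
    return box, mask_aug

mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 12:22] = True            # toy tumor mask
box, mask_aug = augment_prompts(mask)
print(box, int(mask_aug.sum()))
```

Training on such perturbed prompts makes the fine-tuned model less sensitive to imprecise boxes or masks supplied at inference time.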