
Cross-channel feature transfer 3D U-Net for automatic segmentation of the perilymph and endolymph fluid spaces in hydrops MRI.

Yoo TW, Yeo CD, Lee EJ, Oh IS

PubMed · Sep 1, 2025
The identification of endolymphatic hydrops (EH) using magnetic resonance imaging (MRI) is crucial for understanding inner ear disorders such as Meniere's disease and sudden low-frequency hearing loss. The EH ratio is calculated as the ratio of the endolymphatic fluid space to the perilymphatic fluid space. We propose a novel cross-channel feature transfer (CCFT) 3D U-Net for fully automated segmentation of the perilymphatic and endolymphatic fluid spaces in hydrops MRI. The model exhibits state-of-the-art performance in segmenting the endolymphatic fluid space by transferring magnetic resonance cisternography (MRC) features to HYDROPS-Mi2 (HYbriD of Reversed image Of Positive endolymph signal and native image of positive perilymph Signal multiplied with the heavily T2-weighted MR cisternography). Experimental results using the CCFT module showed that the segmentation performance of the perilymphatic space was 0.9459 for the Dice similarity coefficient (DSC) and 0.8975 for the intersection over union (IOU), and that of the endolymphatic space was 0.8053 for the DSC and 0.6778 for the IOU.
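
To make the reported numbers concrete, here is a minimal sketch (not the authors' code) of the Dice, IoU, and EH-ratio computations, assuming binary NumPy masks for the two fluid spaces:

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two non-empty binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union between two non-empty binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

def eh_ratio(endo_mask: np.ndarray, peri_mask: np.ndarray) -> float:
    """EH ratio: endolymphatic volume over perilymphatic volume (voxel counts)."""
    return endo_mask.sum() / peri_mask.sum()
```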

MSA2-Net: Utilizing Self-Adaptive Convolution Module to Extract Multi-Scale Information in Medical Image Segmentation

Chao Deng, Xiaosen Li, Xiao Qin

arXiv preprint · Sep 1, 2025
The nnUNet segmentation framework adeptly adjusts most hyperparameters in training scripts automatically, but it overlooks the tuning of internal hyperparameters within the segmentation network itself, which constrains the model's ability to generalize. Addressing this limitation, this study presents a novel Self-Adaptive Convolution Module that dynamically adjusts the size of the convolution kernels depending on the unique fingerprints of different datasets. This adjustment enables the MSA2-Net, when equipped with this module, to proficiently capture both global and local features within the feature maps. The Self-Adaptive Convolution Module is strategically integrated into two key components of the MSA2-Net: the Multi-Scale Convolution Bridge and the Multi-Scale Amalgamation Decoder. In the MSConvBridge, the module enhances the ability to refine outputs from various stages of the CSWin Transformer in the skip connections, effectively eliminating redundant data that could potentially impair the decoder's performance. Simultaneously, the MSADecoder, utilizing the module, excels in capturing detailed information of organs varying in size during the decoding phase. This capability ensures that the decoder's output closely reproduces the intricate details within the feature maps, thus yielding highly accurate segmentation images. MSA2-Net, bolstered by this advanced architecture, has demonstrated exceptional performance, achieving Dice coefficient scores of 86.49%, 92.56%, 93.37%, and 92.98% on the Synapse, ACDC, Kvasir, and Skin Lesion Segmentation (ISIC2017) datasets, respectively. This underscores MSA2-Net's robustness and precision in medical image segmentation tasks across various datasets.
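
The module's core idea, picking a kernel size from a per-dataset fingerprint before building the layer, can be sketched as follows; the fingerprint heuristic and layer sizes here are illustrative assumptions, not the paper's rule:

```python
import torch
import torch.nn as nn

def adaptive_kernel_size(median_object_extent_vox: float) -> int:
    """Map a dataset fingerprint (median object extent, in voxels) to an odd kernel size in {3, 5, 7}."""
    return max(3, min(7, int(round(median_object_extent_vox / 16)) * 2 + 1))

class SelfAdaptiveConv(nn.Module):
    """Convolution whose kernel size is fixed per dataset from its fingerprint."""
    def __init__(self, in_ch: int, out_ch: int, median_object_extent_vox: float):
        super().__init__()
        k = adaptive_kernel_size(median_object_extent_vox)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)

layer = SelfAdaptiveConv(32, 64, median_object_extent_vox=32.0)  # -> 5x5 kernel
```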

Deep learning model for predicting lymph node metastasis around rectal cancer based on rectal tumor core area and mesangial imaging features.

Guo L, Fu K, Wang W, Zhou L, Chen L, Jiang M

PubMed · Sep 1, 2025
Assessing lymph node metastasis (LNM) involvement in patients with rectal cancer (RC) is fundamental in disease management. In this study, we used artificial intelligence (AI) technology to develop a segmentation model that automatically segments the tumor core area and mesangial tissue from magnetic resonance T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) images collected from 122 RC patients to improve the accuracy of LNM prediction, after which radiomics machine-learning modeling was performed on the segmented ROIs. An automatic segmentation model was developed using nn-UNet. This pipeline integrates deep learning (DL), specifically 3D U-Net, for semantic segmentation; image processing techniques such as resampling, normalization, connected component analysis, and image registration; and radiomics features coupled with machine learning. The results showed that the DL segmentation method could effectively segment the tumor and mesangial areas from MR sequences (median Dice coefficient: tumor segmentation, 0.90 ± 0.08; mesorectum segmentation, 0.85 ± 0.36), and the radiological characteristics of rectal and mesangial tissues in T2WI and ADC images could help distinguish RC treatments. The nn-UNet model demonstrated promising preliminary results, achieving the highest area under the curve (AUC) values in various scenarios. In the evaluation encompassing both tumor lesions and mesorectum involvement, the model exhibited an AUC of 0.743, highlighting its strong discriminatory ability to predict a combined outcome involving both elements. Specifically targeting tumor lesions, the model achieved an AUC of 0.731, emphasizing its effectiveness in distinguishing between positive and negative cases of tumor lesions. In assessing the prediction of mesorectum involvement, the model displayed moderate predictive utility with an AUC of 0.753. The nn-UNet model demonstrated impressive performance across all evaluated scenarios: combined tumor lesions and mesorectum involvement, tumor lesions alone, and mesorectum involvement alone. The online version contains supplementary material available at 10.1186/s12880-025-01878-9.
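
The final prediction stage, radiomics features from segmented ROIs feeding a classifier evaluated by AUC, can be sketched as below; the features and labels are random placeholders, and the classifier choice is an assumption rather than the authors' exact model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(122, 50))     # placeholder radiomics features per patient
y = rng.integers(0, 2, size=122)   # placeholder LNM labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```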

Pulmonary Biomechanics in COPD: Imaging Techniques and Clinical Applications.

Aguilera SM, Chaudhary MFA, Gerard SE, Reinhardt JM, Bodduluri S

PubMed · Sep 1, 2025
The respiratory system depends on complex biomechanical processes to enable gas exchange. The mechanical properties of the lung parenchyma, airways, vasculature, and surrounding structures play an essential role in overall ventilation efficacy. These complex biomechanical processes, however, are significantly altered in chronic obstructive pulmonary disease (COPD) due to emphysematous destruction of lung parenchyma, chronic airway inflammation, and small airway obstruction. Recent advancements in computed tomography (CT) and magnetic resonance imaging (MRI) acquisition techniques, combined with sophisticated image post-processing algorithms and deep neural network integration, have enabled comprehensive quantitative assessment of lung structure, tissue deformation, and lung function at the tissue level. These methods have led to better phenotyping and therapeutic strategies and have refined our understanding of the pathological processes that compromise pulmonary function in COPD. In this review, we discuss recent developments in imaging and image processing methods for studying pulmonary biomechanics, with specific focus on clinical applications for COPD, including the assessment of regional ventilation, planning of endobronchial valve treatment, prediction of disease onset and progression, sizing of lungs for transplantation, and guidance of mechanical ventilation. These advanced image-based biomechanical measurements, when combined with clinical expertise, play a critical role in disease management and personalized therapeutic interventions for patients with COPD.
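
One of the registration-based measurements reviews like this cover, regional ventilation estimated from local volume change, is commonly computed as the Jacobian determinant of a registration-derived displacement field; a minimal sketch with a synthetic field (not code from the review):

```python
import numpy as np

def jacobian_determinant(disp: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """disp: (3, Z, Y, X) displacement field in mm; returns det(J) per voxel.
    det(J) > 1 indicates local expansion (inspiration), < 1 local contraction."""
    grads = [np.gradient(disp[i], *spacing) for i in range(3)]  # dU_i/d(z,y,x)
    J = np.stack([np.stack(g, axis=0) for g in grads], axis=0)  # (3, 3, Z, Y, X)
    J = J + np.eye(3)[:, :, None, None, None]                   # J = I + grad(u)
    return np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))      # (Z, Y, X)

disp = np.zeros((3, 8, 8, 8))          # zero motion -> det(J) == 1 everywhere
print(jacobian_determinant(disp).mean())
```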

3D Deep Learning for Virtual Orbital Defect Reconstruction: A Precise and Automated Approach.

Yu F, Liu C, Zhong C, Zeng W, Chen J, Liu W, Guo J, Tang W

PubMed · Sep 1, 2025
Accurate virtual orbital reconstruction is crucial for preoperative planning. Traditional methods, such as the mirroring technique, are unsuitable for orbital defects involving both sides of the midline and are time-consuming and labor-intensive. This study introduces a modified 3D U-Net+++ architecture for orbital defect reconstruction, aiming to enhance precision and automation. The model was trained and tested with 300 synthetic defects from cranial spiral CT scans. The method was validated on 15 clinical cases of orbital fractures and evaluated using quantitative metrics, visual assessments, and a 5-point Likert scale scored by 3 surgeons. For synthetic defect reconstruction, the network achieved a 95% Hausdorff distance (HD95) of <2.0 mm, an average symmetric surface distance (ASSD) of ∼0.02 mm, a surface Dice similarity coefficient (Surface DSC) of >0.94, a peak signal-to-noise ratio (PSNR) of >35 dB, and a structural similarity index (SSIM) of >0.98, outperforming the compared state-of-the-art networks. For clinical cases, the average 5-point Likert scale scores for structural integrity, edge consistency, and overall morphology were >4, with no significant difference between unilateral and bilateral/trans-midline defects. For clinical unilateral defect reconstruction, the HD95 was ∼2.5 mm, ASSD <0.02 mm, Surface DSC >0.91, PSNR >30 dB, and SSIM >0.99. The automatic reconstruction process took ∼10 seconds per case. In conclusion, this method offers a precise and highly automated solution for orbital defect reconstruction, particularly for bilateral and trans-midline defects. We anticipate that this method will significantly assist future clinical practice.
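
For reference, the HD95 metric quoted above can be computed from surface-distance percentiles between binary masks; a minimal sketch of the standard definition (not the authors' evaluation code), assuming non-empty masks:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance between binary masks a and b."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~binary_erosion(a)           # boundary voxels of a
    surf_b = b & ~binary_erosion(b)           # boundary voxels of b
    dt_a = distance_transform_edt(~surf_a, sampling=spacing)
    dt_b = distance_transform_edt(~surf_b, sampling=spacing)
    d_ab = dt_b[surf_a]                       # a's surface to b's surface
    d_ba = dt_a[surf_b]                       # b's surface to a's surface
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))
```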

FocalTransNet: A Hybrid Focal-Enhanced Transformer Network for Medical Image Segmentation.

Liao M, Yang R, Zhao Y, Liang W, Yuan J

PubMed · Sep 1, 2025
CNNs have demonstrated superior performance in medical image segmentation. To overcome the limitation of using only local receptive fields, previous work has attempted to integrate Transformers into convolutional network components such as encoders, decoders, or skip connections. However, these methods can only establish long-distance dependencies for some specific patterns and usually neglect the loss of fine-grained details during downsampling in multi-scale feature extraction. To address these issues, we present a novel hybrid Transformer network called FocalTransNet. Specifically, we construct a focal-enhanced (FE) Transformer module by introducing dense cross-connections into a CNN-Transformer dual-path structure and deploy the FE Transformer throughout the entire encoder. Different from existing hybrid networks that employ embedding or stacking strategies, the proposed model allows for a comprehensive extraction and deep fusion of both local and global features at different scales. Besides, we propose a symmetric patch merging (SPM) module for downsampling, which can retain fine-grained details by establishing a specific information compensation mechanism. We evaluated the proposed method on four different medical image segmentation benchmarks. The proposed method outperforms previous state-of-the-art convolutional networks, Transformers, and hybrid networks. The code for FocalTransNet is publicly available at https://github.com/nemanjajoe/FocalTransNet.
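
For context, patch-merging-style downsampling moves each 2x2 neighborhood into the channel dimension before a linear projection, so no pixels are discarded outright; the sketch below shows this baseline idea only, not the paper's SPM compensation mechanism:

```python
import torch
import torch.nn as nn

class PatchMerging(nn.Module):
    """Swin-style 2x downsampling: spatial 2x2 blocks folded into channels."""
    def __init__(self, dim: int):
        super().__init__()
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) with even H and W
        x0, x1 = x[:, 0::2, 0::2, :], x[:, 1::2, 0::2, :]
        x2, x3 = x[:, 0::2, 1::2, :], x[:, 1::2, 1::2, :]
        x = torch.cat([x0, x1, x2, x3], dim=-1)   # (B, H/2, W/2, 4C)
        return self.reduction(x)                  # (B, H/2, W/2, 2C)

y = PatchMerging(64)(torch.randn(1, 16, 16, 64))  # -> (1, 8, 8, 128)
```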

DeepNuParc: A novel deep clustering framework for fine-scale parcellation of brain nuclei using diffusion MRI tractography.

He H, Zhu C, Zhang L, Liu Y, Xu X, Chen Y, Zekelman L, Rushmore J, Rathi Y, Makris N, O'Donnell LJ, Zhang F

PubMed · Sep 1, 2025
Brain nuclei are clusters of anatomically distinct neurons that serve as important hubs for processing and relaying information in various neural circuits. Fine-scale parcellation of the brain nuclei is vital for a comprehensive understanding of their anatomico-functional correlations. Diffusion MRI tractography is an advanced imaging technique that can estimate the brain's white matter structural connectivity to potentially reveal the topography of the nuclei of interest for studying their subdivisions. In this work, we present a deep clustering pipeline, namely DeepNuParc, to perform automated, fine-scale parcellation of brain nuclei using diffusion MRI tractography. First, we incorporate a newly proposed deep learning approach to enable accurate segmentation of the nuclei of interest directly on the dMRI data. Next, we design a novel streamline clustering-based structural connectivity feature for a robust representation of voxels within the nuclei. Finally, we improve the popular joint dimensionality reduction and k-means clustering approach to enable nuclei parcellation at a finer scale. We demonstrate DeepNuParc on two important brain structures, i.e., the amygdala and the thalamus, which are known to have multiple anatomically and functionally distinct nucleus subdivisions. Experimental results show that DeepNuParc enables consistent parcellation of the nuclei into multiple parcels across multiple subjects and achieves good correspondence with the widely used coarse-scale atlases. Our code is available at https://github.com/HarlandZZC/deep_nuclei_parcellation.
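
The final parcellation step, dimensionality reduction followed by k-means over per-voxel connectivity features, can be sketched as below (sequentially here, rather than the joint formulation the paper improves on); the feature matrix is random and the component/cluster counts are placeholders, not DeepNuParc's settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
feats = rng.normal(size=(5000, 128))   # one connectivity feature row per voxel

emb = PCA(n_components=10).fit_transform(feats)              # reduce dimension
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(emb)
print(np.bincount(labels))             # voxels assigned to each parcel
```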

RibPull: Implicit Occupancy Fields and Medial Axis Extraction for CT Ribcage Scans

Emmanouil Nikolakakis, Amine Ouasfi, Julie Digne, Razvan Marinescu

arXiv preprint · Sep 1, 2025
We present RibPull, a methodology that utilizes implicit occupancy fields to bridge computational geometry and medical imaging. Implicit 3D representations use continuous functions that handle sparse and noisy data more effectively than discrete methods. While voxel grids are standard for medical imaging, they suffer from resolution limitations, topological information loss, and inefficient handling of sparsity. Coordinate functions preserve complex geometrical information, offer a better representation of sparse data, and allow for further morphological operations. Implicit scene representations enable neural networks to encode entire 3D scenes within their weights. The result is a continuous function that can implicitly compensate for sparse signals and infer further information about the 3D scene when passed any combination of 3D coordinates as input. In this work, we use neural occupancy fields that predict whether a 3D point lies inside or outside an object to represent CT-scanned ribcages. We also apply a Laplacian-based contraction to extract the medial axis of the ribcage, thus demonstrating a geometrical operation that benefits greatly from continuous coordinate-based 3D scene representations versus voxel-based representations. We evaluate our methodology on 20 medical scans from the RibSeg dataset, which is itself an extension of the RibFrac dataset. We will release our code upon publication.
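
A neural occupancy field of the kind described is essentially an MLP from a 3D coordinate to an inside-probability; a hedged sketch with illustrative layer sizes (not RibPull's architecture):

```python
import torch
import torch.nn as nn

class OccupancyField(nn.Module):
    """MLP mapping a 3D point to the probability that it lies inside the shape."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(xyz))  # inside-probability per point

pts = torch.rand(1024, 3) * 2 - 1            # query points in [-1, 1]^3
occ = OccupancyField()(pts)                  # (1024, 1) occupancy probabilities
```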

Progesterone for Traumatic Brain Injury, Experimental Clinical Treatment III Trial Revisited: Objective Classification of Traumatic Brain Injury With Brain Imaging Segmentation and Biomarker Levels.

Cheong S, Gupta R, Kadaba Sridhar S, Hall AJ, Frankel M, Wright DW, Sham YY, Samadani U

PubMed · Sep 1, 2025
This post hoc study of the Progesterone for Traumatic Brain Injury, Experimental Clinical Treatment (ProTECT) III trial investigates whether improving traumatic brain injury (TBI) classification, using serum biomarkers (glial fibrillary acidic protein [GFAP] and ubiquitin carboxyl-terminal esterase L1 [UCH-L1]) and algorithmically assessed total lesion volume, could identify a subset of responders to progesterone treatment, beyond broad measures like the Glasgow Coma Scale (GCS) and Glasgow Outcome Scale-Extended (GOS-E), which may fail to capture subtle changes in TBI recovery. Brain lesion volumes on CT scans were quantified using the Brain Lesion Analysis and Segmentation Tool for CT. Patients were classified into true-positive and true-negative groups based on an optimization scheme to determine a threshold that maximizes agreement between radiological assessment and objectively measured lesion volume. True-positives were further categorized into low (>0.2-10 mL), medium (>10-50 mL), and high (>50 mL) lesion volumes for analysis with protein biomarkers and injury severity. Correlation analyses linked Rotterdam scores (RSs) with biomarker levels and lesion volumes, whereas Welch's t-test evaluated biomarker differences between groups and progesterone's effects. Setting: forty-nine level 1 trauma centers in the United States. Patients: patients with moderate-to-severe TBI. Intervention: progesterone. GFAP and UCH-L1 levels were significantly higher in true-positive cases with low to medium lesion volume. Only UCH-L1 differed between progesterone and placebo groups at 48 hours. Both biomarkers and lesion volume in the true-positive group correlated with the RS. No sex-specific or treatment differences were found. This study reaffirms elevated levels of GFAP and UCH-L1 as biomarkers for detecting TBI in patients with brain lesions and for predicting clinical outcomes. Despite improved classification using CT-imaging segmentation and serum biomarkers, we did not identify a subset of progesterone responders within 24 or 48 hours of progesterone treatment. More rigorous and quantifiable measures for classifying the nature of injury may be needed to enable the development of therapeutics, as neither serum markers nor algorithmic CT analysis performed better than the older Rotterdam or GCS metrics.
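
The threshold-optimization step, scanning lesion-volume cutoffs for the one that maximizes agreement with the radiological read, can be sketched as below on synthetic data; the search grid and the noise model are assumptions, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
volumes_ml = rng.exponential(15.0, size=200)   # algorithmic lesion volumes (mL)
radiology_positive = volumes_ml + rng.normal(0, 5, 200) > 8  # noisy expert read

best_t, best_acc = 0.0, -1.0
for t in np.linspace(0.0, 50.0, 501):          # candidate volume cutoffs
    acc = np.mean((volumes_ml > t) == radiology_positive)
    if acc > best_acc:
        best_t, best_acc = t, acc
print(f"threshold {best_t:.1f} mL, agreement {best_acc:.2%}")
```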

Can General-Purpose Omnimodels Compete with Specialists? A Case Study in Medical Image Segmentation

Yizhe Zhang, Qiang Chen, Tao Zhou

arXiv preprint · Aug 31, 2025
The emergence of powerful, general-purpose omnimodels capable of processing diverse data modalities has raised a critical question: can these "jack-of-all-trades" systems perform on par with highly specialized models in knowledge-intensive domains? This work investigates this question within the high-stakes field of medical image segmentation. We conduct a comparative study analyzing the zero-shot performance of a state-of-the-art omnimodel (Gemini 2.5 Pro, the "Nano Banana" model) against domain-specific deep learning models on three distinct tasks: polyp (endoscopy), retinal vessel (fundus), and breast tumor (ultrasound) segmentation. Our study focuses on performance at the extremes by curating subsets of the "easiest" and "hardest" cases based on the specialist models' accuracy. Our findings reveal a nuanced and task-dependent landscape. For polyp and breast tumor segmentation, specialist models excel on easy samples, but the omnimodel demonstrates greater robustness on hard samples where specialists fail catastrophically. Conversely, for the fine-grained task of retinal vessel segmentation, the specialist model maintains superior performance across both easy and hard cases. Intriguingly, qualitative analysis suggests omnimodels may possess higher sensitivity, identifying subtle anatomical features missed by human annotators. Our results indicate that while current omnimodels are not yet a universal replacement for specialists, their unique strengths suggest a potential complementary role with specialist models, particularly in enhancing robustness on challenging edge cases.
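
The case-curation step, ranking cases by the specialist model's per-case accuracy and keeping the extremes, can be sketched as follows; the scores and subset sizes are placeholders, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
case_ids = np.arange(300)
specialist_dice = rng.beta(5, 2, size=300)   # placeholder per-case Dice scores

order = np.argsort(specialist_dice)
hardest = case_ids[order[:30]]               # lowest specialist Dice
easiest = case_ids[order[-30:]]              # highest specialist Dice
print(len(hardest), len(easiest))
```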