
GH-UNet: group-wise hybrid convolution-ViT for robust medical image segmentation.

Wang S, Li G, Gao M, Zhuo L, Liu M, Ma Z, Zhao W, Fu X

PubMed · Jul 10, 2025
Medical image segmentation is vital for accurate diagnosis. While U-Net-based models are effective, they struggle to capture long-range dependencies in complex anatomy. We propose GH-UNet, a Group-wise Hybrid Convolution-ViT model within the U-Net framework, to address this limitation. GH-UNet integrates a hybrid convolution-Transformer encoder for both local detail and global context modeling, a Group-wise Dynamic Gating (GDG) module for adaptive feature weighting, and a cascaded decoder for multi-scale integration. Both the encoder and GDG are modular, enabling compatibility with various CNN or ViT backbones. Extensive experiments on five public datasets and one private dataset show that GH-UNet consistently achieves superior performance. On ISIC2016, it surpasses H2Former with gains of 1.37% in Dice and 1.94% in IoU, while using only 38% of the parameters and 49.61% of the FLOPs. The code is freely accessible at https://github.com/xiachashuanghua/GH-UNet.
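The abstract does not spell out the GDG design; as a rough, hypothetical sketch (module name, group count, and reduction ratio are all assumptions, not the authors' released code), a group-wise gating block might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class GroupwiseDynamicGating(nn.Module):
    """Hypothetical sketch of a group-wise gating block: channels are split
    into groups, and each group is re-weighted by a learned, input-dependent
    scalar gate (squeeze-and-excitation style). Not the authors' GDG code."""

    def __init__(self, channels: int, groups: int = 4, reduction: int = 8):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze spatial dims
        self.fc = nn.Sequential(                     # per-group gate values
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, groups),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gates = self.fc(self.pool(x).flatten(1))     # (b, groups) in [0, 1]
        gates = gates.repeat_interleave(c // self.groups, dim=1)
        return x * gates.view(b, c, 1, 1)

x = torch.randn(2, 64, 32, 32)
print(GroupwiseDynamicGating(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```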

HNOSeg-XS: Extremely Small Hartley Neural Operator for Efficient and Resolution-Robust 3D Image Segmentation

Ken C. L. Wong, Hongzhi Wang, Tanveer Syeda-Mahmood

arXiv preprint · Jul 10, 2025
In medical image segmentation, convolutional neural networks (CNNs) and transformers are dominant. For CNNs, given the local receptive fields of convolutional layers, long-range spatial correlations are captured through consecutive convolutions and pooling. However, as the computational cost and memory footprint can be prohibitively large, 3D models can only afford fewer layers than 2D models, with reduced receptive fields and abstraction levels. For transformers, although long-range correlations can be captured by multi-head attention, its quadratic complexity with respect to input size is computationally demanding. Therefore, either model may require input size reduction to allow more filters and layers for better segmentation. Nevertheless, given their discrete nature, models trained with patch-wise training or image downsampling may produce suboptimal results when applied at higher resolutions. To address this issue, here we propose the resolution-robust HNOSeg-XS architecture. We model image segmentation by learnable partial differential equations through the Fourier neural operator, which has the zero-shot super-resolution property. By replacing the Fourier transform with the Hartley transform and reformulating the problem in the frequency domain, we created the HNOSeg-XS model, which is resolution robust, fast, memory efficient, and extremely parameter efficient. When tested on the BraTS'23, KiTS'23, and MVSeg'23 datasets with a Tesla V100 GPU, HNOSeg-XS showed its superior resolution robustness with fewer than 34.7k model parameters. It also achieved the overall best inference time (< 0.24 s) and memory efficiency (< 1.8 GiB) compared to the tested CNN and transformer models.
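The core trick, replacing the complex-valued Fourier transform with the real-valued Hartley transform, can be illustrated in a few lines. This is a minimal sketch of the transform itself, computed via the FFT identity H{x} = Re(F{x}) - Im(F{x}), not the HNOSeg-XS architecture:

```python
import torch

def dht2(x: torch.Tensor) -> torch.Tensor:
    """Discrete Hartley transform over the last two (spatial) dims via the
    FFT: H{x} = Re(F{x}) - Im(F{x}). Real-to-real, which avoids the complex
    weights a Fourier neural operator needs."""
    X = torch.fft.fft2(x, norm="ortho")
    return X.real - X.imag

def idht2(h: torch.Tensor) -> torch.Tensor:
    """The orthonormal DHT is an involution: applying it twice recovers x."""
    return dht2(h)

x = torch.randn(1, 8, 64, 64)
assert torch.allclose(idht2(dht2(x)), x, atol=1e-4)
```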

FF Swin-Unet: a strategy for automated segmentation and severity scoring of NAFLD.

Fan L, Lei Y, Song F, Sun X, Zhang Z

PubMed · Jul 10, 2025
Non-alcoholic fatty liver disease (NAFLD) is a significant risk factor for liver cancer and cardiovascular diseases, imposing substantial social and economic burdens. Computed tomography (CT) scans are crucial for diagnosing NAFLD and assessing its severity. However, current manual measurement techniques require considerable human effort and resources from radiologists, and there is a lack of standardized methods for classifying the severity of NAFLD in existing research. To address these challenges, we propose a novel method for NAFLD segmentation and automated severity scoring. The method consists of three key modules: (1) The Semi-automatization nnU-Net Module (SNM) constructs a high-quality dataset by combining manual annotations with semi-automated refinement; (2) The Focal Feature Fusion Swin-Unet Module (FSM) enhances liver and spleen segmentation through multi-scale feature fusion and Swin Transformer-based architectures; (3) The Automated Severity Scoring Module (ASSM) integrates segmentation results with radiological features to classify NAFLD severity. These modules are embedded in a Flask-RESTful API-based system, enabling users to upload abdominal CT data for automated preprocessing, segmentation, and scoring. The Focal Feature Fusion Swin-Unet (FF Swin-Unet) method significantly improves segmentation accuracy, achieving a Dice similarity coefficient (DSC) of 95.64% and a 95th percentile Hausdorff distance (HD95) of 15.94. The accuracy of the automated severity scoring is 90%. With model compression and ONNX deployment, each case is evaluated in approximately 5 seconds. Compared to manual diagnosis, the system can process a large volume of data simultaneously, rapidly, and efficiently while maintaining the same level of diagnostic accuracy, significantly reducing the workload of medical professionals. Our research demonstrates that the proposed system has high accuracy in processing large volumes of CT data and providing automated NAFLD severity scores quickly and efficiently. This method has the potential to significantly reduce the workload of medical professionals and holds immense clinical application potential.
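For reference, the two segmentation metrics quoted above (DSC and HD95) can be computed as follows. This is a generic sketch, not the paper's evaluation code; it uses a volume-based rather than surface-based HD95, assumes non-empty masks, and defaults to unit voxel spacing:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance via distance transforms.
    Simplified: measured over all foreground voxels rather than extracted
    surfaces, and assumes both masks are non-empty."""
    d_to_gt = distance_transform_edt(~gt, sampling=spacing)
    d_to_pred = distance_transform_edt(~pred, sampling=spacing)
    d1 = d_to_gt[pred]     # pred voxels -> nearest gt voxel
    d2 = d_to_pred[gt]     # gt voxels -> nearest pred voxel
    return float(np.percentile(np.hstack([d1, d2]), 95))

gt = np.zeros((32, 32, 32), bool); gt[8:20, 8:20, 8:20] = True
pred = np.zeros_like(gt); pred[9:21, 8:20, 8:20] = True
print(round(dice(pred, gt), 3), round(hd95(pred, gt), 2))
```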

Integrative multimodal ultrasound and radiomics for early prediction of neoadjuvant therapy response in breast cancer: a clinical study.

Wang S, Liu J, Song L, Zhao H, Wan X, Peng Y

PubMed · Jul 9, 2025
This study aimed to develop an early predictive model for neoadjuvant therapy (NAT) response in breast cancer by integrating multimodal ultrasound (conventional B-mode, shear-wave elastography, and contrast-enhanced ultrasound) and radiomics with clinical-pathological data, and to evaluate its predictive accuracy after two cycles of NAT. This retrospective study included 239 breast cancer patients receiving neoadjuvant therapy, divided into training (n = 167) and validation (n = 72) cohorts. Multimodal ultrasound, comprising B-mode, shear-wave elastography (SWE), and contrast-enhanced ultrasound (CEUS), was performed at baseline and after two cycles. Tumors were segmented using a U-Net-based deep learning model with radiologist adjustment, and radiomic features were extracted via PyRadiomics. Candidate variables were screened using univariate analysis and multicollinearity checks, followed by LASSO and stepwise logistic regression to build three models: a clinical-ultrasound model, a radiomics-only model, and a combined model. Model performance for early response prediction was assessed using ROC analysis. In the training cohort (n = 167), Model_Clinic achieved an AUC of 0.85, with HER2 positivity, maximum tumor stiffness (Emax), stiffness heterogeneity (Estd), and the CEUS "radiation sign" emerging as independent predictors (all P < 0.05). The radiomics model showed moderate performance at baseline (AUC 0.69) but improved after two cycles (AUC 0.83), and a model using radiomic feature changes achieved an AUC of 0.79. Model_Combined demonstrated the best performance with a training AUC of 0.91 (sensitivity 89.4%, specificity 82.9%). In the validation cohort (n = 72), all models showed comparable AUCs (Model_Combined ~ 0.90) without significant degradation, and Model_Combined significantly outperformed Model_Clinic and Model_RSA (DeLong P = 0.006 and 0.042, respectively). In our study, integrating multimodal ultrasound and radiomic features improved the early prediction of NAT response in breast cancer, and could provide valuable information to enable timely treatment adjustments and more personalized management strategies.
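The LASSO-then-logistic-regression pipeline described above can be approximated with scikit-learn. Everything below (feature counts, penalty strength, the synthetic data) is a placeholder assumption, since the abstract gives no settings:

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: radiomic + clinical-US columns, binary response label.
rng = np.random.default_rng(0)
X = rng.normal(size=(167, 120))      # 167 training cases, 120 candidate features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=167) > 0).astype(int)

# L1-penalised logistic regression stands in for the LASSO selection step,
# followed by a plain logistic model on the surviving features.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model = make_pipeline(StandardScaler(), SelectFromModel(lasso), LogisticRegression())
print("CV AUC: %.2f" % cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```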

SimCortex: Collision-free Simultaneous Cortical Surfaces Reconstruction

Kaveh Moradkhani, R Jarrett Rushmore, Sylvain Bouix

arXiv preprint · Jul 9, 2025
Accurate cortical surface reconstruction from magnetic resonance imaging (MRI) data is crucial for reliable neuroanatomical analyses. Current methods have to contend with complex cortical geometries, strict topological requirements, and often produce surfaces with overlaps, self-intersections, and topological defects. To overcome these shortcomings, we introduce SimCortex, a deep learning framework that simultaneously reconstructs all brain surfaces (left/right white-matter and pial) from T1-weighted (T1w) MRI volumes while preserving topological properties. Our method first segments the T1w image into a nine-class tissue label map. From these segmentations, we generate subject-specific, collision-free initial surface meshes. These surfaces serve as precise initializations for subsequent multiscale diffeomorphic deformations. Employing stationary velocity fields (SVFs) integrated via scaling-and-squaring, our approach ensures smooth, topology-preserving transformations with significantly reduced surface collisions and self-intersections. Evaluations on standard datasets demonstrate that SimCortex dramatically reduces surface overlaps and self-intersections, surpassing current methods while maintaining state-of-the-art geometric accuracy.
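Scaling-and-squaring integration of a stationary velocity field is a standard algorithm: scale the field down by 2^N, then compose the small displacement with itself N times. The 2-D numpy sketch below illustrates the idea under that standard formulation (the authors work in 3-D, and this is not the SimCortex implementation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def integrate_svf(v: np.ndarray, n_steps: int = 6) -> np.ndarray:
    """Scaling-and-squaring: approximate phi = exp(v) for a stationary
    velocity field v of shape (2, H, W) in pixel units. Start from the
    small displacement u = v / 2**n_steps, then square n_steps times
    via u <- u + u o (id + u)."""
    u = v / (2.0 ** n_steps)
    grid = np.mgrid[0:v.shape[1], 0:v.shape[2]].astype(float)  # identity map
    for _ in range(n_steps):
        warped = np.stack([
            map_coordinates(u[i], grid + u, order=1, mode="nearest")
            for i in range(2)
        ])
        u = u + warped          # compose the displacement with itself
    return u                    # displacement field of the final map

v = 2.0 * np.random.randn(2, 64, 64)
print(integrate_svf(v).shape)   # (2, 64, 64)
```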

Airway Segmentation Network for Enhanced Tubular Feature Extraction

Qibiao Wu, Yagang Wang, Qian Zhang

arXiv preprint · Jul 9, 2025
Manual annotation of airway regions in computed tomography images is a time-consuming and expertise-dependent task. Automatic airway segmentation is therefore a prerequisite for enabling rapid bronchoscopic navigation and the clinical deployment of bronchoscopic robotic systems. Although convolutional neural network methods have gained considerable attention in airway segmentation, the unique tree-like structure of airways poses challenges for conventional and deformable convolutions, which often fail to focus on fine airway structures, leading to missed segments and discontinuities. To address this issue, this study proposes a novel tubular feature extraction network, named TfeNet. TfeNet introduces a novel direction-aware convolution operation that first applies spatial rotation transformations to adjust the sampling positions of linear convolution kernels. The deformed kernels are then represented as line segments or polylines in 3D space. Furthermore, a tubular feature fusion module (TFFM) is designed based on asymmetric convolution and residual connection strategies, enhancing the network's focus on subtle airway structures. Extensive experiments conducted on one public dataset and two datasets used in airway segmentation challenges demonstrate that the proposed TfeNet achieves more accurate and continuous airway structure predictions compared with existing methods. In particular, TfeNet achieves the highest overall score of 94.95% on the current largest airway segmentation dataset, Airway Tree Modeling (ATM22), and demonstrates advanced performance on the lung fibrosis dataset (AIIB23). The code is available at https://github.com/QibiaoWu/TfeNet.
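The asymmetric-convolution idea behind the TFFM can be illustrated with a small residual block that sums three orthogonal tube-like 1-D kernels with a full 3x3x3 path. Kernel layout and widths below are guesses for illustration, not the released TfeNet code:

```python
import torch
import torch.nn as nn

class AsymTubularBlock(nn.Module):
    """Sketch of an asymmetric-convolution residual block: three orthogonal
    1-D (tube-like) kernels plus a 3x3x3 path, combined with a residual
    connection. A hypothetical layout, not the authors' TFFM."""

    def __init__(self, channels: int):
        super().__init__()
        self.full = nn.Conv3d(channels, channels, 3, padding=1)
        self.kz = nn.Conv3d(channels, channels, (3, 1, 1), padding=(1, 0, 0))
        self.ky = nn.Conv3d(channels, channels, (1, 3, 1), padding=(0, 1, 0))
        self.kx = nn.Conv3d(channels, channels, (1, 1, 3), padding=(0, 0, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(x + self.full(x) + self.kz(x) + self.ky(x) + self.kx(x))

x = torch.randn(1, 16, 32, 32, 32)
print(AsymTubularBlock(16)(x).shape)  # torch.Size([1, 16, 32, 32, 32])
```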

Impact of polymer source variations on hydrogel structure and product performance in dexamethasone-loaded ophthalmic inserts.

VandenBerg MA, Zaman RU, Plavchak CL, Smith WC, Nejad HB, Beringhs AO, Wang Y, Xu X

PubMed · Jul 9, 2025
Localized drug delivery can enhance therapeutic efficacy while minimizing systemic side effects, making sustained-release ophthalmic inserts an attractive alternative to traditional eye drops. Such inserts offer improved patient compliance through prolonged therapeutic effects and a reduced need for frequent administration. This study focuses on dexamethasone-containing ophthalmic inserts. These inserts utilize a key excipient, polyethylene glycol (PEG), which forms a hydrogel upon contact with tear fluid. Developing generic equivalents of PEG-based inserts is challenging due to difficulties in characterizing inactive ingredients and the absence of standardized physicochemical characterization methods to demonstrate similarity. To address this gap, a suite of analytical approaches was applied to both PEG precursor materials sourced from different vendors and manufactured inserts. ¹H NMR, FTIR, MALDI, and SEC revealed variations in end-group functionalization, impurity content, and molecular weight distribution of the excipient. These differences led to changes in the finished insert network properties such as porosity, pore size and structure, gel mechanical strength, and crystallinity, which were corroborated by X-ray microscopy, AI-based image analysis, thermal, mechanical, and density measurements. In vitro release testing revealed distinct drug release profiles across formulations, with swelling rate correlated with release rate (i.e., faster release with rapid swelling). The use of non-micronized and micronized dexamethasone also contributed to release profile differences. Through comprehensive characterization of these PEG-based dexamethasone inserts, correlations between polymer quality, hydrogel microstructure, and release kinetics were established. The study highlights how excipient differences can alter product performance, emphasizing the importance of thorough analysis in developing generic equivalents of complex drug products.

Integrating radiomic texture analysis and deep learning for automated myocardial infarction detection in cine-MRI.

Xu W, Shi X

PubMed · Jul 8, 2025
Robust differentiation between infarcted and normal myocardial tissue is essential for improving diagnostic accuracy and personalizing treatment in myocardial infarction (MI). This study proposes a hybrid framework combining radiomic texture analysis with deep learning-based segmentation to enhance MI detection on non-contrast cine cardiac magnetic resonance (CMR) imaging. The approach incorporates radiomic features derived from the Gray-Level Co-Occurrence Matrix (GLCM) and Gray-Level Run Length Matrix (GLRLM) methods into a modified U-Net segmentation network. A three-stage feature selection pipeline was employed, followed by classification using multiple machine learning models. Early and intermediate fusion strategies were integrated into the hybrid architecture. The model was validated on cine-CMR data from the SCD and Kaggle datasets. Joint Entropy, Max Probability, and RLNU emerged as the most discriminative features, with Joint Entropy achieving the highest AUC (0.948). The hybrid model outperformed standalone U-Net in segmentation (Dice = 0.887, IoU = 0.803, HD95 = 4.48 mm) and classification (accuracy = 96.30%, AUC = 0.97, precision = 0.96, recall = 0.94, F1-score = 0.96). Dimensionality reduction via PCA and t-SNE confirmed distinct class separability. Correlation coefficients (r = 0.95-0.98) and Bland-Altman plots demonstrated high agreement between predicted and reference infarct sizes. Integrating radiomic features into a deep learning segmentation pipeline improves MI detection and interpretability in cine-CMR. This scalable and explainable hybrid framework holds potential for broader applications in multimodal cardiac imaging and automated myocardial tissue characterization.
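Two of the highlighted GLCM features, Joint Entropy and Max Probability, can be extracted with scikit-image as sketched below. The ROI, gray-level count, and offsets are illustrative assumptions rather than the paper's settings:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 32, size=(64, 64)).astype(np.uint8)  # stand-in myocardial ROI

# Normalised, symmetric GLCM over four directions at distance 1.
glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=32, symmetric=True, normed=True)

p = glcm.mean(axis=(2, 3))                       # average over distance/angle
max_probability = p.max()
joint_entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
contrast = graycoprops(glcm, "contrast").mean()  # built-in Haralick property
print(joint_entropy, max_probability, contrast)
```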

A novel UNet-SegNet and vision transformer architecture for efficient segmentation and classification in medical imaging.

Tongbram S, Shimray BA, Singh LS

PubMed · Jul 8, 2025
Medical imaging has become an essential tool in the diagnosis and treatment of various diseases, and provides critical insights through ultrasound, MRI, and X-ray modalities. Despite its importance, challenges remain in the accurate segmentation and classification of complex structures owing to factors such as low contrast, noise, and irregular anatomical shapes. This study addresses these challenges by proposing a novel hybrid deep learning model that integrates the strengths of Convolutional Autoencoders (CAE), UNet, and SegNet architectures. In the preprocessing phase, a Convolutional Autoencoder is used to effectively reduce noise while preserving essential image details, ensuring that the images used for segmentation and classification are of high quality. The ability of CAE to denoise images while retaining critical features enhances the accuracy of the subsequent analysis. The developed model employs UNet for multiscale feature extraction and SegNet for precise boundary reconstruction, with Dynamic Feature Fusion integrated at each skip connection to dynamically weight and combine the feature maps from the encoder and decoder. This ensures that both global and local features are effectively captured, while emphasizing the critical regions for segmentation. To further enhance the model's performance, the Hybrid Emperor Penguin Optimizer (HEPO) was employed for feature selection, while the Hybrid Vision Transformer with Convolutional Embedding (HyViT-CE) was used for the classification task. This hybrid approach allows the model to maintain high accuracy across different medical imaging tasks. The model was evaluated using three major datasets: brain tumor MRI, breast ultrasound, and chest X-rays. The results demonstrate exceptional performance, achieving an accuracy of 99.92% for brain tumor segmentation, 99.67% for breast cancer detection, and 99.93% for chest X-ray classification. These outcomes highlight the ability of the model to deliver reliable and accurate diagnostics across various medical contexts, underscoring its potential as a valuable tool in clinical settings. The findings of this study will contribute to advancing deep learning applications in medical imaging, addressing existing research gaps, and offering a robust solution for improved patient care.
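As a point of reference for the preprocessing stage, a minimal denoising convolutional autoencoder might look like the sketch below. Depth, channel widths, and the MSE objective are assumptions, since the paper's CAE configuration is not given here:

```python
import torch
import torch.nn as nn

class DenoisingCAE(nn.Module):
    """Minimal convolutional autoencoder for denoising: a strided-conv
    encoder compresses the image, and a transposed-conv decoder
    reconstructs it. A generic stand-in for the paper's CAE."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = DenoisingCAE()
clean = torch.rand(8, 1, 128, 128)
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)
loss = nn.functional.mse_loss(model(noisy), clean)  # reconstruct the clean image
loss.backward()
```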

Inter-AI Agreement in Measuring Cine MRI-Derived Cardiac Function and Motion Patterns: A Pilot Study.

Lin K, Sarnari R, Gordon DZ, Markl M, Carr JC

PubMed · Jul 8, 2025
Manually analyzing a series of MRI images to obtain information about the heart's motion is a time-consuming and labor-intensive task. Recently, many AI-driven tools have been used to automatically analyze cardiac MRI. However, it is still unknown whether the results generated by these tools are consistent. The aim of the present study was to investigate the agreement of AI-powered automated tools for measuring cine MRI-derived cardiac function and motion indices. Cine MRI datasets of 23 healthy volunteers (10 males, 32.7 ± 11.3 years) were processed using heart deformation analysis (HDA, Trufistrain) and Circle CVI 42. The left and right ventricular (LV/RV) end-diastolic volume (LVEDV and RVEDV), end-systolic volume (LVESV and RVESV), stroke volume (LVSV and RVSV), cardiac output (LVCO and RVCO), ejection fraction (LVEF and RVEF), LV mass (LVM), LV global strain, strain rate, displacement, and velocity were calculated without interventions. Agreements and discrepancies of indices acquired with the two tools were evaluated from various aspects using t-tests, Pearson correlation coefficient (r), intraclass correlation coefficient (ICC), and coefficient of variation (CoV). Systematic biases for measuring cardiac function and motion indices were observed. Among global cardiac function indices, LVEF (56.9% ± 6.4 vs. 57.8% ± 5.7, p = 0.433, r = 0.609, ICC = 0.757, CoV = 6.7%) and LVM (82.7 g ± 21.6 vs. 82.6 g ± 18.7, p = 0.988, r = 0.923, ICC = 0.956, CoV = 11.7%) acquired with HDA and Circle appeared to be interchangeable. Among cardiac motion indices, circumferential strain rate demonstrated good agreement between the two tools (97 ± 14.6 vs. 97.8 ± 13.6, p = 0.598, r = 0.89, ICC = 0.943, CoV = 5.1%). Cine MRI-derived cardiac function and motion indices obtained using different AI-powered image processing tools are related but may also differ. Such variations should be considered when evaluating results sourced from different studies.
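The agreement statistics reported above (Pearson r, ICC, CoV, Bland-Altman) can be reproduced generically. The sketch below assumes ICC(2,1) (two-way random effects, absolute agreement, single rater) and one common CoV definition, neither of which the abstract confirms, and the data are synthetic stand-ins:

```python
import numpy as np
from scipy import stats

def agreement(a: np.ndarray, b: np.ndarray) -> dict:
    """Pairwise agreement between two raters/tools: Pearson r, ICC(2,1)
    from the two-way ANOVA mean squares, a common CoV definition, and
    Bland-Altman bias with 95% limits of agreement."""
    n = len(a)
    ratings = np.stack([a, b], axis=1)                # (subjects, 2 tools)
    grand = ratings.mean()
    msr = 2 * np.var(ratings.mean(axis=1), ddof=1)    # between-subject MS (k = 2)
    msc = n * np.var(ratings.mean(axis=0), ddof=1)    # between-tool MS
    mse = (((ratings - ratings.mean(1, keepdims=True)
             - ratings.mean(0, keepdims=True) + grand) ** 2).sum() / (n - 1))
    diff = a - b
    return {
        "r": stats.pearsonr(a, b)[0],
        "ICC(2,1)": (msr - mse) / (msr + mse + 2 * (msc - mse) / n),
        "CoV%": 100 * diff.std(ddof=1) / (np.sqrt(2) * grand),  # within-pair CoV
        "bias": diff.mean(),
        "LoA": 1.96 * diff.std(ddof=1),               # Bland-Altman half-width
    }

rng = np.random.default_rng(1)
lvef_hda = rng.normal(57, 6, 23)                      # hypothetical LVEF values
lvef_circle = lvef_hda + rng.normal(0.9, 3.0, 23)
print(agreement(lvef_hda, lvef_circle))
```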
