Page 126 of 1621612 results

Slim UNETR++: A lightweight 3D medical image segmentation network for medical image analysis.

Jin J, Yang S, Tong J, Zhang K, Wang Z

PubMed · Jun 2, 2025
Convolutional neural network (CNN) models, such as U-Net, V-Net, and DeepLab, have achieved remarkable results across various medical imaging modalities, including ultrasound. Additionally, hybrid Transformer-based segmentation methods have shown great potential in medical image analysis. Despite the breakthroughs in feature extraction enabled by self-attention mechanisms, these methods are computationally intensive, especially for three-dimensional medical imaging, posing significant challenges to graphics processing unit (GPU) hardware. Consequently, the demand for lightweight models is increasing. To address this issue, we designed a high-accuracy yet lightweight model that combines the strengths of CNNs and Transformers. We introduce Slim UNEt TRansformers++ (Slim UNETR++), which builds upon Slim UNETR by incorporating Medical ConvNeXt (MedNeXt), Spatial-Channel Attention (SCA), and Efficient Paired-Attention (EPA) modules. This integration leverages the advantages of both CNN and Transformer architectures to enhance model accuracy. The core component of Slim UNETR++ is the Slim UNETR++ block, which facilitates efficient information exchange through a sparse self-attention mechanism and low-cost representation aggregation. We also introduce throughput as a performance metric to quantify data processing speed. Experimental results demonstrate that Slim UNETR++ outperforms other models in terms of accuracy and model size. On the BraTS2021 dataset, Slim UNETR++ achieved a Dice accuracy of 93.12% and a 95% Hausdorff distance (HD95) of 4.23 mm, significantly surpassing mainstream methods such as Swin UNETR.
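The Dice accuracy reported above is the standard overlap metric for comparing a predicted segmentation mask against the ground truth. As a minimal illustration (not code from the paper), it can be computed with numpy:

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two toy 3D masks that overlap on 2 of 3 foreground voxels each
pred = np.zeros((4, 4, 4), dtype=bool)
target = np.zeros((4, 4, 4), dtype=bool)
pred[0, 0, :3] = True
target[0, 0, 1:4] = True
print(round(dice_score(pred, target), 3))  # 2*2/(3+3) ≈ 0.667
```

The HD95 metric quoted alongside it measures boundary agreement instead of volume overlap, which is why papers typically report both.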

MRI radiomics based on paraspinal muscle for predicting postoperative outcomes in lumbar degenerative spondylolisthesis.

Yu Y, Xu W, Li X, Zeng X, Su Z, Wang Q, Li S, Liu C, Wang Z, Wang S, Liao L, Zhang J

PubMed · Jun 2, 2025
This study aims to develop a paraspinal muscle-based radiomics model using a machine learning approach and assess its utility in predicting postoperative outcomes among patients with lumbar degenerative spondylolisthesis (LDS). This retrospective study included a total of 155 patients diagnosed with LDS who underwent single-level posterior lumbar interbody fusion (PLIF) surgery between January 2021 and October 2023. The patients were divided into training and test cohorts in a ratio of 8:2. Radiomics features were extracted from axial T2-weighted lumbar MRI, and seven machine learning models were developed after selecting the most relevant radiomic features using the T-test, Pearson correlation, and Lasso. A combined model was then created by integrating both clinical and radiomics features. The performance of the models was evaluated through ROC analysis, sensitivity, and specificity, while their clinical utility was assessed using AUC and decision curve analysis (DCA). The logistic regression (LR) model demonstrated robust predictive performance compared to the other machine learning models evaluated in the study. The combined model, integrating both clinical and radiomic features, exhibited an AUC of 0.822 (95% CI, 0.761-0.883) in the training cohort and 0.826 (95% CI, 0.766-0.886) in the test cohort, indicating substantial predictive capability. Moreover, the combined model showed superior clinical benefit and increased classification accuracy when compared to the radiomics model alone. The findings suggest that the combined model holds promise for accurately predicting postoperative outcomes in patients with LDS and could be valuable in guiding treatment strategies and assisting clinicians in making informed clinical decisions.
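The feature-selection chain described above (T-test, then Pearson correlation, then Lasso) can be sketched in numpy. The Lasso step is omitted for brevity, and all function names and thresholds are illustrative rather than taken from the paper:

```python
import numpy as np

def ttest_filter(X, y, t_thresh=2.0):
    """Keep features whose two-sample t-statistic (outcome 0 vs 1)
    exceeds t_thresh in absolute value (equal-variance t-test)."""
    g0, g1 = X[y == 0], X[y == 1]
    n0, n1 = len(g0), len(g1)
    # Pooled standard deviation across the two outcome groups
    sp = np.sqrt(((n0 - 1) * g0.var(0, ddof=1) + (n1 - 1) * g1.var(0, ddof=1))
                 / (n0 + n1 - 2))
    t = (g0.mean(0) - g1.mean(0)) / (sp * np.sqrt(1 / n0 + 1 / n1))
    return np.abs(t) > t_thresh

def pearson_dedup(X, r_thresh=0.9):
    """Drop the second of any pair of features whose |Pearson r| > r_thresh."""
    r = np.corrcoef(X, rowvar=False)
    keep = np.ones(X.shape[1], dtype=bool)
    for i in range(X.shape[1]):
        for j in range(i + 1, X.shape[1]):
            if keep[i] and keep[j] and abs(r[i, j]) > r_thresh:
                keep[j] = False
    return keep
```

In practice the surviving features would then be fed to an L1-penalized (Lasso) regression for the final selection.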

Accelerating 3D radial MPnRAGE using a self-supervised deep factor model.

Chen Y, Kecskemeti SR, Holmes JH, Corum CA, Yaghoobi N, Magnotta VA, Jacob M

PubMed · Jun 2, 2025
To develop a self-supervised and memory-efficient deep learning image reconstruction method for 4D non-Cartesian MRI with high resolution and a large parametric dimension. The deep factor model (DFM) represents a parametric series of 3D multicontrast images using a neural network conditioned by the inversion time, using efficient zero-filled reconstructions as input estimates. The model parameters are learned in a single-shot learning (SSL) fashion from the k-space data of each acquisition. A compatible transfer learning (TL) approach using previously acquired data is also developed to reduce reconstruction time. The DFM is compared to subspace methods with different regularization strategies in a series of phantom and in vivo experiments using the MPnRAGE acquisition for multicontrast T1 imaging and quantitative T1 estimation. DFM-SSL improved image quality and reduced bias and variance in quantitative T1 estimates in both phantom and in vivo studies, outperforming all other tested methods. DFM-TL reduced the inference time while maintaining performance comparable to DFM-SSL and outperforming subspace methods with multiple regularization techniques. The proposed DFM offers a superior representation of the multicontrast images compared to subspace models, especially in the highly accelerated MPnRAGE setting. The self-supervised training is ideal for methods with both high resolution and a large parametric dimension, where training neural networks can become computationally demanding without a dedicated high-end GPU array.
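A factor model in this sense represents the multicontrast series as a low-rank product of spatial factors and contrast-dependent coefficients, which is what makes it memory-efficient. A toy numpy sketch (in the paper the coefficients come from a neural network conditioned on inversion time, which is not reproduced here; shapes are illustrative):

```python
import numpy as np

def factor_model_series(spatial_factors, temporal_coeffs):
    """Image series as U @ V: U is (n_voxels, rank) spatial factors,
    V is (rank, n_contrasts) coefficients, one column per inversion time."""
    return spatial_factors @ temporal_coeffs

rng = np.random.default_rng(1)
U = rng.normal(size=(1000, 5))      # rank-5 spatial factors
V = rng.normal(size=(5, 300))       # coefficients for 300 inversion times
series = factor_model_series(U, V)  # full (1000, 300) series, stored via 5 factors
```

Storing U and V instead of the full series is what keeps the parametric dimension tractable.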

Attention-enhanced residual U-Net: lymph node segmentation method with bimodal MRI images.

Qiu J, Chen C, Li M, Hong J, Dong B, Xu S, Lin Y

PubMed · Jun 2, 2025
In medical images, lymph nodes (LNs) have fuzzy boundaries, diverse shapes and sizes, and structures similar to surrounding tissues. To automatically segment uterine LNs from sagittal magnetic resonance imaging (MRI) scans, we combined T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI) images and tested the final results in our proposed model. This study used a dataset of 158 MRI images from patients with pathologically confirmed, FIGO-staged LNs. To improve the robustness of the model, data augmentation was applied to expand the dataset. The training data were manually annotated by two experienced radiologists. The DWI and T2 images were fused and input into U-Net. The efficient channel attention (ECA) module was added to U-Net, and a residual network was added to the encoding-decoding stage; the resulting network, named Efficient residual U-Net (ERU-Net), produces the final segmentation results, from which the mean intersection-over-union (mIoU) is calculated. The experimental results demonstrated that the ERU-Net network showed strong segmentation performance, significantly better than other segmentation networks. The mIoU reached 0.83, and the average pixel accuracy was 0.91. In addition, the precision was 0.90, and the corresponding recall was 0.91. In this study, ERU-Net successfully achieved the segmentation of LNs in uterine MRI images. Compared with other segmentation networks, our network has the best segmentation effect on uterine LNs. This provides a valuable reference for doctors to develop more effective and efficient treatment plans.
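The efficient channel attention (ECA) idea is to squeeze the spatial dimensions to one descriptor per channel, run a small 1D convolution across channels, and gate each channel with a sigmoid. A numpy sketch follows; the convolution weights are fixed to an averaging filter here purely for illustration, whereas in the real module they are learned:

```python
import numpy as np

def eca(x, kernel_size=3):
    """Efficient Channel Attention on a (C, H, W) feature map:
    global average pooling, a 1D convolution across channels,
    a sigmoid gate, then channel-wise rescaling."""
    c = x.shape[0]
    pooled = x.mean(axis=(1, 2))                 # (C,) channel descriptor
    pad = kernel_size // 2
    padded = np.pad(pooled, pad, mode="edge")
    w = np.full(kernel_size, 1.0 / kernel_size)  # fixed weights; learned in practice
    conv = np.array([padded[i:i + kernel_size] @ w for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))           # sigmoid in (0, 1)
    return x * gate[:, None, None]
```

Because the gate only mixes neighbouring channels with a tiny kernel, ECA adds almost no parameters, which is why it suits attention-enhanced U-Net variants like the one described above.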

MobileTurkerNeXt: investigating the detection of Bankart and SLAP lesions using magnetic resonance images.

Gurger M, Esmez O, Key S, Hafeez-Baig A, Dogan S, Tuncer T

PubMed · Jun 2, 2025
The landscape of computer vision is predominantly shaped by two groundbreaking methodologies: transformers and convolutional neural networks (CNNs). In this study, we aim to introduce an innovative mobile CNN architecture designed for orthopedic imaging that efficiently identifies both Bankart and SLAP lesions. Our approach involved the collection of two distinct magnetic resonance (MR) image datasets, with the primary goal of automating the detection of Bankart and SLAP lesions. A novel mobile CNN, dubbed MobileTurkerNeXt, forms the cornerstone of this research. This newly developed model, comprising roughly 1 million trainable parameters, unfolds across four principal stages: the stem, main, downsampling, and output phases. The stem phase incorporates three convolutional layers to initiate feature extraction. In the main phase, we introduce an innovative block, drawing inspiration from ConvNeXt, EfficientNet, and ResNet architectures. The downsampling phase utilizes patchify average pooling and pixel-wise convolution to effectively reduce spatial dimensions, while the output phase is meticulously engineered to yield classification outcomes. Our experimentation with MobileTurkerNeXt spanned three comparative scenarios: Bankart versus normal, SLAP versus normal, and a tripartite comparison of Bankart, SLAP, and normal cases. The model demonstrated exemplary performance, achieving test classification accuracies exceeding 96% across these scenarios. The empirical results underscore MobileTurkerNeXt's superior classification performance in differentiating among Bankart, SLAP, and normal conditions in orthopedic imaging. This highlights the potential of our proposed mobile CNN in advancing diagnostic capabilities and contributing significantly to the field of medical image analysis.
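The downsampling phase pairs patchify average pooling with pixel-wise (1×1) convolution; both operations are simple to sketch in numpy (function names and shapes are illustrative, not from the paper):

```python
import numpy as np

def patchify_avg_pool(x, patch=2):
    """Downsample a (C, H, W) map by averaging non-overlapping patches."""
    c, h, w = x.shape
    return x.reshape(c, h // patch, patch, w // patch, patch).mean(axis=(2, 4))

def pixelwise_conv(x, weights):
    """1x1 convolution: a per-pixel linear mix of channels.
    weights has shape (C_out, C_in)."""
    return np.einsum("oc,chw->ohw", weights, x)
```

The pooling halves each spatial dimension cheaply, and the 1×1 convolution then remixes channels without touching spatial resolution, a common pattern in parameter-efficient mobile architectures.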

Current trends in glioma tumor segmentation: A survey of deep learning modules.

Shoushtari FK, Elahi R, Valizadeh G, Moodi F, Salari HM, Rad HS

PubMed · Jun 2, 2025
Multiparametric Magnetic Resonance Imaging (mpMRI) is the gold standard for diagnosing brain tumors, especially gliomas, which are difficult to segment due to their heterogeneity and varied sub-regions. While manual segmentation is time-consuming and error-prone, Deep Learning (DL) automates the process with greater accuracy and speed. We conducted ablation studies on surveyed articles to evaluate the impact of "add-on" modules (addressing challenges like spatial information loss, class imbalance, and overfitting) on glioma segmentation performance. Advanced modules, such as atrous (dilated) convolutions, inception, attention, transformer, and hybrid modules, significantly enhance segmentation accuracy, efficiency, multiscale feature extraction, and boundary delineation, while lightweight modules reduce computational complexity. Experiments on the Brain Tumor Segmentation (BraTS) dataset (comprising low- and high-grade gliomas) confirm their robustness, with top-performing models achieving high Dice scores for tumor sub-regions. This survey underscores the need for optimal module selection and placement to balance speed, accuracy, and interpretability in glioma segmentation. Future work should focus on improving model interpretability, lowering computational costs, and boosting generalizability. Tools like NeuroQuant® and Raidionics demonstrate potential for clinical translation. Further refinement could enable regulatory approval, advancing precision in brain tumor diagnosis and treatment planning.
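Of the modules surveyed, the atrous (dilated) convolution enlarges the receptive field by spacing kernel taps apart rather than adding parameters. A minimal 1D numpy sketch (illustrative only; real segmentation networks apply this in 2D or 3D):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=2):
    """1D atrous (dilated) convolution: taps are spaced `dilation`
    samples apart, enlarging the receptive field at no extra cost."""
    k = len(w)
    span = (k - 1) * dilation + 1   # receptive field covered by the kernel
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * dilation] for j in range(k))
    return out
```

A 3-tap kernel with dilation 2 sees a 5-sample window, which is how dilated stacks capture multiscale context in glioma segmentation without downsampling.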

Metabolic Dysfunction-Associated Steatotic Liver Disease Is Associated With Accelerated Brain Ageing: A Population-Based Study.

Wang J, Yang R, Miao Y, Zhang X, Paillard-Borg S, Fang Z, Xu W

PubMed · Jun 1, 2025
Metabolic dysfunction-associated steatotic liver disease (MASLD) is linked to cognitive decline and dementia risk. We aimed to investigate the association between MASLD and brain ageing and explore the role of low-grade inflammation. Within the UK Biobank, 30,386 participants free of chronic neurological disorders who underwent brain magnetic resonance imaging (MRI) scans were included. Individuals were categorised into no MASLD/related SLD and MASLD/related SLD (including subtypes of MASLD, MASLD with increased alcohol intake [MetALD] and MASLD with other combined aetiology). Brain age was estimated using machine learning from 1079 brain MRI phenotypes. Brain age gap (BAG) was calculated as the difference between brain age and chronological age. Low-grade inflammation (INFLA) was calculated based on white blood cell count, platelet count, neutrophil granulocyte to lymphocyte ratio and C-reactive protein. Data were analysed using linear regression and structural equation models. At baseline, 7360 (24.2%) participants had MASLD/related SLD. Compared to participants with no MASLD/related SLD, those with MASLD/related SLD had a significantly larger BAG (β = 0.86, 95% CI = 0.70, 1.02), as did those with MASLD (β = 0.59, 95% CI = 0.41, 0.77) or MetALD (β = 1.57, 95% CI = 1.31, 1.83). The association between MASLD/related SLD and larger BAG was significant across middle-aged (< 60) and older (≥ 60) adults, males and females, and APOE ɛ4 carriers and non-carriers. INFLA mediated 13.53% of the association between MASLD/related SLD and larger BAG (p < 0.001). MASLD/related SLD, as well as MASLD and MetALD, is associated with accelerated brain ageing, even among middle-aged adults and APOE ɛ4 non-carriers. Low-grade systemic inflammation may partially mediate this association.
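The brain age gap (BAG) and a mediated proportion are simple quantities, illustrated below with toy numbers; the study's actual estimates come from a machine-learning brain-age model and structural equation models, which this sketch does not reproduce:

```python
import numpy as np

def brain_age_gap(predicted_brain_age, chronological_age):
    """BAG = model-predicted brain age minus chronological age (years)."""
    return (np.asarray(predicted_brain_age, dtype=float)
            - np.asarray(chronological_age, dtype=float))

def mediated_proportion(indirect_effect, total_effect):
    """Fraction of the exposure-outcome effect carried by the mediator."""
    return indirect_effect / total_effect

# Toy example: a brain 'older' than its owner gives a positive BAG
bag = brain_age_gap([66.2, 59.1], [64.0, 60.0])
```

A positive BAG indicates accelerated brain ageing relative to chronological age, which is the outcome the study links to MASLD.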

A rule-based method to automatically locate lumbar vertebral bodies on MRI images.

Xiberta P, Vila M, Ruiz M, Julià I Juanola A, Puig J, Vilanova JC, Boada I

PubMed · Jun 1, 2025
Segmentation is a critical process in medical image interpretation. It is also essential for preparing training datasets for machine learning (ML)-based solutions. Despite technological advancements, achieving fully automatic segmentation is still challenging. User interaction is required to initiate the process, either by defining points or regions of interest, or by verifying and refining the output. One of the complex structures that requires semi-automatic segmentation procedures or manually defined training datasets is the lumbar spine. Automating the placement of a point within each lumbar vertebral body could significantly reduce user interaction in these procedures. A new method for automatically locating lumbar vertebral bodies in sagittal magnetic resonance images (MRI) is presented. The method integrates different image processing techniques and relies on the vertebral body morphology. Testing was mainly performed using 50 MRI scans that were previously annotated manually by placing a point at the centre of each lumbar vertebral body. A complementary public dataset was also used to assess robustness. Evaluation metrics included the correct labelling of each structure, the inclusion of each point within the corresponding vertebral body area, and the accuracy of the locations relative to the vertebral body centres using root mean squared error (RMSE) and mean absolute error (MAE). A one-sample Student's t-test was also performed to find the distance beyond which differences are considered significant (α = 0.05). All lumbar vertebral bodies from the primary dataset were correctly labelled, and the average RMSE and MAE between the automatic and manual locations were less than 5 mm. Distances to the vertebral body centres were found to be significantly less than 4.33 mm with a p-value < 0.05, and significantly less than half the average minimum diameter of a lumbar vertebral body with a p-value < 0.00001. 
Results from the complementary public dataset include high labelling and inclusion rates (85.1% and 94.3%, respectively), and similar accuracy values. The proposed method successfully achieves robust and accurate automatic placement of points within each lumbar vertebral body. The automation of this process enables the transition from semi-automatic to fully automatic methods, thus reducing error-prone and time-consuming user interaction, and facilitating the creation of training datasets for ML-based solutions.
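The evaluation above combines per-point Euclidean distances with RMSE, MAE, and a one-sample t-test against a distance threshold; a numpy sketch with toy coordinates (all values illustrative):

```python
import numpy as np

def location_errors(auto_pts, manual_pts):
    """Euclidean distance per vertebra, plus RMSE and MAE (in mm)."""
    d = np.linalg.norm(np.asarray(auto_pts, float)
                       - np.asarray(manual_pts, float), axis=1)
    rmse = np.sqrt(np.mean(d ** 2))
    mae = np.mean(d)  # distances are non-negative, so MAE = mean distance
    return d, rmse, mae

def one_sample_t(d, mu0):
    """t-statistic for testing whether the mean distance differs from mu0."""
    n = len(d)
    return (np.mean(d) - mu0) / (np.std(d, ddof=1) / np.sqrt(n))
```

RMSE penalizes occasional large misplacements more than MAE, which is why reporting both gives a fuller picture of localization accuracy.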

DeepValve: The first automatic detection pipeline for the mitral valve in Cardiac Magnetic Resonance imaging.

Monopoli G, Haas D, Singh A, Aabel EW, Ribe M, Castrini AI, Hasselberg NE, Bugge C, Five C, Haugaa K, Forsch N, Thambawita V, Balaban G, Maleckar MM

PubMed · Jun 1, 2025
Mitral valve (MV) assessment is key to diagnosing valvular disease and to addressing its serious downstream complications. Cardiac magnetic resonance (CMR) has become an essential diagnostic tool in MV disease, offering detailed views of the valve structure and function, and overcoming the limitations of other imaging modalities. Automated detection of the MV leaflets in CMR could enable rapid and precise assessments that enhance diagnostic accuracy. To address this gap, we introduce DeepValve, the first deep learning (DL) pipeline for MV detection using CMR. Within DeepValve, we tested three valve detection models: a keypoint-regression model (UNET-REG), a segmentation model (UNET-SEG) and a hybrid model based on keypoint detection (DSNT-REG). We also propose metrics for evaluating the quality of MV detection, including Procrustes-based metrics (UNET-REG, DSNT-REG) and customized Dice-based metrics (UNET-SEG). We developed and tested our models on a clinical dataset comprising 120 CMR images from patients with confirmed MV disease (mitral valve prolapse and mitral annular disjunction). Our results show that DSNT-REG delivered the best regression performance, accurately localizing landmarks. UNET-SEG achieved satisfactory Dice and customized Dice scores, also accurately predicting valve location and topology. Overall, our work represents a critical first step towards automated MV assessment using DL in CMR, paving the way for improved clinical assessment in MV disease.
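A Procrustes-based metric compares predicted and ground-truth landmark sets after removing differences of translation, scale, and rotation. A numpy sketch of the disparity (illustrative of the general technique, not the paper's exact metric):

```python
import numpy as np

def procrustes_disparity(a, b):
    """Sum of squared differences after optimally translating, scaling,
    and rotating point set b onto point set a (ordinary Procrustes)."""
    a = a - a.mean(0)
    b = b - b.mean(0)
    a = a / np.linalg.norm(a)        # remove scale
    b = b / np.linalg.norm(b)
    u, s, vt = np.linalg.svd(a.T @ b)
    r = u @ vt                       # optimal rotation
    scale = s.sum()                  # optimal relative scaling
    b_aligned = scale * b @ r.T
    return np.sum((a - b_aligned) ** 2)
```

A disparity near zero means the predicted landmark configuration matches the reference shape regardless of where or how large it appears in the image.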