
Bidirectional Projection-Based Multi-Modal Fusion Transformer for Early Detection of Cerebral Palsy in Infants.

Qi K, Huang T, Jin C, Yang Y, Ying S, Sun J, Yang J

PubMed · May 30, 2025
Periventricular white matter injury (PWMI) is the most frequent magnetic resonance imaging (MRI) finding in infants with cerebral palsy (CP). We aim to detect CP and identify subtle, sparse PWMI lesions in infants under two years of age with immature brain structures. Exploiting the fact that the responsible lesions are located within five target regions, we first construct a multi-modal dataset of 243 cases comprising mask annotations of the five target regions delineating anatomical structures on T1-weighted (T1WI) images, lesion masks on T2-weighted (T2WI) images, and subject-level categories (CP or Non-CP). We then develop a bidirectional projection-based multi-modal fusion transformer (BiP-MFT) that incorporates a Bidirectional Projection Fusion Module (BPFM) to integrate features between the five target regions on T1WI images and the lesions on T2WI images. BiP-MFT achieves a subject-level classification accuracy of 0.90, specificity of 0.87, and sensitivity of 0.94, surpassing the best results of nine comparative methods by 0.10, 0.08, and 0.09 in accuracy, specificity, and sensitivity, respectively. The BPFM outperforms eight compared feature fusion strategies using Transformer and U-Net backbones on our dataset. Ablation studies on the dataset annotations and model components confirm the effectiveness of the annotation method and the rationale of the model design. The dataset and code are available at https://github.com/Kai-Qi/BiP-MFT.
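A minimal sketch of the subject-level metrics reported above (accuracy, specificity, sensitivity), computed from binary CP / Non-CP predictions; the helper function and toy labels are illustrative and not part of the released code.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Subject-level accuracy, specificity, and sensitivity for a binary
    CP (1) vs. Non-CP (0) prediction task."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / len(y_true)
    specificity = tn / (tn + fp)   # true-negative rate
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    return accuracy, specificity, sensitivity

# toy example with hypothetical labels, not the paper's data
acc, spec, sens = classification_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 0])
```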

Deep learning-driven modality imputation and subregion segmentation to enhance high-grade glioma grading.

Yu J, Liu Q, Xu C, Zhou Q, Xu J, Zhu L, Chen C, Zhou Y, Xiao B, Zheng L, Zhou X, Zhang F, Ye Y, Mi H, Zhang D, Yang L, Wu Z, Wang J, Chen M, Zhou Z, Wang H, Wang VY, Wang E, Xu D

PubMed · May 30, 2025
This study aims to develop a deep learning framework that leverages modality imputation and subregion segmentation to improve grading accuracy in high-grade gliomas. A retrospective analysis was conducted using data from 1,251 patients in the BraTS2021 dataset as the main cohort and 181 clinical cases collected from a medical center between April 2013 and June 2018 (51 years ± 17; 104 males) as the external test set. We propose a PatchGAN-based modality imputation network with an Aggregated Residual Transformer (ART) module combining Transformer self-attention and CNN feature extraction via residual links, paired with a U-Net variant for segmentation. Generation accuracy was assessed with PSNR and SSIM for modality conversions, while segmentation performance was measured with DSC and HD95 across necrotic core (NCR), edema (ED), and enhancing tumor (ET) regions. Senior radiologists conducted a comprehensive Likert-based assessment, with diagnostic accuracy evaluated by AUC. Statistical analysis was performed using the Wilcoxon signed-rank test and the DeLong test. The best source-target modality pairs for imputation were T1 to T1ce and T1ce to T2 (P < 0.001). In subregion segmentation, the overall DSC was 0.878 and HD95 was 19.491, with the ET region showing the highest segmentation accuracy (DSC: 0.877, HD95: 12.149). Clinical validation revealed an improvement in grading accuracy by the senior radiologist, with the AUC increasing from 0.718 to 0.913 (P < 0.001) when using the combined imputation and segmentation models. The proposed deep learning framework improves high-grade glioma grading through modality imputation and subregion segmentation, aiding senior radiologists and offering potential to advance clinical decision-making.
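A minimal sketch of how the reported generation metrics might be computed with scikit-image, assuming PSNR and SSIM are evaluated between a ground-truth modality and the imputed volume; the function choices and slice-wise setup are illustrative, not the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def imputation_quality(reference, generated):
    """PSNR and SSIM between a ground-truth modality (e.g. T1ce) and the
    image synthesized from a source modality (e.g. T1)."""
    reference = reference.astype(np.float64)
    generated = generated.astype(np.float64)
    data_range = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, generated, data_range=data_range)
    ssim = structural_similarity(reference, generated, data_range=data_range)
    return psnr, ssim

# toy 2D example with random arrays standing in for MRI slices
ref = np.random.rand(128, 128)
gen = ref + 0.05 * np.random.randn(128, 128)
print(imputation_quality(ref, gen))
```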

A Study on Predicting the Efficacy of Posterior Lumbar Interbody Fusion Surgery Using a Deep Learning Radiomics Model.

Fang L, Pan Y, Zheng H, Li F, Zhang W, Liu J, Zhou Q

PubMed · May 30, 2025
This study seeks to develop a combined model integrating clinical data, radiomics, and deep learning (DL) for predicting the efficacy of posterior lumbar interbody fusion (PLIF) surgery. A retrospective review was conducted on 461 patients who underwent PLIF for degenerative lumbar diseases, partitioned into a training set (n=368) and a test set (n=93) in an 8:2 ratio. Clinical, radiomics, and DL models were constructed using logistic regression and random forest, and a combined model was established by integrating the three. All radiomics and DL features were extracted from sagittal T2-weighted images using 3D Slicer software, and the least absolute shrinkage and selection operator (LASSO) method selected the optimal radiomics and DL features for model building. In addition to the original region of interest (ROI), masks expanded by different margins were analyzed to determine the optimal ROI. Model performance was evaluated with the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC), and differences in AUC were compared with the DeLong test. Among the clinical characteristics, patient age, body weight, and preoperative intervertebral distance at the surgical segment were risk factors affecting the fusion outcome. The radiomics model based on the 10 mm-expanded mask showed excellent performance (training set AUC = 0.814, 95% CI: 0.761-0.866; test set AUC = 0.749, 95% CI: 0.631-0.866). Among the single models, the DL model had the best predictive performance, with an AUC of 0.995 (95% CI: 0.991-0.999) for the training set and 0.803 (95% CI: 0.705-0.902) for the test set. The combined model of clinical, radiomics, and DL features performed best overall, with an AUC of 0.993 (95% CI: 0.987-0.999) for the training set and 0.866 (95% CI: 0.778-0.955) for the test set. The proposed clinical feature-deep learning radiomics model can effectively predict the postoperative efficacy of PLIF surgery and has good clinical applicability.
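A minimal sketch of the LASSO feature-selection step and a downstream combined classifier of the kind described, using scikit-learn; the variable names, cross-validation setup, and choice of logistic regression for the combined model are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def select_features_lasso(X, y, feature_names):
    """Keep features with non-zero LASSO coefficients (cross-validated alpha)."""
    scaler = StandardScaler().fit(X)
    lasso = LassoCV(cv=5, random_state=0).fit(scaler.transform(X), y)
    keep = np.flatnonzero(lasso.coef_)
    return keep, [feature_names[i] for i in keep]

def fit_combined_model(X_selected, y):
    """Illustrative combined classifier on the selected clinical + radiomics + DL features."""
    return make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_selected, y)
```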

Fully automated measurement of aortic pulse wave velocity from routine cardiac MRI studies.

Jiang Y, Yao T, Paliwal N, Knight D, Punjabi K, Steeden J, Hughes AD, Muthurangu V, Davies R

PubMed · May 30, 2025
Aortic pulse wave velocity (PWV) is a prognostic biomarker for cardiovascular disease that can be measured by dividing the aortic path length by the pulse transit time. However, current MRI techniques require special sequences and time-consuming manual analysis. We aimed to fully automate the process using deep learning to measure PWV from standard sequences, facilitating PWV measurement in routine clinical and research scans. A deep learning (DL) model was developed to generate high-resolution 3D aortic segmentations from routine 2D trans-axial SSFP localizer images, and the centerlines of the resulting segmentations were used to estimate the aortic path length. A further DL model was built to automatically segment the ascending and descending aorta in phase contrast images, and pulse transit time was estimated from the sampled flow curves. Quantitative comparison with trained observers was performed for path length, aortic flow segmentation, and transit time, either on an external clinical dataset with both localizers and paired 3D acquisitions or on a sample of UK Biobank subjects. Potential application to clinical research scans was evaluated on 1053 subjects from the UK Biobank. Aortic path length measurement was accurate, with no major difference between the proposed method (125 ± 19 mm) and manual measurement by a trained observer (124 ± 19 mm) (P = 0.88). Automated phase contrast image segmentation was similar to that of a trained observer for both the ascending (Dice vs manual: 0.96) and descending (Dice 0.89) aorta, with no major difference in transit time estimation (proposed method 21 ± 9 ms vs manual 22 ± 9 ms; P = 0.15). 966 of 1053 (92%) UK Biobank subjects were successfully analyzed, with a median PWV of 6.8 m/s, increasing by 27% per decade of age and by 6.5% per 10 mmHg of systolic blood pressure. We describe a fully automated method for measuring PWV from standard cardiac MRI localizers and a single phase contrast imaging plane. The method is robust, can be applied to routine clinical scans, and could unlock the potential of measuring PWV in large-scale clinical and population studies. All models and deployment code are available online.
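The abstract defines PWV as aortic path length divided by pulse transit time; below is a minimal sketch of that calculation, assuming the path length is given in millimetres and the transit time in milliseconds, as in the results above.

```python
def pulse_wave_velocity(path_length_mm, transit_time_ms):
    """Aortic PWV in m/s from centerline path length (mm) and
    pulse transit time (ms), as defined in the abstract."""
    path_length_m = path_length_mm / 1000.0
    transit_time_s = transit_time_ms / 1000.0
    return path_length_m / transit_time_s

# using the mean values reported above: 125 mm path, 21 ms transit time
print(pulse_wave_velocity(125, 21))  # ~6.0 m/s, the same order as the 6.8 m/s median
```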

A combined attention mechanism for brain tumor segmentation of lower-grade glioma in magnetic resonance images.

Hedibi H, Beladgham M, Bouida A

PubMed · May 29, 2025
Low-grade gliomas (LGGs) are among the most difficult brain tumors to segment reliably in FLAIR MRI, and accurate delineation of these lesions is critical for clinical diagnosis, treatment planning, and patient monitoring. Nevertheless, conventional U-Net-based approaches usually suffer from the loss of critical structural details owing to repeated down-sampling, while the encoder features often retain irrelevant information that is not properly exploited by the decoder. To address these challenges, this paper presents a dual-attention U-shaped architecture, named ECASE-Unet, which seamlessly integrates Efficient Channel Attention (ECA) and Squeeze-and-Excitation (SE) blocks in both the encoder and decoder stages. By selectively recalibrating channel-wise information, the model emphasizes diagnostically significant regions of interest and suppresses noise. Furthermore, dilated convolutions are introduced at the bottleneck layer to capture multi-scale contextual cues without inflating computational complexity, and dropout regularization is applied systematically to prevent overfitting on heterogeneous data. Experimental results on the Kaggle Low-Grade Glioma dataset show that ECASE-Unet substantially outperforms previous segmentation algorithms, reaching a Dice coefficient of 0.9197 and an Intersection over Union (IoU) of 0.8521. Comprehensive ablation studies further reveal that integrating the ECA and SE modules delivers complementary benefits, supporting the model's robustness in precisely identifying LGG boundaries. These findings underline the potential of ECASE-Unet to expedite clinical workflows and improve patient outcomes. Future work will focus on extending the model to other MRI modalities and on integrating clinical characteristics for a more comprehensive characterization of brain tumors.
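A minimal PyTorch sketch of a Squeeze-and-Excitation block, the channel-attention component the abstract pairs with ECA; the reduction ratio and where the block sits inside ECASE-Unet are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Generic Squeeze-and-Excitation channel attention (Hu et al.),
    shown as a building block rather than the published ECASE-Unet layout."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: one descriptor per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                          # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                               # recalibrate feature maps channel-wise

# usage: attn = SEBlock(64); y = attn(torch.randn(2, 64, 32, 32))
```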

Research on multi-algorithm and explainable AI techniques for predictive modeling of acute spinal cord injury using multimodal data.

Tai J, Wang L, Xie Y, Li Y, Fu H, Ma X, Li H, Li X, Yan Z, Liu J

PubMed · May 29, 2025
Machine learning technology has been extensively applied in the medical field, particularly for disease prediction and patient rehabilitation assessment. Acute spinal cord injury (ASCI) is a sudden trauma that frequently results in severe neurological deficits and a significant decline in quality of life, and early prediction of neurological recovery is crucial for personalized treatment planning. Although such approaches have been extensively explored in other medical fields, this study is the first to apply multiple machine learning methods and Shapley Additive Explanations (SHAP) analysis specifically to ASCI for predicting neurological recovery. A total of 387 ASCI patients were included, with clinical, imaging, and laboratory data collected. Key features were selected using univariate analysis, Lasso regression, and other feature selection techniques, integrating clinical, radiomics, and laboratory data. A range of machine learning models, including XGBoost, Logistic Regression, KNN, SVM, Decision Tree, Random Forest, LightGBM, ExtraTrees, Gradient Boosting, and Gaussian Naive Bayes, were evaluated, with Gaussian Naive Bayes exhibiting the best performance. Radiomics features extracted from T2-weighted fat-suppressed MRI scans, such as original_glszm_SizeZoneNonUniformity and wavelet-HLL_glcm_SumEntropy, significantly enhanced predictive accuracy. SHAP analysis identified critical clinical features in the predictive model, including IMLL, INR, BMI, Cys C, and RDW-CV. The model was validated and demonstrated excellent performance across multiple metrics, and its clinical utility and interpretability were further enhanced through patient clustering and nomogram analysis. This model has the potential to serve as a reliable tool for clinicians in formulating personalized treatment plans and assessing prognosis.
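A minimal sketch of the kind of SHAP analysis described, assuming the model-agnostic shap.KernelExplainer applied to a scikit-learn GaussianNB classifier; the background-sample size and overall workflow are assumptions, not the authors' published pipeline.

```python
import shap
from sklearn.naive_bayes import GaussianNB

def explain_gnb(X_train, y_train, X_test):
    """Fit Gaussian Naive Bayes and compute SHAP values with the
    model-agnostic KernelExplainer (a generic recipe, not the authors' code)."""
    model = GaussianNB().fit(X_train, y_train)
    background = shap.sample(X_train, 100)   # subsample the training set for tractability
    explainer = shap.KernelExplainer(model.predict_proba, background)
    # per-feature contributions to the predicted class probabilities
    shap_values = explainer.shap_values(X_test)
    return model, shap_values
```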

Deep Learning CAIPIRINHA-VIBE Improves and Accelerates Head and Neck MRI.

Nitschke LV, Lerchbaumer M, Ulas T, Deppe D, Nickel D, Geisel D, Kubicka F, Wagner M, Walter-Rittel T

PubMed · May 29, 2025
The aim of this study was to evaluate image quality of contrast-enhanced (CE) neck MRI with a deep learning-reconstructed VIBE sequence at acceleration factors (AF) of 4 (DL4-VIBE) and 6 (DL6-VIBE). Patients referred for neck MRI were examined on a 3-Tesla scanner in this prospective, single-center study. Four CE fat-saturated (FS) VIBE sequences were acquired in each patient: Star-VIBE (4:01 min), VIBE (2:05 min), DL4-VIBE (0:24 min), and DL6-VIBE (0:17 min). Image quality was evaluated by three radiologists on a 5-point Likert scale covering overall image quality, muscle contour delineation, conspicuity of mucosa and pharyngeal musculature, FS uniformity, and motion artifacts. Objective image quality was assessed with signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and quantification of metal artifacts. 68 patients (60.3% male; mean age 57.4 ± 16 years) were included. DL4-VIBE was superior for overall image quality, delineation of muscle contours, differentiation of mucosa and pharyngeal musculature, vascular delineation, and motion artifacts. Notably, DL4-VIBE exhibited exceptional FS uniformity (p < 0.001). SNR and CNR were superior for DL4-VIBE compared to all other sequences (p < 0.001). Metal artifacts were least pronounced in the standard VIBE, followed by DL4-VIBE (p < 0.001). Although DL6-VIBE was inferior to DL4-VIBE, it demonstrated improved FS homogeneity, delineation of pharyngeal mucosa, and CNR compared to Star-VIBE and VIBE. DL4-VIBE significantly improves image quality for CE neck MRI at a fraction of the scan time of conventional sequences.
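SNR and CNR are reported as objective image-quality measures. A minimal ROI-based sketch follows, assuming SNR = mean tissue signal / noise SD and CNR = |difference of mean signals| / noise SD; the study's actual ROI placement and noise estimate are not specified in the abstract.

```python
import numpy as np

def snr_cnr(tissue_roi, reference_roi, noise_roi):
    """ROI-based SNR and CNR as commonly defined in image-quality studies;
    the paper's exact ROI placement and noise definition may differ."""
    signal = np.mean(tissue_roi)
    reference = np.mean(reference_roi)
    noise_sd = np.std(noise_roi)
    snr = signal / noise_sd
    cnr = abs(signal - reference) / noise_sd
    return snr, cnr
```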

Parameter-Free Bio-Inspired Channel Attention for Enhanced Cardiac MRI Reconstruction

Anam Hashmi, Julia Dietlmeier, Kathleen M. Curran, Noel E. O'Connor

arXiv preprint · May 29, 2025
Attention is a fundamental component of the human visual recognition system. The inclusion of attention in a convolutional neural network amplifies relevant visual features and suppresses the less important ones. Integrating attention mechanisms into convolutional neural networks enhances model performance and interpretability. Spatial and channel attention mechanisms have shown significant advantages across many downstream tasks in medical imaging. While existing attention modules have proven to be effective, their design often lacks a robust theoretical underpinning. In this study, we address this gap by proposing a non-linear attention architecture for cardiac MRI reconstruction and hypothesize that insights from ecological principles can guide the development of effective and efficient attention mechanisms. Specifically, we investigate a non-linear ecological difference equation that describes single-species population growth to devise a parameter-free attention module surpassing current state-of-the-art parameter-free methods.
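The abstract describes a parameter-free channel attention derived from a single-species population-growth difference equation but does not give the exact formulation. The sketch below is an illustrative guess only, assuming one step of the classic logistic map x_{t+1} = r·x_t·(1 − x_t) applied to normalized channel descriptors; the class name, fixed growth rate r, and normalization are all hypothetical and not the paper's published design.

```python
import torch
import torch.nn as nn

class LogisticGrowthAttention(nn.Module):
    """Parameter-free channel attention sketch: channel descriptors from global
    average pooling are normalized to [0, 1] and rescaled by one logistic-growth
    update. Illustrative only; not the authors' formulation."""
    def __init__(self, r: float = 3.0):
        super().__init__()
        self.r = r   # fixed growth rate, not a learned parameter

    def forward(self, x):
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))                                  # per-channel descriptor
        s_min = s.min(dim=1, keepdim=True).values
        s_max = s.max(dim=1, keepdim=True).values
        s = (s - s_min) / (s_max - s_min + 1e-6)                # normalize to [0, 1]
        w = self.r * s * (1.0 - s)                              # one logistic-map step
        return x * w.view(b, c, 1, 1)
```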

RNN-AHF Framework: Enhancing Multi-focal Nature of Hypoxic Ischemic Encephalopathy Lesion Region in MRI Image Using Optimized Rough Neural Network Weight and Anti-Homomorphic Filter.

Thangeswari M, Muthucumaraswamy R, Anitha K, Shanker NR

PubMed · May 29, 2025
Image enhancement of the Hypoxic-Ischemic Encephalopathy (HIE) lesion region in neonatal brain MR images is a challenging task due to the diffuse (i.e., multi-focal) nature, small size, and low contrast of the lesions. Classifying the stages of HIE is also difficult because of the unclear boundaries and edges of the lesions, which are dispersed throughout the brain. These unclear boundaries and edges arise from chemical shifts, partial volume artifacts, and motion artifacts; furthermore, voxels may reflect signals from adjacent tissues. Existing algorithms perform poorly in HIE lesion enhancement because of these artifacts, partial volume effects, and the diffuse nature of the lesions. In this paper, we propose a Rough Neural Network and Anti-Homomorphic Filter (RNN-AHF) framework for enhancement of the HIE lesion region. The RNN-AHF framework reduces the pixel dimensionality of the feature space, eliminates unnecessary pixels, and preserves the pixels essential for lesion enhancement. The RNN efficiently learns and identifies pixel patterns and facilitates adaptive enhancement based on the different weights in the neural network. The proposed framework operates with optimized neural weights and an optimized training function; the hybridization of the two enhances the lesion region with high contrast while preserving boundaries and edges. The proposed RNN-AHF framework achieves a lesion image enhancement and classification accuracy of approximately 93.5%, better than traditional algorithms.
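The paper's Anti-Homomorphic Filter is its own construction and is not specified in the abstract. As background, here is a minimal sketch of classical homomorphic filtering, the family such filters build on, assuming a Gaussian high-emphasis transfer function; the cutoff and gain values are illustrative, not the authors' settings.

```python
import numpy as np

def homomorphic_filter(image, cutoff=30, gamma_low=0.5, gamma_high=1.5):
    """Classical homomorphic filtering (log -> Fourier high-emphasis -> exp);
    shown only as background, not a reproduction of the paper's filter."""
    img = np.log1p(image.astype(np.float64))        # separate illumination and reflectance
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2          # squared distance from the spectrum centre
    h = (gamma_high - gamma_low) * (1 - np.exp(-d2 / (2 * cutoff ** 2))) + gamma_low
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(h * spectrum)))
    return np.expm1(filtered)                       # back to the intensity domain
```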

Free-running isotropic three-dimensional cine magnetic resonance imaging with deep learning image reconstruction.

Erdem S, Erdem O, Stebbings S, Greil G, Hussain T, Zou Q

PubMed · May 29, 2025
Cardiovascular magnetic resonance (CMR) cine imaging is the gold standard for assessing ventricular volumes and function. It typically requires two-dimensional (2D) bSSFP sequences and multiple breath-holds, which can be challenging for patients with limited breath-holding capacity. Three-dimensional (3D) cardiovascular magnetic resonance angiography (MRA) usually suffers from lengthy acquisition. Free-running 3D cine imaging with deep learning (DL) reconstruction offers a potential solution by acquiring cine and angiography simultaneously. We aimed to evaluate the efficiency and accuracy of a ferumoxytol-enhanced 3D cine MR sequence combined with DL reconstruction and Heart-NAV technology in patients with congenital heart disease. In this Institutional Review Board-approved prospective study, we compared (i) functional and volumetric measurements between 3D and 2D cine images; (ii) contrast-to-noise ratio (CNR) between DL- and compressed sensing (CS)-reconstructed 3D cine images; and (iii) cross-sectional area (CSA) measurements between DL-reconstructed 3D cine images and clinical 3D MRA images acquired with the bSSFP sequence. Paired t-tests were used to compare group measurements, and Bland-Altman analysis assessed agreement in CSA and volumetric data. Sixteen patients (seven males; median age 6 years) were recruited. 3D cine imaging showed slightly larger right ventricular (RV) volumes and lower RV ejection fraction (EF) compared to 2D cine, with a significant difference only in RV end-systolic volume (P = 0.02). Left ventricular (LV) volumes and EF were slightly higher, and LV mass was lower, without significant differences (P ≥ 0.05). DL-reconstructed 3D cine images showed significantly higher CNR in all pulmonary veins than CS-reconstructed 3D cine images (all P < 0.05). Highly accelerated free-running 3D cine imaging with DL reconstruction shortens acquisition times and provides volumetric measurements comparable to 2D cine and CSA comparable to clinical 3D MRA.
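A minimal sketch of the Bland-Altman agreement analysis mentioned above, assuming it is applied to paired measurements such as 3D versus 2D cine volumes; this is a generic implementation, not the authors' analysis script.

```python
import numpy as np

def bland_altman(measure_a, measure_b):
    """Bland-Altman bias and 95% limits of agreement between two methods
    (e.g. 3D vs. 2D cine volumes)."""
    a, b = np.asarray(measure_a, float), np.asarray(measure_b, float)
    diff = a - b
    bias = diff.mean()                       # mean difference between methods
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    return bias, loa
```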