Page 21 of 72720 results

Significance of Papillary and Trabecular Muscular Volume in Right Ventricular Volumetry with Cardiac MR Imaging.

Shibagaki Y, Oka H, Imanishi R, Shimada S, Nakau K, Takahashi S

PubMed | Jun 20 2025
Pulmonary valve regurgitation after repaired Tetralogy of Fallot (TOF) or double-outlet right ventricle (DORV) causes hypertrophy and papillary muscle enlargement. Cardiac magnetic resonance imaging (CMR) can evaluate right ventricular (RV) dilatation, but the effect of excluding trabecular and papillary muscle (TPM) from RV volume on TOF or DORV reoperation decisions is unclear. Twenty-three patients with repaired TOF or DORV and 19 healthy controls aged ≥15 years underwent CMR from 2012 to 2022. TPM volume was measured by artificial intelligence. Reoperation was considered when RV end-diastolic volume index (RVEDVI) >150 mL/m² or RV end-systolic volume index (RVESVI) >80 mL/m². RV volumes were higher in the disease group than in controls (P < 0.001), as were RV mass and TPM volumes (P < 0.001). The reduction in RV volumes from excluding TPM volume was 6.3% (2.1-10.5), 11.7% (6.9-13.8), and 13.9% (9.5-19.4) in the control, volume load, and volume + pressure load groups, respectively. The TPM/RV volume ratio was higher in the volume + pressure load group (control: 0.07 g/mL, volume: 0.14 g/mL, volume + pressure: 0.17 g/mL) and correlated with QRS duration (R = 0.77). In 3 patients in the volume + pressure load group, RV volume including TPM met the reoperation criteria, but after TPM exclusion it no longer did. RV volume measurements including TPM in the volume + pressure load group may help determine appropriate recommendations for reoperation.
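The reoperation rule described above reduces to a threshold check on the two volume indices; a minimal sketch in Python (the cutoffs come from the abstract, while the patient values and the assumption that TPM exclusion scales the index by the quoted percentage are illustrative):

```python
# Cutoffs from the abstract; reoperation is considered when either is exceeded.
REOP_RVEDVI = 150.0  # RV end-diastolic volume index, mL/m^2
REOP_RVESVI = 80.0   # RV end-systolic volume index, mL/m^2

def reoperation_indicated(rvedvi: float, rvesvi: float) -> bool:
    """True when either volume index exceeds its reoperation cutoff."""
    return rvedvi > REOP_RVEDVI or rvesvi > REOP_RVESVI

def exclude_tpm(volume_index: float, reduction_pct: float) -> float:
    """Apply the percentage reduction attributed to excluding TPM volume."""
    return volume_index * (1.0 - reduction_pct / 100.0)

# Illustrative patient: indicated with TPM included, no longer indicated after
# the 13.9% mean reduction reported for the volume + pressure load group.
rvedvi_with_tpm = 160.0
rvedvi_without_tpm = exclude_tpm(rvedvi_with_tpm, 13.9)
print(reoperation_indicated(rvedvi_with_tpm, 75.0))     # True
print(reoperation_indicated(rvedvi_without_tpm, 75.0))  # False
```

This mirrors the abstract's observation that the indication can flip depending on whether TPM is included in the volume measurement.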

TextBraTS: Text-Guided Volumetric Brain Tumor Segmentation with Innovative Dataset Development and Fusion Module Exploration

Xiaoyu Shi, Rahul Kumar Jain, Yinhao Li, Ruibo Hou, Jingliang Cheng, Jie Bai, Guohua Zhao, Lanfen Lin, Rui Xu, Yen-wei Chen

arXiv preprint | Jun 20 2025
Deep learning has demonstrated remarkable success in medical image segmentation and computer-aided diagnosis. In particular, numerous advanced methods have achieved state-of-the-art performance in brain tumor segmentation from MRI scans. While recent studies in other medical imaging domains have revealed that integrating textual reports with visual data can enhance segmentation accuracy, the field of brain tumor analysis lacks a comprehensive dataset that combines radiological images with corresponding textual annotations. This limitation has hindered the exploration of multimodal approaches that leverage both imaging and textual data. To bridge this critical gap, we introduce the TextBraTS dataset, the first publicly available volume-level multimodal dataset that contains paired MRI volumes and rich textual annotations, derived from the widely adopted BraTS2020 benchmark. Building upon this dataset, we propose a baseline framework with a sequential cross-attention method for text-guided volumetric medical image segmentation. Through extensive experiments with various text-image fusion strategies and templated text formulations, our approach demonstrates significant improvements in brain tumor segmentation accuracy, offering valuable insights into effective multimodal integration techniques. Our dataset, implementation code, and pre-trained models are publicly available at https://github.com/Jupitern52/TextBraTS.
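The "sequential cross-attention" fusion named in the abstract can be pictured as image tokens querying text embeddings; the single-head, scaled dot-product form below is a generic sketch (the shapes, head count, and token sizes are assumptions, not details from the paper):

```python
import numpy as np

def cross_attention(img_tokens: np.ndarray, text_tokens: np.ndarray) -> np.ndarray:
    """Image tokens (N, d) attend over text tokens (M, d); returns (N, d)."""
    d = img_tokens.shape[1]
    scores = img_tokens @ text_tokens.T / np.sqrt(d)             # (N, M)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)                # row-wise softmax
    return weights @ text_tokens                                 # text-conditioned features

rng = np.random.default_rng(0)
img = rng.normal(size=(6, 16))   # e.g. pooled voxel features
txt = rng.normal(size=(4, 16))   # e.g. embedded report sentences
fused = cross_attention(img, txt)
print(fused.shape)  # (6, 16)
```

In a real segmentation network the fused features would feed back into the decoder; here the point is only the direction of the query (image attends to text).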

Segmentation of clinical imagery for improved epidural stimulation to address spinal cord injury

Matelsky, J. K., Sharma, P., Johnson, E. C., Wang, S., Boakye, M., Angeli, C., Forrest, G. F., Harkema, S. J., Tenore, F.

medRxiv preprint | Jun 20 2025
Spinal cord injury (SCI) can severely impair motor and autonomic function, with long-term consequences for quality of life. Epidural stimulation has emerged as a promising intervention, offering partial recovery by activating neural circuits below the injury. To make this therapy effective in practice, precise placement of stimulation electrodes is essential -- and that requires accurate segmentation of spinal cord structures in MRI data. We present a protocol for manual segmentation tailored to SCI anatomy and evaluate a deep learning approach using a U-Net architecture to automate this segmentation process. Our approach yields accurate, efficient segmentations that identify potential electrode placement sites with high fidelity. Preliminary results suggest that this framework can accelerate SCI MRI analysis and improve planning for epidural stimulation, helping bridge the gap between advanced neurotechnologies and real-world clinical application with faster surgeries and more accurate electrode placement.

Deep learning detects retropharyngeal edema on MRI in patients with acute neck infections.

Rainio O, Huhtanen H, Vierula JP, Nurminen J, Heikkinen J, Nyman M, Klén R, Hirvonen J

PubMed | Jun 19 2025
In acute neck infections, magnetic resonance imaging (MRI) shows retropharyngeal edema (RPE), a prognostic imaging biomarker for a severe course of illness. This study aimed to develop a deep learning-based algorithm for the automated detection of RPE. We developed a deep neural network consisting of two parts, using axial T2-weighted water-only Dixon MRI images from 479 patients with acute neck infections annotated by radiologists at both the slice and patient levels. First, a convolutional neural network (CNN) classified individual slices; second, an algorithm classified patients based on a stack of slices. Model performance was compared with the radiologists' assessment as the reference standard. Accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were calculated. The proposed CNN was compared with InceptionV3, and the patient-level classification algorithm was compared with traditional machine learning models. Of the 479 patients, 244 (51%) were positive and 235 (49%) negative for RPE. Our model achieved accuracy, sensitivity, specificity, and AUROC of 94.6%, 83.3%, 96.2%, and 94.1% at the slice level, and 87.4%, 86.5%, 88.2%, and 94.8% at the patient level, respectively. The proposed CNN was faster than InceptionV3 but equally accurate, and our patient classification algorithm outperformed traditional machine learning models. A deep learning model based on weakly annotated data and computationally manageable training achieved high accuracy for automatically detecting RPE on MRI in patients with acute neck infections. Our automated method was efficiently trained and might be easily deployed in practice, potentially improving early detection of patients at high risk for a severe course of acute neck infections. Deep learning automatically detected retropharyngeal edema on MRI in acute neck infections; AUROCs were 94.1% at the slice level and 94.8% at the patient level; and the proposed CNN was lightweight, requiring only weakly annotated data.
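The two-stage design above (a slice-level classifier followed by a patient-level decision over the slice stack) leaves the aggregation rule unspecified; a top-k mean over slice probabilities is one plausible stand-in, sketched here with made-up numbers:

```python
def patient_probability(slice_probs: list[float], k: int = 3) -> float:
    """Aggregate a stack of per-slice RPE probabilities: mean of the k highest."""
    top = sorted(slice_probs, reverse=True)[:k]
    return sum(top) / len(top)

def classify_patient(slice_probs: list[float], threshold: float = 0.5,
                     k: int = 3) -> bool:
    """Patient is RPE-positive when the aggregated probability crosses the threshold."""
    return patient_probability(slice_probs, k) >= threshold

# A stack where three slices look edematous drives the patient-level call positive.
stack = [0.05, 0.10, 0.92, 0.88, 0.20, 0.75]
print(round(patient_probability(stack), 2))  # (0.92 + 0.88 + 0.75) / 3 = 0.85
print(classify_patient(stack))               # True
```

The actual study trained its patient-level algorithm rather than using a fixed rule, so this sketch only conveys the shape of the problem: many weak slice calls condensed into one patient call.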

Multitask Deep Learning for Automated Segmentation and Prognostic Stratification of Endometrial Cancer via Biparametric MRI.

Yan R, Zhang X, Cao Q, Xu J, Chen Y, Qin S, Zhang S, Zhao W, Xing X, Yang W, Lang N

PubMed | Jun 19 2025
Endometrial cancer (EC) is a common gynecologic malignancy; accurate assessment of key prognostic factors is important for treatment planning. The aim of this retrospective study was to develop a deep learning (DL) framework based on biparametric MRI for automated segmentation and multitask classification of key EC prognostic factors, including grade, stage, histological subtype, lymphovascular space invasion (LVSI), and deep myometrial invasion (DMI). A total of 325 patients with histologically confirmed EC were included: 211 training, 54 validation, and 60 test cases. T2-weighted imaging (T2WI, FSE/TSE) and diffusion-weighted imaging (DWI, SS-EPI) sequences were acquired at 1.5 and 3 T. The DL model comprised tumor segmentation and multitask classification. Manual delineation on T2WI and DWI served as the reference standard for segmentation. Separate models were trained using T2WI alone, DWI alone, and combined T2WI + DWI to classify dichotomized key prognostic factors, and performance was assessed in the validation and test cohorts. For DMI, the combined model's performance was compared with visual assessment by four radiologists (with 1, 4, 7, and 20 years of experience), each of whom independently reviewed all cases. Segmentation was evaluated using the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), 95th-percentile Hausdorff distance (HD95), and average surface distance (ASD). Classification performance was assessed using the area under the receiver operating characteristic curve (AUC); model AUCs were compared using DeLong's test, with p < 0.05 considered significant. In the test cohort, DSCs were 0.80 (T2WI) and 0.78 (DWI), and JSCs were 0.69 for both. HD95 and ASD were 7.02/1.71 mm (T2WI) versus 10.58/2.13 mm (DWI). The classification framework achieved AUCs of 0.78-0.94 (validation) and 0.74-0.94 (test). For DMI, the combined model performed comparably to the radiologists (p = 0.07-0.84).
The unified DL framework demonstrates strong EC segmentation and classification performance, with high accuracy across multiple tasks. Evidence Level: 3. Technical Efficacy: Stage 3.

Non-Invasive Diagnosis of Chronic Myocardial Infarction via Composite In-Silico-Human Data Learning.

Mehdi RR, Kadivar N, Mukherjee T, Mendiola EA, Bersali A, Shah DJ, Karniadakis G, Avazmohammadi R

PubMed | Jun 19 2025
Myocardial infarction (MI) continues to be a leading cause of death worldwide. The precise quantification of infarcted tissue is crucial to diagnosis, therapeutic management, and post-MI care. Late gadolinium enhancement cardiac magnetic resonance (LGE-CMR) is regarded as the gold standard for precise infarct tissue localization in MI patients. A fundamental limitation of LGE-CMR is the intravenous administration of gadolinium-based contrast agents, which carry potential toxicity, particularly for individuals with underlying chronic kidney disease. Herein, a completely non-invasive methodology is developed to identify the location and extent of an infarct region in the left ventricle via a machine learning (ML) model using only cardiac strains as inputs. The approach demonstrates the remarkable performance of a multi-fidelity ML model that combines rodent-based in-silico-generated training data (low-fidelity) with very limited patient-specific human data (high-fidelity) to predict the LGE ground truth. The results offer a new paradigm for developing feasible prognostic tools by augmenting synthetic simulation-based data with very small amounts of in vivo human data. More broadly, the proposed approach can significantly assist in addressing biomedical challenges in healthcare where human data are limited.
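The multi-fidelity composition described above (abundant synthetic rodent data, scarce human data) can be approximated in its simplest form by sample-weighted training that up-weights the scarce high-fidelity examples; the toy regression below illustrates the idea only, since the paper's model, features, and weighting scheme are all different:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
true_coef = np.array([1.0, -0.5, 0.3, 0.0])

# Low-fidelity: plentiful simulated samples with a systematic bias (+0.4 offset).
X_lo = rng.normal(size=(500, 4))
y_lo = X_lo @ true_coef + 0.4 + rng.normal(0.0, 0.3, size=500)

# High-fidelity: a handful of unbiased "human" samples.
X_hi = rng.normal(size=(10, 4))
y_hi = X_hi @ true_coef + rng.normal(0.0, 0.05, size=10)

X = np.vstack([X_lo, X_hi])
y = np.concatenate([y_lo, y_hi])
weights = np.concatenate([np.full(500, 1.0), np.full(10, 50.0)])  # favor human data

model = Ridge(alpha=1.0).fit(X, y, sample_weight=weights)
# The fitted intercept sits below the 0.4 low-fidelity bias, pulled toward the
# unbiased high-fidelity data by the sample weights.
print(model.intercept_)
```

The design choice mirrored here is that a few trusted samples can correct a systematic bias in a large surrogate dataset without discarding the surrogate's statistical power.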

Machine learning-based MRI radiomics predict IL18 expression and overall survival of low-grade glioma patients.

Zhang Z, Xiao Y, Liu J, Xiao F, Zeng J, Zhu H, Tu W, Guo H

PubMed | Jun 19 2025
Interleukin-18 (IL18) has broad immune regulatory functions. Genomic data and contrast-enhanced MRI data for low-grade glioma (LGG) patients were downloaded from The Cancer Genome Atlas and The Cancer Imaging Archive (TCIA), and the constructed model was externally validated using hospital contrast-enhanced MRI and clinicopathological features. Radiomic features were extracted using PyRadiomics, feature selection was conducted using the Maximum Relevance Minimum Redundancy and Recursive Feature Elimination methods, and a model was built using the Gradient Boosting Machine algorithm to predict the expression status of IL18. The radiomics model achieved areas under the receiver operating characteristic curve of 0.861, 0.788, and 0.762 in the TCIA training dataset (n = 98), TCIA validation dataset (n = 41), and external validation dataset (n = 50), respectively. Calibration curves and decision curve analysis demonstrated the model's calibration and high clinical utility. The radiomics model based on contrast-enhanced MRI can effectively predict IL18 expression status and the prognosis of LGG patients.
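The pipeline sketched in the abstract (radiomic features, relevance filtering, recursive feature elimination, gradient boosting) can be approximated with scikit-learn; PyRadiomics extraction is omitted (synthetic features stand in for radiomic ones), and a univariate F-test substitutes for mRMR, which scikit-learn does not ship:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a radiomic feature matrix (patients x features).
X, y = make_classification(n_samples=150, n_features=60, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: univariate relevance screening (rough stand-in for the mRMR step).
screen = SelectKBest(f_classif, k=20).fit(X_tr, y_tr)
X_tr_s, X_te_s = screen.transform(X_tr), screen.transform(X_te)

# Stage 2: recursive feature elimination wrapped around a GBM.
rfe = RFE(GradientBoostingClassifier(random_state=0),
          n_features_to_select=8).fit(X_tr_s, y_tr)

# Stage 3: final GBM on the selected features, scored on the held-out split.
gbm = GradientBoostingClassifier(random_state=0).fit(rfe.transform(X_tr_s), y_tr)
auc = roc_auc_score(y_te, gbm.predict_proba(rfe.transform(X_te_s))[:, 1])
print(f"held-out AUC: {auc:.3f}")
```

In the study the analogous AUCs were estimated on the TCIA validation and external hospital cohorts rather than a random split.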

BrainTract: segmentation of white matter fiber tractography and analysis of structural connectivity using hybrid convolutional neural network.

Kumar PR, Shilpa B, Jha RK

PubMed | Jun 19 2025
Tractography uses diffusion magnetic resonance imaging (dMRI) to noninvasively reconstruct brain white matter (WM) tracts, and convolutional neural networks (CNNs) such as U-Net have significantly advanced accuracy in medical image segmentation. This work proposes a metaheuristic optimization algorithm-based CNN architecture that combines the Inception-ResNet-V2 module and a densely connecting convolutional (Dense-Inception) module with the Spatial Attention U-Net (SAU-Net) architecture for segmenting WM fiber tracts and analyzing the brain's structural connectivity. The proposed network model (DISAU-Net) consists of the following parts: first, the Inception-ResNet-V2 block replaces the standard convolutional layers and expands the network's width; second, the Dense-Inception block extracts features and deepens the network without additional parameters; third, a down-sampling block speeds up training by decreasing the size of feature maps, and an up-sampling block increases the maps' resolution. In addition, the classifier parameters are selected with the Gray Wolf Optimization (GWO) technique to boost the performance of the CNN architecture. We validated our method by segmenting WM tracts on dMRI scans of 280 subjects from the Human Connectome Project (HCP) database. The proposed method is far more efficient than current methods, offering high tract segmentation consistency with an accuracy of 97.10%, a Dice score of 96.88%, a recall of 95.74%, and an F1-score of 94.79% for fiber tracts. These results show that the proposed method is a promising approach for segmenting WM fiber tracts and analyzing the brain's structural connectivity.

Qualitative and quantitative analysis of functional cardiac MRI using a novel compressed SENSE sequence with artificial intelligence image reconstruction.

Konstantin K, Christian LM, Lenhard P, Thomas S, Robert T, Luisa LI, David M, Matej G, Kristina S, Philip NC

PubMed | Jun 19 2025
To evaluate the feasibility of combining Compressed SENSE (CS) with a newly developed deep learning-based algorithm (CS-AI) using a convolutional neural network to accelerate balanced steady-state free precession (bSSFP) sequences for cardiac magnetic resonance imaging (MRI), 30 healthy volunteers were examined prospectively with a 3 T MRI scanner. We acquired CINE bSSFP sequences for short-axis (SA, multi-breath-hold) and four-chamber (4CH) views of the heart. For each sequence, four different CS accelerations and CS-AI reconstructions with three different denoising levels (CS-AI medium, CS-AI strong, and CS-AI complete) were used. Cardiac left ventricular (LV) function (ejection fraction, end-diastolic volume, end-systolic volume, and LV mass) was analyzed using the SA sequences at every CS factor and each AI level. Two readers, blinded to the acceleration and denoising levels, evaluated all sequences for image quality and artifacts using a 5-point Likert scale. Friedman and Dunn's multiple comparison tests were used for the qualitative evaluation; ANOVA and Tukey-Kramer tests for the quantitative metrics. Scan time decreased by up to 57% for the SA sequences and up to 56% for the 4CH sequences compared with the clinically established SA-CS3 and 4CH-CS2.5 sequences (SA-CS3: 112 s vs. SA-CS6: 48 s; 4CH-CS2.5: 9 s vs. 4CH-CS5: 4 s; p < 0.001). LV functional analysis was not compromised by using accelerated MRI sequences combined with CS-AI reconstructions (all p > 0.05).
The image quality loss and artifact increase accompanying higher acceleration levels were entirely compensated by CS-AI post-processing, with the best image-quality results for the combination of the highest CS factor with strong AI (SA-CINE: Coef.: 1.31, 95% CI: 1.05-1.58; 4CH-CINE: Coef.: 1.18, 95% CI: 1.05-1.58; both p < 0.001), and with complete AI for the artifact score (SA-CINE: Coef.: 1.33, 95% CI: 1.06-1.60; 4CH-CINE: Coef.: 1.31, 95% CI: 0.86-1.77; both p < 0.001). Combining CS sequences with AI-based image reconstruction for denoising significantly decreases scan time in cardiac imaging while upholding the accuracy of LV functional analysis and delivering stable image quality and artifact reduction. This integration presents a promising advancement in cardiac MRI, offering improved efficiency without compromising diagnostic quality.

Comparison of publicly available artificial intelligence models for pancreatic segmentation on T1-weighted Dixon images.

Sonoda Y, Fujisawa S, Kurokawa M, Gonoi W, Hanaoka S, Yoshikawa T, Abe O

PubMed | Jun 18 2025
This study aimed to compare three publicly available deep learning models (TotalSegmentator, TotalVibeSegmentator, and PanSegNet) for automated pancreatic segmentation on magnetic resonance images and to evaluate their performance against human annotations in terms of segmentation accuracy, volumetric measurement, and intrapancreatic fat fraction (IPFF) assessment. Twenty upper abdominal T1-weighted magnetic resonance series acquired using the two-point Dixon method were randomly selected. Three radiologists manually segmented the pancreas, and a ground-truth mask was constructed through a majority vote per voxel. Pancreatic segmentation was also performed using the three artificial intelligence models. Performance was evaluated using the Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance, average symmetric surface distance, positive predictive value, sensitivity, Bland-Altman plots, and concordance correlation coefficient (CCC) for pancreatic volume and IPFF. PanSegNet achieved the highest DSC (mean ± standard deviation, 0.883 ± 0.095) and showed no statistically significant difference from the human interobserver DSC (0.896 ± 0.068; p = 0.24). In contrast, TotalVibeSegmentator (0.731 ± 0.105) and TotalSegmentator (0.707 ± 0.142) had significantly lower DSC values compared with the human interobserver average (p < 0.001). For pancreatic volume and IPFF, PanSegNet demonstrated the best agreement with the ground truth (CCC values of 0.958 and 0.993, respectively), followed by TotalSegmentator (0.834 and 0.980) and TotalVibeSegmentator (0.720 and 0.672). PanSegNet demonstrated the highest segmentation accuracy and the best agreement with human measurements for both pancreatic volume and IPFF on T1-weighted Dixon images. This model appears to be the most suitable for large-scale studies requiring automated pancreatic segmentation and intrapancreatic fat evaluation.
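Two of the agreement metrics used throughout these studies, the Dice similarity coefficient for binary masks and Lin's concordance correlation coefficient (CCC) for paired volume or fat-fraction measurements, are short enough to state directly; this is a generic sketch, not the evaluation code of any paper above:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient (population variances)."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

pred = np.array([[1, 1, 0], [0, 1, 0]])
ref = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(pred, ref), 3))  # 2 * 2 overlapping voxels / (3 + 3) = 0.667

vols_model = np.array([58.2, 61.0, 47.5, 70.3])  # illustrative volumes, mL
vols_truth = np.array([57.9, 62.1, 48.0, 69.5])
print(round(ccc(vols_model, vols_truth), 3))  # near 1: close agreement
```

Unlike Pearson correlation, the CCC penalizes both scatter and systematic offset, which is why it is the natural choice for comparing automated volume and fat-fraction measurements against ground truth.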