Page 72 of 1241232 results

Ultrafast J-resolved magnetic resonance spectroscopic imaging for high-resolution metabolic brain imaging.

Zhao Y, Li Y, Jin W, Guo R, Ma C, Tang W, Li Y, El Fakhri G, Liang ZP

PubMed · Jun 20 2025
Magnetic resonance spectroscopic imaging has potential for non-invasive metabolic imaging of the human brain. Here we report a method that overcomes several long-standing technical barriers associated with clinical magnetic resonance spectroscopic imaging, including long data acquisition times, limited spatial coverage and poor spatial resolution. Our method achieves ultrafast data acquisition using an efficient approach to encode spatial, spectral and J-coupling information of multiple molecules. Physics-informed machine learning is synergistically integrated in data processing to enable reconstruction of high-quality molecular maps. We validated the proposed method through phantom experiments. We obtained high-resolution molecular maps from healthy participants, revealing metabolic heterogeneities in different brain regions. We also obtained high-resolution whole-brain molecular maps in regular clinical settings, revealing metabolic alterations in tumours and multiple sclerosis. This method has the potential to transform clinical metabolic imaging and provide a long-desired capability for non-invasive label-free metabolic imaging of brain function and diseases for both research and clinical applications.
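The spectral and J-coupling encoding at the heart of this method can be illustrated with a toy signal model (a sketch for intuition only, not the authors' acquisition scheme; all parameters below are invented): each metabolite contributes chemical-shift precession, a cos(πJt) doublet modulation from weak J-coupling, and T2 decay.

```python
import cmath
import math

def j_modulated_fid(metabolites, t):
    """Toy free-induction-decay sample at time t (seconds).

    Each metabolite is (amplitude, chemical_shift_hz, j_coupling_hz, t2_s).
    A doublet from weak J-coupling is modeled as a cos(pi*J*t) modulation
    of the precessing, T2-decaying complex signal.
    """
    s = 0j
    for amp, f_hz, j_hz, t2 in metabolites:
        s += (amp
              * cmath.exp(2j * math.pi * f_hz * t)  # chemical-shift precession
              * math.cos(math.pi * j_hz * t)        # J-coupling modulation
              * math.exp(-t / t2))                  # T2 relaxation
    return s

# Invented parameters for two resonances (amplitude, shift, J, T2):
mets = [(1.0, 120.0, 7.0, 0.25), (0.5, 300.0, 0.0, 0.15)]
print(abs(j_modulated_fid(mets, 0.0)))  # at t=0 the magnitudes simply add: 1.5
```

Sampling this model on a time grid and Fourier-transforming it is the standard way to see how J-coupling splits a resonance into a multiplet.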

Image-Based Search in Radiology: Identification of Brain Tumor Subtypes within Databases Using MRI-Based Radiomic Features.

von Reppert M, Chadha S, Willms K, Avesta A, Maleki N, Zeevi T, Lost J, Tillmanns N, Jekel L, Merkaj S, Lin M, Hoffmann KT, Aneja S, Aboian MS

PubMed · Jun 20 2025
Existing neuroradiology reference materials do not cover the full range of primary brain tumor presentations, and text-based medical image search engines are limited by the lack of consistent structure in radiology reports. To address this, an image-based search approach is introduced here, leveraging an institutional database to find reference MRIs visually similar to presented query cases. Two hundred ninety-five patients (mean age ± standard deviation, 51 ± 20 years) with primary brain tumors who underwent surgical and/or radiotherapeutic treatment between 2000 and 2021 were included in this retrospective study. Semiautomated convolutional neural network-based tumor segmentation was performed, and radiomic features were extracted. The data set was split into reference and query subsets, and dimensionality reduction was applied to cluster reference cases. Radiomic features extracted from each query case were projected onto the clustered reference cases, and nearest neighbors were retrieved. Retrieval performance was evaluated by using mean average precision at k, and the best-performing dimensionality reduction technique was identified. Expert readers independently rated visual similarity by using a 5-point Likert scale. t-Distributed stochastic neighbor embedding with 6 components was the highest-performing dimensionality reduction technique, with mean average precision at 5 ranging from 78% to 100% by tumor type. The top 5 retrieved reference cases showed high visual similarity Likert scores with their corresponding query cases (76% rated 'similar' or 'very similar'). We introduce an image-based search method for exploring historical MR images of primary brain tumors and retrieving reference cases that closely resemble queried ones. Tumor-type comparison and visual similarity Likert scoring by expert neuroradiologists validate the effectiveness of this method.
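Retrieval quality here is scored with mean average precision at k. A minimal pure-Python sketch, using one common AP@k convention that normalizes by the number of hits (the paper may use a different normalization):

```python
def average_precision_at_k(retrieved_labels, query_label, k):
    """AP@k for one query: retrieved_labels is the ranked list of
    tumor-type labels of the k nearest reference cases."""
    hits, score = 0, 0.0
    for i, label in enumerate(retrieved_labels[:k], start=1):
        if label == query_label:
            hits += 1
            score += hits / i  # precision at each relevant rank
    return score / hits if hits else 0.0

def mean_ap_at_k(results, k):
    """results: list of (query_label, ranked_retrieved_labels) pairs."""
    return sum(average_precision_at_k(r, q, k) for q, r in results) / len(results)

# Two toy queries: a perfect retrieval and a partly wrong one.
demo = [("glioma", ["glioma"] * 5),
        ("meningioma", ["meningioma", "glioma", "meningioma", "glioma", "glioma"])]
print(mean_ap_at_k(demo, 5))
```

The second query scores (1/1 + 2/3) / 2 ≈ 0.83, so the mean over both queries is 11/12 ≈ 0.92, in the same range as the 78%-100% per-tumor-type values reported above.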

Robust Radiomic Signatures of Intervertebral Disc Degeneration from MRI.

McSweeney T, Tiulpin A, Kowlagi N, Määttä J, Karppinen J, Saarakkala S

PubMed · Jun 20 2025
A retrospective analysis. The aim of this study was to identify a robust radiomic signature from deep learning segmentations for intervertebral disc (IVD) degeneration classification. Low back pain (LBP) is the most common musculoskeletal symptom worldwide, and IVD degeneration is an important contributing factor. To improve the quantitative phenotyping of IVD degeneration from T2-weighted magnetic resonance imaging (MRI) and better understand its relationship with LBP, multiple shape and intensity features have been investigated. IVD radiomics have been less studied but could reveal sub-visual imaging characteristics of IVD degeneration. We used data from Northern Finland Birth Cohort 1966 members who underwent lumbar spine T2-weighted MRI scans at age 45-47 (n=1397). We used a deep learning model to segment the lumbar spine IVDs, extracted 737 radiomic features, and calculated IVD height index and peak signal intensity difference. Intraclass correlation coefficients across image and mask perturbations were calculated to identify robust features. Sparse partial least squares discriminant analysis was used to train a Pfirrmann grade classification model. The radiomics model had a balanced accuracy of 76.7% (73.1-80.3%) and Cohen's kappa of 0.70 (0.67-0.74), compared to 66.0% (62.0-69.9%) and 0.55 (0.51-0.59) for an IVD height index and peak signal intensity model. 2D sphericity and interquartile range emerged as radiomic features that were robust and highly correlated with Pfirrmann grade (Spearman's correlation coefficients of -0.72 and -0.77, respectively). Based on our findings, these radiomic signatures could serve as alternatives to the conventional indices, representing a significant advance in the automated quantitative phenotyping of IVD degeneration from standard-of-care MRI.
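The reported feature-grade relationships use Spearman's rank correlation, i.e., the Pearson correlation of average ranks. A self-contained sketch with invented sphericity values (the real analysis runs over 737 extracted features):

```python
def _ranks(values):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# A feature that decreases monotonically with grade gives rho = -1.
grades = [1, 2, 3, 4, 5]
sphericity = [0.9, 0.8, 0.6, 0.5, 0.3]
print(spearman_rho(sphericity, grades))
```

Because only ranks matter, the negative correlations of roughly -0.7 reported above say that disc sphericity and intensity interquartile range fall fairly consistently as Pfirrmann grade increases.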

TextBraTS: Text-Guided Volumetric Brain Tumor Segmentation with Innovative Dataset Development and Fusion Module Exploration

Xiaoyu Shi, Rahul Kumar Jain, Yinhao Li, Ruibo Hou, Jingliang Cheng, Jie Bai, Guohua Zhao, Lanfen Lin, Rui Xu, Yen-wei Chen

arXiv preprint · Jun 20 2025
Deep learning has demonstrated remarkable success in medical image segmentation and computer-aided diagnosis. In particular, numerous advanced methods have achieved state-of-the-art performance in brain tumor segmentation from MRI scans. While recent studies in other medical imaging domains have revealed that integrating textual reports with visual data can enhance segmentation accuracy, the field of brain tumor analysis lacks a comprehensive dataset that combines radiological images with corresponding textual annotations. This limitation has hindered the exploration of multimodal approaches that leverage both imaging and textual data. To bridge this critical gap, we introduce the TextBraTS dataset, the first publicly available volume-level multimodal dataset that contains paired MRI volumes and rich textual annotations, derived from the widely adopted BraTS2020 benchmark. Building upon this dataset, we propose a baseline framework with a sequential cross-attention method for text-guided volumetric medical image segmentation. Through extensive experiments with various text-image fusion strategies and templated text formulations, our approach demonstrates significant improvements in brain tumor segmentation accuracy, offering valuable insights into effective multimodal integration techniques. Our dataset, implementation code, and pre-trained models are publicly available at https://github.com/Jupitern52/TextBraTS.
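The cross-attention idea, in which text-derived queries attend over image-derived keys and values, can be sketched in plain Python. This is a conceptual toy with hand-written 2-d vectors, not the TextBraTS implementation, which operates on learned 3D feature maps:

```python
import math

def softmax(row):
    m = max(row)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each (text-derived) query
    attends over (image-derived) keys and returns a weighted mix of values."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# One 2-d query over two identical keys: weights are uniform,
# so the output is the mean of the two value vectors.
q = [[1.0, 0.0]]
k = [[0.0, 0.0], [0.0, 0.0]]
v = [[2.0, 0.0], [4.0, 0.0]]
print(cross_attention(q, k, v))  # [[3.0, 0.0]]
```

In a text-guided segmenter, the mixed value vectors would be fused back into the image feature stream before the decoder, which is where fusion-strategy choices like those explored in the paper come in.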

Three-dimensional U-Net with transfer learning improves automated whole brain delineation from MRI brain scans of rats, mice, and monkeys.

Porter VA, Hobson BA, D'Almeida AJ, Bales KL, Lein PJ, Chaudhari AJ

PubMed · Jun 20 2025
Automated whole-brain delineation (WBD) techniques often struggle to generalize across pre-clinical studies due to variations in animal models, magnetic resonance imaging (MRI) scanners, and tissue contrasts. We developed a 3D U-Net neural network for WBD pre-trained on organophosphate intoxication (OPI) rat brain MRI scans. We used transfer learning (TL) to adapt this OPI-pretrained network to other animal models: rat model of Alzheimer's disease (AD), mouse model of tetramethylenedisulfotetramine (TETS) intoxication, and titi monkey model of social bonding. We assessed an OPI-pretrained 3D U-Net across animal models under three conditions: (1) direct application to each dataset; (2) utilizing TL; and (3) training disease-specific U-Net models. For each condition, training dataset size (TDS) was optimized, and output WBDs were compared to manual segmentations for accuracy. The OPI-pretrained 3D U-Net (TDS = 100) achieved the best accuracy (median [min-max]) for the test OPI dataset, with a Dice coefficient (DC) of 0.987 [0.977-0.992] and a Hausdorff distance (HD) of 0.86 [0.55-1.27] mm. TL improved generalization across all models [AD (TDS = 40): DC = 0.987 [0.977-0.992] and HD = 0.72 [0.54-1.00] mm; TETS (TDS = 10): DC = 0.992 [0.984-0.993] and HD = 0.40 [0.31-0.50] mm; Monkey (TDS = 8): DC = 0.977 [0.968-0.979] and HD = 3.03 [2.19-3.91] mm], showing performance comparable to disease-specific networks. The OPI-pretrained 3D U-Net with TL achieved accuracy comparable to disease-specific networks with reduced training data (TDS ≤ 40 scans) across all models. Future work will focus on developing a multi-region delineation pipeline for pre-clinical MRI brain data, utilizing the proposed WBD as an initial step.
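The two accuracy metrics used throughout, the Dice coefficient and the Hausdorff distance, can be computed for small voxel coordinate sets as follows (a minimal sketch; production pipelines use optimized, spacing-aware implementations):

```python
def dice(a, b):
    """Dice coefficient between two sets of voxel coordinates."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    def directed(src, dst):
        # worst-case distance from any src point to its nearest dst point
        return max(min(dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))

# Tiny 2D example: two of three voxels agree, one is shifted by 1 unit.
pred = [(0, 0), (0, 1), (1, 0)]
ref = [(0, 0), (0, 1), (2, 0)]
print(dice(pred, ref), hausdorff(pred, ref))
```

Dice rewards overlapping volume (here 2·2/6 ≈ 0.67), while the Hausdorff distance reports the worst boundary disagreement (here 1 unit), which is why the paper quotes both.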

Non-Invasive Diagnosis of Chronic Myocardial Infarction via Composite In-Silico-Human Data Learning.

Mehdi RR, Kadivar N, Mukherjee T, Mendiola EA, Bersali A, Shah DJ, Karniadakis G, Avazmohammadi R

PubMed · Jun 19 2025
Myocardial infarction (MI) continues to be a leading cause of death worldwide. The precise quantification of infarcted tissue is crucial to diagnosis, therapeutic management, and post-MI care. Late gadolinium enhancement-cardiac magnetic resonance (LGE-CMR) is regarded as the gold standard for precise infarct tissue localization in MI patients. A fundamental limitation of LGE-CMR is the invasive intravenous introduction of gadolinium-based contrast agents that present potential high-risk toxicity, particularly for individuals with underlying chronic kidney diseases. Herein, a completely non-invasive methodology is developed to identify the location and extent of an infarct region in the left ventricle via a machine learning (ML) model using only cardiac strains as inputs. In this transformative approach, the remarkable performance of a multi-fidelity ML model is demonstrated, which combines rodent-based in-silico-generated training data (low-fidelity) with very limited patient-specific human data (high-fidelity) in predicting LGE ground truth. The results offer a new paradigm for developing feasible prognostic tools by augmenting synthetic simulation-based data with very small amounts of in vivo human data. More broadly, the proposed approach can significantly assist with addressing biomedical challenges in healthcare where human data are limited.
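The multi-fidelity idea (fit on plentiful synthetic data, then correct with a few in vivo points) can be caricatured in one dimension. This is a toy illustration with invented numbers, not the authors' ML model, which maps cardiac strains to LGE ground truth:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Plentiful low-fidelity (simulated) data with a systematic bias ...
x_lo = [0.0, 1.0, 2.0, 3.0, 4.0]
y_lo = [0.5, 2.5, 4.5, 6.5, 8.5]  # lies on y = 2x + 0.5
a_lo, b_lo = fit_line(x_lo, y_lo)

# ... and very few high-fidelity (in vivo) points from the true y = 2x + 1.
x_hi = [1.0, 3.0]
y_hi = [3.0, 7.0]
residuals = [y - (a_lo * x + b_lo) for x, y in zip(x_hi, y_hi)]
# Here the bias is a constant offset, so learn it as the mean residual.
delta = sum(residuals) / len(residuals)

def predict(x):
    """Low-fidelity trend plus high-fidelity correction."""
    return a_lo * x + b_lo + delta

print(predict(2.0))  # 5.0, matching the true relation 2*2 + 1
```

The point of the construction is that the expensive high-fidelity data only needs to pin down the (simple) discrepancy, not the whole input-output relation, which mirrors the paper's use of very limited patient data on top of in-silico training.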

Machine learning-based MRI radiomics predict IL18 expression and overall survival of low-grade glioma patients.

Zhang Z, Xiao Y, Liu J, Xiao F, Zeng J, Zhu H, Tu W, Guo H

PubMed · Jun 19 2025
Interleukin-18 (IL18) has broad immune regulatory functions. Genomic data and contrast-enhanced magnetic resonance imaging data for low-grade glioma (LGG) patients were downloaded from The Cancer Genome Atlas and The Cancer Imaging Archive (TCIA), and the constructed model was externally validated using hospital contrast-enhanced MRI images and clinicopathological features. Radiomic feature extraction was performed using "PyRadiomics", feature selection was conducted using Maximum Relevance Minimum Redundancy and Recursive Feature Elimination methods, and a model was built using the Gradient Boosting Machine algorithm to predict the expression status of IL18. The constructed radiomics model achieved areas under the receiver operating characteristic curve of 0.861, 0.788, and 0.762 in the TCIA training dataset (n = 98), TCIA validation dataset (n = 41), and external validation dataset (n = 50), respectively. Calibration curves and decision curve analysis demonstrated the calibration and high clinical utility of the model. The radiomics model based on enhanced MRI can effectively predict the expression status of IL18 and the prognosis of LGG.
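The AUC values reported here have a direct rank interpretation: the probability that a randomly chosen IL18-high case scores above a randomly chosen IL18-low case (the Mann-Whitney statistic). A sketch with invented model scores:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability a positive outranks a negative,
    counting ties as half a win (Mann-Whitney U / (n_pos * n_neg))."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos
               for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

high_il18 = [0.9, 0.8, 0.7, 0.4]  # invented scores for IL18-high cases
low_il18 = [0.6, 0.3, 0.2, 0.1]   # invented scores for IL18-low cases
print(roc_auc(high_il18, low_il18))  # 0.9375
```

One misranked pair out of sixteen gives 15/16 = 0.9375; an AUC of 0.861, as in the training set above, means roughly 86% of high/low pairs are ranked correctly.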

Multitask Deep Learning for Automated Segmentation and Prognostic Stratification of Endometrial Cancer via Biparametric MRI.

Yan R, Zhang X, Cao Q, Xu J, Chen Y, Qin S, Zhang S, Zhao W, Xing X, Yang W, Lang N

PubMed · Jun 19 2025
Endometrial cancer (EC) is a common gynecologic malignancy; accurate assessment of key prognostic factors is important for treatment planning. To develop a deep learning (DL) framework based on biparametric MRI for automated segmentation and multitask classification of EC key prognostic factors, including grade, stage, histological subtype, lymphovascular space invasion (LVSI), and deep myometrial invasion (DMI). Retrospective. A total of 325 patients with histologically confirmed EC were included: 211 training, 54 validation, and 60 test cases. T2-weighted imaging (T2WI, FSE/TSE) and diffusion-weighted imaging (DWI, SS-EPI) sequences at 1.5 and 3 T. The DL model comprised tumor segmentation and multitask classification. Manual delineation on T2WI and DWI acted as the reference standard for segmentation. Separate models were trained using T2WI alone, DWI alone, and combined T2WI + DWI to classify dichotomized key prognostic factors. Performance was assessed in validation and test cohorts. For DMI, the combined model's performance was compared with visual assessment by four radiologists (with 1, 4, 7, and 20 years' experience), each of whom independently reviewed all cases. Segmentation was evaluated using the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), 95th-percentile Hausdorff distance (HD95), and average surface distance (ASD). Classification performance was assessed using area under the receiver operating characteristic curve (AUC). Model AUCs were compared using DeLong's test. p < 0.05 was considered significant. In the test cohort, DSCs were 0.80 (T2WI) and 0.78 (DWI) and JSCs were 0.69 for both. HD95 and ASD were 7.02/1.71 mm (T2WI) versus 10.58/2.13 mm (DWI). The classification framework achieved AUCs of 0.78-0.94 (validation) and 0.74-0.94 (test). For DMI, the combined model performed comparably to radiologists (p = 0.07-0.84).
The unified DL framework demonstrates strong EC segmentation and classification performance, with high accuracy across multiple tasks.
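HD95 and ASD, the boundary-distance metrics reported above, differ from the plain Hausdorff distance by being robust to isolated outlier points. A minimal sketch on 2D point sets (percentile conventions vary between toolkits; nearest-rank is used here):

```python
def _surface_dists(src, dst):
    """For each point in src, the distance to the nearest point in dst."""
    return [min(sum((p - q) ** 2 for p, q in zip(s, t)) ** 0.5 for t in dst)
            for s in src]

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance (nearest-rank style)."""
    d = sorted(_surface_dists(a, b) + _surface_dists(b, a))
    return d[min(len(d) - 1, int(round(0.95 * (len(d) - 1))))]

def asd(a, b):
    """Average symmetric surface distance."""
    d = _surface_dists(a, b) + _surface_dists(b, a)
    return sum(d) / len(d)

# One stray point: the plain Hausdorff distance would be 5.0,
# but HD95 and ASD stay close to the typical boundary error.
ref = [(i, 0) for i in range(10)]
pred = [(i, 0) for i in range(9)] + [(9, 5)]
print(hd95(ref, pred), asd(ref, pred))
```

This robustness is why segmentation papers, including this one, quote HD95 rather than the raw maximum.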

Deep learning detects retropharyngeal edema on MRI in patients with acute neck infections.

Rainio O, Huhtanen H, Vierula JP, Nurminen J, Heikkinen J, Nyman M, Klén R, Hirvonen J

PubMed · Jun 19 2025
In acute neck infections, magnetic resonance imaging (MRI) shows retropharyngeal edema (RPE), which is a prognostic imaging biomarker for a severe course of illness. This study aimed to develop a deep learning-based algorithm for the automated detection of RPE. We developed a deep neural network consisting of two parts using axial T2-weighted water-only Dixon MRI images from 479 patients with acute neck infections annotated by radiologists at both slice and patient levels. First, a convolutional neural network (CNN) classified individual slices; second, an algorithm classified patients based on a stack of slices. Model performance was compared with the radiologists' assessment as a reference standard. Accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were calculated. The proposed CNN was compared with InceptionV3, and the patient-level classification algorithm was compared with traditional machine learning models. Of the 479 patients, 244 (51%) were positive and 235 (49%) negative for RPE. Our model achieved accuracy, sensitivity, specificity, and AUROC of 94.6%, 83.3%, 96.2%, and 94.1% at the slice level, and 87.4%, 86.5%, 88.2%, and 94.8% at the patient level, respectively. The proposed CNN was faster than InceptionV3 but equally accurate. Our patient classification algorithm outperformed traditional machine learning models. A deep learning model, based on weakly annotated data and computationally manageable training, achieved high accuracy for automatically detecting RPE on MRI in patients with acute neck infections. Our automated method for detecting relevant MRI findings was efficiently trained and might be easily deployed in practice to study clinical applicability. This approach might improve early detection of patients at high risk for a severe course of acute neck infections. Deep learning automatically detected retropharyngeal edema on MRI in acute neck infections.
Areas under the receiver operating characteristic curve were 94.1% at the slice level and 94.8% at the patient level. The proposed convolutional neural network was lightweight and required only weakly annotated data.
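The two-stage design (a slice-level CNN followed by a patient-level decision over the stack of slice outputs) needs an aggregation rule. The paper's exact algorithm is not specified in this abstract; the consecutive-slice rule below is a plausible hypothetical for illustration only:

```python
def patient_positive(slice_probs, threshold=0.5, min_run=2):
    """Hypothetical patient-level rule (not the paper's algorithm):
    call the patient positive when at least `min_run` consecutive
    slices exceed `threshold`, which suppresses isolated false
    positives on single slices."""
    run = best = 0
    for p in slice_probs:
        run = run + 1 if p > threshold else 0
        best = max(best, run)
    return best >= min_run

print(patient_positive([0.1, 0.7, 0.8, 0.2]))  # True: two consecutive positives
print(patient_positive([0.9, 0.2, 0.8, 0.3]))  # False: no consecutive run
```

Requiring anatomical contiguity is one simple way a stack-level classifier can beat naive per-slice voting, consistent with the paper's finding that its patient-level algorithm outperformed traditional machine learning baselines.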

Qualitative and quantitative analysis of functional cardiac MRI using a novel compressed SENSE sequence with artificial intelligence image reconstruction.

Konstantin K, Christian LM, Lenhard P, Thomas S, Robert T, Luisa LI, David M, Matej G, Kristina S, Philip NC

PubMed · Jun 19 2025
To evaluate the feasibility of combining Compressed SENSE (CS) with a newly developed deep learning-based algorithm (CS-AI) using a convolutional neural network to accelerate balanced steady-state free precession (bSSFP) sequences for cardiac magnetic resonance imaging (MRI). Thirty healthy volunteers were examined prospectively with a 3 T MRI scanner. We acquired CINE bSSFP sequences for short-axis (SA, multi-breath-hold) and four-chamber (4CH) views of the heart. For each sequence, four different CS accelerations and CS-AI reconstructions with three different denoising parameters, CS-AI medium, CS-AI strong, and CS-AI complete, were used. Cardiac left ventricular (LV) function (i.e., ejection fraction, end-diastolic volume, end-systolic volume, and LV mass) was analyzed using the SA sequences at every CS factor and each AI level. Two readers, blinded to the acceleration and denoising levels, evaluated all sequences regarding image quality and artifacts using a 5-point Likert scale. Friedman and Dunn's multiple comparison tests were used for qualitative evaluation, ANOVA and Tukey-Kramer tests for quantitative metrics. Scan time could be decreased by up to 57% for the SA sequences and up to 56% for the 4CH sequences compared to the clinically established sequences SA-CS3 and 4CH-CS2.5 (SA-CS3: 112 s vs. SA-CS6: 48 s; 4CH-CS2.5: 9 s vs. 4CH-CS5: 4 s, p < 0.001). LV functional analysis was not compromised by using accelerated MRI sequences combined with CS-AI reconstructions (all p > 0.05).
The image quality loss and artifact increase accompanying increasing acceleration levels could be entirely compensated by CS-AI post-processing, with the best results for image quality using the combination of the highest CS factor with strong AI (SA-CINE: Coef.: 1.31, 95% CI: 1.05-1.58; 4CH-CINE: Coef.: 1.18, 95% CI: 1.05-1.58; both p < 0.001), and with complete AI regarding the artifact score (SA-CINE: Coef.: 1.33, 95% CI: 1.06-1.60; 4CH-CINE: Coef.: 1.31, 95% CI: 0.86-1.77; both p < 0.001). Combining CS sequences with AI-based image reconstruction for denoising significantly decreases scan time in cardiac imaging while upholding LV functional analysis accuracy and delivering stable outcomes for image quality and artifact reduction. This integration presents a promising advancement in cardiac MRI, promising improved efficiency without compromising diagnostic quality.
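The vendor's CS-AI denoising network is proprietary, but classical compressed-sensing reconstruction alternates data-consistency steps with a sparsity-promoting soft-thresholding step (as in ISTA-style algorithms). The scalar proximal operator at the core of that step is simple:

```python
def soft_threshold(x, lam):
    """Soft-thresholding: the proximal operator of lam*|x|, which shrinks
    coefficients toward zero and zeroes out small ones. This is the
    sparsity step in classical compressed-sensing reconstruction, not
    the vendor's CS-AI network."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Applied elementwise to (e.g.) wavelet coefficients of an image estimate:
coeffs = [3.0, -0.4, 1.5, -2.5, 0.1]
print([soft_threshold(c, 1.0) for c in coeffs])  # [2.0, 0.0, 0.5, -1.5, 0.0]
```

Small coefficients, which in undersampled MRI are dominated by aliasing noise, are removed while large ones are kept (slightly shrunk), which is the mechanism that lets CS tolerate the 2.5x-6x undersampling factors discussed above.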
