
Accelerating 3D radial MPnRAGE using a self-supervised deep factor model.

Chen Y, Kecskemeti SR, Holmes JH, Corum CA, Yaghoobi N, Magnotta VA, Jacob M

PubMed · Jun 2 2025
To develop a self-supervised and memory-efficient deep learning image reconstruction method for 4D non-Cartesian MRI with high resolution and a large parametric dimension. The deep factor model (DFM) represents a parametric series of 3D multicontrast images using a neural network conditioned by the inversion time, using efficient zero-filled reconstructions as input estimates. The model parameters are learned in a single-shot learning (SSL) fashion from the k-space data of each acquisition. A compatible transfer learning (TL) approach using previously acquired data is also developed to reduce reconstruction time. The DFM is compared to subspace methods with different regularization strategies in a series of phantom and in vivo experiments using the MPnRAGE acquisition for multicontrast T<sub>1</sub> imaging and quantitative T<sub>1</sub> estimation. DFM-SSL improved the image quality and reduced bias and variance in quantitative T<sub>1</sub> estimates in both phantom and in vivo studies, outperforming all other tested methods. DFM-TL reduced the inference time while maintaining a performance comparable to DFM-SSL and outperforming subspace methods with multiple regularization techniques. The proposed DFM offers a superior representation of the multicontrast images compared to subspace models, especially in the highly accelerated MPnRAGE setting.
The self-supervised training is ideal for methods with both high resolution and a large parametric dimension, where training neural networks can become computationally demanding without a dedicated high-end GPU array.
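The factor idea behind the DFM can be illustrated with a toy linear analogue: a T1-recovery series across inversion times is approximately low-rank, which is what the subspace baselines exploit and what the DFM replaces with a TI-conditioned network. A minimal numpy sketch under hypothetical sizes (not the paper's implementation):

```python
import numpy as np

# Toy illustration of the factor idea behind DFM-style reconstruction.
# All sizes are hypothetical; the real method fits a neural network
# conditioned on inversion time to undersampled k-space data.
rng = np.random.default_rng(0)
n_ti, n_vox, rank = 64, 1000, 5

# Simulate a smooth T1-recovery-like series: s(TI) = 1 - 2*exp(-TI/T1)
ti = np.linspace(0.05, 3.0, n_ti)[:, None]   # inversion times (s)
t1 = rng.uniform(0.5, 2.5, (1, n_vox))       # per-voxel T1 values (s)
series = 1.0 - 2.0 * np.exp(-ti / t1)        # shape (n_ti, n_vox)

# Rank-K factorization: temporal basis U times spatial coefficients V
u, s, vt = np.linalg.svd(series, full_matrices=False)
approx = u[:, :rank] * s[:rank] @ vt[:rank]

rel_err = np.linalg.norm(series - approx) / np.linalg.norm(series)
print(f"relative error of rank-{rank} model: {rel_err:.2e}")
```

Because exponential-recovery curves over a bounded T1 range are nearly low-rank, a handful of factors captures the whole series, which is why factor and subspace representations are so memory-efficient for 4D data.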

Radiogenomics and Radiomics of Skull Base Chordoma: Classification of Novel Radiomic Subgroups and Prediction of Genetic Signatures and Clinical Outcomes.

Gersey ZC, Zenkin S, Mamindla P, Amjadzadeh M, Ak M, Plute T, Peddagangireddy V, Abdallah H, Muthiah N, Wang EW, Snyderman C, Gardner PA, Colen RR, Zenonos GA

PubMed · Jun 2 2025
Chordomas are rare, aggressive tumors of notochordal origin, commonly affecting the spine and skull base. Skull Base Chordomas (SBCs) comprise approximately 39% of cases, with an incidence of less than 1 per million annually in the U.S. Prognosis remains poor due to resistance to chemotherapy, often requiring extensive surgical resection and adjuvant radiotherapy. Current classification methods based on chromosomal deletions are invasive and costly, presenting a need for alternative diagnostic tools. Radiomics allows for non-invasive SBC diagnosis and treatment planning. We developed and validated radiomic-based models using MRI data to predict Overall Survival (OS) and Progression-Free Survival following Surgery (PFSS) in SBC patients. Machine learning classifiers, including eXtreme Gradient Boosting (XGBoost), were employed along with feature selection techniques. Unsupervised clustering identified radiomic-based subgroups, which were correlated with chromosomal deletions and clinical outcomes. Our XGBoost model demonstrated superior predictive performance, achieving an area under the curve (AUC) of 83.33% for OS and 80.36% for PFSS, outperforming other classifiers. Radiomic clustering revealed two SBC groups with differing survival and molecular characteristics, strongly correlating with chromosomal deletion profiles. These findings indicate that radiomics can non-invasively characterize SBC phenotypes and stratify patients by prognosis. Radiomics shows promise as a reliable, non-invasive tool for the prognostication and classification of SBCs, minimizing the need for invasive genetic testing and supporting personalized treatment strategies.
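As a rough illustration of the modeling pipeline described above, the sketch below trains a gradient-boosted classifier on a synthetic stand-in for a radiomic feature matrix and reports a test AUC. sklearn's GradientBoostingClassifier stands in for XGBoost here, and all shapes and signal structure are hypothetical:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical radiomic feature matrix: 200 patients x 30 features,
# with outcome driven by the first two features plus noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 30))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")
```

In the study itself, feature selection and survival endpoints (OS, PFSS) make the pipeline more involved, but the classifier-plus-AUC skeleton is the same.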

Slim UNETR++: A lightweight 3D medical image segmentation network for medical image analysis.

Jin J, Yang S, Tong J, Zhang K, Wang Z

PubMed · Jun 2 2025
Convolutional neural network (CNN) models, such as U-Net, V-Net, and DeepLab, have achieved remarkable results across various medical imaging modalities, including ultrasound. Additionally, hybrid Transformer-based segmentation methods have shown great potential in medical image analysis. Despite the breakthroughs in feature extraction through self-attention mechanisms, these methods are computationally intensive, especially for three-dimensional medical imaging, posing significant challenges to graphics processing unit (GPU) hardware. Consequently, the demand for lightweight models is increasing. To address this issue, we designed a high-accuracy yet lightweight model that combines the strengths of CNNs and Transformers. We introduce Slim UNEt TRansformers++ (Slim UNETR++), which builds upon Slim UNETR by incorporating Medical ConvNeXt (MedNeXt), Spatial-Channel Attention (SCA), and Efficient Paired-Attention (EPA) modules. This integration leverages the advantages of both CNN and Transformer architectures to enhance model accuracy. The core component of Slim UNETR++ is the Slim UNETR++ block, which facilitates efficient information exchange through a sparse self-attention mechanism and low-cost representation aggregation. We also introduce throughput as a performance metric to quantify data processing speed. Experimental results demonstrate that Slim UNETR++ outperforms other models in terms of accuracy and model size. On the BraTS2021 dataset, Slim UNETR++ achieved a Dice accuracy of 93.12% and a 95% Hausdorff distance (HD95) of 4.23 mm, significantly surpassing mainstream relevant methods such as Swin UNETR.
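Throughput, as used above, is simply volumes processed per unit time. A minimal, hypothetical way to measure it (not the paper's benchmarking code; the "model" here is a cheap placeholder operation):

```python
import time
import numpy as np

def throughput(model_fn, volumes, repeats=3):
    """Volumes processed per second, using the fastest of `repeats` timed passes."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        for vol in volumes:
            model_fn(vol)
        best = min(best, time.perf_counter() - t0)
    return len(volumes) / best

# Hypothetical "model": a cheap smoothing-like reduction on small 3D volumes.
vols = [np.random.rand(16, 16, 16) for _ in range(8)]
tp = throughput(lambda v: v.mean(axis=0), vols)
print(f"throughput: {tp:.1f} volumes/s")
```

Taking the fastest of several passes reduces the influence of transient system load, a common convention in micro-benchmarking.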

Fine-tuned large language model for extracting newly identified acute brain infarcts based on computed tomography or magnetic resonance imaging reports.

Fujita N, Yasaka K, Kiryu S, Abe O

PubMed · Jun 2 2025
This study aimed to develop an automated early warning system using a large language model (LLM) to identify acute to subacute brain infarction from free-text computed tomography (CT) or magnetic resonance imaging (MRI) radiology reports. In this retrospective study, 5,573, 1,883, and 834 patients were included in the training (mean age, 67.5 ± 17.2 years; 2,831 males), validation (mean age, 61.5 ± 18.3 years; 994 males), and test (mean age, 66.5 ± 16.1 years; 488 males) datasets. An LLM (Japanese Bidirectional Encoder Representations from Transformers model) was fine-tuned to classify the CT and MRI reports into three groups (group 0, newly identified acute to subacute infarction; group 1, known acute to subacute infarction or old infarction; group 2, without infarction). The training and validation processes were repeated 15 times, and the best-performing model on the validation dataset was selected to further evaluate its performance on the test dataset. The best fine-tuned model exhibited sensitivities of 0.891, 0.905, and 0.959 for groups 0, 1, and 2, respectively, in the test dataset. The macrosensitivity (the average of sensitivity for all groups) and accuracy were 0.918 and 0.923, respectively. The model's performance in extracting newly identified acute brain infarcts was high, with an area under the receiver operating characteristic curve of 0.979 (95% confidence interval, 0.956-1.000). The average prediction time was 0.115 ± 0.037 s per patient. A fine-tuned LLM could extract newly identified acute to subacute brain infarcts based on CT or MRI findings with high performance.
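The macrosensitivity reported above is the unweighted mean of per-class sensitivities. A small sketch with a hypothetical 3-class confusion matrix (the values below are illustrative, not the study's):

```python
import numpy as np

def macrosensitivity(conf):
    """Average per-class sensitivity (recall) from a confusion matrix
    whose rows are true classes and columns are predicted classes."""
    conf = np.asarray(conf, dtype=float)
    per_class = np.diag(conf) / conf.sum(axis=1)
    return per_class.mean()

# Hypothetical confusion matrix for groups 0 / 1 / 2 as defined in the abstract
cm = [[89,  8,  3],
      [ 6, 90,  4],
      [ 2,  2, 96]]
print(f"macrosensitivity: {macrosensitivity(cm):.3f}")
```

Unlike overall accuracy, this metric weights each class equally, which matters when the "newly identified infarction" group is much rarer than the "no infarction" group.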

GAN-based synthetic FDG PET images from T1 brain MRI can serve to improve performance of deep unsupervised anomaly detection models.

Zotova D, Pinon N, Trombetta R, Bouet R, Jung J, Lartizien C

PubMed · Jun 1 2025
Research in the cross-modal medical image translation domain has been very productive over the past few years in tackling the scarce availability of large curated multi-modality datasets, with the promising performance of GAN-based architectures. However, only a few of these studies assessed the task-based performance of these synthetic data, especially for the training of deep models. We design and compare different GAN-based frameworks for generating synthetic brain [18F]fluorodeoxyglucose (FDG) PET images from T1-weighted MRI data. We first perform standard qualitative and quantitative visual quality evaluation. Then, we further explore the impact of using these fake PET data in the training of a deep unsupervised anomaly detection (UAD) model designed to detect subtle epilepsy lesions in T1 MRI and FDG PET images. We introduce novel diagnostic task-oriented quality metrics of the synthetic FDG PET data tailored to our unsupervised detection task, then use these fake data to train a use-case UAD model combining a deep representation learning based on siamese autoencoders with an OC-SVM density support estimation model. This model is trained on normal subjects only and allows the detection of any variation from the pattern of the normal population. We compare the detection performance of models trained on real T1 MR images of 35 normal subjects paired either with the 35 true PET images or with 35 synthetic PET images generated from the best-performing generative models. Performance analysis is conducted on 17 exams of epilepsy patients undergoing surgery. The best-performing GAN-based models generate realistic fake PET images of control subjects, with SSIM and PSNR values around 0.9 and 23.8, respectively, and in distribution (ID) with regard to the true control dataset. The best UAD model trained on these synthetic normative PET data reaches 74% sensitivity.
Our results confirm that GAN-based models are the best suited for MR T1 to FDG PET translation, outperforming transformer or diffusion models. We also demonstrate the diagnostic value of these synthetic data for the training of UAD models and evaluation on clinical exams of epilepsy patients. Our code and the normative image dataset are available.
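The SSIM and PSNR figures quoted above can be computed as in the sketch below. Note this uses a simplified single-window (global) SSIM rather than the locally-windowed version typically reported, and the image pair is synthetic:

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """Simplified single-window SSIM; the standard metric averages local windows."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Synthetic "real" image and a noisy "fake" counterpart
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
fake = np.clip(ref + rng.normal(scale=0.05, size=ref.shape), 0.0, 1.0)
p, s = psnr(ref, fake), global_ssim(ref, fake)
print(f"PSNR: {p:.1f} dB, SSIM: {s:.3f}")
```

In practice libraries such as scikit-image provide windowed SSIM implementations; the formula here is only to make the metric's structure explicit.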

Improving predictability, reliability, and generalizability of brain-wide associations for cognitive abilities via multimodal stacking.

Tetereva A, Knodt AR, Melzer TR, van der Vliet W, Gibson B, Hariri AR, Whitman ET, Li J, Lal Khakpoor F, Deng J, Ireland D, Ramrakha S, Pat N

PubMed · Jun 1 2025
Brain-wide association studies (BWASs) have attempted to relate cognitive abilities with brain phenotypes, but have been challenged by issues such as predictability, test-retest reliability, and cross-cohort generalizability. To tackle these challenges, we proposed a machine learning "stacking" approach that draws information from whole-brain MRI across different modalities, from task-functional MRI (fMRI) contrasts and functional connectivity during tasks and rest to structural measures, into one prediction model. We benchmarked the benefits of stacking using the Human Connectome Project: Young Adults (<i>n</i> = 873, 22-35 years old), the Human Connectome Project: Aging (<i>n</i> = 504, 35-100 years old), and the Dunedin Multidisciplinary Health and Development Study (Dunedin Study, <i>n</i> = 754, 45 years old). For predictability, stacked models led to out-of-sample <i>r</i> ∼ 0.5-0.6 when predicting cognitive abilities at the time of scanning, primarily driven by task-fMRI contrasts. Notably, using the Dunedin Study, we were able to predict participants' cognitive abilities at ages 7, 9, and 11 years using their multimodal MRI at age 45 years, with an out-of-sample <i>r</i> of 0.52. For test-retest reliability, stacked models reached an excellent level of reliability (intraclass correlation > 0.75), even when we stacked only task-fMRI contrasts together. For generalizability, a stacked model with nontask MRI built from one dataset significantly predicted cognitive abilities in other datasets. Altogether, stacking is a viable approach to tackling the three challenges of BWAS for cognitive abilities.
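The stacking idea above, base learners whose out-of-fold predictions feed a meta-learner, can be sketched with sklearn's StackingRegressor on synthetic data. Features and targets are hypothetical; the study's actual modality-specific pipeline is more elaborate:

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

# Hypothetical matrix standing in for concatenated multimodal MRI features;
# the target mimics a cognitive score driven by a subset of features.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=300)

# Two base learners; a Ridge meta-learner stacks their out-of-fold predictions.
stack = StackingRegressor(
    estimators=[("ridge", Ridge()), ("svr", SVR())],
    final_estimator=Ridge(),
)
pred = cross_val_predict(stack, X, y, cv=5)  # out-of-sample predictions
r = np.corrcoef(y, pred)[0, 1]
print(f"out-of-sample r: {r:.2f}")
```

Using cross-validated predictions, as here, mirrors the out-of-sample *r* the abstract reports, rather than an optimistic in-sample fit.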

Development and validation of a combined clinical and MRI-based biomarker model to differentiate mild cognitive impairment from mild Alzheimer's disease.

Hosseini Z, Mohebbi A, Kiani I, Taghilou A, Mohammadjafari A, Aghamollaii V

PubMed · Jun 1 2025
Two of the most common complaints seen in neurology clinics are Alzheimer's disease (AD) and mild cognitive impairment (MCI), which present with similar symptoms. The aim of this study was to develop and internally validate the diagnostic value of combined neurological and radiological predictors in differentiating mild AD from MCI as the outcome variable, which helps enable intervention before progression to AD. A cross-sectional study of 161 participants was conducted in a general healthcare setting, including 30 controls, 71 with mild AD, and 60 with MCI. Binary logistic regression was used to identify predictors of interest, with collinearity assessed prior to model development. Model performance was assessed through calibration, shrinkage, and decision-curve analyses. Finally, the combined clinical and radiological model was compared to models utilizing only clinical or radiological predictors. The final model included age, sex, education status, Montreal Cognitive Assessment score, Global Cerebral Atrophy Index, Medial Temporal Atrophy Scale, mean hippocampal volume, and Posterior Parietal Atrophy Index, with an area under the curve of 0.978 (0.934-0.996). Internal validation did not show a substantial reduction in diagnostic performance. The combined model showed higher diagnostic performance than the clinical and radiological models alone. Decision-curve analysis highlighted the usefulness of this model for differentiation across all probability levels. A combined clinical-radiological model has excellent diagnostic performance in differentiating mild AD from MCI. Notably, the model leveraged straightforward neuroimaging markers, which are relatively simple to measure and interpret, suggesting that they could be integrated into practical, formula-driven diagnostic workflows without requiring computationally intensive deep learning models.
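A minimal sketch of a combined-predictor logistic model of the kind described above, on simulated data. Predictor names and effect sizes are hypothetical, chosen only to illustrate the fit-and-evaluate pattern:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulated clinical + radiological predictors (illustrative only):
# age (years), MoCA score, mean hippocampal volume (mL).
rng = np.random.default_rng(7)
n = 161
age = rng.normal(72, 8, n)
moca = rng.normal(22, 4, n)
hippo = rng.normal(3.0, 0.5, n)

# Hypothetical generating model: older age, lower MoCA, smaller hippocampus
# raise the probability of mild AD (y = 1) versus MCI (y = 0).
logit = 0.05 * (age - 72) - 0.4 * (moca - 22) - 2.0 * (hippo - 3.0)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([age, moca, hippo])
model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"apparent AUC: {auc:.3f}")
```

The AUC here is the apparent (in-sample) value; the study's internal validation with calibration and shrinkage corrects for the optimism such a figure carries.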

Efficient slice anomaly detection network for 3D brain MRI volumes.

Zhang Z, Mohsenzadeh Y

PubMed · Jun 1 2025
Current anomaly detection methods excel with benchmark industrial data but struggle with natural images and medical data due to varying definitions of 'normal' and 'abnormal', which makes accurate identification of deviations in these fields particularly challenging. For 3D brain MRI data in particular, the state-of-the-art models are reconstruction-based 3D convolutional neural networks, which are memory-intensive and time-consuming and produce noisy outputs that require further post-processing. We propose a framework called Simple Slice-based Network (SimpleSliceNet), which utilizes a model pre-trained on ImageNet and fine-tuned on a separate MRI dataset as a 2D slice feature extractor to reduce computational cost. We aggregate the extracted features to perform anomaly detection on 3D brain MRI volumes. Our model integrates a conditional normalizing flow to calculate the log-likelihood of features and employs a contrastive loss to enhance anomaly detection accuracy. The results indicate improved performance, showcasing our model's adaptability and effectiveness in addressing the challenges that exist in brain MRI data. In addition, for large-scale 3D brain volumes, SimpleSliceNet outperforms state-of-the-art 2D and 3D models in terms of accuracy, memory usage, and time consumption. Code is available at: https://github.com/Jarvisarmy/SimpleSliceNet.
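A density-based anomaly score of the general kind described above can be sketched with a plain Gaussian likelihood over slice features; this is a simplified stand-in for the conditional normalizing flow, run on synthetic features:

```python
import numpy as np

# Fit a Gaussian to features of "normal" slices, then score new slices by
# negative log-likelihood. (A stand-in for a normalizing-flow density model;
# the feature dimension and data are hypothetical.)
rng = np.random.default_rng(3)
dim = 8
normal_feats = rng.normal(0.0, 1.0, size=(500, dim))

mu = normal_feats.mean(axis=0)
cov = np.cov(normal_feats, rowvar=False) + 1e-6 * np.eye(dim)
cov_inv = np.linalg.inv(cov)
_, logdet = np.linalg.slogdet(cov)

def anomaly_score(f):
    """Negative Gaussian log-likelihood: higher means more anomalous."""
    d = f - mu
    return 0.5 * (d @ cov_inv @ d + logdet + dim * np.log(2.0 * np.pi))

typical = anomaly_score(rng.normal(0.0, 1.0, dim))   # in-distribution slice
outlier = anomaly_score(np.full(dim, 5.0))           # far from the normal cloud
print(f"typical: {typical:.1f}, outlier: {outlier:.1f}")
```

A normalizing flow generalizes this by learning an invertible map to a Gaussian, so the same likelihood-thresholding logic applies to far more complex feature distributions.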

AO Spine Clinical Practice Recommendations for Diagnosis and Management of Degenerative Cervical Myelopathy: Evidence Based Decision Making - A Review of Cutting Edge Recent Literature Related to Degenerative Cervical Myelopathy.

Fehlings MG, Evaniew N, Ter Wengel PV, Vedantam A, Guha D, Margetis K, Nouri A, Ahmed AI, Neal CJ, Davies BM, Ganau M, Wilson JR, Martin AR, Grassner L, Tetreault L, Rahimi-Movaghar V, Marco R, Harrop J, Guest J, Alvi MA, Pedro KM, Kwon BK, Fisher CG, Kurpad SN

PubMed · Jun 1 2025
Study Design: Literature review of key topics related to degenerative cervical myelopathy (DCM) with critical appraisal and clinical recommendations.
Objective: This article summarizes several key current topics related to the management of DCM.
Methods: Recent literature related to the management of DCM was reviewed. Four articles were selected and critically appraised. Recommendations were graded as Strong or Conditional.
Results: Article 1: The Relationship Between Pre-operative MRI Signal Intensity and Outcomes. <b>Conditional</b> recommendation to use diffusion-weighted imaging MR signal changes in the cervical cord to evaluate prognosis following surgical intervention for DCM. Article 2: Efficacy and Safety of Surgery for Mild DCM. <b>Conditional</b> recommendation that surgery is a valid option for mild DCM with favourable clinical outcomes. Article 3: Effect of Ventral vs Dorsal Spinal Surgery on Patient-Reported Physical Functioning in Patients With Cervical Spondylotic Myelopathy: A Randomized Clinical Trial. <b>Strong</b> recommendation that there is equipoise in the outcomes of anterior vs posterior surgical approaches in cases where either technique could be used. Article 4: Machine learning-based cluster analysis of DCM phenotypes. <b>Conditional</b> recommendation that clinicians consider pain, medical frailty, and the impact on health-related quality of life when counselling patients.
Conclusions: DCM requires a multidimensional assessment including neurological dysfunction, pain, impact on health-related quality of life, medical frailty, and MR imaging changes in the cord. Surgical treatment is effective and is a valid option for mild DCM. In patients where either anterior or posterior surgical approaches can be used, both techniques afford similar clinical benefit, albeit with different complication profiles.

Extracerebral Normalization of <sup>18</sup>F-FDG PET Imaging Combined with Behavioral CRS-R Scores Predict Recovery from Disorders of Consciousness.

Guo K, Li G, Quan Z, Wang Y, Wang J, Kang F, Wang J

PubMed · Jun 1 2025
Identifying patients likely to regain consciousness early on is a challenge. The assessment of consciousness levels and the prediction of wakefulness probabilities are facilitated by <sup>18</sup>F-fluorodeoxyglucose (<sup>18</sup>F-FDG) positron emission tomography (PET). This study aimed to develop a prognostic model for predicting 1-year postinjury outcomes in prolonged disorders of consciousness (DoC) using <sup>18</sup>F-FDG PET alongside clinical behavioral scores. Eighty-seven patients with prolonged DoC newly diagnosed with behavioral Coma Recovery Scale-Revised (CRS-R) scores and <sup>18</sup>F-FDG PET/computed tomography (<sup>18</sup>F-FDG PET/CT) scans were included. PET images were normalized by the cerebellum and extracerebral tissue, respectively. Images were divided into training and independent test sets at a ratio of 5:1. Image-based classification was conducted using the DenseNet121 network, whereas tabular-based deep learning was employed to train depth features extracted from imaging models and behavioral CRS-R scores. The performance of the models was assessed and compared using the McNemar test. Among the 87 patients with DoC who received routine treatments, 52 patients showed recovery of consciousness, whereas 35 did not. The model using the standardized uptake value ratio normalized by extracerebral tissue demonstrated higher specificity and lower sensitivity in predicting consciousness recovery than the model using the standardized uptake value ratio normalized by the cerebellum, with area under the curve values of 0.751 ± 0.093 and 0.412 ± 0.104 on the test sets, respectively; the difference was not statistically significant (P = 0.73).
The combination of standardized uptake value ratio by extracerebral tissue and computed tomography depth features with behavioral CRS-R scores yielded the highest classification accuracy, with area under the curve values of 0.950 ± 0.027 and 0.933 ± 0.015 on the training and test sets, respectively, outperforming any individual modality. In this preliminary study, a multimodal prognostic model based on <sup>18</sup>F-FDG PET extracerebral normalization and behavioral CRS-R scores facilitated the prediction of recovery in DoC.
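The reference-region normalization described above (the standardized uptake value ratio, SUVR) divides each voxel's uptake by the mean uptake inside a reference region such as the cerebellum or extracerebral tissue. A toy sketch with a hypothetical volume and reference mask:

```python
import numpy as np

def suvr(pet, ref_mask):
    """Standardized uptake value ratio: voxel uptake divided by the mean
    uptake inside a reference region (e.g. cerebellum or extracerebral tissue)."""
    ref_mean = pet[ref_mask].mean()
    return pet / ref_mean

# Toy PET volume with a hypothetical reference region occupying one corner.
rng = np.random.default_rng(5)
pet = rng.uniform(0.5, 2.0, size=(8, 8, 8))
mask = np.zeros_like(pet, dtype=bool)
mask[:2, :2, :2] = True

norm = suvr(pet, mask)
print(f"mean SUVR inside reference region: {norm[mask].mean():.3f}")
```

By construction the reference region averages to 1.0 after normalization, which is what makes SUVR maps comparable across patients and scanners.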