Machine Learning for Preoperative Assessment and Postoperative Prediction in Cervical Cancer: Multicenter Retrospective Model Integrating MRI and Clinicopathological Data.

Li S, Guo C, Fang Y, Qiu J, Zhang H, Ling L, Xu J, Peng X, Jiang C, Wang J, Hua K

pubmed · Sep 12 2025
Machine learning (ML) has been increasingly applied to cervical cancer (CC) research. However, few studies have combined both clinical parameters and imaging data. At the same time, there remains an urgent need for more robust and accurate preoperative assessment of parametrial invasion and lymph node metastasis, as well as postoperative prognosis prediction. The objective of this study is to develop an integrated ML model combining clinicopathological variables and magnetic resonance image features for (1) preoperative parametrial invasion and lymph node metastasis detection and (2) postoperative recurrence and survival prediction. Retrospective data from 250 patients with CC (2014-2022; 2 tertiary hospitals) were analyzed. Variables were assessed for their predictive value regarding parametrial invasion, lymph node metastasis, survival, and recurrence using 7 ML models: K-nearest neighbor (KNN), support vector machine, decision tree (DT), random forest (RF), balanced RF, weighted DT, and weighted KNN. Performance was assessed via 5-fold cross-validation using accuracy, sensitivity, specificity, precision, F1-score, and area under the receiver operating characteristic curve (AUC). The optimal models were deployed in an artificial intelligence-assisted contouring and prognosis prediction system. Among 250 women, there were 11 deaths and 24 recurrences. (1) For preoperative evaluation, the integrated model using balanced RF achieved optimal performance (sensitivity 0.81, specificity 0.85) for parametrial invasion, while weighted KNN achieved the best performance for lymph node metastasis (sensitivity 0.98, AUC 0.72). (2) For postoperative prognosis, weighted KNN also demonstrated high accuracy for recurrence (accuracy 0.94, AUC 0.86) and mortality (accuracy 0.97, AUC 0.77), with sensitivities of 0.80 and 0.33, respectively. (3) An artificial intelligence-assisted contouring and prognosis prediction system was developed to support preoperative evaluation and postoperative prognosis prediction. The integration of clinical data and magnetic resonance images provides enhanced diagnostic capability to preoperatively detect parametrial invasion and lymph node metastasis, and prognostic capability to predict recurrence and mortality in CC, facilitating personalized, precise treatment strategies.
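As a rough, hedged illustration of the evaluation protocol described above (not the authors' code), the sketch below cross-validates a balanced random forest and a distance-weighted KNN on a synthetic, imbalanced feature table standing in for the combined clinicopathological and MRI-derived variables; all feature values and class proportions are placeholders.

```python
# Illustrative only: 5-fold cross-validation of two of the listed classifiers
# on synthetic data shaped like the 250-patient cohort (values are made up).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from imblearn.ensemble import BalancedRandomForestClassifier

X, y = make_classification(n_samples=250, n_features=30, weights=[0.85, 0.15],
                           random_state=0)  # imbalanced, like metastasis labels

models = {
    "balanced_rf": BalancedRandomForestClassifier(n_estimators=200, random_state=0),
    "weighted_knn": KNeighborsClassifier(n_neighbors=7, weights="distance"),
}
scoring = ["accuracy", "recall", "precision", "f1", "roc_auc"]

for name, model in models.items():
    cv = cross_validate(model, X, y, cv=5, scoring=scoring)
    print(name, {m: round(cv[f"test_{m}"].mean(), 3) for m in scoring})
```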

Deep learning-powered temperature prediction for optimizing transcranial MR-guided focused ultrasound treatment.

Xiong Y, Yang M, Arkin M, Li Y, Duan C, Bian X, Lu H, Zhang L, Wang S, Ren X, Li X, Zhang M, Zhou X, Pan L, Lou X

pubmed · Sep 12 2025
Precise temperature control is challenging during transcranial MR-guided focused ultrasound (MRgFUS) treatment. The aim of this study was to develop a deep learning model integrating the treatment parameters for each sonication, along with patient-specific clinical information and skull metrics, to predict the MRgFUS therapeutic temperature. This is a retrospective analysis of sonications from patients with essential tremor or Parkinson's disease who underwent unilateral MRgFUS thalamotomy or pallidothalamic tractotomy at a single hospital from January 2019 to June 2023. For model training, a dataset of 600 sonications (72 patients) was used, while a validation dataset comprising 199 sonications (18 patients) was used to assess model performance. Additionally, an external dataset of 146 sonications (20 patients) was used for external validation. The developed deep learning model, called Fust-Net, achieved high predictive accuracy, with normalized mean absolute errors of 1.655°C for the internal dataset and 2.432°C for the external dataset, closely matching the actual temperatures. A graded evaluation showed that Fust-Net achieved an effective temperature prediction rate of 82.6%. These results indicate the potential of Fust-Net to support precise temperature control during MRgFUS treatment and to enhance precision and safety in clinical applications.
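A minimal sketch of the type of regression task Fust-Net addresses, assuming hypothetical sonication, clinical, and skull features and a synthetic temperature target; the actual network architecture and inputs in the paper are not reproduced here.

```python
# Hedged sketch: map per-sonication features to peak temperature and report MAE.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 600                                      # roughly the size of the training set
X = rng.normal(size=(n, 8))                  # e.g. power, duration, skull density ratio, age, ...
temp = 55 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=1.0, size=n)  # synthetic target in °C

X_tr, X_te, y_tr, y_te = train_test_split(X, temp, test_size=0.25, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("MAE (°C):", round(mean_absolute_error(y_te, model.predict(X_te)), 3))
```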

A Comparison and Evaluation of Fine-tuned Convolutional Neural Networks to Large Language Models for Image Classification and Segmentation of Brain Tumors on MRI

Felicia Liu, Jay J. Yoo, Farzad Khalvati

arxiv preprint · Sep 12 2025
Large Language Models (LLMs) have shown strong performance in text-based healthcare tasks. However, their utility in image-based applications remains largely unexplored. We investigate the effectiveness of LLMs for medical imaging tasks, specifically glioma classification and segmentation, and compare their performance to that of traditional convolutional neural networks (CNNs). Using the BraTS 2020 dataset of multi-modal brain MRIs, we evaluated a general-purpose vision-language LLM (LLaMA 3.2 Instruct) both before and after fine-tuning, and benchmarked its performance against custom 3D CNNs. For glioma classification (Low-Grade vs. High-Grade), the CNN achieved 80% accuracy with balanced precision and recall. The general LLM reached 76% accuracy but suffered from a specificity of only 18%, often misclassifying Low-Grade tumors. Fine-tuning improved specificity to 55%, but overall performance declined (e.g., accuracy dropped to 72%). For segmentation, three methods (center point, bounding box, and polygon extraction) were implemented. CNNs accurately localized gliomas, though small tumors were sometimes missed. In contrast, LLMs consistently clustered predictions near the image center, with no distinction of glioma size, location, or placement. Fine-tuning improved output formatting but failed to meaningfully enhance spatial accuracy. The bounding polygon method yielded random, unstructured outputs. Overall, CNNs outperformed LLMs in both tasks. LLMs showed limited spatial understanding and minimal improvement from fine-tuning, indicating that, in their current form, they are not well suited for image-based tasks. More rigorous fine-tuning or alternative training strategies may be needed for LLMs to achieve better performance, robustness, and utility in the medical space.
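The two simpler localization targets mentioned above can be derived directly from a binary tumor mask; the snippet below is a generic sketch with a synthetic mask, not BraTS data or the authors' extraction code.

```python
# Center point and bounding box of a tumor region from a binary mask.
import numpy as np

mask = np.zeros((240, 240), dtype=bool)
mask[100:140, 90:150] = True                       # hypothetical glioma region

ys, xs = np.nonzero(mask)
center = (float(ys.mean()), float(xs.mean()))      # center-point target
bbox = (ys.min(), xs.min(), ys.max(), xs.max())    # (y0, x0, y1, x1) bounding-box target
print("center:", center, "bbox:", bbox)
```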

Toward Reliable Thalamic Segmentation: a rigorous evaluation of automated methods for structural MRI

Argyropoulos, G. P. D., Butler, C. R., Saranathan, M.

medrxiv preprint · Sep 12 2025
Automated thalamic nuclear segmentation has contributed to a shift in neuroimaging analyses from treating the thalamus as a homogeneous, passive relay to treating it as a set of individual nuclei embedded within distinct brain-wide circuits. However, many studies continue to rely on FreeSurfer's segmentation of T1-weighted structural MRIs, despite their poor intrathalamic nuclear contrast. Meanwhile, a convolutional neural network tool has been developed for FreeSurfer that uses information from both diffusion and T1-weighted MRIs. Another popular thalamic nuclear segmentation technique is HIPS-THOMAS, a multi-atlas-based method that leverages white-matter-like contrast synthesized from T1-weighted MRIs. However, rigorous comparisons amongst methods remain scant, and the thalamic atlases against which these methods have been assessed have their own limitations. These issues may compromise the quality of cross-species comparisons and of structural and functional connectivity studies in health and disease, as well as the efficacy of neuromodulatory interventions targeting the thalamus. Here, we report, for the first time, comparisons amongst HIPS-THOMAS, the standard FreeSurfer segmentation, and its more recent development, against two thalamic atlases as silver-standard ground truths. We used two cohorts of healthy adults and one cohort of patients in the chronic phase of autoimmune limbic encephalitis. In healthy adults, HIPS-THOMAS surpassed not only the standard FreeSurfer segmentation but also its more recent, diffusion-based update. The improvements made with the latter relative to the former were limited to a few nuclei. Finally, the standard FreeSurfer method underperformed, relative to the other two, in distinguishing between patients and healthy controls based on the affected anteroventral and pulvinar nuclei. In light of these findings, we provide recommendations on the use of automated segmentation methods for the human thalamus using structural brain imaging.
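A typical way to score such segmentations against a silver-standard atlas is voxel-wise Dice overlap; the snippet below is a generic sketch with synthetic labels, not the evaluation code or metric set used in the paper.

```python
# Dice overlap between an automated nuclear label and a silver-standard label.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((64, 64, 64), dtype=bool);  auto[20:30, 20:30, 20:30] = True
atlas = np.zeros((64, 64, 64), dtype=bool); atlas[22:32, 20:30, 20:30] = True
print("Dice:", round(dice(auto, atlas), 3))
```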

Three-Dimensional Radiomics and Machine Learning for Predicting Postoperative Outcomes in Laminoplasty for Cervical Spondylotic Myelopathy: A Clinical-Radiomics Model.

Zheng B, Zhu Z, Ma K, Liang Y, Liu H

pubmed · Sep 12 2025
This study explores a method based on three-dimensional cervical spinal cord reconstruction, radiomics feature extraction, and machine learning to build a postoperative prognosis prediction model for patients with cervical spondylotic myelopathy (CSM), and evaluates the predictive performance of different cervical spinal cord segmentation strategies and machine learning algorithms. A retrospective analysis was conducted on 126 CSM patients who underwent posterior single-door laminoplasty from January 2017 to December 2022. Three cervical spinal cord segmentation strategies (narrowest segment, surgical segment, and entire cervical cord C1-C7) were applied to preoperative MR images for radiomics feature extraction. Good clinical prognosis was defined as a postoperative JOA recovery rate ≥50%. By comparing the performance of 8 machine learning algorithms, the optimal cervical spinal cord segmentation strategy and classifier were selected. Subsequently, clinical features (smoking history, diabetes, preoperative JOA score, and cSVA) were combined with radiomics features to construct a clinical-radiomics prediction model. Among the three cervical spinal cord segmentation strategies, the SVM model based on the narrowest segment performed best (AUC=0.885). Among clinical features, smoking history, diabetes, preoperative JOA score, and cSVA were important indicators for prognosis prediction. When clinical features were combined with radiomics features, the fusion model achieved excellent performance on the test set (accuracy=0.895, AUC=0.967), significantly outperforming either the clinical model or the radiomics model alone. This study validates the feasibility and superiority of three-dimensional radiomics combined with machine learning in predicting postoperative prognosis for CSM. Combining radiomics features from the narrowest segment with clinical features yields a highly accurate prognosis prediction model, providing new insights for clinical assessment and individualized treatment decisions. Future studies are needed to validate the stability and generalizability of this model in multi-center, large-sample cohorts.
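As an illustration of the clinical-radiomics fusion idea (with synthetic placeholder features, not the study's data or exact pipeline), radiomics features from the narrowest-segment mask can be concatenated with the four clinical variables and classified with an SVM:

```python
# Hedged sketch of a clinical-radiomics fusion SVM; all values are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 126
radiomics = rng.normal(size=(n, 20))   # e.g. shape/texture features from the cord mask
clinical = rng.normal(size=(n, 4))     # encoded smoking, diabetes, preoperative JOA, cSVA
X = np.hstack([radiomics, clinical])
y = (radiomics[:, 0] + clinical[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)).fit(X_tr, y_tr)
print("test AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```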

Cardiac Magnetic Resonance Imaging in the German National Cohort (NAKO): Automated Segmentation of Short-Axis Cine Images and Post-Processing Quality Control.

Full PM, Schirrmeister RT, Hein M, Russe MF, Reisert M, Ammann C, Greiser KH, Niendorf T, Pischon T, Schulz-Menger J, Maier-Hein KH, Bamberg F, Rospleszcz S, Schlett CL, Schuppert C

pubmed · Sep 12 2025
The prospective, multicenter German National Cohort (NAKO) provides a unique dataset of cardiac magnetic resonance (CMR) cine images. Effective processing of these images requires a robust segmentation and quality control pipeline. A deep learning model for semantic segmentation, based on the nnU-Net architecture, was applied to full-cycle short-axis cine images from 29,908 baseline participants. The primary objective was to derive structural and functional parameters for both ventricles (LV, RV), including end-diastolic volume (EDV), end-systolic volume (ESV), and LV myocardial mass. Quality control measures included a visual assessment of outliers in morphofunctional parameters, inter- and intra-ventricular phase differences, and time-volume curves (TVC). These were adjudicated using a five-point rating scale, ranging from five (excellent) to one (non-diagnostic), with ratings of three or lower leading to exclusion. The predictive value of the outlier criteria for inclusion and exclusion was evaluated using receiver operating characteristic analysis. The segmentation model generated complete data for 29,609 participants (incomplete in 1.0%), of which 5,082 cases (17.0%) underwent visual assessment. Quality assurance yielded a sample of 26,899 (90.8%) participants with excellent or good quality, excluding 1,875 participants due to image quality issues and 835 participants due to segmentation quality issues. TVC was the strongest single discriminator between included and excluded participants (AUC: 0.684). Of the two-category combinations, the pairing of TVC and phase differences provided the greatest improvement over TVC alone (AUC difference: 0.044; p<0.001). The best performance was observed when all three categories were combined (AUC: 0.748). By extending the quality-controlled sample to include mid-level 'acceptable' quality ratings, a total of 28,413 (96.0%) participants could be included. The implemented pipeline enabled the automated segmentation of an extensive CMR dataset with integrated quality control measures. This methodology ensures that ensuing quantitative analyses are conducted with a diminished risk of bias.
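For context, the ventricular volumes and time-volume curve mentioned above follow directly from per-phase segmentation voxel counts; the sketch below uses made-up voxel counts and spacing, not NAKO data or the actual pipeline code.

```python
# Time-volume curve and EDV/ESV/EF from per-phase LV voxel counts (synthetic).
import numpy as np

voxel_volume_ml = 1.8 * 1.8 * 8.0 / 1000.0                 # assumed voxel spacing in mm
lv_voxels = np.array([31000, 29000, 25000, 21000, 18000,
                      17000, 19000, 23000, 27000, 30000])   # one synthetic cardiac cycle

tvc = lv_voxels * voxel_volume_ml                           # time-volume curve in mL
edv, esv = tvc.max(), tvc.min()
ef = 100.0 * (edv - esv) / edv
print(f"EDV={edv:.1f} mL  ESV={esv:.1f} mL  EF={ef:.1f} %")
```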

Building a General SimCLR Self-Supervised Foundation Model Across Neurological Diseases to Advance 3D Brain MRI Diagnoses

Emily Kaczmarek, Justin Szeto, Brennan Nichyporuk, Tal Arbel

arxiv preprint · Sep 12 2025
3D structural Magnetic Resonance Imaging (MRI) brain scans are commonly acquired in clinical settings to monitor a wide range of neurological conditions, including neurodegenerative disorders and stroke. While deep learning models have shown promising results analyzing 3D MRI across a number of brain imaging tasks, most are highly tailored for specific tasks with limited labeled data, and are not able to generalize across tasks and/or populations. The development of self-supervised learning (SSL) has enabled the creation of large medical foundation models that leverage diverse, unlabeled datasets ranging from healthy to diseased data, showing significant success in 2D medical imaging applications. However, even the very few foundation models for 3D brain MRI that have been developed remain limited in resolution, scope, or accessibility. In this work, we present a general, high-resolution SimCLR-based SSL foundation model for 3D brain structural MRI, pre-trained on 18,759 patients (44,958 scans) from 11 publicly available datasets spanning diverse neurological diseases. We compare our model to Masked Autoencoders (MAE), as well as two supervised baselines, on four diverse downstream prediction tasks in both in-distribution and out-of-distribution settings. Our fine-tuned SimCLR model outperforms all other models across all tasks. Notably, our model still achieves superior performance when fine-tuned using only 20% of labeled training samples for predicting Alzheimer's disease. We use publicly available code and data, and release our trained model at https://github.com/emilykaczmarek/3D-Neuro-SimCLR, contributing a broadly applicable and accessible foundation model for clinical brain MRI analysis.
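For readers unfamiliar with SimCLR, its contrastive objective is the NT-Xent loss over two augmented views of each scan; below is a minimal generic PyTorch sketch of that loss, not the authors' released code.

```python
# Minimal NT-Xent (SimCLR) loss: positives are the two views of the same scan.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) projection-head outputs for two augmented views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # (2N, D), unit norm
    sim = z @ z.t() / tau                                   # scaled cosine similarities
    n = z1.shape[0]
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])    # index of each positive
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(z1, z2))
```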

Novel BDefRCNLSTM: an efficient ensemble deep learning approach for enhanced brain tumor detection and categorization with segmentation.

Janapati M, Akthar S

pubmed · Sep 11 2025
Brain tumour detection and classification are critical for improving patient prognosis and treatment planning. However, manual identification from magnetic resonance imaging (MRI) scans is time-consuming, error-prone, and reliant on expert interpretation. The increasing complexity of tumour characteristics necessitates automated solutions to enhance accuracy and efficiency. This study introduces a novel ensemble deep learning model, boosted deformable and residual convolutional network with bi-directional convolutional long short-term memory (BDefRCNLSTM), for the classification and segmentation of brain tumours. The proposed framework integrates entropy-based local binary pattern (ELBP) for extracting spatial semantic features and employs the enhanced sooty tern optimisation (ESTO) algorithm for optimal feature selection. Additionally, an improved X-Net model is utilised for precise segmentation of tumour regions. The model is trained and evaluated on Figshare, Brain MRI, and Kaggle datasets using multiple performance metrics. Experimental results demonstrate that the proposed BDefRCNLSTM model achieves over 99% accuracy in both classification and segmentation, outperforming existing state-of-the-art approaches. The findings establish the proposed approach as a clinically viable solution for automated brain tumour diagnosis. The integration of optimised feature selection and advanced segmentation techniques improves diagnostic accuracy, potentially assisting radiologists in making faster and more reliable decisions.
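As a loose illustration of the kind of texture descriptor that the name ELBP suggests (the paper's exact formulation may differ), one can weight a local binary pattern histogram by its Shannon entropy:

```python
# Hedged sketch: entropy-weighted local binary pattern features from a 2D slice.
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(0)
slice_2d = rng.random((128, 128))                    # synthetic stand-in for an MRI slice

lbp = local_binary_pattern(slice_2d, P=8, R=1, method="uniform")
counts, _ = np.histogram(lbp, bins=np.arange(11))    # 10 uniform-LBP codes (0..9)
p = counts / counts.sum()
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))      # Shannon entropy of the code distribution
features = p * entropy                               # entropy-weighted histogram
print("entropy:", round(float(entropy), 3), "features:", np.round(features, 3))
```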

Neurodevelopmental deviations in schizophrenia: Evidence from multimodal connectome-based brain ages.

Fan YS, Yang P, Zhu Y, Jing W, Xu Y, Xu Y, Guo J, Lu F, Yang M, Huang W, Chen H

pubmed · Sep 11 2025
Pathologic schizophrenia processes originate early in brain development, leading to brain alterations detectable with structural and functional magnetic resonance imaging (MRI). Recent MRI studies have sought to characterize disease effects from a brain age perspective, but developmental deviations from the typical brain age trajectory in youths with schizophrenia have not been established. This study investigated brain developmental deviations in early-onset schizophrenia (EOS) patients by applying machine learning algorithms to structural and functional MRI data. Multimodal MRI data, including T1-weighted MRI (T1w-MRI), diffusion MRI, and resting-state functional MRI (rs-fMRI) data, were collected from 80 antipsychotic-naive first-episode EOS patients and 91 typically developing (TD) controls. The morphometric similarity connectome (MSC), structural connectome (SC), and functional connectome (FC) were constructed separately from these three modalities. Using these connectivity features, eight brain age estimation models were first trained on the TD group, the best of which was then used to predict brain ages in patients. Individual brain age gaps were computed as brain age minus chronological age. Both the SC and MSC features performed well in brain age estimation, whereas the FC features did not. Compared with the TD controls, the EOS patients showed increased absolute brain age gaps when using the SC or MSC features, with opposite trends between childhood and adolescence. These increased brain age gaps in EOS patients were positively correlated with the severity of their clinical symptoms. These findings from a multimodal brain age perspective suggest that advanced brain age gaps exist early in youths with schizophrenia.
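The brain-age-gap analysis described above can be illustrated generically as follows (synthetic features and ages, assumed regressor; not the authors' eight-model pipeline):

```python
# Illustrative brain-age-gap workflow: fit on controls, predict in patients.
import numpy as np
from sklearn.svm import SVR
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
age_td = rng.uniform(8, 18, 91)                                        # TD controls
feat_td = age_td[:, None] * 0.1 + rng.normal(scale=0.5, size=(91, 50)) # stand-in SC/MSC features
model = SVR(kernel="rbf", C=10.0).fit(feat_td, age_td)

age_eos = rng.uniform(8, 18, 80)                                       # EOS patients
feat_eos = (age_eos[:, None] + 1.5) * 0.1 + rng.normal(scale=0.5, size=(80, 50))
gap = model.predict(feat_eos) - age_eos                                # brain age gap

symptoms = 60 + 4 * gap + rng.normal(scale=5, size=80)                 # synthetic severity scores
r, _ = pearsonr(gap, symptoms)
print("mean gap:", round(float(gap.mean()), 2), "r(gap, symptoms):", round(float(r), 2))
```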

Mechanistic Learning with Guided Diffusion Models to Predict Spatio-Temporal Brain Tumor Growth

Daria Laslo, Efthymios Georgiou, Marius George Linguraru, Andreas Rauschecker, Sabine Muller, Catherine R. Jutzeler, Sarah Bruningk

arxiv preprint · Sep 11 2025
Predicting the spatio-temporal progression of brain tumors is essential for guiding clinical decisions in neuro-oncology. We propose a hybrid mechanistic learning framework that combines a mathematical tumor growth model with a guided denoising diffusion implicit model (DDIM) to synthesize anatomically feasible future MRIs from preceding scans. The mechanistic model, formulated as a system of ordinary differential equations, captures temporal tumor dynamics including radiotherapy effects and estimates future tumor burden. These estimates condition a gradient-guided DDIM, enabling image synthesis that aligns with both predicted growth and patient anatomy. We train our model on the BraTS adult and pediatric glioma datasets and evaluate on 60 axial slices of in-house longitudinal pediatric diffuse midline glioma (DMG) cases. Our framework generates realistic follow-up scans, as assessed by spatial similarity metrics. It also introduces tumor growth probability maps, which capture both clinically relevant extent and directionality of tumor growth, as shown by the 95th percentile Hausdorff distance. The method enables biologically informed image generation in data-limited scenarios, offering generative space-time predictions that account for mechanistic priors.
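To make the mechanistic component concrete, the sketch below integrates a simple logistic tumor-burden ODE with an on/off radiotherapy kill term; this is a generic, assumption-laden example, not the system of equations used in the paper.

```python
# Hedged sketch of a mechanistic tumor-burden ODE with a radiotherapy effect.
import numpy as np
from scipy.integrate import solve_ivp

rho, K = 0.03, 150.0          # assumed growth rate (1/day) and carrying capacity (cm^3)
alpha, t_rt = 0.10, 30.0      # assumed RT kill rate and RT start day

def dvdt(t, v):
    rt_kill = alpha * v if t >= t_rt else 0.0   # simple on/off radiotherapy term
    return rho * v * (1.0 - v / K) - rt_kill

sol = solve_ivp(dvdt, t_span=(0, 120), y0=[5.0], t_eval=np.linspace(0, 120, 13))
for t, v in zip(sol.t, sol.y[0]):
    print(f"day {t:5.1f}: tumor burden {v:6.2f} cm^3")
```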