Page 41 of 93924 results

Challenges, optimization strategies, and future horizons of advanced deep learning approaches for brain lesion segmentation.

Zaman A, Yassin MM, Mehmud I, Cao A, Lu J, Hassan H, Kang Y

pubmed logopapers · Jul 1 2025
Brain lesion segmentation is a challenging task in medical image analysis, aiming to delineate lesion regions precisely. Deep learning (DL) techniques have recently demonstrated promising results across various computer vision tasks, including semantic segmentation, object detection, and image classification. This paper offers an overview of recent DL algorithms for brain tumor and stroke segmentation, drawing on literature from 2021 to 2024. It highlights the strengths, limitations, current research challenges, and unexplored areas in imaging-based brain lesion classification, based on insights from over 250 recent review papers. Techniques addressing difficulties such as class imbalance and multimodal data are presented. Optimization methods that reduce computational and structural complexity and improve processing speed are discussed, including lightweight neural networks, multilayer architectures, and computationally efficient, highly accurate network designs. The paper also reviews established and recent frameworks for brain lesion detection and highlights publicly available benchmark datasets and their issues. Furthermore, open research areas, application prospects, and future directions for DL-based brain lesion classification are discussed. Future directions include integrating neural architecture search methods with domain knowledge, predicting patient survival, and learning to separate brain lesions using patient statistics. To ensure patient privacy, future research is anticipated to explore privacy-preserving learning frameworks. Overall, the presented suggestions serve as a guideline for researchers and system designers involved in brain lesion detection and stroke segmentation tasks.
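The class-imbalance difficulty noted above is commonly addressed with overlap-based losses. A minimal NumPy sketch of the soft Dice loss, which scores overlap relative to lesion size rather than per-voxel accuracy (an illustrative example, not a method taken from the review):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: robust to class imbalance because it measures
    overlap relative to lesion size, not per-voxel accuracy."""
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

# Tiny imbalanced example: 4 lesion voxels in a 100-voxel volume.
target = np.zeros(100)
target[:4] = 1.0
perfect = soft_dice_loss(target, target)                # loss near 0
all_background = soft_dice_loss(np.zeros(100), target)  # loss near 1
```

On this toy volume an all-background prediction is 96% voxel-accurate yet scores a loss near 1, which is exactly why such losses help with imbalanced lesion data.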

Deep learning-based segmentation of the trigeminal nerve and surrounding vasculature in trigeminal neuralgia.

Halbert-Elliott KM, Xie ME, Dong B, Das O, Wang X, Jackson CM, Lim M, Huang J, Yedavalli VS, Bettegowda C, Xu R

pubmed logopapers · Jul 1 2025
Preoperative workup of trigeminal neuralgia (TN) consists of identification of neurovascular features on MRI. In this study, the authors apply and evaluate the performance of deep learning models for segmentation of the trigeminal nerve and surrounding vasculature to quantify anatomical features of the nerve and vessels. Six U-Net-based neural networks, each with a different encoder backbone, were trained to label constructive interference in steady-state MRI voxels as nerve, vasculature, or background. A retrospective dataset of 50 TN patients at the authors' institution who underwent preoperative high-resolution MRI in 2022 was utilized to train and test the models. Performance was measured by the Dice coefficient and intersection over union (IoU) metrics. Anatomical characteristics, such as surface area of neurovascular contact and distance to the contact point, were computed and compared between the predicted and ground truth segmentations. Of the evaluated models, the best performing was U-Net with an SE-ResNet50 backbone (Dice score = 0.775 ± 0.015, IoU score = 0.681 ± 0.015). When the SE-ResNet50 backbone was used, the average surface area of neurovascular contact in the testing dataset was 6.90 mm², which was not significantly different from the surface area calculated from manual segmentation (p = 0.83). The average calculated distance from the brainstem to the contact point was 4.34 mm, which was also not significantly different from manual segmentation (p = 0.29). U-Net-based neural networks perform well for segmenting trigeminal nerve and vessels from preoperative MRI volumes. This technology enables the development of quantitative and objective metrics for radiographic evaluation of TN.
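The Dice and IoU metrics used in the study above can be computed directly from binary masks; a small NumPy sketch (the function name is mine):

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Dice coefficient and intersection-over-union for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    dice = 2.0 * inter / total if total else 1.0  # empty masks count as a match
    iou = inter / union if union else 1.0
    return dice, iou
```

For binary masks the two metrics carry the same information, related by IoU = Dice / (2 − Dice), which is why papers often report both.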

Photon-counting detector CT of the brain reduces variability of Hounsfield units and has a mean offset compared with energy-integrating detector CT.

Stein T, Lang F, Rau S, Reisert M, Russe MF, Schürmann T, Fink A, Kellner E, Weiss J, Bamberg F, Urbach H, Rau A

pubmed logopapers · Jul 1 2025
Distinguishing gray matter (GM) from white matter (WM) is essential for CT of the brain. The recently established photon-counting detector CT (PCD-CT) technology employs a novel detection technique that might allow more precise measurement of tissue attenuation, improved delineation of attenuation values (Hounsfield units, HU), and better image quality in comparison with energy-integrating detector CT (EID-CT). To investigate this, we compared HU, GM vs. WM contrast, and image noise using automated deep learning-based brain segmentations. We retrospectively included patients who received either PCD-CT or EID-CT and did not display a cerebral pathology. A deep learning-based segmentation of the GM and WM was used to extract HU. From this, the gray-to-white ratio and contrast-to-noise ratio were calculated. We included 329 patients with EID-CT (mean age 59.8 ± 20.2 years) and 180 with PCD-CT (mean age 64.7 ± 16.5 years). GM and WM showed significantly lower HU in PCD-CT (GM: 40.4 ± 2.2 HU; WM: 33.4 ± 1.5 HU) compared to EID-CT (GM: 45.1 ± 1.6 HU; WM: 37.4 ± 1.6 HU, p < .001). Standard deviations of HU were also lower in PCD-CT (GM and WM both p < .001), and the contrast-to-noise ratio was significantly higher in PCD-CT compared to EID-CT (p < .001). Gray-to-white matter ratios were not significantly different across both modalities (p > .99). In an age-matched subset (n = 157 patients from both cohorts), all findings were replicated. This comprehensive comparison of HU in cerebral gray and white matter revealed substantially reduced image noise and an average offset toward lower HU in PCD-CT, while the ratio between GM and WM remained constant. The potential need to adapt windowing presets based on this finding should be investigated in future studies.
CNR = Contrast-to-Noise Ratio; CTDIvol = Volume Computed Tomography Dose Index; EID = Energy-Integrating Detector; GWR = Gray-to-White Matter Ratio; HU = Hounsfield Units; PCD = Photon-Counting Detector; ROI = Region of Interest; VMI = Virtual Monoenergetic Images.
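The gray-to-white ratio and contrast-to-noise ratio compared in this study can be sketched from per-voxel HU samples. This assumes the common definition CNR = (mean GM − mean WM) / pooled noise; the abstract does not spell out its exact formula:

```python
import numpy as np

def gwr_and_cnr(gm_hu, wm_hu):
    """Gray-to-white matter ratio (GWR) and contrast-to-noise ratio (CNR)
    from per-voxel HU samples of segmented gray and white matter.
    CNR here = (mean GM - mean WM) / pooled standard deviation."""
    gm_mean, wm_mean = np.mean(gm_hu), np.mean(wm_hu)
    noise = np.sqrt((np.var(gm_hu) + np.var(wm_hu)) / 2.0)  # pooled noise
    return gm_mean / wm_mean, (gm_mean - wm_mean) / noise
```

With the reported means (PCD-CT: 40.4/33.4 ≈ 1.21; EID-CT: 45.1/37.4 ≈ 1.21), the GWR is nearly identical across detectors, matching the study's finding that the ratio stays constant despite the HU offset.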

CUAMT: An MRI semi-supervised medical image segmentation framework based on contextual information and mixed uncertainty.

Xiao H, Wang Y, Xiong S, Ren Y, Zhang H

pubmed logopapers · Jul 1 2025
Semi-supervised medical image segmentation is a class of machine learning paradigms that trains and runs segmentation models on both labeled and unlabeled medical images, which can effectively reduce the data-labeling workload. However, existing consistency-based semi-supervised segmentation models mainly focus on ever more complex consistency strategies and make poor use of volumetric contextual information, leaving the model with a vague or uncertain understanding of the boundary between object and background and producing ambiguous or even erroneous boundary segmentations. For this reason, this study proposes CUAMT, a hybrid uncertainty network based on contextual information. In this model, a contextual information extraction module (CIE) is proposed, which learns connections between image contexts by extracting semantic features at different scales and guides the model to strengthen its learning of contextual information. In addition, a hybrid uncertainty module (HUM) is proposed, which guides the model to focus on boundary information by combining the global and local uncertainty information of two different networks, improving segmentation performance at the boundary. Validation experiments on the left atrial segmentation and brain tumor segmentation datasets show that our model achieves 89.84%, 79.89%, and 8.73 on the Dice, Jaccard, and 95HD metrics, respectively, significantly outperforming several current SOTA semi-supervised methods. This study confirms that the CIE and HUM strategies are effective, and a semi-supervised segmentation framework is proposed for medical image segmentation.
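The "mixed uncertainty" idea can be illustrated as blending the per-voxel predictive entropy of two networks so that ambiguous boundary voxels receive high scores. This is a loose reconstruction, not the paper's HUM implementation; the blending weight `alpha` is an assumption:

```python
import numpy as np

def voxel_entropy(prob):
    """Predictive entropy per voxel from class probabilities (last axis)."""
    p = np.clip(prob, 1e-8, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def mixed_uncertainty(prob_a, prob_b, alpha=0.5):
    """Blend the uncertainty maps of two networks; high values flag
    ambiguous (typically boundary) voxels that a consistency loss can
    down-weight. Equal blending (alpha=0.5) is an illustrative choice."""
    return alpha * voxel_entropy(prob_a) + (1 - alpha) * voxel_entropy(prob_b)
```

A voxel where both networks predict (0.5, 0.5) scores the maximum entropy ln 2 ≈ 0.693, while a confidently classified voxel scores near zero, so thresholding this map isolates the uncertain boundary region.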

Assessment of AI-accelerated T2-weighted brain MRI, based on clinical ratings and image quality evaluation.

Nonninger JN, Kienast P, Pogledic I, Mallouhi A, Barkhof F, Trattnig S, Robinson SD, Kasprian G, Haider L

pubmed logopapers · Jul 1 2025
To compare clinical ratings and signal-to-noise ratio (SNR) measures of a commercially available Deep Learning-based MRI reconstruction method (T2<sub>(DR)</sub>) against conventional T2 turbo spin echo brain MRI (T2<sub>(CN)</sub>). 100 consecutive patients with various neurological conditions underwent both T2<sub>(DR)</sub> and T2<sub>(CN)</sub> on a Siemens Vida 3 T scanner with a 64-channel head coil in the same examination. Acquisition times were 3.33 min for T2<sub>(CN)</sub> and 1.04 min for T2<sub>(DR)</sub>. Four neuroradiologists evaluated overall image quality (OIQ), diagnostic safety (DS), and image artifacts (IA), blinded to the acquisition mode. SNR and SNR<sub>eff</sub> (adjusted for acquisition time) were calculated for air, grey and white matter, and cerebrospinal fluid. The mean patient age was 43.6 years (SD 20.3), with 54 females. The distribution of non-diagnostic ratings did not differ significantly between T2<sub>(CN)</sub> and T2<sub>(DR)</sub> (IA: p = 0.108; OIQ: p = 0.700; DS: p = 0.652). However, when considering the full spectrum of ratings, significant differences favouring T2<sub>(CN)</sub> emerged in OIQ (p = 0.003) and IA (p < 0.001). T2<sub>(CN)</sub> had higher SNR (157.9, SD 123.4) than T2<sub>(DR)</sub> (112.8, SD 82.7), p < 0.001, but T2<sub>(DR)</sub> demonstrated superior SNR<sub>eff</sub> (14.1, SD 10.3) compared to T2<sub>(CN)</sub> (10.8, SD 8.5), p < 0.001. Our results suggest that while T2<sub>(DR)</sub> may be clinically applicable in a diagnostic setting, it does not fully match the quality of high-standard conventional T2<sub>(CN)</sub> MRI acquisitions.
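The reported SNR<sub>eff</sub> values are consistent with dividing SNR by the square root of the acquisition time in seconds, a common time-normalization; note this definition is inferred from the reported numbers, not stated in the abstract:

```python
import math

def snr_eff(snr, acq_time_min):
    # Time-adjusted SNR: divide by the square root of the acquisition
    # time in seconds, crediting faster sequences for their speed.
    return snr / math.sqrt(acq_time_min * 60.0)

# With the reported values: conventional 157.9 at 3.33 min gives ~11.2,
# and DL-accelerated 112.8 at 1.04 min gives ~14.3, close to the
# reported SNReff of 10.8 and 14.1.
```

Under this normalization the DL-accelerated sequence overtakes the conventional one despite its lower raw SNR, mirroring the study's conclusion.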

Machine learning approaches for fine-grained symptom estimation in schizophrenia: A comprehensive review.

Foteinopoulou NM, Patras I

pubmed logopapers · Jul 1 2025
Schizophrenia is a severe yet treatable mental disorder, and it is diagnosed using a multitude of primary and secondary symptoms. Diagnosis and treatment for each individual depend on the severity of the symptoms. Therefore, there is a need for accurate, personalised assessments. However, the process can be both time-consuming and subjective; hence, there is a motivation to explore automated methods that can offer consistent diagnosis and precise symptom assessments, thereby complementing the work of healthcare practitioners. Machine Learning has demonstrated impressive capabilities across numerous domains, including medicine; the use of Machine Learning in patient assessment holds great promise for healthcare professionals and patients alike, as it can lead to more consistent and accurate symptom estimation. This survey reviews methodologies utilising Machine Learning for diagnosing and assessing schizophrenia. Contrary to previous reviews that primarily focused on binary classification, this work recognises the complexity of the condition and, instead, offers an overview of Machine Learning methods designed for fine-grained symptom estimation. We cover multiple modalities, namely Medical Imaging, Electroencephalograms and Audio-Visual, as the illness symptoms can manifest in a patient's pathology and behaviour. Finally, we analyse the datasets and methodologies used in the studies and identify trends and gaps, as well as opportunities for future research.

Developments in MRI radiomics research for vascular cognitive impairment.

Chen X, Luo X, Chen L, Liu H, Yin X, Chen Z

pubmed logopapers · Jul 1 2025
Vascular cognitive impairment (VCI) is an umbrella term for diseases associated with cognitive decline induced by substantive brain damage following pathological changes in the cerebrovascular system. The primary clinical manifestations include behavioral abnormalities and diminished learning and memory functions. If the location and extent of brain injury are not identified early and therapeutic interventions are not promptly administered, cognitive impairment may become irreversible. Therefore, early diagnosis of VCI is crucial for its prevention and treatment. Before the onset of cognitive impairment in VCI, magnetic resonance imaging (MRI) radiomics can be used for early assessment and diagnosis, guiding clinicians in providing precise treatment for patients, which holds significant potential for development. This article reviews the classification of VCI, the concept of radiomics, the application of MRI radiomics in VCI, and the limitations of radiomics, in the context of advances in its application to the central nervous system. CRITICAL RELEVANCE STATEMENT: This article explores how MRI radiomics can be used to detect VCI early, enhancing clinical radiology practice by offering a reliable method for prediction, diagnosis, and identification, which also promotes standardization in research and integration of disciplines. KEY POINTS: MRI radiomics can predict VCI early. MRI radiomics can diagnose VCI. MRI radiomics distinguishes VCI from Alzheimer's disease.

Regression modeling with convolutional neural network for predicting extent of resection from preoperative MRI in giant pituitary adenomas: a pilot study.

Patel BK, Tariciotti L, DiRocco L, Mandile A, Lohana S, Rodas A, Zohdy YM, Maldonado J, Vergara SM, De Andrade EJ, Revuelta Barbero JM, Reyes C, Solares CA, Garzon-Muvdi T, Pradilla G

pubmed logopapers · Jul 1 2025
Giant pituitary adenomas (GPAs) are challenging skull base tumors due to their size and proximity to critical neurovascular structures. Achieving gross-total resection (GTR) can be difficult, and residual tumor burden is commonly reported. This study evaluated the ability of convolutional neural networks (CNNs) to predict the extent of resection (EOR) from preoperative MRI with the goals of enhancing surgical planning, improving preoperative patient counseling, and enhancing multidisciplinary postoperative coordination of care. A retrospective study of 100 consecutive patients with GPAs was conducted. Patients underwent surgery via the endoscopic endonasal transsphenoidal approach. CNN models were trained on DICOM images from preoperative MR images to predict EOR, using a split of 80 patients for training and 20 for validation. The models included different architectural modules to refine image selection and predict EOR based on tumor-contained images in various anatomical planes. The model design, training, and validation were conducted in a local environment in Python using the TensorFlow machine learning system. The median preoperative tumor volume was 19.4 cm³. The median EOR was 94.5%, with GTR achieved in 49% of cases. The CNN model showed high predictive accuracy, especially when analyzing images from the coronal plane, with a root mean square error of 2.9916 and a mean absolute error of 2.6225. The coefficient of determination (R²) was 0.9823, indicating excellent model performance. CNN-based models may effectively predict the EOR for GPAs from preoperative MRI scans, offering a promising tool for presurgical assessment and patient counseling. Confirmatory studies with large patient samples are needed to definitively validate these findings.
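The three regression metrics reported above (RMSE, MAE, and R²) can be computed in a few lines of NumPy; a sketch using made-up EOR percentages:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, and coefficient of determination (R^2) for a regression
    model, e.g. predicted vs. actual extent of resection in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    return rmse, mae, 1.0 - ss_res / ss_tot

# Hypothetical EOR values (%) for four patients:
rmse, mae, r2 = regression_metrics([90, 95, 100, 85], [92, 94, 99, 86])
```

R² compares the model's squared error against that of always predicting the mean, so a value near 1 (like the study's 0.9823) means the model explains nearly all the variance in EOR.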

Prediction of early recurrence in primary central nervous system lymphoma based on multimodal MRI-based radiomics: A preliminary study.

Wang X, Wang S, Zhao X, Chen L, Yuan M, Yan Y, Sun X, Liu Y, Sun S

pubmed logopapers · Jul 1 2025
To evaluate the role of multimodal magnetic resonance imaging radiomics features in predicting early recurrence of primary central nervous system lymphoma (PCNSL) and to investigate their correlation with patient prognosis. A retrospective analysis was conducted on 145 patients with PCNSL who were treated with high-dose methotrexate-based chemotherapy. Clinical data and MRI images were collected, with tumor regions segmented using ITK-SNAP software. Radiomics features were extracted via PyRadiomics, and predictive models were developed using various machine learning algorithms. The predictive performance of these models was assessed using receiver operating characteristic (ROC) curves. Additionally, Cox regression analysis was employed to identify risk factors associated with progression-free survival (PFS). In the cohort of 145 PCNSL patients (72 recurrence, 73 non-recurrence), clinical characteristics were comparable between groups except for the frequency of multiple lesions (61.1% vs. 39.7%, p < 0.05) and of not receiving consolidation therapy (44.4% vs. 13.7%, p < 0.05). A total of 2392 radiomics features were extracted from the CET1 and T2WI MRI sequences. After feature selection combined with clinical variables, 10 features were retained. The logistic regression (LR) model exhibited superior predictive performance for early PCNSL relapse in the test set, with an area under the curve (AUC) of 0.887 (95% confidence interval: 0.785-0.988). Multivariate Cox regression identified the Cli-Rad score as an independent prognostic factor for PFS. A significant difference in PFS was observed between high- and low-risk groups defined by the Cli-Rad score (8.24 months vs. 24.17 months, p < 0.001). The LR model based on multimodal MRI radiomics and clinical features can effectively predict early recurrence of PCNSL, while the Cli-Rad score can independently forecast PFS among PCNSL patients.
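The ROC analysis used to score such radiomics models needs no ML framework: the AUC can be computed with the rank (Mann-Whitney) formulation. A NumPy sketch with hypothetical scores and labels:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a random positive case is scored above a random
    negative case, counting ties as half."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

An AUC of 0.887, as reported for the LR model, means a randomly chosen relapsing patient outranks a randomly chosen non-relapsing one about 89% of the time.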

Attention-driven hybrid deep learning and SVM model for early Alzheimer's diagnosis using neuroimaging fusion.

Paduvilan AK, Livingston GAL, Kuppuchamy SK, Dhanaraj RK, Subramanian M, Al-Rasheed A, Getahun M, Soufiene BO

pubmed logopapers · Jul 1 2025
Alzheimer's Disease (AD) poses a significant global health challenge, necessitating early and accurate diagnosis to enable timely interventions. AD is a progressive neurodegenerative disorder that affects millions worldwide and is one of the leading causes of cognitive impairment in older adults. Early diagnosis is critical for enabling effective treatment strategies, slowing disease progression, and improving the quality of life for patients. Existing diagnostic methods often struggle with limited sensitivity, overfitting, and reduced reliability due to inadequate feature extraction, imbalanced datasets, and suboptimal model architectures. This study addresses these gaps by introducing an innovative methodology that combines SVM with Deep Learning (DL) to improve the classification performance of AD. Deep learning models extract high-level imaging features, which are then concatenated with SVM kernels in a late-fusion ensemble. This hybrid design leverages deep representations for pattern recognition and the SVM's robustness on small sample sets. By precisely classifying the disease from neuroimaging data, this study provides a tool for early-stage identification of possible cases, enhancing management and treatment options. The approach integrates advanced data pre-processing, dynamic feature optimization, and attention-driven learning mechanisms to enhance interpretability and robustness. The research leverages a dataset of MRI and PET imaging, integrating novel fusion techniques to extract key biomarkers indicative of cognitive decline. Unlike prior approaches, this method effectively mitigates the challenges of data sparsity and dimensionality reduction while improving generalization across diverse datasets. Comparative analysis highlights a 15% improvement in accuracy, a 12% reduction in false positives, and a 10% increase in F1-score against state-of-the-art models such as HNC and MFNNC.
The proposed method significantly outperforms existing techniques across metrics like accuracy, sensitivity, specificity, and computational efficiency, achieving an overall accuracy of 98.5%.
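The late-fusion design described above, deep features concatenated with other features and handed to a linear SVM, can be sketched on toy data. This is an illustrative reconstruction (Pegasos subgradient training, invented toy features), not the authors' pipeline:

```python
import numpy as np

def late_fusion_svm(deep_feats, clin_feats, labels, epochs=300, lam=0.01, seed=0):
    """Toy late-fusion classifier: concatenate deep-network features with
    clinical features, then fit a linear SVM by Pegasos subgradient descent."""
    X = np.hstack([deep_feats, clin_feats])          # the late-fusion step
    y = np.where(np.asarray(labels) > 0, 1.0, -1.0)  # SVM labels in {-1, +1}
    w = np.zeros(X.shape[1])
    rng = np.random.default_rng(seed)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)           # decaying step size
            margin = y[i] * (X[i] @ w)
            w *= 1.0 - eta * lam            # regularization shrink
            if margin < 1:                  # hinge-loss subgradient step
                w += eta * y[i] * X[i]
    return w

# Separable toy data: the class signal lives in the first deep feature.
deep = np.array([[2.0, 0.1], [1.8, 0.0], [-2.1, 0.2], [-1.9, -0.1]])
clin = np.array([[0.5], [0.4], [0.6], [0.5]])
w = late_fusion_svm(deep, clin, [1, 1, 0, 0])
```

The fusion itself is a single `np.hstack`; the point of the design is that the SVM's margin objective stays well-behaved on the small sample sizes typical of neuroimaging cohorts.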