
Exploring the significance of the frontal lobe for diagnosis of schizophrenia using explainable artificial intelligence and group level analysis.

Varaprasad SA, Goel T

PubMed · Jun 1 2025
Schizophrenia (SZ) is a complex mental disorder characterized by a profound disruption in cognition and emotion, often resulting in a distorted perception of reality. Magnetic resonance imaging (MRI) is an essential tool for diagnosing SZ, helping to characterize the organization of the brain. Functional MRI (fMRI) is a specialized imaging technique that measures and maps brain activity by detecting changes in blood flow and oxygenation. The proposed work uses an explainable deep learning approach together with group-level analysis of both structural MRI (sMRI) and fMRI data to identify the brain regions significant in SZ patients. Grad-CAM heat maps showed clear frontal-lobe localization for classifying SZ versus cognitively normal (CN) subjects, with 97.33% accuracy. Group-difference analysis of the sMRI data revealed intense voxel activity in the right superior frontal gyrus of the frontal lobe in SZ patients. The group difference between SZ and CN during n-back fMRI tasks likewise indicated significant voxel activation in the frontal cortex. These findings suggest that the frontal lobe plays a crucial role in the diagnosis of SZ, aiding clinicians in treatment planning.
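The Grad-CAM heat maps mentioned above weight each convolutional feature map by the spatially averaged gradient of the class score, then apply a ReLU. A minimal stand-alone sketch of that weighting step (illustrative only; the study's actual pipeline operates on deep-network tensors):

```python
def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heat map from convolutional feature maps and
    the gradients of the class score w.r.t. those maps.

    feature_maps, gradients: lists of 2-D maps (lists of lists),
    one per channel, all with the same height/width.
    """
    h = len(feature_maps[0])
    w = len(feature_maps[0][0])
    # Channel weights: global average of each channel's gradient map.
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    # Weighted sum of feature maps, then ReLU.
    cam = [[0.0] * w for _ in range(h)]
    for a, fmap in zip(alphas, feature_maps):
        for i in range(h):
            for j in range(w):
                cam[i][j] += a * fmap[i][j]
    return [[max(0.0, v) for v in row] for row in cam]
```

Channels whose average gradient is negative are suppressed by the ReLU, which is what makes the resulting map highlight only class-supporting regions.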

Prediction of lymph node metastasis in papillary thyroid carcinoma using non-contrast CT-based radiomics and deep learning with thyroid lobe segmentation: A dual-center study.

Wang H, Wang X, Du Y, Wang Y, Bai Z, Wu D, Tang W, Zeng H, Tao J, He J

PubMed · Jun 1 2025
This study aimed to develop a predictive model for lymph node metastasis (LNM) in papillary thyroid carcinoma (PTC) patients from deep learning radiomics (DLRad) and clinical features. This study included 271 thyroid lobes from 228 PTC patients who underwent preoperative neck non-contrast CT at Center 1 (May 2021-April 2024). LNM status was confirmed via postoperative pathology, with each thyroid lobe labeled accordingly. The cohort was divided into training (n = 189) and validation (n = 82) cohorts, with additional temporal (n = 59 lobes, Center 1, May-August 2024) and external (n = 66 lobes, Center 2) test cohorts. Thyroid lobes were manually segmented from the isthmus midline, ensuring interobserver consistency (ICC ≥ 0.8). Deep learning and radiomics features were selected using LASSO algorithms to compute DLRad scores. Logistic regression identified independent predictors, forming DLRad, clinical, and combined models. Model performance was evaluated using AUC, calibration, decision curves, and the DeLong test, and compared against radiologists' assessments. Independent predictors of LNM included age, gender, multiple nodules, tumor size group, and DLRad. The combined model demonstrated superior diagnostic performance with AUCs of 0.830 (training), 0.799 (validation), 0.819 (temporal test), and 0.756 (external test), outperforming the DLRad model (AUCs: 0.786, 0.730, 0.753, 0.642), clinical model (AUCs: 0.723, 0.745, 0.671, 0.660), and radiologist evaluations (AUCs: 0.529, 0.606, 0.620, 0.503). It also achieved the lowest Brier scores (0.167, 0.184, 0.175, 0.201) and the highest net benefit in decision-curve analysis at threshold probabilities > 20%. The combined model integrating DLRad and clinical features exhibits good performance in predicting LNM in PTC patients.
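The AUC and Brier scores quoted above are both computable from predicted probabilities and binary labels. A minimal sketch of each (generic definitions, not the study's code): AUC via the rank-sum (Mann-Whitney) formulation, and the Brier score as the mean squared error of the probabilities.

```python
def roc_auc(labels, scores):
    """ROC AUC as the probability that a random positive outscores a
    random negative, counting ties as 0.5 (rank-sum formulation)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

def brier(labels, probs):
    """Brier score: mean squared difference between predicted
    probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - y) ** 2 for y, p in zip(labels, probs)) / len(labels)
```

For example, `roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])` gives 0.75: three of the four positive/negative pairs are correctly ordered.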

Automated neuroradiological support systems for multiple cerebrovascular disease markers - A systematic review and meta-analysis.

Phitidis J, O'Neil AQ, Whiteley WN, Alex B, Wardlaw JM, Bernabeu MO, Hernández MV

PubMed · Jun 1 2025
Cerebrovascular diseases (CVD) can lead to stroke and dementia. Stroke is the second leading cause of death worldwide, and dementia incidence is increasing each year. There are several markers of CVD that are visible on brain imaging, including: white matter hyperintensities (WMH), acute and chronic ischaemic stroke lesions (ISL), lacunes, enlarged perivascular spaces (PVS), acute and chronic haemorrhagic lesions, and cerebral microbleeds (CMB). Brain atrophy also occurs in CVD. These markers are important for patient management and intervention, since they indicate elevated risk of future stroke and dementia. We systematically reviewed automated systems designed to support radiologists reporting on these CVD imaging findings. We considered commercially available software and research publications which identify at least two CVD markers. In total, we included 29 commercial products and 13 research publications. Two distinct types of commercial support system were available: those which identify acute stroke lesions (haemorrhagic and ischaemic) from computed tomography (CT) scans, mainly for the purpose of patient triage; and those which measure WMH and atrophy regionally and longitudinally. In research, WMH and ISL were the markers most frequently analysed together, from magnetic resonance imaging (MRI) scans; lacunes and PVS were each targeted only twice and CMB only once. For stroke, commercially available systems largely support the emergency setting, whilst research systems also consider follow-up and routine scans. The systems that quantify WMH and atrophy are focused on neurodegenerative disease support, where these CVD markers are also of significance. There are currently no openly validated systems, commercial or in research, performing a comprehensive joint analysis of all CVD markers (WMH, ISL, lacunes, PVS, haemorrhagic lesions, CMB, and atrophy).

DCE-MRI based deep learning analysis of intratumoral subregion for predicting Ki-67 expression level in breast cancer.

Ding Z, Zhang C, Xia C, Yao Q, Wei Y, Zhang X, Zhao N, Wang X, Shi S

PubMed · Jun 1 2025
To evaluate whether deep learning (DL) analysis of intratumoral subregions based on dynamic contrast-enhanced MRI (DCE-MRI) can help predict Ki-67 expression level in breast cancer. A total of 290 breast cancer patients from two hospitals were retrospectively collected. A k-means clustering algorithm identified intratumoral subregions. DL features of the whole tumor and its subregions were extracted from DCE-MRI images using a pre-trained 3D ResNet18 model. The logistic regression model was constructed after dimension reduction. Model performance was assessed using the area under the curve (AUC), and clinical value was demonstrated through decision curve analysis (DCA). The k-means clustering method clustered the tumor into two subregions (habitat 1 and habitat 2) based on voxel values. Both the habitat 1 model (validation set: AUC = 0.771, 95% CI: 0.642-0.900 and external test set: AUC = 0.794, 95% CI: 0.696-0.891) and the habitat 2 model (AUC = 0.734, 95% CI: 0.605-0.862 and AUC = 0.756, 95% CI: 0.646-0.866) showed better predictive capability for Ki-67 expression level than the whole-tumor model (AUC = 0.686, 95% CI: 0.550-0.823 and AUC = 0.680, 95% CI: 0.555-0.804). The combined model based on the two subregions further enhanced the predictive capability (AUC = 0.808, 95% CI: 0.696-0.921 and AUC = 0.842, 95% CI: 0.758-0.926), and it demonstrated higher clinical value than the other models in DCA. The deep learning model derived from tumor subregions showed better performance for predicting Ki-67 expression level in breast cancer patients. Additionally, the model that integrated the two subregions further enhanced the predictive performance.
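The habitat step above clusters voxels into two subregions by intensity. A minimal 1-D k-means over voxel values illustrates the idea (a generic sketch with deterministic endpoint initialisation, not the study's implementation, which would typically use random or k-means++ seeding):

```python
def kmeans_1d(values, k=2, iters=50):
    """Plain 1-D k-means over scalar values (e.g. voxel intensities).
    Returns (centroids, labels). Centroids are seeded evenly across
    the value range, so the result is deterministic."""
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Update step: recompute each centroid as its cluster mean.
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels
```

With k=2 this splits the intensity histogram into a low-value and a high-value "habitat", mirroring the habitat 1 / habitat 2 partition described in the abstract.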

SSAT-Swin: Deep Learning-Based Spinal Ultrasound Feature Segmentation for Scoliosis Using Self-Supervised Swin Transformer.

Zhang C, Zheng Y, McAviney J, Ling SH

PubMed · Jun 1 2025
Scoliosis, a 3-D spinal deformity, requires early detection and intervention. Ultrasound curve angle (UCA) measurement using ultrasound images has emerged as a promising diagnostic tool. However, calculating the UCA directly from ultrasound images remains challenging due to low contrast, high noise, and irregular target shapes. Accurate segmentation results are therefore crucial to enhance image clarity and precision prior to UCA calculation. We propose the SSAT-Swin model, a transformer-based multi-class segmentation framework designed for ultrasound image analysis in scoliosis diagnosis. The model integrates a boundary-enhancement module in the decoder and a channel attention module in the skip connections. Additionally, self-supervised proxy tasks are used during pre-training on 1,170 images, followed by fine-tuning on 109 image-label pairs. The SSAT-Swin achieved Dice scores of 85.6% and Jaccard scores of 74.5%, with a 92.8% scoliosis bone feature detection rate, outperforming state-of-the-art models. Self-supervised learning enhances the model's ability to capture global context information, making it well-suited for addressing the unique challenges of ultrasound images, ultimately advancing scoliosis assessment through more accurate segmentation.
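The Dice and Jaccard scores reported above are standard overlap ratios between a predicted and a reference segmentation mask. A minimal pure-Python sketch of both metrics (generic definitions, not the paper's code):

```python
def dice_jaccard(pred, target):
    """Dice and Jaccard overlap between two binary masks,
    given as flat sequences of 0/1 ints of equal length."""
    inter = sum(p & t for p, t in zip(pred, target))
    psum, tsum = sum(pred), sum(target)
    union = psum + tsum - inter
    dice = 2 * inter / (psum + tsum) if (psum + tsum) else 1.0
    jacc = inter / union if union else 1.0
    return dice, jacc
```

The two are monotonically related (Dice = 2J / (1 + J)), which is why papers often report both: e.g. a Jaccard of 1/3 corresponds to a Dice of 0.5.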

Intraoperative stenosis detection in X-ray coronary angiography via temporal fusion and attention-based CNN.

Chen M, Wang S, Liang K, Chen X, Xu Z, Zhao C, Yuan W, Wan J, Huang Q

PubMed · Jun 1 2025
Coronary artery disease (CAD), the leading cause of mortality, is caused by atherosclerotic plaque buildup in the arteries. The gold standard for the diagnosis of CAD is X-ray coronary angiography (XCA) during percutaneous coronary intervention, where locating coronary artery stenosis is fundamental and essential. However, due to complex vascular features and motion artifacts caused by heartbeat and respiratory movement, manually recognizing stenosis is challenging for physicians, which may prolong surgical decision-making time and lead to irreversible myocardial damage. Therefore, we aim to provide an automatic method for accurate stenosis localization. In this work, we present a convolutional neural network (CNN) with feature-level temporal fusion and attention modules to detect coronary artery stenosis in XCA images. The temporal fusion module, composed of a deformable convolution and a correlation-based module, is proposed to integrate time-varying vessel features from consecutive frames. The attention module adopts channel-wise recalibration to capture global context as well as spatial-wise recalibration to enhance stenosis features with local width and morphology information. We compare our method to commonly used attention methods, state-of-the-art object detection methods, and stenosis detection methods. Experimental results show that our fusion and attention strategy significantly improves performance in discerning stenosis (P<0.05), achieving the best average recall score on two different datasets. This is the first study to integrate both temporal fusion and an attention mechanism into a novel feature-level hybrid CNN framework for stenosis detection in XCA images, which proves effective in improving detection performance and is therefore potentially helpful for intraoperative stenosis localization.
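Channel-wise recalibration, as used in the attention module above, gates each feature channel by a function of its global average. A simplified squeeze-and-excitation-style sketch (one scalar weight and bias per channel; these parameters are illustrative, not the paper's):

```python
import math

def channel_recalibrate(feature_maps, weights, biases):
    """Scale each channel map by sigmoid(w * global_mean + b).
    feature_maps: list of 2-D maps (lists of lists), one per channel;
    weights, biases: one scalar of each per channel."""
    out = []
    for fmap, w, b in zip(feature_maps, weights, biases):
        n = sum(len(row) for row in fmap)
        mean = sum(sum(row) for row in fmap) / n   # "squeeze" step
        gate = 1.0 / (1.0 + math.exp(-(w * mean + b)))  # "excitation" gate
        out.append([[gate * v for v in row] for row in fmap])
    return out
```

The gate lets the network amplify channels whose global statistics suggest stenosis-relevant content and damp the rest, which is the "global context" recalibration the abstract describes.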

Patellar tilt calculation utilizing artificial intelligence on CT knee imaging.

Sieberer J, Rancu A, Park N, Desroches S, Manafzadeh AR, Tommasini S, Wiznia DH, Fulkerson J

PubMed · Jun 1 2025
In the diagnosis of patellar instability, three-dimensional (3D) imaging enables measurement of a wide range of metrics. However, measuring these metrics can be time-consuming and prone to error due to conducting 2D measurements on 3D objects. This study aims to measure patellar tilt in 3D and to automate the measurement by utilizing a commercial AI algorithm for landmark placement. CT scans of 30 patients with at least two dislocation events and 30 controls without patellofemoral disease were acquired. Patellar tilt was measured using three different methods: the established method, and by calculating the angle between 3D landmarks placed by either a human rater or an AI algorithm. Correlations between the three measurements were calculated using intraclass correlation coefficients (ICCs), and differences with a Kruskal-Wallis test. Differences in means between patients and controls were tested using Mann-Whitney U tests. Significance was assumed at 0.05, adjusted with the Bonferroni method. No significant differences (overall: p = 0.10, patients: 0.51, controls: 0.79) between methods were found. Predicted ICC between the methods ranged from 0.86 to 0.90 with a 95% confidence interval of 0.77-0.94. Differences between patients and controls were significant (p < 0.001) for all three methods. The study offers an alternative 3D approach for calculating patellar tilt comparable to traditional, manual measurements. Furthermore, this analysis offers evidence that a commercially available software can identify the necessary anatomical landmarks for patellar tilt calculation, offering a potential pathway to increased automation of surgical decision-making metrics.
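The ICCs above quantify agreement between measurement methods. A minimal one-way random-effects ICC(1) sketch shows the underlying variance decomposition (the study may well use a two-way ICC form; this is the simplest variant, for illustration):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1).
    `ratings`: list of per-subject lists, each with k ratings
    (e.g. one per method/rater)."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    # Between-subject and within-subject mean squares.
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, subj_means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement across raters drives the within-subject mean square to zero and the ICC to 1.0; agreement no better than chance pushes it toward (or below) zero.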

MCNEL: A multi-scale convolutional network and ensemble learning for Alzheimer's disease diagnosis.

Yan F, Peng L, Dong F, Hirota K

PubMed · Jun 1 2025
Alzheimer's disease (AD) significantly threatens community well-being and healthcare resource allocation due to its high incidence and mortality. Therefore, early detection and intervention are crucial for reducing AD-related fatalities. However, the existing deep learning-based approaches often struggle to capture complex structural features of magnetic resonance imaging (MRI) data effectively. Common techniques for multi-scale feature fusion, such as direct summation and concatenation methods, often introduce redundant noise that can negatively affect model performance. These challenges highlight the need for developing more advanced methods to improve feature extraction and fusion, aiming to enhance diagnostic accuracy. This study proposes a multi-scale convolutional network and ensemble learning (MCNEL) framework for early and accurate AD diagnosis. The framework adopts enhanced versions of the EfficientNet-B0 and MobileNetV2 models, which are subsequently integrated with the DenseNet121 model to create a hybrid feature extraction tool capable of extracting features from multi-view slices. Additionally, a SimAM-based feature fusion method is developed to synthesize key feature information derived from multi-scale images. To ensure classification accuracy in distinguishing AD from multiple stages of cognitive impairment, this study designs an ensemble learning classifier model using multiple classifiers and a self-adaptive weight adjustment strategy. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset validate the effectiveness of our solution, which achieves average accuracies of 96.67% for ADNI-1 and 96.20% for ADNI-2, respectively. The results indicate that the MCNEL outperforms recent comparable algorithms in terms of various evaluation metrics, demonstrating superior performance and robustness in AD diagnosis. 
This study markedly enhances the diagnostic capabilities for AD, allowing patients to receive timely treatments that can slow down disease progression and improve their quality of life.
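The ensemble classifier above combines multiple classifiers with a self-adaptive weight adjustment strategy. One common realisation is soft voting with weights proportional to each classifier's validation accuracy; a hedged sketch of that general idea (the paper's exact adjustment rule may differ):

```python
def weighted_soft_vote(prob_sets, val_accs):
    """Accuracy-weighted soft voting.
    prob_sets: one class-probability vector per classifier;
    val_accs: each classifier's validation accuracy, used as its weight.
    Returns the index of the winning class."""
    total = sum(val_accs)
    weights = [a / total for a in val_accs]   # normalise weights to sum to 1
    n_classes = len(prob_sets[0])
    fused = [sum(w * probs[c] for w, probs in zip(weights, prob_sets))
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__)
```

With equal weights this reduces to plain probability averaging; skewing the weights toward the stronger classifier lets it dominate disagreements, which is the "self-adaptive" part.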

UniBrain: Universal Brain MRI diagnosis with hierarchical knowledge-enhanced pre-training.

Lei J, Dai L, Jiang H, Wu C, Zhang X, Zhang Y, Yao J, Xie W, Zhang Y, Li Y, Zhang Y, Wang Y

PubMed · Jun 1 2025
Magnetic Resonance Imaging (MRI) has become a pivotal tool in diagnosing brain diseases, with a wide array of computer-aided artificial intelligence methods being proposed to enhance diagnostic accuracy. However, early studies were often limited by small-scale datasets and a narrow range of disease types, which posed challenges in model generalization. This study presents UniBrain, a hierarchical knowledge-enhanced pre-training framework designed for universal brain MRI diagnosis. UniBrain leverages a large-scale dataset comprising 24,770 imaging-report pairs from routine diagnostics for pre-training. Unlike previous approaches that either focused solely on visual representation learning or used brute-force alignment between vision and language, the framework introduces a hierarchical alignment mechanism. This mechanism extracts structured knowledge from free-text clinical reports at multiple granularities, enabling vision-language alignment at both the sequence and case levels, thereby significantly improving feature learning efficiency. A coupled vision-language perception module is further employed for text-guided multi-label classification, which facilitates zero-shot evaluation and fine-tuning of downstream tasks without modifying the model architecture. UniBrain is validated on both in-domain and out-of-domain datasets, consistently surpassing existing state-of-the-art diagnostic models and demonstrating performance on par with radiologists in specific disease categories. It shows strong generalization capabilities across diverse tasks, highlighting its potential for broad clinical application. The code is available at https://github.com/ljy19970415/UniBrain.
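Zero-shot evaluation in a vision-language model of this kind typically reduces to comparing an image embedding against one text embedding per candidate label. A generic cosine-similarity sketch of that step (not UniBrain's actual module, whose alignment is hierarchical):

```python
import math

def zero_shot_label(image_vec, text_vecs):
    """Pick the candidate label whose text embedding is most
    cosine-similar to the image embedding. Returns the label index."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    sims = [cos(image_vec, t) for t in text_vecs]
    return max(range(len(sims)), key=sims.__getitem__)
```

Because no classifier head is trained, new disease labels can be added at inference time just by encoding their text descriptions, which is what makes zero-shot evaluation possible without modifying the model architecture.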

MRI and CT radiomics for the diagnosis of acute pancreatitis.

Tartari C, Porões F, Schmidt S, Abler D, Vetterli T, Depeursinge A, Dromain C, Violi NV, Jreige M

PubMed · Jun 1 2025
To evaluate the single and combined diagnostic performances of CT and MRI radiomics for the diagnosis of acute pancreatitis (AP). We prospectively enrolled 78 patients (mean age 55.7 ± 17 years, 48.7 % male) diagnosed with AP between 2020 and 2022. Patients underwent contrast-enhanced CT (CECT) within 48-72 h of symptoms and MRI ≤ 24 h after CECT. The entire pancreas was manually segmented tridimensionally by two operators on portal venous phase (PVP) CECT images, the T2-weighted imaging (WI) MR sequence, and the non-enhanced and PVP T1-WI MR sequences. A matched control group (n = 77) with normal pancreas was used. The dataset was randomly split into training and test sets, and various machine learning algorithms were compared. Receiver operating characteristic (ROC) curve analysis was performed. The T2WI model exhibited significantly better diagnostic performance than CECT and the non-enhanced and venous T1WI models, with sensitivity, specificity and AUC of 73.3 % (95 % CI: 71.5-74.7), 80.1 % (78.2-83.2), and 0.834 (0.819-0.844) for T2WI (p = 0.001), 74.4 % (71.5-76.4), 58.7 % (56.3-61.1), and 0.654 (0.630-0.677) for non-enhanced T1WI, 62.1 % (60.1-64.2), 78.7 % (77.1-81), and 0.787 (0.771-0.810) for venous T1WI, and 66.4 % (64.8-50.9), 48.4 % (46-50.9), and 0.610 (0.586-0.626) for CECT, respectively. The combination of T2WI with CECT enhanced diagnostic performance compared to T2WI alone, achieving sensitivity, specificity and AUC of 81.4 % (80-80.3), 78.1 % (75.9-80.2), and 0.911 (0.902-0.920) (p = 0.001). The MRI radiomics model outperformed the CT radiomics model for the diagnosis of AP, and the combination of MRI with CECT showed better performance than the single models. The translation of radiomics into clinical practice may improve detection of AP, particularly MRI radiomics.
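The sensitivity/specificity figures above come from thresholding a model score against pathology-confirmed labels. A minimal sketch of that computation (standard definitions, not the study's code):

```python
def sens_spec(labels, scores, threshold):
    """Sensitivity and specificity of a score at a given cut-off.
    Positives are label 1; a case is called positive when
    score >= threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the threshold and plotting sensitivity against (1 − specificity) yields the ROC curve whose area the abstract reports.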