Page 109 of 152 (1519 results)

Automated neuroradiological support systems for multiple cerebrovascular disease markers - A systematic review and meta-analysis.

Phitidis J, O'Neil AQ, Whiteley WN, Alex B, Wardlaw JM, Bernabeu MO, Hernández MV

PubMed · Jun 1 2025
Cerebrovascular diseases (CVD) can lead to stroke and dementia. Stroke is the second leading cause of death worldwide, and dementia incidence is rising each year. Several markers of CVD are visible on brain imaging, including: white matter hyperintensities (WMH), acute and chronic ischaemic stroke lesions (ISL), lacunes, enlarged perivascular spaces (PVS), acute and chronic haemorrhagic lesions, and cerebral microbleeds (CMB). Brain atrophy also occurs in CVD. These markers are important for patient management and intervention, since they indicate elevated risk of future stroke and dementia. We systematically reviewed automated systems designed to support radiologists reporting on these CVD imaging findings. We considered commercially available software and research publications which identify at least two CVD markers. In total, we included 29 commercial products and 13 research publications. Two distinct types of commercial support system were available: those which identify acute stroke lesions (haemorrhagic and ischaemic) from computed tomography (CT) scans, mainly for the purpose of patient triage; and those which measure WMH and atrophy regionally and longitudinally. In research, WMH and ISL were the markers most frequently analysed together, from magnetic resonance imaging (MRI) scans; lacunes and PVS were each targeted only twice and CMB only once. For stroke, commercially available systems largely support the emergency setting, whilst research systems also consider follow-up and routine scans. The systems that quantify WMH and atrophy are focused on neurodegenerative disease support, where these CVD markers are also of significance. There are currently no openly validated systems, commercially or in research, performing a comprehensive joint analysis of all CVD markers (WMH, ISL, lacunes, PVS, haemorrhagic lesions, CMB, and atrophy).

SSAT-Swin: Deep Learning-Based Spinal Ultrasound Feature Segmentation for Scoliosis Using Self-Supervised Swin Transformer.

Zhang C, Zheng Y, McAviney J, Ling SH

PubMed · Jun 1 2025
Scoliosis, a 3-D spinal deformity, requires early detection and intervention. Ultrasound curve angle (UCA) measurement using ultrasound images has emerged as a promising diagnostic tool. However, calculating the UCA directly from ultrasound images remains challenging due to low contrast, high noise, and irregular target shapes. Accurate segmentation results are therefore crucial to enhance image clarity and precision prior to UCA calculation. We propose the SSAT-Swin model, a transformer-based multi-class segmentation framework designed for ultrasound image analysis in scoliosis diagnosis. The model integrates a boundary-enhancement module in the decoder and a channel attention module in the skip connections. Additionally, self-supervised proxy tasks are used during pre-training on 1,170 images, followed by fine-tuning on 109 image-label pairs. SSAT-Swin achieved a Dice score of 85.6% and a Jaccard score of 74.5%, with a 92.8% scoliosis bone feature detection rate, outperforming state-of-the-art models. Self-supervised learning enhances the model's ability to capture global context information, making it well-suited for addressing the unique challenges of ultrasound images, ultimately advancing scoliosis assessment through more accurate segmentation.
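The Dice and Jaccard scores reported above are standard overlap metrics for segmentation masks. A minimal NumPy sketch of how they are computed for binary masks (generic, not the paper's code):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def jaccard_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Jaccard index (IoU): |A∩B| / |A∪B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0
```

The two are monotonically related (Dice = 2J / (1 + J)), which is why papers often report both from the same masks.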

Intraoperative stenosis detection in X-ray coronary angiography via temporal fusion and attention-based CNN.

Chen M, Wang S, Liang K, Chen X, Xu Z, Zhao C, Yuan W, Wan J, Huang Q

PubMed · Jun 1 2025
Coronary artery disease (CAD), the leading cause of mortality, is caused by atherosclerotic plaque buildup in the arteries. The gold standard for the diagnosis of CAD is via X-ray coronary angiography (XCA) during percutaneous coronary intervention, where locating coronary artery stenosis is fundamental and essential. However, due to complex vascular features and motion artifacts caused by heartbeat and respiratory movement, manually recognizing stenosis is challenging for physicians, which may prolong the surgery decision-making time and lead to irreversible myocardial damage. Therefore, we aim to provide an automatic method for accurate stenosis localization. In this work, we present a convolutional neural network (CNN) with feature-level temporal fusion and attention modules to detect coronary artery stenosis in XCA images. The temporal fusion module, composed of the deformable convolution and the correlation-based module, is proposed to integrate time-varying vessel features from consecutive frames. The attention module adopts channel-wise recalibration to capture global context as well as spatial-wise recalibration to enhance stenosis features with local width and morphology information. We compare our method to the commonly used attention methods, state-of-the-art object detection methods, and stenosis detection methods. Experimental results show that our fusion and attention strategy significantly improves performance in discerning stenosis (P<0.05), achieving the best average recall score on two different datasets. This is the first study to integrate both temporal fusion and attention mechanisms into a novel feature-level hybrid CNN framework for stenosis detection in XCA images, which proved effective in improving detection performance and is therefore potentially helpful in intraoperative stenosis localization.
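The channel-wise recalibration the authors describe follows the general squeeze-and-excitation pattern: pool each channel to a scalar, pass it through a small bottleneck, and gate the channels with sigmoid weights. A generic NumPy sketch of that mechanism (the paper's exact module is not reproduced here, so `w1` and `w2` are illustrative bottleneck weights):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_recalibration(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) map.

    Squeeze: global average pool per channel -> (C,).
    Excite:  two small dense layers with a sigmoid gate -> (C,) weights.
    Scale:   reweight each channel of the input feature map.
    """
    squeezed = feat.mean(axis=(1, 2))         # (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)   # ReLU bottleneck, (C//r,)
    gate = sigmoid(w2 @ hidden)               # (C,), each in (0, 1)
    return feat * gate[:, None, None]
```

Because the gate lies in (0, 1), the module can only attenuate channels relative to the input, which is what lets the network emphasize stenosis-relevant channels by suppressing the rest.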

Patellar tilt calculation utilizing artificial intelligence on CT knee imaging.

Sieberer J, Rancu A, Park N, Desroches S, Manafzadeh AR, Tommasini S, Wiznia DH, Fulkerson J

PubMed · Jun 1 2025
In the diagnosis of patellar instability, three-dimensional (3D) imaging enables measurement of a wide range of metrics. However, measuring these metrics can be time-consuming and prone to error due to conducting 2D measurements on 3D objects. This study aims to measure patellar tilt in 3D and automate it by utilizing a commercial AI algorithm for landmark placement. CT scans of 30 patients with at least two dislocation events and 30 controls without patellofemoral disease were acquired. Patellar tilt was measured using three different methods: the established method, and by calculating the angle between 3D landmarks placed by either a human rater or an AI algorithm. Correlations between the three measurements were calculated using intraclass correlation coefficients (ICCs), and differences with a Kruskal-Wallis test. Significant differences of means between patients and controls were calculated using Mann-Whitney U tests. Significance was set at 0.05, adjusted with the Bonferroni method. No significant differences (overall: p = 0.10, patients: 0.51, controls: 0.79) between methods were found. Predicted ICC between the methods ranged from 0.86 to 0.90 with a 95% confidence interval of 0.77-0.94. Differences between patients and controls were significant (p < 0.001) for all three methods. The study offers an alternative 3D approach for calculating patellar tilt comparable to traditional, manual measurements. Furthermore, this analysis offers evidence that a commercially available software can identify the necessary anatomical landmarks for patellar tilt calculation, offering a potential pathway to increased automation of surgical decision-making metrics.
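Measuring a tilt as the angle between two 3D landmark-defined lines reduces to a dot-product computation. A minimal sketch under the assumption that each axis is given by a pair of landmark coordinates (the specific landmarks the authors used are not reproduced here):

```python
import numpy as np

def tilt_angle_deg(p1, p2, q1, q2):
    """Angle in degrees between the line p1->p2 (e.g., a patellar axis)
    and the line q1->q2 (e.g., a posterior condylar axis), both in 3D."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against tiny floating-point overshoot outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Working directly with 3D vectors avoids the projection error the authors note when 2D measurements are performed on 3D objects.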

MCNEL: A multi-scale convolutional network and ensemble learning for Alzheimer's disease diagnosis.

Yan F, Peng L, Dong F, Hirota K

PubMed · Jun 1 2025
Alzheimer's disease (AD) significantly threatens community well-being and healthcare resource allocation due to its high incidence and mortality. Therefore, early detection and intervention are crucial for reducing AD-related fatalities. However, existing deep learning-based approaches often struggle to capture complex structural features of magnetic resonance imaging (MRI) data effectively. Common techniques for multi-scale feature fusion, such as direct summation and concatenation, often introduce redundant noise that can negatively affect model performance. These challenges highlight the need for more advanced methods of feature extraction and fusion to enhance diagnostic accuracy. This study proposes a multi-scale convolutional network and ensemble learning (MCNEL) framework for early and accurate AD diagnosis. The framework adopts enhanced versions of the EfficientNet-B0 and MobileNetV2 models, which are subsequently integrated with the DenseNet121 model to create a hybrid feature extraction tool capable of extracting features from multi-view slices. Additionally, a SimAM-based feature fusion method is developed to synthesize key feature information derived from multi-scale images. To ensure classification accuracy in distinguishing AD from multiple stages of cognitive impairment, this study designs an ensemble learning classifier model using multiple classifiers and a self-adaptive weight adjustment strategy. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset validate the effectiveness of our solution, which achieves average accuracies of 96.67% for ADNI-1 and 96.20% for ADNI-2. The results indicate that MCNEL outperforms recent comparable algorithms in terms of various evaluation metrics, demonstrating superior performance and robustness in AD diagnosis.
This study markedly enhances the diagnostic capabilities for AD, allowing patients to receive timely treatments that can slow down disease progression and improve their quality of life.
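A self-adaptive weight adjustment over multiple classifiers can be illustrated with a simple accuracy-weighted soft-voting scheme; the paper's exact strategy may differ, so this is a generic sketch:

```python
import numpy as np

def adaptive_ensemble(probas, val_accuracies):
    """Weighted soft voting: each classifier's probability matrix
    (n_samples, n_classes) is weighted by its normalized validation
    accuracy, the weighted matrices are summed, and the fused class
    probabilities are argmax'd into labels."""
    w = np.asarray(val_accuracies, float)
    w = w / w.sum()                             # self-adaptive: better
    stacked = np.stack(probas)                  # classifiers get more say
    fused = np.tensordot(w, stacked, axes=1)    # (n_samples, n_classes)
    return fused.argmax(axis=1), fused
```

Weighting by held-out accuracy (rather than uniform averaging) lets a stronger base model dominate when the base models disagree.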

UniBrain: Universal Brain MRI diagnosis with hierarchical knowledge-enhanced pre-training.

Lei J, Dai L, Jiang H, Wu C, Zhang X, Zhang Y, Yao J, Xie W, Zhang Y, Li Y, Zhang Y, Wang Y

PubMed · Jun 1 2025
Magnetic Resonance Imaging (MRI) has become a pivotal tool in diagnosing brain diseases, with a wide array of computer-aided artificial intelligence methods being proposed to enhance diagnostic accuracy. However, early studies were often limited by small-scale datasets and a narrow range of disease types, which posed challenges in model generalization. This study presents UniBrain, a hierarchical knowledge-enhanced pre-training framework designed for universal brain MRI diagnosis. UniBrain leverages a large-scale dataset comprising 24,770 imaging-report pairs from routine diagnostics for pre-training. Unlike previous approaches that either focused solely on visual representation learning or used brute-force alignment between vision and language, the framework introduces a hierarchical alignment mechanism. This mechanism extracts structured knowledge from free-text clinical reports at multiple granularities, enabling vision-language alignment at both the sequence and case levels, thereby significantly improving feature learning efficiency. A coupled vision-language perception module is further employed for text-guided multi-label classification, which facilitates zero-shot evaluation and fine-tuning of downstream tasks without modifying the model architecture. UniBrain is validated on both in-domain and out-of-domain datasets, consistently surpassing existing state-of-the-art diagnostic models and demonstrating performance on par with radiologists in specific disease categories. It shows strong generalization capabilities across diverse tasks, highlighting its potential for broad clinical application. The code is available at https://github.com/ljy19970415/UniBrain.
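Text-guided multi-label classification of the kind UniBrain uses for zero-shot evaluation can be illustrated by scoring an image embedding against a bank of label (text) embeddings with cosine similarity; this is a generic sketch of the mechanism, not the released UniBrain code:

```python
import numpy as np

def zero_shot_scores(image_emb, label_embs):
    """Score one image embedding against a bank of text/label embeddings.

    Both sides are L2-normalized so the dot product is cosine similarity;
    a sigmoid turns each similarity into an independent multi-label score,
    so new labels can be added at inference time without retraining."""
    img = np.asarray(image_emb, float)
    txt = np.asarray(label_embs, float)
    img = img / np.linalg.norm(img)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    sims = txt @ img                     # (n_labels,)
    return 1.0 / (1.0 + np.exp(-sims))   # per-label scores in (0, 1)
```

This is what makes zero-shot evaluation possible: the label set is defined by text embeddings rather than a fixed classification head, so no architecture change is needed for downstream tasks.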

MRI and CT radiomics for the diagnosis of acute pancreatitis.

Tartari C, Porões F, Schmidt S, Abler D, Vetterli T, Depeursinge A, Dromain C, Violi NV, Jreige M

PubMed · Jun 1 2025
To evaluate the single and combined diagnostic performances of CT and MRI radiomics for the diagnosis of acute pancreatitis (AP). We prospectively enrolled 78 patients (mean age 55.7 ± 17 years, 48.7 % male) diagnosed with AP between 2020 and 2022. Patients underwent contrast-enhanced CT (CECT) within 48-72 h of symptoms and MRI ≤ 24 h after CECT. The entire pancreas was manually segmented tridimensionally by two operators on portal venous phase (PVP) CECT images, the T2-weighted imaging (WI) MR sequence, and the non-enhanced and PVP T1-WI MR sequences. A matched control group (n = 77) with normal pancreas was used. The dataset was randomly split into training and test sets, and various machine learning algorithms were compared. Receiver operating characteristic curve analysis was performed. The T2WI model exhibited significantly better diagnostic performance than CECT and non-enhanced and venous T1WI, with sensitivity, specificity and AUC of 73.3 % (95 % CI: 71.5-74.7), 80.1 % (78.2-83.2), and 0.834 (0.819-0.844) for T2WI (p = 0.001), 74.4 % (71.5-76.4), 58.7 % (56.3-61.1), and 0.654 (0.630-0.677) for non-enhanced T1WI, 62.1 % (60.1-64.2), 78.7 % (77.1-81), and 0.787 (0.771-0.810) for venous T1WI, and 66.4 % (64.8-50.9), 48.4 % (46-50.9), and 0.610 (0.586-0.626) for CECT, respectively. The combination of T2WI with CECT enhanced diagnostic performance compared to T2WI alone, achieving sensitivity, specificity and AUC of 81.4 % (80-80.3), 78.1 % (75.9-80.2), and 0.911 (0.902-0.920) (p = 0.001). The MRI radiomics model outperformed the CT radiomics model for the diagnosis of AP, and the combination of MRI with CECT performed better than the single models. The translation of radiomics into clinical practice may improve detection of AP, particularly MRI radiomics.
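The AUC values quoted throughout can be computed directly from per-case scores via the rank (Mann-Whitney U) formulation, without fitting a curve. A minimal sketch:

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case (ties count 0.5).
    Equivalent to the area under the ROC curve."""
    y = np.asarray(y_true)
    s = np.asarray(scores, float)
    pos, neg = s[y == 1], s[y == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

The O(n²) pairwise comparison is fine at the cohort sizes reported here; rank-based implementations scale better for large datasets.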

Deep learning based on ultrasound images predicting cervical lymph node metastasis in postoperative patients with differentiated thyroid carcinoma.

Fan F, Li F, Wang Y, Liu T, Wang K, Xi X, Wang B

PubMed · Jun 1 2025
To develop a deep learning (DL) model based on ultrasound (US) images of lymph nodes for predicting cervical lymph node metastasis (CLNM) in postoperative patients with differentiated thyroid carcinoma (DTC). We retrospectively collected 352 lymph nodes with cytopathology findings from 330 patients between June 2021 and December 2023 at our institution. The database was randomly divided into training and test cohorts at an 8:2 ratio. Basic DL models for the longitudinal and cross-sectional views of lymph nodes were each constructed based on ResNet50, and the results of the two basic models were fused (1:1) to construct a longitudinal + cross-sectional DL model. Univariate and multivariate analyses were used to assess US features and construct a conventional US model. Subsequently, a combined model was constructed by integrating DL and US. The diagnostic accuracy of the longitudinal + cross-sectional DL model was higher than that of either view alone. The area under the curve (AUC) of the combined model (US + DL) was 0.855 (95% CI, 0.767-0.942), and the accuracy, sensitivity, and specificity were 0.786 (95% CI, 0.671-0.875), 0.972 (95% CI, 0.855-0.999), and 0.588 (95% CI, 0.407-0.754), respectively. Compared with the US and DL models, the integrated discrimination improvement and net reclassification improvement of the combined model are both positive. This preliminary study shows that the DL model based on US images of lymph nodes has high diagnostic efficacy for predicting CLNM in postoperative patients with DTC, and that the combined US + DL model is superior to either conventional US or DL alone for predicting CLNM in this population. We innovatively used DL of lymph node US images to predict the status of cervical lymph nodes in postoperative patients with DTC.
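The 1:1 fusion of the two view-specific models, and the sensitivity/specificity figures quoted, are straightforward to compute; a minimal generic sketch (not the authors' code):

```python
import numpy as np

def fuse_views(p_long, p_cross, w=0.5):
    """Weighted average of the longitudinal- and cross-sectional-view
    probabilities; w=0.5 gives the 1:1 fusion described above."""
    return w * np.asarray(p_long, float) + (1 - w) * np.asarray(p_cross, float)

def sens_spec(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)
```

The high sensitivity but modest specificity reported (0.972 vs. 0.588) is the typical trade-off at a threshold chosen to avoid missing metastases.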

Kellgren-Lawrence grading of knee osteoarthritis using deep learning: Diagnostic performance with external dataset and comparison with four readers.

Vaattovaara E, Panfilov E, Tiulpin A, Niinimäki T, Niinimäki J, Saarakkala S, Nevalainen MT

PubMed · Jun 1 2025
To evaluate the performance of a deep learning (DL) model on an external dataset for assessing radiographic knee osteoarthritis using Kellgren-Lawrence (KL) grades, against versatile human readers. Two hundred and eight knee anteroposterior conventional radiographs (CRs) were included in this retrospective study. Four readers (three radiologists, one orthopedic surgeon) assessed the KL grades, and the consensus grade was derived as their mean. The DL model was trained on all CRs from the Multicenter Osteoarthritis Study (MOST), validated on the Osteoarthritis Initiative (OAI) dataset, and then tested on our external dataset. To assess the agreement between the graders, Cohen's quadratic kappa (κ) with 95 % confidence intervals was used. Diagnostic performance was measured using confusion matrices and receiver operating characteristic (ROC) analyses. The multiclass (KL grades from 0 to 4) diagnostic performance of the DL model varied across grades: sensitivities were between 0.372 and 1.000, specificities 0.691-0.974, PPVs 0.227-0.879, NPVs 0.622-1.000, and AUCs 0.786-0.983. The overall balanced accuracy was 0.693, AUC 0.886, and kappa 0.820. If only dichotomous KL grading (i.e. KL0-1 vs. KL2-4) was utilized, superior metrics were seen, with an overall balanced accuracy of 0.902 and AUC of 0.967. A substantial agreement between each reader and the DL model was found: the inter-rater agreement was 0.737 [0.685-0.790] for the radiology resident, 0.761 [0.707-0.816] for the musculoskeletal radiology fellow, 0.802 [0.761-0.843] for the senior musculoskeletal radiologist, and 0.818 [0.775-0.860] for the orthopedic surgeon. In an external dataset, our DL model can grade knee osteoarthritis with diagnostic accuracy comparable to highly experienced human readers.
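Cohen's quadratic-weighted kappa, used here for inter-rater agreement, penalizes disagreements by the squared distance between grades, which suits an ordinal scale such as KL 0-4. A minimal NumPy sketch:

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_classes):
    """Cohen's kappa with quadratic weights w_ij = ((i - j)/(k - 1))^2.

    O is the observed joint distribution of the two raters' grades;
    E is the distribution expected if the raters were independent."""
    a, b = np.asarray(a), np.asarray(b)
    k = n_classes
    O = np.zeros((k, k))
    for i, j in zip(a, b):
        O[i, j] += 1
    O /= O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))
    idx = np.arange(k)
    W = ((idx[:, None] - idx[None, :]) / (k - 1)) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()
```

With quadratic weights, confusing KL0 with KL4 costs 16 times as much as confusing adjacent grades, so κ ≈ 0.8 indicates the model's errors are mostly one grade off.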

Cone-beam computed tomography (CBCT) image-quality improvement using a denoising diffusion probabilistic model conditioned by pseudo-CBCT of pelvic regions.

Hattori M, Chai H, Hiraka T, Suzuki K, Yuasa T

PubMed · Jun 1 2025
Cone-beam computed tomography (CBCT) is widely used in radiotherapy to image patient setup before treatment, but its image quality is lower than that of planning CT due to scattering, motion, and reconstruction methods. This reduces the accuracy of Hounsfield units (HU) and limits its use in adaptive radiation therapy (ART). However, synthetic CT (sCT) generation using deep learning methods for CBCT intensity correction faces challenges due to deformation. To address these issues, we propose enhancing CBCT quality using a conditional denoising diffusion probabilistic model (CDDPM), which is trained on pseudo-CBCT created by adding pseudo-scatter to planning CT. The CDDPM transforms CBCT into high-quality sCT, improving HU accuracy while preserving anatomical configuration. The performance evaluation of the proposed sCT showed a reduction in mean absolute error (MAE) from 81.19 HU for CBCT to 24.89 HU for the sCT. Peak signal-to-noise ratio (PSNR) improved from 31.20 dB for CBCT to 33.81 dB for the sCT. The Dice and Jaccard coefficients between CBCT and sCT for the colon, prostate, and bladder ranged from 0.69 to 0.91. When compared to other deep learning models, the proposed sCT outperformed them in terms of accuracy and anatomical preservation. The dosimetry analysis for prostate cancer revealed a dose error of over 10% with CBCT but nearly 0% with the sCT. Gamma pass rates for the proposed sCT exceeded 90% for all dose criteria, indicating high agreement with CT-based dose distributions. These results show that the proposed sCT improves image quality, dosimetry accuracy, and treatment planning, advancing ART for pelvic cancer.
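The MAE (in HU) and PSNR figures used to evaluate the synthetic CT can be computed as follows; a minimal sketch, with `data_range` (the assumed HU span) chosen by the evaluator:

```python
import numpy as np

def mae_hu(ct, sct):
    """Mean absolute error in Hounsfield units between two volumes."""
    ct, sct = np.asarray(ct, float), np.asarray(sct, float)
    return float(np.mean(np.abs(ct - sct)))

def psnr_db(ref, test, data_range):
    """Peak signal-to-noise ratio: 10 * log10(data_range^2 / MSE)."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    mse = np.mean((ref - test) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

Both metrics are voxel-wise, so they capture the HU-accuracy improvement (81.19 to 24.89 HU MAE) but not anatomical fidelity, which is why the authors also report Dice/Jaccard overlaps and dosimetric agreement.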
