Same-model and cross-model variability in knee cartilage thickness measurements using 3D MRI systems.

Katano H, Kaneko H, Sasaki E, Hashiguchi N, Nagai K, Ishijima M, Ishibashi Y, Adachi N, Kuroda R, Tomita M, Masumoto J, Sekiya I

PubMed · Jan 1, 2025
Magnetic resonance imaging (MRI)-based three-dimensional analysis of knee cartilage has evolved to become fully automatic. However, when implementing these measurements across multiple clinical centers, scanner variability becomes a critical consideration. Our purpose was to quantify and compare same-model variability (between repeated scans on the same MRI system) and cross-model variability (across different MRI systems) in knee cartilage thickness measurements using MRI scanners from five manufacturers, as analyzed with a specific 3D volume analysis software package. Ten healthy volunteers (eight males and two females, aged 22-60 years) underwent two scans of their right knee on 3T MRI systems from five manufacturers (Canon, Fujifilm, GE, Philips, and Siemens). The imaging protocol included fat-suppressed spoiled gradient echo and proton density-weighted sequences. Cartilage regions were automatically segmented into seven subregions using a specific deep learning-based 3D volume analysis software package. This resulted in 350 measurements for same-model variability and 2,800 measurements for cross-model variability. For same-model variability, 82% of measurements showed variability ≤0.10 mm, and 98% showed variability ≤0.20 mm. For cross-model variability, 51% showed variability ≤0.10 mm, and 84% showed variability ≤0.20 mm. The mean same-model variability (0.06 ± 0.05 mm) was significantly lower than the mean cross-model variability (0.11 ± 0.09 mm) (p < 0.001). This study demonstrates that knee cartilage thickness measurements exhibit significantly higher variability across different MRI systems than between repeated measurements on the same system, when analyzed using this specific software. This finding has important implications for multi-center studies and longitudinal assessments using different MRI systems, and it highlights the software-dependent nature of such variability assessments.
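
The measurement counts follow directly from the design: 10 subjects × 5 scanners × 7 subregions gives 350 same-model comparisons, and the 10 scanner pairs × 4 scan combinations give 2,800 cross-model comparisons. Below is a minimal sketch of how the two variability measures could be computed; the thickness array is synthetic, and the Mann-Whitney U test is a stand-in, since the abstract does not name the statistical test used.

```python
import numpy as np
from scipy import stats

# Hypothetical data: thickness[subject, scanner, scan, subregion] in mm,
# i.e. 10 subjects x 5 scanners x 2 repeated scans x 7 subregions.
rng = np.random.default_rng(0)
thickness = 2.0 + 0.1 * rng.standard_normal((10, 5, 2, 7))

# Same-model variability: |scan1 - scan2| on the same scanner
# (10 x 5 x 7 = 350 measurements, matching the study's count).
same_model = np.abs(thickness[:, :, 0, :] - thickness[:, :, 1, :]).ravel()

# Cross-model variability: absolute difference between each scan on one
# scanner and each scan on every other scanner (10 pairs x 4 = 2,800).
diffs = []
for a in range(5):
    for b in range(a + 1, 5):
        for i in range(2):
            for j in range(2):
                diffs.append(np.abs(thickness[:, a, i, :] - thickness[:, b, j, :]).ravel())
cross_model = np.concatenate(diffs)

print(f"same-model:  {same_model.mean():.3f} +/- {same_model.std():.3f} mm")
print(f"cross-model: {cross_model.mean():.3f} +/- {cross_model.std():.3f} mm")
print(stats.mannwhitneyu(same_model, cross_model, alternative="less"))
```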

Brain tumor classification using MRI images and deep learning techniques.

Wong Y, Su ELM, Yeong CF, Holderbaum W, Yang C

PubMed · Jan 1, 2025
Brain tumors pose a significant medical challenge, necessitating early detection and precise classification for effective treatment. This study addresses this challenge by introducing an automated brain tumor classification system that utilizes deep learning (DL) and magnetic resonance imaging (MRI). The main purpose of this research is to develop a model that can accurately detect and classify different types of brain tumors, including glioma, meningioma, pituitary tumors, and normal brain scans. A convolutional neural network (CNN) architecture with pretrained VGG16 as the base model is employed, and diverse public datasets are utilized to ensure comprehensive representation. Data augmentation techniques are employed to enhance the training dataset, resulting in a total of 17,136 brain MRI images across the four classes. The model achieved an accuracy of 99.24%, higher than that reported in comparable studies, demonstrating its potential clinical utility. This accuracy was achieved mainly through the use of a large and diverse dataset, an improved network configuration, a fine-tuning strategy to adjust pretrained weights, and data augmentation techniques that enhanced classification performance. In addition, a web application was developed by leveraging HTML and Dash components to enhance usability, allowing for easy image upload and tumor prediction. By harnessing artificial intelligence (AI), the developed system addresses the need to reduce human error and enhance diagnostic accuracy. The proposed approach provides an efficient and reliable solution for brain tumor classification, facilitating early diagnosis and enabling timely medical interventions. This work signifies a potential advancement in brain tumor classification, promising improved patient care and outcomes.
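
A minimal sketch of the transfer-learning setup described above: a pretrained VGG16 base with frozen convolutional layers, a replaced classification head for the four classes, and standard augmentation. This is written in PyTorch/torchvision for illustration; the input size, augmentation parameters, and learning rate are assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 4  # glioma, meningioma, pituitary tumor, normal

# Pretrained VGG16 base; freeze the convolutional features and
# replace the final classifier layer with a 4-class head.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False  # unfreeze later to fine-tune deeper layers
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

# Illustrative augmentation pipeline for the training images.
train_tfms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```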

Radiomics and Deep Learning as Important Techniques of Artificial Intelligence - Diagnosing Perspectives in Cytokeratin 19 Positive Hepatocellular Carcinoma.

Wang F, Yan C, Huang X, He J, Yang M, Xian D

PubMed · Jan 1, 2025
Currently, there are inconsistencies among studies on preoperative prediction of cytokeratin 19 (CK19) expression in HCC using traditional imaging, radiomics, and deep learning. We aimed to systematically analyze and compare the performance of non-invasive methods for predicting CK19-positive HCC, thereby providing insights for the stratified management of HCC patients. A comprehensive literature search was conducted in PubMed, EMBASE, Web of Science, and the Cochrane Library from inception to February 2025. Two investigators independently screened and extracted data based on inclusion and exclusion criteria. Eligible studies were included, and key findings were summarized in tables to provide a clear overview. Ultimately, 22 studies involving 3,395 HCC patients were included: 72.7% (16/22) examined traditional imaging, 36.4% (8/22) radiomics, 9.1% (2/22) deep learning, and 54.5% (12/22) combined models. Magnetic resonance imaging was the most commonly used modality (19/22), and over half of the studies (12/22) were published between 2022 and 2025. Moreover, 27.3% (6/22) were multicenter studies, 36.4% (8/22) included a validation set, and only 13.6% (3/22) were prospective. The area under the curve (AUC) for clinical and traditional imaging models ranged from 0.560 to 0.917, for radiomics from 0.648 to 0.951, and for deep learning from 0.718 to 0.820. Notably, combined models integrating clinical, imaging, radiomic, and deep learning features achieved AUCs of 0.614 to 0.995. Nevertheless, multicenter external data were limited, with only 13.6% (3/22) of studies incorporating external validation. The combined model integrating traditional imaging, radiomics, and deep learning shows excellent potential and performance for predicting CK19 in HCC. Given current limitations, future research should focus on building an easy-to-use dynamic online tool, combining multicenter multimodal imaging and advanced deep learning approaches to enhance the accuracy and robustness of model predictions.

Enhancing Attention Network Spatiotemporal Dynamics for Motor Rehabilitation in Parkinson's Disease.

Pei G, Hu M, Ouyang J, Jin Z, Wang K, Meng D, Wang Y, Chen K, Wang L, Cao LZ, Funahashi S, Yan T, Fang B

PubMed · Jan 1, 2025
Optimizing resource allocation for Parkinson's disease (PD) motor rehabilitation necessitates identifying biomarkers of responsiveness and dynamic neuroplasticity signatures underlying efficacy. A cohort study of 52 early-stage PD patients undergoing 2-week multidisciplinary intensive rehabilitation therapy (MIRT) was conducted, which stratified participants into responders and nonresponders. A multimodal analysis of resting-state electroencephalography (EEG) microstates and functional magnetic resonance imaging (fMRI) coactivation patterns was performed to characterize MIRT-induced spatiotemporal network reorganization. Responders demonstrated clinically meaningful improvement in motor symptoms, exceeding the minimal clinically important difference threshold of 3.25 on the Unified PD Rating Scale part III, alongside significant reductions in bradykinesia and a significant enhancement in quality-of-life scores at the 3-month follow-up. Resting-state EEG in responders showed a significant attenuation in microstate C and a significant enhancement in microstate D occurrences, along with significantly increased transitions from microstate A/B to D, which significantly correlated with motor function, especially in bradykinesia gains. Concurrently, fMRI analyses identified a prolonged dwell time of the dorsal attention network coactivation/ventral attention network deactivation pattern, which was significantly inversely associated with microstate C occurrence and significantly linked to motor improvement. The identified brain spatiotemporal neural markers were validated using machine learning models to assess the efficacy of MIRT in motor rehabilitation for PD patients, achieving an average accuracy rate of 86%. These findings suggest that MIRT may facilitate a shift in neural networks from sensory processing to higher-order cognitive control, with the dynamic reallocation of attentional resources. This preliminary study validates the necessity of integrating cognitive-motor strategies for the motor rehabilitation of PD and identifies novel neural markers for assessing treatment efficacy.
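
The EEG microstate metrics reported here (occurrences and transition probabilities) reduce to counting state changes in a per-sample label sequence. A small illustrative sketch with synthetic labels follows; the A-D integer encoding and the convention of collapsing consecutive repeats before counting transitions are assumptions.

```python
import numpy as np

# Hypothetical microstate label sequence (one label per EEG sample),
# with the four canonical classes A, B, C, D encoded as 0..3.
rng = np.random.default_rng(1)
labels = rng.integers(0, 4, size=5000)

def transition_matrix(seq, n_states=4):
    """Row-normalized probability of moving between distinct microstates."""
    # Collapse runs so self-transitions do not dominate the counts.
    changes = seq[np.r_[True, np.diff(seq) != 0]]
    counts = np.zeros((n_states, n_states))
    for a, b in zip(changes[:-1], changes[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

T = transition_matrix(labels)
# The study links increased A->D and B->D transitions to motor gains.
print("P(A->D) =", round(T[0, 3], 3), " P(B->D) =", round(T[1, 3], 3))
```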

Investigating methods to enhance interpretability and performance in cardiac MRI for myocardial scarring diagnosis using convolutional neural network classification and One Match.

Udin MH, Armstrong S, Kai A, Doyle ST, Pokharel S, Ionita CN, Sharma UC

PubMed · Jan 1, 2025
Machine learning (ML) classification of myocardial scarring in cardiac MRI is often hindered by limited explainability, particularly with convolutional neural networks (CNNs). To address this, we developed One Match (OM), an algorithm that builds on template matching to improve both the explainability and the performance of ML-based myocardial scarring classification. By incorporating OM, we aim to foster trust in AI models for medical diagnostics and demonstrate that improved interpretability does not have to compromise classification accuracy. Using a cardiac MRI dataset from 279 patients, this study evaluates One Match, which classifies myocardial scarring by matching each image to a set of labeled template images and using the highest correlation score from these matches for classification; it is compared to a traditional sequential CNN. Enhancements such as autodidactic enhancement (AE) and patient-level classifications (PLCs) were applied to improve the predictive accuracy of both methods. Performance is reported as accuracy, sensitivity, specificity, precision, and F1-score. The highest classification performance was observed with the OM algorithm when enhanced by both AE and PLCs: 95.3% accuracy, 92.3% sensitivity, 96.7% specificity, 92.3% precision, and 92.3% F1-score, marking a significant improvement over the base configurations. AE alone had a positive impact on OM, increasing accuracy from 89.0% to 93.2%, but decreased the accuracy of the CNN from 85.3% to 82.9%. In contrast, PLCs improved accuracy for both methods, raising the CNN's accuracy by 4.2% and OM's by 7.4%. This study demonstrates the effectiveness of OM in classifying myocardial scars, particularly when enhanced with AE and PLCs. The interpretability of OM also enabled the examination of misclassifications, providing insights that could accelerate development and foster greater trust among clinical stakeholders.
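
The abstract states the core of OM plainly: match each image against labeled templates and take the label of the highest correlation score. Below is a minimal sketch of that decision rule using Pearson correlation on z-scored images; the template shapes, labels, and preprocessing are hypothetical, and the published algorithm may differ in detail.

```python
import numpy as np

def best_match_label(image, templates, labels):
    """Return the label of the template with the highest correlation score."""
    img = (image - image.mean()) / image.std()
    scores = []
    for t in templates:
        tt = (t - t.mean()) / t.std()
        scores.append(np.mean(img * tt))  # Pearson correlation of z-scored images
    return labels[int(np.argmax(scores))], max(scores)

# Demo with random placeholder templates and a random query image.
rng = np.random.default_rng(2)
templates = [rng.random((128, 128)) for _ in range(6)]
labels = ["scar", "scar", "scar", "no_scar", "no_scar", "no_scar"]
pred, score = best_match_label(rng.random((128, 128)), templates, labels)
print(pred, round(score, 3))
```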

Radiomic Model Associated with Tumor Microenvironment Predicts Immunotherapy Response and Prognosis in Patients with Locoregionally Advanced Nasopharyngeal Carcinoma.

Sun J, Wu X, Zhang X, Huang W, Zhong X, Li X, Xue K, Liu S, Chen X, Li W, Liu X, Shen H, You J, He W, Jin Z, Yu L, Li Y, Zhang S, Zhang B

PubMed · Jan 1, 2025
Background: No robust biomarkers have been identified to predict the efficacy of programmed cell death protein 1 (PD-1) inhibitors in patients with locoregionally advanced nasopharyngeal carcinoma (LANPC). We aimed to develop radiomic models using pre-immunotherapy MRI to predict the response to PD-1 inhibitors and patient prognosis. Methods: This study included 246 LANPC patients (training cohort, n = 117; external test cohort, n = 129) from 10 centers. The best-performing machine learning classifier was employed to create the radiomic models. A combined model was constructed by integrating clinical and radiomic data. A radiomic interpretability study was performed with whole slide images (WSIs) stained with hematoxylin and eosin (H&E) and immunohistochemistry (IHC). A total of 150 patient-level nuclear morphological features (NMFs) and 12 cell spatial distribution features (CSDFs) were extracted from WSIs. The correlation between the radiomic and pathological features was assessed using Spearman correlation analysis. Results: The radiomic model outperformed the clinical and combined models in predicting treatment response (area under the curve: 0.760 vs. 0.559 vs. 0.652). For overall survival estimation, the combined model performed comparably to the radiomic model but outperformed the clinical model (concordance index: 0.858 vs. 0.812 vs. 0.664). Six treatment response-related radiomic features correlated with 50 H&E-derived NMFs (146 pairs, |r| = 0.31 to 0.46) and 2 to 26 IHC-derived NMFs, particularly for CD45RO (69 pairs, |r| = 0.31 to 0.48), CD8 (84 pairs, |r| = 0.30 to 0.59), PD-L1 (73 pairs, |r| = 0.32 to 0.48), and CD163 (53 pairs, |r| = 0.32 to 0.59). Eight prognostic radiomic features correlated with 11 H&E-derived NMFs (16 pairs, |r| = 0.48 to 0.61) and 2 to 31 IHC-derived NMFs, particularly for PD-L1 (80 pairs, |r| = 0.44 to 0.64), CD45RO (65 pairs, |r| = 0.42 to 0.67), CD19 (35 pairs, |r| = 0.44 to 0.58), CD66b (61 pairs, |r| = 0.42 to 0.67), and FOXP3 (21 pairs, |r| = 0.41 to 0.71). In contrast, fewer CSDFs exhibited correlations with specific radiomic features. Conclusion: The radiomic and combined models are feasible for predicting immunotherapy response and outcomes in LANPC patients. The radiology-pathology correlation suggests a potential biological basis for the predictive models.
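
The interpretability analysis pairs each radiomic feature with each pathological feature and screens the pairs by Spearman correlation. A minimal sketch of that screen on synthetic matrices follows; the sample size, feature counts, and thresholds are placeholders, and multiple-comparison correction is omitted for brevity.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical feature matrices: rows are patients, columns are features.
rng = np.random.default_rng(3)
radiomic = rng.standard_normal((60, 6))     # e.g. 6 response-related radiomic features
pathology = rng.standard_normal((60, 150))  # e.g. 150 nuclear morphological features

pairs = []
for i in range(radiomic.shape[1]):
    for j in range(pathology.shape[1]):
        r, p = spearmanr(radiomic[:, i], pathology[:, j])
        if p < 0.05 and abs(r) >= 0.30:  # the abstract reports |r| of roughly 0.30-0.71
            pairs.append((i, j, r))
print(f"{len(pairs)} correlated radiomic-pathology pairs")
```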

Convolutional neural network using magnetic resonance brain imaging to predict outcome from tuberculosis meningitis.

Dong THK, Canas LS, Donovan J, Beasley D, Thuong-Thuong NT, Phu NH, Ha NT, Ourselin S, Razavi R, Thwaites GE, Modat M

PubMed · Jan 1, 2025
Tuberculous meningitis (TBM) leads to high mortality, especially amongst individuals with HIV. Predicting the incidence of disease-related complications is challenging, and the value of brain magnetic resonance imaging (MRI) for this purpose has not been well investigated. We used a convolutional neural network (CNN) to explore the complementary contribution of brain MRI to the conventional prognostic determinants. We pooled data from two randomised controlled trials of HIV-positive and HIV-negative adults with clinical TBM in Vietnam to predict the occurrence of death or new neurological complications in the first two months after the subject's first MRI session. We developed and compared three models: a logistic regression with clinical, demographic, and laboratory data as reference; a CNN that utilised only T1-weighted MRI volumes; and a model that fused all available information. All models were fine-tuned using two repetitions of 5-fold cross-validation. The final evaluation was based on a random 70/30 training/test split, stratified by outcome and HIV status. Based on the selected model, we explored the interpretability maps derived from the models. 215 patients were included, with an event prevalence of 22.3%. On the test set, our non-imaging model had a higher AUC (71.2% ± 1.1%) than the imaging-only model (67.3% ± 2.6%). The fused model was superior to both, with an average AUC of 77.3% ± 4.0% on the test set. The non-imaging variables were more informative in the HIV-positive group, while the imaging features were more predictive in the HIV-negative group. All three models performed better in the HIV-negative cohort. The interpretability maps show the model's focus on the lateral fissures, the corpus callosum, the midbrain, and peri-ventricular tissues. Imaging information can provide added value in predicting unwanted outcomes of TBM. However, a larger dataset is needed to confirm this finding.
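
A minimal late-fusion sketch of the kind of model the fused variant implies: a small 3D CNN encodes the T1-weighted volume, its embedding is concatenated with clinical and laboratory variables, and a shared head outputs the outcome logit. All layer sizes and the input resolution are assumptions; the paper's actual architecture is not specified in the abstract.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, n_clinical, emb_dim=32):
        super().__init__()
        self.cnn = nn.Sequential(  # tiny 3D encoder for the T1-weighted volume
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, emb_dim), nn.ReLU(),
        )
        self.head = nn.Sequential(  # shared head over fused features
            nn.Linear(emb_dim + n_clinical, 16), nn.ReLU(),
            nn.Linear(16, 1),  # logit for death / new neurological complication
        )

    def forward(self, volume, clinical):
        return self.head(torch.cat([self.cnn(volume), clinical], dim=1))

# Demo with random inputs: batch of 2 volumes plus 10 clinical variables each.
net = FusionNet(n_clinical=10)
logit = net(torch.randn(2, 1, 64, 64, 64), torch.randn(2, 10))
print(logit.shape)  # torch.Size([2, 1])
```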

MRISeqClassifier: A Deep Learning Toolkit for Precise MRI Sequence Classification.

Pan J, Chen Q, Sun C, Liang R, Bian J, Xu J

PubMed · Jan 1, 2025
Magnetic Resonance Imaging (MRI) is a crucial diagnostic tool in medicine, widely used to detect and assess various health conditions. Different MRI sequences, such as T1-weighted, T2-weighted, and FLAIR, serve distinct roles by highlighting different tissue characteristics and contrasts. However, distinguishing them based solely on the description file is currently impossible due to confusing or incorrect annotations. Additionally, there is a notable lack of effective tools to differentiate these sequences. In response, we developed a deep learning-based toolkit tailored for small, unrefined MRI datasets. This toolkit enables precise sequence classification and delivers performance comparable to systems trained on large, meticulously curated datasets. Utilizing lightweight model architectures and incorporating a voting ensemble method, the toolkit enhances accuracy and stability. It achieves a 99% accuracy rate using only 10% of the data typically required in other research. The code is available at https://github.com/JinqianPan/MRISeqClassifier.
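
The toolkit's voting ensemble can be illustrated with a simple soft-voting rule: average the class probabilities of several lightweight models and take the argmax. The sketch below uses untrained placeholder networks and a hypothetical sequence list; the actual models and classes live in the linked repository.

```python
import torch
import torch.nn as nn

SEQUENCES = ["T1", "T2", "FLAIR", "DWI", "other"]  # hypothetical class list

def ensemble_predict(models, image):
    """Soft voting: average each model's softmax output, then take the argmax."""
    with torch.no_grad():
        probs = torch.stack([m(image).softmax(dim=1) for m in models])
    return SEQUENCES[probs.mean(dim=0).argmax(dim=1).item()]

# Demo with three untrained placeholder networks on a 64x64 single-channel slice.
models = [nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, len(SEQUENCES)))
          for _ in range(3)]
print(ensemble_predict(models, torch.randn(1, 1, 64, 64)))
```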

Ground-truth-free deep learning approach for accelerated quantitative parameter mapping with memory efficient learning.

Fujita N, Yokosawa S, Shirai T, Terada Y

PubMed · Jan 1, 2025
Quantitative MRI (qMRI) requires the acquisition of multiple images with parameter changes, resulting in longer measurement times than conventional imaging. Deep learning (DL) for image reconstruction has shown a significant reduction in acquisition time and improved image quality. In qMRI, where the image contrast varies between sequences, preparing large, fully sampled (FS) datasets is challenging. Recently, methods that do not require FS data, such as self-supervised learning (SSL) and zero-shot self-supervised learning (ZSSSL), have been proposed. Another challenge is the large GPU memory requirement of DL-based qMRI image reconstruction, owing to the simultaneous processing of multiple contrast images. In this context, Kellman et al. proposed memory-efficient learning (MEL) to save GPU memory. This study evaluated SSL and ZSSSL frameworks with MEL to accelerate qMRI. Three experiments were conducted using the following sequences: 2D T2 mapping/MSME (Experiment 1), 3D T1 mapping/VFA-SPGR (Experiment 2), and 3D T2 mapping/DESS (Experiment 3). Each experiment used k-space data undersampled at acceleration factors (AFs) of 4, 8, and 12. The reconstructed maps were evaluated using quantitative metrics. Across the three reconstruction experiments, we compared supervised learning (SL) against the ground-truth-free methods SSL and ZSSSL. Overall, the performances of SSL and ZSSSL were only slightly inferior to that of SL, even under high-AF conditions. The quantitative errors in diagnostically important tissues (white matter, gray matter, and meniscus) were small, demonstrating that SSL and ZSSSL performed comparably to SL. Additionally, by incorporating a GPU memory-saving implementation, we demonstrated that the network can operate on a GPU with small memory (<8 GB) with minimal speed reduction. This study demonstrates the effectiveness of memory-efficient, ground-truth-free learning methods using MEL to accelerate qMRI.
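
The essence of MEL is to avoid storing activations for every unrolled reconstruction iteration and instead recompute them during the backward pass, trading compute for memory. The sketch below uses PyTorch's generic gradient checkpointing as a stand-in for the MEL formulation of Kellman et al.; the unrolled update, regularizer network, and iteration count are illustrative placeholders.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class UnrolledRecon(nn.Module):
    def __init__(self, n_iters=10):
        super().__init__()
        self.n_iters = n_iters
        self.reg = nn.Sequential(  # learned regularizer applied each iteration
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1))

    def step(self, x):
        return x - 0.1 * self.reg(x)  # one proximal-gradient-style update

    def forward(self, x):
        for _ in range(self.n_iters):
            # Activations inside each step are recomputed during backward,
            # so memory no longer grows with the number of unrolled iterations.
            x = checkpoint(self.step, x, use_reentrant=False)
        return x

net = UnrolledRecon()
out = net(torch.randn(1, 2, 64, 64, requires_grad=True))  # 2 channels: real/imag
out.mean().backward()
```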

Integrating multimodal imaging and peritumoral features for enhanced prostate cancer diagnosis: A machine learning approach.

Zhou H, Xie M, Shi H, Shou C, Tang M, Zhang Y, Hu Y, Liu X

PubMed · Jan 1, 2025
Prostate cancer is a common malignancy in men, and accurately distinguishing between benign and malignant nodules at an early stage is crucial for optimizing treatment. Multimodal imaging (such as ADC and T2) plays an important role in the diagnosis of prostate cancer, but effectively combining these imaging features for accurate classification remains a challenge. This retrospective study included MRI data from 199 prostate cancer patients. Radiomic features from both the tumor and peritumoral regions were extracted, and a random forest model was used to select the most contributive features for classification. Three machine learning models (Random Forest, XGBoost, and Extra Trees) were then constructed and trained on four different feature combinations (tumor ADC, tumor T2, tumor ADC+T2, and tumor + peritumoral ADC+T2). The model incorporating multimodal imaging features and peritumoral characteristics showed superior classification performance. The Extra Trees model outperformed the others across all feature combinations, particularly in the tumor + peritumoral ADC+T2 group, where the AUC reached 0.729. The AUC values for the other combinations also exceeded 0.65. While the Random Forest and XGBoost models performed slightly lower, they still demonstrated strong classification abilities, with AUCs ranging from 0.63 to 0.72. SHAP analysis revealed that key features, such as tumor texture and peritumoral gray-level features, significantly contributed to the model's classification decisions. The combination of multimodal imaging data with peritumoral features moderately improved the accuracy of prostate cancer classification. This model provides a non-invasive and effective diagnostic tool for clinical use and supports future personalized treatment decisions.
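
A minimal sketch of the pipeline described above: rank radiomic features with a random forest, keep the most contributive ones, and evaluate an Extra Trees classifier by cross-validated AUC. The feature matrix is synthetic, the feature count and top-k cutoff are assumptions, and for brevity the selection is done outside the cross-validation loop (in practice it should be nested to avoid leakage).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 199 patients x 120 tumor + peritumoral ADC/T2 features.
rng = np.random.default_rng(4)
X = rng.standard_normal((199, 120))
y = rng.integers(0, 2, size=199)  # benign vs. malignant label

# Step 1: random forest ranks feature importance; keep the top 20.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:20]

# Step 2: Extra Trees on the selected features, scored by 5-fold AUC.
clf = ExtraTreesClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(clf, X[:, top], y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.3f}")
```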