Page 136 of 138 (1373 results)

Ground-truth-free deep learning approach for accelerated quantitative parameter mapping with memory efficient learning.

Fujita N, Yokosawa S, Shirai T, Terada Y

PubMed · Jan 1 2025
Quantitative MRI (qMRI) requires the acquisition of multiple images with parameter changes, resulting in longer measurement times than conventional imaging. Deep learning (DL) for image reconstruction has shown a significant reduction in acquisition time and improved image quality. In qMRI, where the image contrast varies between sequences, preparing large, fully-sampled (FS) datasets is challenging. Recently, methods that do not require FS data, such as self-supervised learning (SSL) and zero-shot self-supervised learning (ZSSSL), have been proposed. Another challenge is the large GPU memory requirement for DL-based qMRI image reconstruction, owing to the simultaneous processing of multiple contrast images. In this context, Kellman et al. proposed memory-efficient learning (MEL) to save GPU memory. This study evaluated SSL and ZSSSL frameworks with MEL to accelerate qMRI. Three experiments were conducted using the following sequences: 2D T2 mapping/MSME (Experiment 1), 3D T1 mapping/VFA-SPGR (Experiment 2), and 3D T2 mapping/DESS (Experiment 3). Each experiment used undersampled k-space data at acceleration factors (AFs) of 4, 8, and 12. The reconstructed maps were evaluated using quantitative metrics. We performed three qMRI reconstruction measurements and compared the performance of supervised learning (SL) with that of the ground-truth (GT)-free learning methods, SSL and ZSSSL. Overall, the performances of SSL and ZSSSL were only slightly inferior to those of SL, even under high-AF conditions. The quantitative errors in diagnostically important tissues (white matter, gray matter, and meniscus) were small, demonstrating that SSL and ZSSSL performed comparably to SL. Additionally, by incorporating a GPU-memory-saving implementation, we demonstrated that the network can operate on a GPU with small memory (< 8 GB) with minimal speed reduction. This study demonstrates the effectiveness of memory-efficient, GT-free learning methods using MEL to accelerate qMRI.
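The ground-truth-free trick in SSL/ZSSSL-style reconstruction is to split the acquired (undersampled) k-space samples into two disjoint sets, feed one to the network, and compute the training loss on the held-out set. A minimal numpy sketch of that split, as a generic illustration of the idea rather than the authors' implementation (the 1D mask shape and the 40% loss fraction are arbitrary assumptions):

```python
import numpy as np

def split_kspace_mask(sampled_mask, loss_fraction=0.4, seed=0):
    """Split sampled k-space locations into two disjoint sets: one used
    as network input, one held out for the training loss. This is the
    core of self-supervised (ground-truth-free) reconstruction: no
    fully sampled reference is needed."""
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(sampled_mask)            # sampled locations
    n_loss = int(len(idx) * loss_fraction)
    loss_idx = rng.choice(idx, size=n_loss, replace=False)
    loss_mask = np.zeros_like(sampled_mask)
    loss_mask.flat[loss_idx] = 1
    input_mask = sampled_mask - loss_mask         # disjoint remainder
    return input_mask, loss_mask

# Example: a 1D undersampling pattern at acceleration factor 4.
mask = np.zeros(64, dtype=int)
mask[::4] = 1
inp, loss = split_kspace_mask(mask)
```

In practice the split is typically redrawn each epoch, and in zero-shot variants it is applied to a single scan's own data.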

Clinical-radiomics models with machine-learning algorithms to distinguish uncomplicated from complicated acute appendicitis in adults: a multiphase multicenter cohort study.

Li L, Sun Y, Sun Y, Gao Y, Zhang B, Qi R, Sheng F, Yang X, Liu X, Liu L, Lu C, Chen L, Zhang K

PubMed · Jan 1 2025
Increasing evidence suggests that non-operative management (NOM) with antibiotics could serve as a safe alternative to surgery for the treatment of uncomplicated acute appendicitis (AA). However, accurately differentiating between uncomplicated and complicated AA remains challenging. Our aim was to develop and validate machine-learning-based diagnostic models to differentiate uncomplicated from complicated AA. This was a multicenter cohort study conducted between January 2021 and December 2022 across five tertiary hospitals. Three distinct diagnostic models were created: the clinical-parameter-based model, the CT-radiomics-based model, and the clinical-radiomics-fused model. These models were developed using a comprehensive set of eight machine-learning algorithms: logistic regression (LR), support vector machine (SVM), random forest (RF), decision tree (DT), gradient boosting (GB), K-nearest neighbors (KNN), Gaussian Naïve Bayes (GNB), and multi-layer perceptron (MLP). The performance and accuracy of these models were compared. All models exhibited excellent diagnostic performance in the training cohort, achieving a maximal AUC of 1.00. For the clinical-parameter model, the GB classifier yielded the optimal AUC of 0.77 (95% confidence interval [CI]: 0.64-0.90) in the testing cohort, while the LR classifier yielded the optimal AUC of 0.76 (95% CI: 0.66-0.86) in the validation cohort. For the CT-radiomics-based model, the GB classifier achieved the best AUC of 0.74 (95% CI: 0.60-0.88) in the testing cohort, and the SVM classifier yielded an optimal AUC of 0.63 (95% CI: 0.51-0.75) in the validation cohort. For the clinical-radiomics-fused model, the RF classifier yielded an optimal AUC of 0.84 (95% CI: 0.74-0.95) in the testing cohort and 0.76 (95% CI: 0.67-0.86) in the validation cohort. An open-access, user-friendly online tool was developed for clinical application.
This multicenter study suggests that the clinical-radiomics-fused model, constructed using the RF algorithm, effectively differentiated between complicated and uncomplicated AA.
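AUC values like those reported above can be computed directly from classifier scores via the Mann-Whitney formulation: the fraction of (positive, negative) pairs the model ranks correctly. A small self-contained sketch (the toy labels and scores are invented for illustration):

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a win."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Pairwise comparison of every positive against every negative.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: scores for 2 complicated (1) and 2 uncomplicated (0) cases.
y = [0, 0, 1, 1]
s = [0.10, 0.40, 0.35, 0.80]
print(auc_score(y, s))  # → 0.75
```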

Recognition of flight cadets brain functional magnetic resonance imaging data based on machine learning analysis.

Ye L, Weng S, Yan D, Ma S, Chen X

PubMed · Jan 1 2025
The rapid advancement of the civil aviation industry has attracted significant attention to research on pilots. However, compared with the general population, the brain changes that flight cadets undergo during training remain largely unexplored. The aim of this study was to examine the impact of flight training on brain function by employing machine learning (ML) techniques. We collected resting-state functional magnetic resonance imaging (resting-state fMRI) data from 79 flight cadets and ground program cadets, extracting blood oxygenation level-dependent (BOLD) signal, amplitude of low-frequency fluctuation (ALFF), regional homogeneity (ReHo), and functional connectivity (FC) metrics as feature inputs for ML models. After conducting feature selection using a two-sample t-test, we established various ML classification models, including Extreme Gradient Boosting (XGBoost), Logistic Regression (LR), Random Forest (RF), Support Vector Machine (SVM), and Gaussian Naive Bayes (GNB). Comparative analysis of the model results revealed that the LR classifier based on BOLD signals could accurately distinguish flight cadets from the general population, achieving an AUC of 83.75% and an accuracy of 0.93. Furthermore, an analysis of the features contributing significantly to the ML classification models indicated that these features were predominantly located in brain regions associated with auditory-visual processing, motor function, emotional regulation, and cognition, primarily within the Default Mode Network (DMN), Visual Network (VN), and SomatoMotor Network (SMN). These findings suggest that flight-trained cadets may exhibit enhanced functional dynamics and cognitive flexibility.
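The two-sample t-test filter used here for feature selection can be sketched in a few lines of numpy, as a generic illustration with synthetic groups rather than the study's data (the |t| > 2.0 threshold is an arbitrary assumption):

```python
import numpy as np

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic (equal-variance form)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(sp2 * (1 / na + 1 / nb))

def select_features(X_group1, X_group2, t_threshold=2.0):
    """Keep columns (features) whose |t| exceeds a threshold, a simple
    filter-style selection step run before classifier training."""
    t_vals = np.array([two_sample_t(X_group1[:, j], X_group2[:, j])
                       for j in range(X_group1.shape[1])])
    return np.flatnonzero(np.abs(t_vals) > t_threshold), t_vals

rng = np.random.default_rng(0)
# Feature 0 differs between groups; feature 1 is pure noise.
g1 = np.column_stack([rng.normal(1.0, 0.2, 30), rng.normal(0, 1, 30)])
g2 = np.column_stack([rng.normal(0.0, 0.2, 30), rng.normal(0, 1, 30)])
kept, t_vals = select_features(g1, g2)
```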

Cervical vertebral body segmentation in X-ray and magnetic resonance imaging based on YOLO-UNet: Automatic segmentation approach and available tool.

Wang H, Lu J, Yang S, Xiao Y, He L, Dou Z, Zhao W, Yang L

PubMed · Jan 1 2025
Cervical spine disorders are becoming increasingly common, particularly among sedentary populations. The accurate segmentation of cervical vertebrae is critical for diagnostic and research applications. Traditional segmentation methods are limited in terms of precision and applicability across imaging modalities. The aim of this study is to develop and evaluate a fully automatic segmentation method and a user-friendly tool for detecting cervical vertebral bodies using a combined neural network model based on the YOLOv11 and U-Net3+ models. A dataset of X-ray and magnetic resonance imaging (MRI) images was collected, enhanced, and annotated to include 2136 X-ray images and 2184 MRI images. The proposed YOLO-UNet ensemble model was trained and compared with four other image extraction models: YOLOv11, DeepLabV3+, U-Net3+ for direct image segmentation, and the YOLO-DeepLab network. The evaluation metrics included the Dice coefficient, Hausdorff distance, intersection over union, positive predictive value, and sensitivity. The YOLO-UNet model combined the advantages of the YOLO and U-Net models and demonstrated excellent vertebral body segmentation on both X-ray and MRI datasets, with results closer to the ground truth images. Compared with the other models, it achieved greater accuracy and a more faithful depiction of the vertebral body shape, demonstrated better versatility, and exhibited superior performance across all evaluation indicators. The YOLO-UNet network model provided a robust and versatile solution for cervical vertebral body segmentation, demonstrating excellent accuracy and adaptability across imaging modalities on both X-ray and MRI datasets. The accompanying user-friendly tool enhanced usability, making it accessible to both clinical and research users.
In this study, the challenge of large-scale medical annotation tasks was addressed, thereby reducing project costs and supporting advancements in medical information technology and clinical research.
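The Dice coefficient and intersection over union used as evaluation metrics above have compact definitions worth stating explicitly; a minimal numpy sketch with toy masks:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(pred, target):
    """Intersection over union for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

# Toy vertebral-body masks on a 4x4 grid.
gt = np.zeros((4, 4), dtype=int); gt[1:3, 1:3] = 1   # 4 pixels
pr = np.zeros((4, 4), dtype=int); pr[1:3, 1:4] = 1   # 6 pixels, 4 overlapping
print(dice_coefficient(pr, gt))  # → 0.8
```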

Investigating methods to enhance interpretability and performance in cardiac MRI for myocardial scarring diagnosis using convolutional neural network classification and One Match.

Udin MH, Armstrong S, Kai A, Doyle ST, Pokharel S, Ionita CN, Sharma UC

PubMed · Jan 1 2025
Machine learning (ML) classification of myocardial scarring in cardiac MRI is often hindered by limited explainability, particularly with convolutional neural networks (CNNs). To address this, we developed One Match (OM), an algorithm that builds on template matching to improve both the explainability and performance of ML myocardial scarring classification. By incorporating OM, we aim to foster trust in AI models for medical diagnostics and demonstrate that improved interpretability does not have to compromise classification accuracy. Using a cardiac MRI dataset from 279 patients, this study evaluates One Match, which classifies myocardial scarring in images by matching each image to a set of labeled template images. It uses the highest correlation score from these matches for classification and is compared to a traditional sequential CNN. Enhancements such as autodidactic enhancement (AE) and patient-level classifications (PLCs) were applied to improve the predictive accuracy of both methods. Results are reported as accuracy, sensitivity, specificity, precision, and F1-score. The highest classification performance was observed with the OM algorithm when enhanced by both AE and PLCs: 95.3% accuracy, 92.3% sensitivity, 96.7% specificity, 92.3% precision, and 92.3% F1-score, marking a significant improvement over the base configurations. AE alone had a positive impact on OM, increasing accuracy from 89.0% to 93.2%, but decreased the accuracy of the CNN from 85.3% to 82.9%. In contrast, PLCs improved accuracy for both the CNN and OM, raising the CNN's accuracy by 4.2% and OM's by 7.4%. This study demonstrates the effectiveness of OM in classifying myocardial scars, particularly when enhanced with AE and PLCs. The interpretability of OM also enabled the examination of misclassifications, providing insights that could accelerate development and foster greater trust among clinical stakeholders.
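The template-matching core described above, assigning an image the label of its highest-correlation template, can be sketched with normalized cross-correlation. This is a generic illustration of why the approach is interpretable (the best-matching template can be shown to a clinician); the authors' OM algorithm and its AE/PLC enhancements are not reproduced here, and the toy images are invented:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation (Pearson r) between two images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def classify_by_template(image, templates, labels):
    """Assign the label of the template with the highest correlation
    score; the winning template itself explains the decision."""
    scores = [ncc(image, t) for t in templates]
    best = int(np.argmax(scores))
    return labels[best], scores[best]

rng = np.random.default_rng(1)
scar = rng.normal(size=(8, 8))       # stand-in "scar" template
healthy = rng.normal(size=(8, 8))    # stand-in "healthy" template
query = scar + 0.1 * rng.normal(size=(8, 8))   # noisy scar-like image
label, score = classify_by_template(query, [scar, healthy], ["scar", "healthy"])
```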

Metal artifact reduction combined with deep learning image reconstruction algorithm for CT image quality optimization: a phantom study.

Zou H, Wang Z, Guo M, Peng K, Zhou J, Zhou L, Fan B

PubMed · Jan 1 2025
This study aimed to evaluate the effects of the smart metal artifact reduction (MAR) algorithm and combinations of various scanning parameters, including radiation dose level, tube voltage, and reconstruction algorithm, on metal artifact reduction and overall image quality, and to identify the optimal protocol for clinical application. A phantom with a pacemaker was examined at a standard dose (effective dose (ED): 3 mSv) and a low dose (ED: 0.5 mSv), with three tube voltages (70, 100, and 120 kVp) selected for each dose. Raw data were reconstructed using 50% adaptive statistical iterative reconstruction-V (ASIR-V), ASIR-V with MAR, high-strength deep learning image reconstruction (DLIR-H), and DLIR-H with MAR. Quantitative analyses (artifact index (AI), noise, signal-to-noise ratio (SNR) of artifact-impaired pulmonary nodules (PNs), and noise power spectrum (NPS) of artifact-free regions) and qualitative evaluation were performed. Quantitatively, the deep learning image reconstruction (DLIR) algorithm and high tube voltages exhibited lower noise than ASIR-V and low tube voltages (p < 0.001). The AI of images with MAR or high tube voltages was significantly lower than that of images without MAR or with low tube voltages (p < 0.001). No significant difference in AI was observed between low-dose images with 120 kVp DLIR-H MAR and standard-dose images with 70 kVp ASIR-V MAR (p = 0.143). Only the 70 kVp 3 mSv protocol demonstrated statistically significant differences in SNR for artifact-impaired PNs (p = 0.041). The f_peak and f_avg values were similar across scenarios, indicating that the MAR algorithm did not alter the image texture in artifact-free regions. The qualitative results for the extent of metal artifacts, the confidence in diagnosing artifact-impaired PNs, and the overall image quality were generally consistent with the quantitative results.
The MAR algorithm combined with DLIR-H can reduce metal artifacts and enhance the overall image quality, particularly at high tube voltages.
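The artifact index and SNR metrics above have simple standard forms; a sketch under the commonly used definition AI = sqrt(SD_artifact² − SD_ref²), which may differ in detail from this study's exact measurement protocol (the numeric ROI values are invented):

```python
import numpy as np

def artifact_index(sd_artifact_roi, sd_reference_roi):
    """Common artifact index: the excess noise in an ROI near the metal
    implant relative to an artifact-free reference ROI,
    AI = sqrt(SD_artifact^2 - SD_ref^2), clipped at zero."""
    return float(np.sqrt(max(sd_artifact_roi ** 2 - sd_reference_roi ** 2, 0.0)))

def snr(roi):
    """Signal-to-noise ratio of an ROI: mean value / noise SD."""
    roi = np.asarray(roi, dtype=float)
    return float(roi.mean() / roi.std(ddof=1))

# Toy numbers: SD 50 HU near the pacemaker vs 30 HU in a clean region.
print(artifact_index(50.0, 30.0))  # → 40.0
```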

A plaque recognition algorithm for coronary OCT images by Dense Atrous Convolution and attention mechanism.

Meng H, Zhao R, Zhang Y, Zhang B, Zhang C, Wang D, Sun J

PubMed · Jan 1 2025
Currently, plaque segmentation in Optical Coherence Tomography (OCT) images of coronary arteries is primarily carried out manually by physicians, and the accuracy of existing automatic segmentation techniques needs further improvement. To furnish efficient and precise decision support, automated detection of plaques in coronary OCT images is of paramount importance. To address these challenges, we propose a novel deep learning algorithm featuring Dense Atrous Convolution (DAC) and an attention mechanism to realize high-precision segmentation and classification of coronary artery plaques. We also constructed a relatively well-established dataset covering 760 original images, expanded to 8,000 using data augmentation; this dataset serves as a significant resource for future research. The experimental results demonstrate that the Dice coefficients of calcified, fibrous, and lipid plaques are 0.913, 0.900, and 0.879, respectively, surpassing those generated by five other conventional medical image segmentation networks. These outcomes strongly attest to the effectiveness and superiority of our proposed algorithm in the task of automatic coronary artery plaque segmentation.
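Expanding 760 images to 8,000 by augmentation can be done with simple geometric transforms; a minimal numpy sketch (the specific flips and rotations below are assumptions, since the abstract does not list the augmentations actually used):

```python
import numpy as np

def augment(image):
    """Generate simple geometric variants of one image: the original,
    horizontal/vertical flips, and 90-degree rotations. Combining such
    transforms is one way a dataset of hundreds of OCT frames can be
    expanded roughly tenfold for training."""
    variants = [image, np.fliplr(image), np.flipud(image)]
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return variants

img = np.arange(16).reshape(4, 4)   # stand-in for an OCT frame
aug = augment(img)
print(len(aug))  # → 6
```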

Radiomics machine learning based on asymmetrically prominent cortical and deep medullary veins combined with clinical features to predict prognosis in acute ischemic stroke: a retrospective study.

Li H, Chang C, Zhou B, Lan Y, Zang P, Chen S, Qi S, Ju R, Duan Y

PubMed · Jan 1 2025
Acute ischemic stroke (AIS) has a poor prognosis and a high recurrence rate. Predicting the outcomes of AIS patients in the early stages of the disease is therefore important. The establishment of intracerebral collateral circulation significantly improves the survival of brain cells and the outcomes of AIS patients. However, no machine learning method has been applied to investigate the correlation between the dynamic evolution of intracerebral venous collateral circulation and AIS prognosis. Therefore, we employed a support vector machine (SVM) algorithm to analyze asymmetrically prominent cortical veins (APCVs) and deep medullary veins (DMVs) to establish a radiomic model for predicting the prognosis of AIS in combination with clinical indicators. The magnetic resonance imaging (MRI) data and clinical indicators of 150 AIS patients were retrospectively analyzed. Regions of interest corresponding to the DMVs and APCVs were delineated, and least absolute shrinkage and selection operator (LASSO) regression was used to select features extracted from these regions. An APCV-DMV radiomic model was created via the SVM algorithm, and independent clinical risk factors associated with AIS were combined with the radiomic model to generate a joint model. The SVM algorithm was selected because of its proven efficacy in handling high-dimensional radiomic data compared with alternative classifiers (e.g., random forest) in pilot experiments. Nine radiomic features associated with AIS patient outcomes were ultimately selected. In the internal training and test sets, the AUCs of the clinical, APCV-DMV radiomic, and joint models were 0.816, 0.976, and 0.996, respectively. The DeLong test revealed that the predictive performance of the joint model was better than that of the individual models, with a test set AUC of 0.996, sensitivity of 0.905, and specificity of 1.000 (P < 0.05).
Using radiomic methods, we propose a novel joint predictive model that combines the radiomic features of the APCVs and DMVs with clinical indicators. This model quantitatively characterizes the morphological and functional attributes of venous collateral circulation, elucidating its important role in accurately evaluating the prognosis of patients with AIS and providing a noninvasive and highly accurate imaging tool for early prognostic prediction.
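LASSO's feature selection comes from the soft-thresholding operator, which shrinks coefficients and sets weak ones exactly to zero. A self-contained coordinate-descent sketch on synthetic data, as a generic illustration of the selection step rather than the study's radiomics pipeline:

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding, the building block of LASSO: shrinks values
    toward zero and zeroes out anything with |x| <= lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """Tiny LASSO solver by cyclic coordinate descent for the objective
    (1/2n)||y - Xw||^2 + lam*||w||_1. Features whose coefficients end
    up exactly zero are effectively dropped."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]       # partial residual excluding j
            rho = X[:, j] @ r / n
            w[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] + 0.05 * rng.normal(size=100)  # only feature 0 matters
w = lasso_coordinate_descent(X, y, lam=0.5)
```

With a sufficiently large penalty, the four irrelevant coefficients are driven to zero while the informative one survives (shrunk by roughly the penalty).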

MRISeqClassifier: A Deep Learning Toolkit for Precise MRI Sequence Classification.

Pan J, Chen Q, Sun C, Liang R, Bian J, Xu J

PubMed · Jan 1 2025
Magnetic Resonance Imaging (MRI) is a crucial diagnostic tool in medicine, widely used to detect and assess various health conditions. Different MRI sequences, such as T1-weighted, T2-weighted, and FLAIR, serve distinct roles by highlighting different tissue characteristics and contrasts. However, distinguishing them based solely on the description file is currently impossible due to confusing or incorrect annotations. Additionally, there is a notable lack of effective tools to differentiate these sequences. In response, we developed a deep learning-based toolkit tailored for small, unrefined MRI datasets. This toolkit enables precise sequence classification and delivers performance comparable to systems trained on large, meticulously curated datasets. Utilizing lightweight model architectures and incorporating a voting ensemble method, the toolkit enhances accuracy and stability. It achieves a 99% accuracy rate using only 10% of the data typically required in other research. The code is available at https://github.com/JinqianPan/MRISeqClassifier.
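The voting ensemble mentioned above can be sketched as a simple majority vote over per-model predictions (the stand-in models and sequence labels below are hypothetical):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model sequence predictions by majority vote; ties
    are broken by the first label to reach the top count."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(models, image):
    """Apply each model to the image and vote. 'models' are callables
    returning a sequence label such as 'T1', 'T2', or 'FLAIR'."""
    return majority_vote([m(image) for m in models])

# Toy models disagreeing on one image.
models = [lambda x: "T1", lambda x: "T2", lambda x: "T1"]
print(ensemble_predict(models, image=None))  # → T1
```

Voting over several lightweight models trades a little inference cost for the stability the abstract highlights: a single model's occasional misclassification is outvoted.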

Enhancing Disease Detection in Radiology Reports Through Fine-tuning Lightweight LLM on Weak Labels.

Wei Y, Wang X, Ong H, Zhou Y, Flanders A, Shih G, Peng Y

PubMed · Jan 1 2025
Despite significant progress in applying large language models (LLMs) to the medical domain, several limitations still prevent their practical application. Among these are the constraints on model size and the lack of cohort-specific labeled datasets. In this work, we investigated the potential of improving a lightweight LLM, such as Llama 3.1-8B, through fine-tuning with datasets using synthetic labels. Two tasks were jointly trained by combining their respective instruction datasets. When the quality of the task-specific synthetic labels is relatively high (e.g., generated by GPT-4o), Llama 3.1-8B achieves satisfactory performance on the open-ended disease detection task, with a micro F1 score of 0.91. Conversely, when the quality of the task-relevant synthetic labels is relatively low (e.g., from the MIMIC-CXR dataset), fine-tuned Llama 3.1-8B is able to surpass its noisy teacher labels (micro F1 score of 0.67 vs. 0.63) when calibrated against curated labels, indicating the model's strong inherent capability. These findings demonstrate the potential of fine-tuning LLMs with synthetic labels, offering a promising direction for future research on LLM specialization in the medical domain.
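Micro-averaged F1, the metric reported above, pools true positives, false positives, and false negatives across all examples before computing precision and recall; a minimal sketch with invented report-level disease labels:

```python
def micro_f1(true_sets, pred_sets):
    """Micro-averaged F1 over multi-label predictions: pool TP, FP,
    and FN counts across all examples, then compute precision, recall,
    and their harmonic mean."""
    tp = fp = fn = 0
    for t, p in zip(true_sets, pred_sets):
        tp += len(t & p)   # labels predicted and present
        fp += len(p - t)   # labels predicted but absent
        fn += len(t - p)   # labels present but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Toy per-report disease labels (hypothetical).
truth = [{"edema"}, {"effusion", "pneumonia"}, set()]
preds = [{"edema"}, {"effusion"}, {"pneumonia"}]
print(round(micro_f1(truth, preds), 3))  # → 0.667
```

Unlike macro averaging, micro averaging weights every label occurrence equally, so frequent findings dominate the score.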