Page 139 of 140 (1394 results)

Convolutional neural network using magnetic resonance brain imaging to predict outcome from tuberculosis meningitis.

Dong THK, Canas LS, Donovan J, Beasley D, Thuong-Thuong NT, Phu NH, Ha NT, Ourselin S, Razavi R, Thwaites GE, Modat M

pubmed · logopapers · Jan 1 2025
Tuberculous meningitis (TBM) leads to high mortality, especially amongst individuals with HIV. Predicting the incidence of disease-related complications is challenging, and the value of brain magnetic resonance imaging (MRI) for this purpose has not been well investigated. We used a convolutional neural network (CNN) to explore the complementary contribution of brain MRI to conventional prognostic determinants. We pooled data from two randomised controlled trials of HIV-positive and HIV-negative adults with clinical TBM in Vietnam to predict the occurrence of death or new neurological complications in the first two months after the subject's first MRI session. We developed and compared three models: a logistic regression with clinical, demographic and laboratory data as reference, a CNN that utilised only T1-weighted MRI volumes, and a model that fused all available information. All models were fine-tuned using two repetitions of 5-fold cross-validation. The final evaluation was based on a random 70/30 training/test split, stratified by outcome and HIV status. We then explored interpretability maps derived from the selected model. 215 patients were included, with an event prevalence of 22.3%. On the test set, our non-imaging model had a higher AUC (71.2% ± 1.1%) than the imaging-only model (67.3% ± 2.6%). The fused model was superior to both, with an average AUC of 77.3% ± 4.0% on the test set. The non-imaging variables were more informative in the HIV-positive group, while the imaging features were more predictive in the HIV-negative group. All three models performed better in the HIV-negative cohort. The interpretability maps show the model's focus on the lateral fissures, the corpus callosum, the midbrain, and peri-ventricular tissues. Imaging information can add value for predicting unwanted outcomes of TBM; however, a larger dataset is needed to confirm this finding.
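The stratified 70/30 split described in this abstract can be sketched as follows. The data below are synthetic stand-ins: the cohort size (215) and event prevalence (22.3%) are taken from the abstract, but the outcome and HIV labels are simulated, not the study data.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 215  # cohort size reported in the abstract
outcome = (rng.random(n) < 0.223).astype(int)  # ~22.3% event prevalence
hiv = rng.integers(0, 2, size=n)               # simulated HIV status

# Stratify jointly on outcome and HIV status, as described for the 70/30 split
strata = outcome * 2 + hiv
idx = np.arange(n)
train_idx, test_idx = train_test_split(
    idx, test_size=0.3, stratify=strata, random_state=0
)
```

Joint stratification keeps the event rate and HIV balance nearly identical between the training and test partitions, which matters when the event prevalence is as low as ~22%.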

Radiomics of Dynamic Contrast-Enhanced MRI for Predicting Radiation-Induced Hepatic Toxicity After Intensity Modulated Radiotherapy for Hepatocellular Carcinoma: A Machine Learning Predictive Model Based on the SHAP Methodology.

Liu F, Chen L, Wu Q, Li L, Li J, Su T, Li J, Liang S, Qing L

pubmed · logopapers · Jan 1 2025
To develop an interpretable machine learning (ML) model using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) radiomic data, dosimetric parameters, and clinical data for predicting radiation-induced hepatic toxicity (RIHT) in patients with hepatocellular carcinoma (HCC) following intensity-modulated radiation therapy (IMRT). A retrospective analysis of 150 HCC patients was performed, with a 7:3 ratio used to divide the data into training and validation cohorts. Radiomic features from the original MRI sequences and Delta-radiomic features were extracted. Seven radiomics-based ML models were developed: logistic regression (LR), random forest (RF), support vector machine (SVM), eXtreme Gradient Boosting (XGBoost), adaptive boosting (AdaBoost), decision tree (DT), and artificial neural network (ANN). The predictive performance of the models was evaluated using receiver operating characteristic (ROC) curve analysis and calibration curves. Shapley additive explanations (SHAP) were employed to interpret the contribution of each variable and its risk threshold. Original radiomic features and Delta-radiomic features were extracted from DCE-MRI images and filtered to generate Radiomics-scores and Delta-Radiomics-scores. These were then combined with independent risk factors (body mass index (BMI), V5, and pre-Child-Pugh score (pre-CP)) identified through univariate and multivariate logistic regression and Spearman correlation analysis to construct the ML models. In the training cohort, the AUC values were 0.8651 for LR, 0.7004 for RF, 0.6349 for SVM, 0.6706 for XGBoost, 0.7341 for AdaBoost, 0.6806 for DT, and 0.6786 for ANN. The corresponding accuracies were 84.4%, 65.6%, 75.0%, 65.6%, 71.9%, 68.8%, and 71.9%, respectively. The validation cohort further confirmed the superiority of the LR model, which was selected as the optimal model. SHAP analysis revealed that Delta-radiomics made a substantial positive contribution to the model.
The interpretable ML model based on radiomics provides a non-invasive tool for predicting RIHT in patients with HCC, demonstrating satisfactory discriminative performance.
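For the selected LR model, SHAP values have a closed form: in log-odds space they reduce to coefficient times the feature's deviation from its mean, with no sampling approximation. A minimal sketch on synthetic data follows; the five columns merely stand in for the Radiomics-score, Delta-Radiomics-score, BMI, V5, and pre-CP variables named above, and none of this is the study data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative synthetic stand-ins for the five predictors in the abstract
rng = np.random.default_rng(1)
X = rng.normal(size=(150, 5))
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=150) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Exact per-sample SHAP values for a linear model: coef * (x - E[x])
phi = model.coef_[0] * (X - X.mean(axis=0))
mean_abs_contrib = np.abs(phi).mean(axis=0)  # global importance ranking
```

The additivity property holds exactly here: each sample's contributions sum (with the base value at the feature means) to the model's log-odds output, which is the consistency check SHAP plots rely on.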

Principles for Developing a Large-Scale Point-of-Care Ultrasound Education Program: Insights from a Tertiary University Medical Center in Israel.

Dayan RR, Karni O, Shitrit IB, Gaufberg R, Ilan K, Fuchs L

pubmed · logopapers · Jan 1 2025
Point-of-care ultrasound (POCUS) has transformed bedside diagnostics, yet its operator-dependent nature and lack of structured training remain significant barriers. To address these challenges, Ben Gurion University (BGU) developed a longitudinal six-year POCUS curriculum, emphasizing early integration, competency-based training, and scalable educational strategies to enhance medical education and patient care. To implement a structured and scalable POCUS curriculum that progressively builds technical proficiency, clinical judgment, and diagnostic accuracy, ensuring medical students effectively integrate POCUS into clinical practice. The curriculum incorporates hands-on training, self-directed learning, a structured spiral approach, and peer-led instruction. Early exposure in physics and anatomy courses establishes a foundation, progressing to bedside applications in clinical years. Advanced technologies, including AI-driven feedback and telemedicine, enhance skill retention and address faculty shortages by providing scalable solutions for ongoing assessment and feedback. Since its implementation in 2014, the program has trained hundreds of students, with longitudinal proficiency data from over 700 students. Internal studies have demonstrated that self-directed learning modules match or exceed in-person instruction for ultrasound skill acquisition, AI-driven feedback enhances image acquisition, and early clinical integration of POCUS positively influences patient care. Preliminary findings suggest that telemedicine-based instructor feedback improves cardiac ultrasound proficiency over time, and AI-assisted probe manipulation and self-learning with ultrasound simulators may further optimize training without requiring in-person instruction. A structured longitudinal approach ensures progressive skill acquisition while addressing faculty shortages and training limitations. 
Cost-effective strategies, such as peer-led instruction, AI feedback, and telemedicine, support skill development and sustainability. Emphasizing clinical integration ensures students learn to use POCUS as a targeted diagnostic adjunct rather than a broad screening tool, reinforcing its role as an essential skill in modern medical education.

XLLC-Net: A lightweight and explainable CNN for accurate lung cancer classification using histopathological images.

Jim JR, Rayed ME, Mridha MF, Nur K

pubmed · logopapers · Jan 1 2025
Lung cancer imaging plays a crucial role in early diagnosis and treatment, where machine learning and deep learning have significantly advanced the accuracy and efficiency of disease classification. This study introduces the Explainable and Lightweight Lung Cancer Net (XLLC-Net), a streamlined convolutional neural network designed for classifying lung cancer from histopathological images. Using the LC25000 dataset, which includes three lung cancer classes and two colon cancer classes, we focused solely on the three lung cancer classes for this study. XLLC-Net effectively discerns complex disease patterns within these classes. The model consists of four convolutional layers and contains merely 3 million parameters, considerably reducing its computational footprint compared to existing deep learning models. This compact architecture facilitates efficient training, completing each epoch in just 60 seconds. Remarkably, XLLC-Net achieves a classification accuracy of 99.62% ± 0.16%, with precision, recall, and F1 score of 99.33% ± 0.30%, 99.67% ± 0.30%, and 99.70% ± 0.30%, respectively. Furthermore, the integration of explainable AI techniques, such as saliency maps and Grad-CAM, enhances the interpretability of the model, offering clear visual insights into its decision-making process. Our results underscore the potential of lightweight deep learning models in medical imaging, providing high accuracy and rapid training while ensuring model transparency and reliability.
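A back-of-envelope parameter count makes budgets like "four convolutional layers, ~3 million parameters" concrete. The channel widths below are illustrative assumptions, not XLLC-Net's published configuration; with these widths the convolutional stack alone stays well under 1 M parameters, so most of a 3 M budget would typically sit in wider channels or dense layers.

```python
def conv_params(c_in, c_out, k=3):
    """Weights plus biases for one k x k convolution layer."""
    return c_in * c_out * k * k + c_out

# Hypothetical channel widths for a four-conv backbone (RGB input, 3 classes)
widths = [3, 32, 64, 128, 256]
conv_total = sum(conv_params(a, b) for a, b in zip(widths, widths[1:]))

# A global-average-pooled dense head: 256 features -> 3 lung cancer classes
head = 256 * 3 + 3
total = conv_total + head
```

Counting this way before training is a quick sanity check that an architecture really is "lightweight" relative to models like VGG16 (~138 M parameters).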

Volumetric atlas of the rat inner ear from microCT and iDISCO+ cleared temporal bones.

Cossellu D, Vivado E, Batti L, Gantar I, Pizzala R, Perin P

pubmed · logopapers · Jan 1 2025
Volumetric atlases are an invaluable tool in neuroscience and otolaryngology, greatly aiding experiment planning and surgical interventions, as well as the interpretation of experimental and clinical data. The rat is a major animal model for hearing and balance studies, and a detailed volumetric atlas for the rat central auditory system (Waxholm) is available. However, the Waxholm rat atlas only contains a low-resolution inner ear featuring five structures. In the present work, we segmented and annotated 34 structures in the rat inner ear, yielding a detailed volumetric inner ear atlas that can be integrated with the Waxholm rat brain atlas. We performed iodine-enhanced microCT and iDISCO+-based clearing with fluorescence lightsheet microscopy imaging on a sample of rat temporal bones. Image stacks were segmented in a semiautomated way, and 34 inner ear volumes were reconstructed from five samples. Using geometrical morphometry, high-resolution segmentations obtained from lightsheet and microCT stacks were registered into the coordinate system of the Waxholm rat atlas. Cleared sample autofluorescence was used for the reconstruction of most inner ear structures, including fluid-filled compartments, nerves and sensory epithelia, blood vessels, and connective tissue structures. Image resolution allowed reconstruction of thin ducts (reuniting, saccular, and endolymphatic) and the utriculoendolymphatic valve. The vestibulocochlear artery coursing through bone was found to be associated with the reuniting duct and to be visible in both cleared and microCT samples, making it possible to infer duct location from microCT scans. Cleared labyrinths showed minimal shape distortions, as shown by alignment with microCT and Waxholm labyrinths.
However, membranous labyrinths could display variable collapse of the superior division, especially the roof of the canal ampullae, whereas the inferior division (saccule and cochlea) was well preserved, with the exception of Reissner's membrane, which could display ruptures in the second cochlear turn. As an example of atlas use, the volumes reconstructed from segmentations were used to separate macrophage populations from the spiral ganglion, auditory neuron dendrites, and organ of Corti. We have reconstructed 34 structures from the rat temporal bone, which are available as both image stacks and printable 3D objects in a shared repository for download. These can be used for teaching, localizing cells or other features within the ear, modeling auditory and vestibular sensory physiology, and training automated machine-learning segmentation tools.

Application research of artificial intelligence software in the analysis of thyroid nodule ultrasound image characteristics.

Xu C, Wang Z, Zhou J, Hu F, Wang Y, Xu Z, Cai Y

pubmed · logopapers · Jan 1 2025
Thyroid nodules, a common clinical endocrine condition, have become increasingly prevalent worldwide. Ultrasound, as the premier method of thyroid imaging, plays an important role in accurately diagnosing and managing thyroid nodules. However, there is a high degree of inter- and intra-observer variability in image interpretation, owing to differences in the knowledge and experience of sonographers, who face heavy daily examination workloads. Artificial intelligence based on computer-aided diagnosis technology may improve the accuracy and time efficiency of thyroid nodule diagnosis. This study introduced an artificial intelligence software package, SW-TH01/II, to evaluate ultrasound image characteristics of thyroid nodules, including echogenicity, shape, border, margin, and calcification. We included 225 ultrasound images from each of two hospitals in Shanghai. The sonographers and the software performed characteristics analysis on the same group of images. We analyzed the consistency of the two sets of results and used the sonographers' results as the gold standard to evaluate the accuracy of SW-TH01/II. A total of 449 images were included in the statistical analysis. For the seven indicators, the proportions of agreement between SW-TH01/II and the sonographers' analysis results were all greater than 0.8. For echogenicity (with very hypoechoic), aspect ratio, and margin, the kappa coefficients between the two methods were above 0.75 (P < 0.001). The kappa coefficients for echogenicity (echotexture and echogenicity level), border, and calcification were above 0.6 (P < 0.001). The median times taken by the software and the sonographers to interpret an image were 3 (2, 3) seconds and 26.5 (21.17, 34.33) seconds, respectively, and the difference was statistically significant (z = -18.36, P < 0.001). SW-TH01/II achieved high accuracy and substantial time savings in judging the characteristics of thyroid nodules.
It can provide more objective results and improve the efficiency of ultrasound examination, and can be used to assist sonographers in characterizing thyroid nodule ultrasound images.
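The kappa coefficients reported above correct raw agreement for the agreement expected by chance. A minimal implementation follows; the toy software/sonographer labels are invented for illustration only.

```python
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters' category labels."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    p_o = (a == b).mean()                                        # observed agreement
    p_e = sum((a == c).mean() * (b == c).mean() for c in cats)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Toy example: software vs. sonographer calls on one binary characteristic
software    = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
sonographer = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0]
kappa = cohens_kappa(software, sonographer)
```

Here the raw agreement is 0.8, but kappa is lower because both raters call "1" often; this is why the abstract reports both the proportion of agreement and kappa.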

Intelligent and precise auxiliary diagnosis of breast tumors using deep learning and radiomics.

Wang T, Zang B, Kong C, Li Y, Yang X, Yu Y

pubmed · logopapers · Jan 1 2025
Breast cancer is the most common malignant tumor among women worldwide, and early diagnosis is crucial for reducing mortality rates. Traditional diagnostic methods have significant limitations in terms of accuracy and consistency. Imaging is a common technique for diagnosing and predicting breast cancer, but human error remains a concern, and artificial intelligence (AI) is increasingly being employed to help physicians reduce diagnostic errors. We developed an intelligent diagnostic model combining deep learning and radiomics to enhance breast tumor diagnosis. The model integrates MobileNet with ResNeXt-inspired depthwise separable and grouped convolutions, improving feature processing and efficiency while reducing the parameter count. Using the AI-Dhabyani and TCIA breast ultrasound datasets, we validated the model internally and externally, comparing it with VGG16, ResNet, AlexNet, and MobileNet. Results: The internal validation set achieved an accuracy of 83.84% with an AUC of 0.92, outperforming the comparison models. The external validation set showed an accuracy of 69.44% with an AUC of 0.75, demonstrating high robustness and generalizability. Conclusions: The proposed deep learning and radiomics model provides accurate, generalizable auxiliary diagnosis of breast tumors with a compact architecture.
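The parameter savings behind depthwise separable and grouped convolutions, the MobileNet- and ResNeXt-style building blocks named above, can be made concrete with simple counts. The layer size (a 3x3 convolution from 128 to 256 channels, 4 groups) is an assumed example, not the paper's architecture.

```python
def standard_conv(c_in, c_out, k=3):
    """Weight count of a dense k x k convolution (biases omitted)."""
    return c_in * c_out * k * k

def depthwise_separable(c_in, c_out, k=3):
    """k x k depthwise (one filter per input channel) + 1x1 pointwise."""
    return c_in * k * k + c_in * c_out

def grouped_conv(c_in, c_out, k=3, groups=4):
    """Each group convolves only c_in/groups of the input channels."""
    return (c_in // groups) * c_out * k * k

std = standard_conv(128, 256)        # dense baseline
sep = depthwise_separable(128, 256)  # ~8.7x smaller than dense
grp = grouped_conv(128, 256)         # 4x smaller than dense
```

The ordering sep < grp < std is what lets such models match larger baselines like VGG16 at a fraction of the parameter count.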

Ground-truth-free deep learning approach for accelerated quantitative parameter mapping with memory efficient learning.

Fujita N, Yokosawa S, Shirai T, Terada Y

pubmed · logopapers · Jan 1 2025
Quantitative MRI (qMRI) requires the acquisition of multiple images with parameter changes, resulting in longer measurement times than conventional imaging. Deep learning (DL) for image reconstruction has shown a significant reduction in acquisition time and improved image quality. In qMRI, where the image contrast varies between sequences, preparing large, fully-sampled (FS) datasets is challenging. Recently, methods that do not require FS data, such as self-supervised learning (SSL) and zero-shot self-supervised learning (ZSSSL), have been proposed. Another challenge is the large GPU memory requirement of DL-based qMRI image reconstruction, owing to the simultaneous processing of multiple contrast images. In this context, Kellman et al. proposed memory-efficient learning (MEL) to save GPU memory. This study evaluated SSL and ZSSSL frameworks with MEL to accelerate qMRI. Three experiments were conducted using the following sequences: 2D T2 mapping/MSME (Experiment 1), 3D T1 mapping/VFA-SPGR (Experiment 2), and 3D T2 mapping/DESS (Experiment 3). Each experiment used undersampled k-space data at acceleration factors (AF) of 4, 8, and 12. The reconstructed maps were evaluated using quantitative metrics. We performed three qMRI reconstruction measurements and compared supervised learning (SL) with the ground-truth (GT)-free methods, SSL and ZSSSL. Overall, the performance of SSL and ZSSSL was only slightly inferior to that of SL, even under high AF conditions. The quantitative errors in diagnostically important tissues (white matter, gray matter, and meniscus) were small, demonstrating that SSL and ZSSSL performed comparably to SL. Additionally, by incorporating a GPU memory-saving implementation, we demonstrated that the network can operate on a GPU with less than 8 GB of memory with minimal speed reduction. This study demonstrates the effectiveness of memory-efficient, GT-free learning methods using MEL to accelerate qMRI.
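One common ground-truth-free training strategy, in the spirit of SSL methods such as SSDU (and not necessarily this paper's exact scheme), partitions the acquired k-space samples into a subset fed to the network and a disjoint held-out subset used to compute the loss. A sketch with illustrative mask geometry and rates:

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 64, 64

# Undersampled acquisition mask, roughly acceleration factor 4
acq_mask = rng.random((ny, nx)) < 0.25

# Self-supervision without fully-sampled data: split the *acquired*
# samples into a network-input subset and a held-out loss subset
split = rng.random((ny, nx)) < 0.6
train_mask = acq_mask & split      # fed to the reconstruction network
loss_mask = acq_mask & ~split      # compared against the network's k-space output
```

Because the loss is evaluated only on acquired-but-withheld samples, no fully-sampled reference is ever needed, which is what makes this family of methods practical for qMRI, where FS data are hard to collect.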

Clinical-radiomics models with machine-learning algorithms to distinguish uncomplicated from complicated acute appendicitis in adults: a multiphase multicenter cohort study.

Li L, Sun Y, Sun Y, Gao Y, Zhang B, Qi R, Sheng F, Yang X, Liu X, Liu L, Lu C, Chen L, Zhang K

pubmed · logopapers · Jan 1 2025
Increasing evidence suggests that non-operative management (NOM) with antibiotics could serve as a safe alternative to surgery for uncomplicated acute appendicitis (AA). However, accurately differentiating between uncomplicated and complicated AA remains challenging. Our aim was to develop and validate machine-learning-based diagnostic models to differentiate uncomplicated from complicated AA. This was a multicenter cohort study conducted between January 2021 and December 2022 across five tertiary hospitals. Three distinct diagnostic models were created: a clinical-parameter-based model, a CT-radiomics-based model, and a clinical-radiomics-fused model. These models were developed using a comprehensive set of eight machine-learning algorithms: logistic regression (LR), support vector machine (SVM), random forest (RF), decision tree (DT), gradient boosting (GB), K-nearest neighbors (KNN), Gaussian Naïve Bayes (GNB), and multi-layer perceptron (MLP). The performance and accuracy of these models were compared. All models exhibited excellent diagnostic performance in the training cohort, achieving a maximal AUC of 1.00. For the clinical-parameter model, the GB classifier yielded the optimal AUC of 0.77 (95% confidence interval [CI]: 0.64-0.90) in the testing cohort, while the LR classifier yielded the optimal AUC of 0.76 (95% CI: 0.66-0.86) in the validation cohort. For the CT-radiomics-based model, the GB classifier achieved the best AUC of 0.74 (95% CI: 0.60-0.88) in the testing cohort, and SVM yielded an optimal AUC of 0.63 (95% CI: 0.51-0.75) in the validation cohort. For the clinical-radiomics-fused model, the RF classifier yielded an optimal AUC of 0.84 (95% CI: 0.74-0.95) in the testing cohort and 0.76 (95% CI: 0.67-0.86) in the validation cohort. An open-access, user-friendly online tool was developed for clinical application.
This multicenter study suggests that the clinical-radiomics-fused model, constructed using the RF algorithm, effectively differentiated between complicated and uncomplicated AA.
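A comparison of this kind, fitting several classifiers and ranking them by held-out AUC, can be sketched as follows; synthetic data and a subset of the eight algorithms are used purely for illustration, not the study's features or results.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for clinical + radiomics features
X, y = make_classification(n_samples=400, n_features=20,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Three of the eight algorithm families named in the abstract
models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=0),
    "GB": GradientBoostingClassifier(random_state=0),
}
aucs = {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
        for name, m in models.items()}
best = max(aucs, key=aucs.get)
```

Ranking on a held-out cohort rather than the training cohort is exactly why the study's training AUC of 1.00 is not taken at face value: tree ensembles in particular can fit the training set perfectly while generalizing less well.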

Recognition of flight cadets brain functional magnetic resonance imaging data based on machine learning analysis.

Ye L, Weng S, Yan D, Ma S, Chen X

pubmed · logopapers · Jan 1 2025
The rapid advancement of the civil aviation industry has attracted significant attention to research on pilots. However, the brain changes experienced by flight cadets following their training remain, to some extent, unexplored compared with those of the general population. The aim of this study was to examine the impact of flight training on brain function by employing machine learning (ML) techniques. We collected resting-state functional magnetic resonance imaging (resting-state fMRI) data from 79 flight cadets and ground program cadets, extracting blood-oxygenation-level-dependent (BOLD) signals, amplitude of low-frequency fluctuation (ALFF), regional homogeneity (ReHo), and functional connectivity (FC) metrics as feature inputs for the ML models. After feature selection using a two-sample t-test, we established various ML classification models, including eXtreme Gradient Boosting (XGBoost), logistic regression (LR), random forest (RF), support vector machine (SVM), and Gaussian Naive Bayes (GNB). Comparative analysis of the model results revealed that the LR classifier based on BOLD signals could accurately distinguish flight cadets from the general population, achieving an AUC of 83.75% and an accuracy of 0.93. Furthermore, an analysis of the features contributing most to the ML classification models indicated that these features were predominantly located in brain regions associated with auditory-visual processing, motor function, emotional regulation, and cognition, primarily within the Default Mode Network (DMN), Visual Network (VN), and SomatoMotor Network (SMN). These findings suggest that flight-trained cadets may exhibit enhanced functional dynamics and cognitive flexibility.
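The two-sample t-test filtering step described above can be sketched as follows. The group sizes, feature count, and injected group difference are invented for illustration; this is not the cadets' fMRI data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Synthetic stand-in: 40 "cadets" vs. 39 "controls", 100 imaging features;
# features 0-4 carry a deliberately injected group difference
X_cadets = rng.normal(size=(40, 100))
X_controls = rng.normal(size=(39, 100))
X_cadets[:, :5] += 1.5

# Keep only features whose two-sample t-test p-value falls below 0.05
_, p = ttest_ind(X_cadets, X_controls, axis=0)
selected = np.where(p < 0.05)[0]
```

With no multiple-comparison correction, a filter at p < 0.05 will also let through roughly 5% of the null features, so in practice such selection is typically performed inside the cross-validation loop to avoid optimistic bias.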