Page 292 of 346 (3455 results)

Referenceless 4D Flow Cardiovascular Magnetic Resonance with deep learning.

Trenti C, Ylipää E, Ebbers T, Carlhäll CJ, Engvall J, Dyverfeldt P

pubmed logopapersJun 2 2025
Despite its potential to improve the assessment of cardiovascular diseases, 4D Flow CMR is hampered by long scan times. 4D Flow CMR is conventionally acquired with three motion encodings and one reference encoding, as the 3-dimensional velocity data are obtained by subtracting the phase of the reference from the phase of the motion encodings. In this study, we aim to use deep learning to predict the reference encoding from the three motion encodings for cardiovascular 4D Flow. A U-Net was trained with adversarial learning (U-Net<sub>ADV</sub>) and with a velocity frequency-weighted loss function (U-Net<sub>VEL</sub>) to predict the reference encoding from the three motion encodings obtained with a non-symmetric velocity-encoding scheme. Whole-heart 4D Flow datasets from 126 patients with different types of cardiomyopathies were retrospectively included. The models were trained on 113 patients with 5-fold cross-validation and tested on 13 patients. Flow volumes in the aorta and pulmonary artery, mean and maximum velocity, and total and maximum turbulent kinetic energy at peak systole in the cardiac chambers and main vessels were assessed. 3-dimensional velocity data reconstructed with the reference encoding predicted by deep learning agreed well with the velocities obtained with the reference encoding acquired at the scanner for both models. U-Net<sub>ADV</sub> performed more consistently throughout the cardiac cycle and across the test subjects, while U-Net<sub>VEL</sub> performed better for systolic velocities. Overall, the largest error for flow volumes and maximum and mean velocities was -6.031% for maximum velocities in the right ventricle for U-Net<sub>ADV</sub>, and -6.92% for mean velocities in the right ventricle for U-Net<sub>VEL</sub>. For total turbulent kinetic energy, the highest errors were in the left ventricle (-77.17%) for U-Net<sub>ADV</sub> and in the right ventricle (24.96%) for U-Net<sub>VEL</sub>, while for maximum turbulent kinetic energy the highest errors were in the pulmonary artery for both models, at -15.5% for U-Net<sub>ADV</sub> and 15.38% for U-Net<sub>VEL</sub>. Deep learning-enabled referenceless 4D Flow CMR permits quantification of velocities and flow volumes comparable to conventional 4D Flow. Omitting the reference encoding reduces the amount of acquired data by 25%, allowing shorter scan times or improved resolution, which is valuable for routine clinical use.
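The phase-subtraction step that the reference encoding serves can be sketched in a few lines. This is a generic illustration of conventional 4D Flow velocity reconstruction, not the study's code; the function name, phase values, and VENC setting are all made up:

```python
import math

def velocity_from_phases(phase_motion, phase_ref, venc):
    """Convert a motion-encoding phase minus the reference phase
    (radians) into one velocity component (cm/s), as in conventional
    4D Flow reconstruction."""
    dphi = phase_motion - phase_ref
    # wrap the difference into [-pi, pi) so the result stays within +/- VENC
    dphi = (dphi + math.pi) % (2 * math.pi) - math.pi
    return dphi * venc / math.pi

# One voxel: three motion-encoding phases and a shared reference phase.
phases = {"vx": 1.2, "vy": -0.4, "vz": 0.1}
ref_phase = 0.1   # in the referenceless approach, this is what the U-Net predicts
venc = 150.0      # velocity-encoding limit, cm/s
velocity = {k: velocity_from_phases(p, ref_phase, venc) for k, p in phases.items()}
```

Replacing the acquired `ref_phase` with a network prediction is exactly what removes the fourth encoding and the 25% of scan time it costs.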

A Deep Learning-Based Artificial Intelligence Model Assisting Thyroid Nodule Diagnosis and Management: Pilot Results for Evaluating Thyroid Malignancy in Pediatric Cohorts.

Ha EJ, Lee JH, Mak N, Duh AK, Tong E, Yeom KW, Meister KD

pubmed logopapersJun 2 2025
<b><i>Purpose:</i></b> Artificial intelligence (AI) models have shown promise in predicting malignant thyroid nodules in adults; however, research on deep learning (DL) for pediatric cases is limited. We evaluated the applicability of a DL-based model for assessing thyroid nodules in children. <b><i>Methods:</i></b> We retrospectively identified two pediatric cohorts (<i>n</i> = 128; mean age 15.5 ± 2.4 years; 103 girls) who had thyroid nodule ultrasonography (US) with histological confirmation at two institutions. The AI-Thyroid DL model, originally trained on adult data, was tested on pediatric nodules in three scenarios: axial US images, longitudinal US images, and both. We conducted a subgroup analysis based on the two pediatric cohorts and age groups (≥14 years vs. <14 years) and compared the model's performance with radiologist interpretations using the Thyroid Imaging Reporting and Data System (TIRADS). <b><i>Results:</i></b> Of 156 nodules analyzed, 47 (30.1%) were malignant. AI-Thyroid demonstrated area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity values of 0.913-0.929, 78.7-89.4%, and 79.8-91.7%, respectively. The AUROC values did not differ significantly across the image planes (all <i>p</i> > 0.05) or between the two pediatric cohorts (<i>p</i> = 0.804). No significant differences were observed between age groups in terms of sensitivity and specificity (all <i>p</i> > 0.05), while the AUROC values were higher for patients aged <14 years than for those aged ≥14 years (all <i>p</i> < 0.01). AI-Thyroid yielded the highest AUROC values, followed by ACR-TIRADS and K-TIRADS (<i>p</i> = 0.016 and <i>p</i> < 0.001, respectively). <b><i>Conclusion:</i></b> AI-Thyroid demonstrated high performance in diagnosing pediatric thyroid cancer. Future research should focus on optimizing AI-Thyroid for pediatric use and exploring its role alongside tissue sampling in clinical practice.
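The headline metrics here (AUROC, sensitivity, specificity) need no framework to compute. A minimal stdlib sketch with invented labels and scores, not the study's data:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outscores a random negative
    (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(labels, scores, threshold):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) at a cutoff."""
    preds = [int(s >= threshold) for s in scores]
    tp = sum(p == 1 and l == 1 for p, l in zip(preds, labels))
    tn = sum(p == 0 and l == 0 for p, l in zip(preds, labels))
    fn = sum(p == 0 and l == 1 for p, l in zip(preds, labels))
    fp = sum(p == 1 and l == 0 for p, l in zip(preds, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: 1 = malignant, 0 = benign; scores are model outputs.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.35, 0.3, 0.2, 0.6]
auc = auroc(labels, scores)
```

Unlike sensitivity and specificity, the AUROC is threshold-free, which is why papers report it as the primary comparison across readers and models.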

SASWISE-UE: Segmentation and synthesis with interpretable scalable ensembles for uncertainty estimation.

Chen W, McMillan AB

pubmed logopapersJun 2 2025
This paper introduces an efficient sub-model ensemble framework aimed at enhancing the interpretability of medical deep learning models, thus increasing their clinical applicability. By generating uncertainty maps, the framework enables end-users to evaluate the reliability of model outputs. We developed a strategy to generate diverse models from a single well-trained checkpoint, facilitating the training of a model family. This involves producing multiple outputs from a single input, fusing them into a final output, and estimating uncertainty based on output disagreements. Implemented using U-Net and UNETR models for segmentation and synthesis tasks, the approach was tested on CT body segmentation and MR-CT synthesis datasets. It achieved a mean Dice coefficient of 0.814 in segmentation and a mean absolute error of 88.17 HU in synthesis, improved from 89.43 HU by pruning. Additionally, the framework was evaluated under image corruption and data undersampling and maintained the correlation between uncertainty and error, highlighting its robustness. These results suggest that the proposed approach not only maintains the performance of well-trained models but also enhances interpretability through effective uncertainty estimation, applicable to both convolutional and transformer models across a range of imaging tasks.
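The fuse-and-disagree idea generalizes beyond this paper and can be sketched generically: several sub-model outputs are averaged into the final prediction, and their voxel-wise spread becomes the uncertainty map. Function names and numbers below are illustrative, not the authors' code:

```python
from statistics import mean, pstdev

def fuse_and_uncertainty(outputs):
    """Fuse per-sub-model predictions and estimate uncertainty.

    outputs: list of equal-length prediction lists, one per sub-model.
    The fused output is the voxel-wise mean; uncertainty is the
    voxel-wise standard deviation, i.e. sub-model disagreement."""
    fused = [mean(vals) for vals in zip(*outputs)]
    uncert = [pstdev(vals) for vals in zip(*outputs)]
    return fused, uncert

# Three hypothetical sub-models predicting four voxels each.
subs = [[0.9, 0.1, 0.5, 0.8],
        [0.8, 0.2, 0.5, 0.7],
        [1.0, 0.0, 0.5, 0.9]]
fused, uncert = fuse_and_uncertainty(subs)
```

Voxels where the sub-models agree get zero uncertainty; an end-user can overlay `uncert` on the fused output to see where the prediction is trustworthy.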

Inferring single-cell spatial gene expression with tissue morphology via explainable deep learning

Zhao, Y., Alizadeh, E., Taha, H. B., Liu, Y., Xu, M., Mahoney, J. M., Li, S.

biorxiv logopreprintJun 2 2025
Deep learning models trained with spatial omics data uncover complex patterns and relationships among cells, genes, and proteins in a high-dimensional space. State-of-the-art in silico spatial multi-cell gene expression methods using histological images of tissue stained with hematoxylin and eosin (H&E) allow us to characterize cellular heterogeneity. We developed a vision transformer (ViT) framework, named SPiRiT, to map histological signatures to spatial single-cell transcriptomic signatures. SPiRiT predicts single-cell spatial gene expression from matched H&E image tiles of human breast cancer and whole mouse pup, evaluated on Xenium (10x Genomics) datasets. Importantly, SPiRiT incorporates rigorous strategies to ensure reproducibility and robustness of predictions and provides trustworthy interpretation through attention-based model explainability. Model interpretation revealed the image regions and attention details SPiRiT uses to predict gene expression, such as marker genes in invasive cancer cells. In an apples-to-apples comparison with ST-Net, SPiRiT improved predictive accuracy by 40%. The gene predictions and expression levels were highly consistent with the tumor region annotation. In summary, SPiRiT demonstrates the feasibility of inferring spatial single-cell gene expression from tissue morphology across multiple species.
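Predictive accuracy in H&E-to-expression work is typically scored as the per-gene correlation between predicted and measured expression across spots or cells. A minimal stdlib sketch; the arrays are invented, not SPiRiT outputs:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences,
    e.g. predicted vs. measured expression of one gene."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy per-cell expression of one gene: prediction vs. Xenium-style measurement.
predicted = [1.0, 2.1, 2.9, 4.2]
measured = [1.1, 2.0, 3.1, 4.0]
r = pearson(predicted, measured)
```

Averaging this per-gene `r` over a gene panel is one common way a "40% improvement" between two models would be quantified, though the paper's exact metric may differ.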

Automated engineered-stone silicosis screening and staging using Deep Learning with X-rays.

Priego-Torres B, Sanchez-Morillo D, Khalili E, Conde-Sánchez MÁ, García-Gámez A, León-Jiménez A

pubmed logopapersJun 1 2025
Silicosis, a debilitating occupational lung disease caused by inhaling crystalline silica, continues to be a significant global health issue, especially with the increasing use of engineered stone (ES) surfaces containing high silica content. Traditional diagnostic methods, dependent on radiological interpretation, have low sensitivity, especially in the early stages of the disease, and show variability between evaluators. This study explores the efficacy of deep learning techniques in automating the screening and staging of silicosis using chest X-ray images. Utilizing a comprehensive dataset obtained from the medical records of a cohort of workers exposed to artificial quartz conglomerates, we implemented a preprocessing stage for rib-cage segmentation, followed by classification using state-of-the-art deep learning models. The segmentation model exhibited high precision, ensuring accurate identification of thoracic structures. In the screening phase, our models achieved near-perfect accuracy, with ROC AUC values reaching 1.0, effectively distinguishing between healthy individuals and those with silicosis. The models also demonstrated remarkable precision in staging the disease. Nevertheless, differentiating between simple silicosis and progressive massive fibrosis, the evolved and complicated form of the disease, presented certain difficulties, especially during the transitional period, when assessment can be significantly subjective. Notwithstanding these difficulties, the models achieved an accuracy of around 81% and ROC AUC scores nearing 0.93. This study highlights the potential of deep learning to generate clinical decision support tools that increase the accuracy and effectiveness of silicosis diagnosis and staging. Early detection would allow patients to be moved away from all sources of occupational exposure, constituting a substantial advancement in occupational health diagnostics.
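The segment-then-screen-then-stage pipeline described above has a simple control flow. A toy sketch under stated assumptions: all three callables and the scalar "image" are hypothetical stand-ins, not the study's models:

```python
def screen_and_stage(image, segment, classify_screen, classify_stage):
    """Two-stage pipeline: isolate the rib cage, screen the masked
    image, and only stage the disease if screening is positive."""
    roi = segment(image)
    if not classify_screen(roi):
        return "healthy"
    return classify_stage(roi)

# Toy stand-ins: the 'image' is a number where higher means more opacities.
result = screen_and_stage(
    0.8,
    segment=lambda x: x,                      # identity in place of a U-Net mask
    classify_screen=lambda x: x > 0.3,        # screening threshold (assumed)
    classify_stage=lambda x: "PMF" if x > 0.7 else "simple silicosis",
)
```

Gating the stager behind the screener matches how the paper reports separate near-perfect screening and ~81% staging performance.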

Axial Skeletal Assessment in Osteoporosis Using Radiofrequency Echographic Multi-spectrometry: Diagnostic Performance, Clinical Utility, and Future Directions.

As'ad M

pubmed logopapersJun 1 2025
Osteoporosis, a prevalent skeletal disorder, necessitates accurate and accessible diagnostic tools for effective disease management and fracture prevention. While dual-energy X-ray absorptiometry (DXA) remains the clinical standard for bone mineral density (BMD) assessment, its limitations, including ionizing radiation exposure and susceptibility to artifacts, underscore the need for alternative technologies. Ultrasound-based methods have emerged as promising radiation-free alternatives, with radiofrequency echographic multi-spectrometry (REMS) representing a significant advancement in axial skeleton assessment, specifically at the lumbar spine and proximal femur. REMS analyzes unfiltered radiofrequency ultrasound signals, providing not only BMD estimates but also a novel fragility score (FS), which reflects bone quality and microarchitectural integrity. This review critically evaluates the underlying principles, diagnostic performance, and clinical applications of REMS. It compares REMS with DXA, quantitative computed tomography (QCT), and trabecular bone score (TBS), highlighting REMS's potential advantages in artifact-prone scenarios and specific populations, including children and patients with secondary osteoporosis. The clinical utility of REMS in fracture risk prediction and therapy monitoring is explored alongside its operational precision, cost-effectiveness, and portability. In addition, the integration of artificial intelligence (AI) within REMS software has enhanced its capacity for artifact exclusion and automated spectral interpretation, improving usability and reproducibility. Current limitations, such as the need for broader validation and guideline inclusion, are identified, and future research directions are proposed. These include multicenter validation studies, development of pediatric and secondary osteoporosis reference models, and deeper evaluation of AI-driven enhancements. REMS offers a compelling, non-ionizing alternative for axial bone health assessment and may significantly advance the diagnostic landscape for osteoporosis care.

Efficient slice anomaly detection network for 3D brain MRI Volume.

Zhang Z, Mohsenzadeh Y

pubmed logopapersJun 1 2025
Current anomaly detection methods excel with benchmark industrial data but struggle with natural images and medical data due to varying definitions of 'normal' and 'abnormal,' which makes accurate identification of deviations in these fields particularly challenging. For 3D brain MRI data in particular, all state-of-the-art models are reconstruction-based 3D convolutional neural networks, which are memory-intensive and time-consuming and produce noisy outputs that require further post-processing. We propose a framework called Simple Slice-based Network (SimpleSliceNet), which uses a model pre-trained on ImageNet and fine-tuned on a separate MRI dataset as a 2D slice feature extractor to reduce computational cost. We aggregate the extracted features to perform anomaly detection on 3D brain MRI volumes. Our model integrates a conditional normalizing flow to calculate the log-likelihood of features and employs a contrastive loss to enhance anomaly detection accuracy. The results indicate improved performance, showcasing our model's adaptability and effectiveness in addressing the challenges of brain MRI data. In addition, for large-scale 3D brain volumes, SimpleSliceNet outperforms state-of-the-art 2D and 3D models in accuracy, memory usage, and time consumption. Code is available at: https://github.com/Jarvisarmy/SimpleSliceNet.
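The slice-wise likelihood scoring can be illustrated with a plain Gaussian negative log-likelihood standing in for the paper's conditional normalizing flow; the per-slice feature values and distribution parameters below are invented:

```python
import math

def slice_scores(slice_features, mu, var):
    """Score each slice by Gaussian negative log-likelihood: features
    unlikely under the 'normal' distribution get high anomaly scores.
    (A normalizing flow plays this role in SimpleSliceNet; a fixed
    Gaussian is used here purely for illustration.)"""
    return [0.5 * ((f - mu) ** 2 / var + math.log(2 * math.pi * var))
            for f in slice_features]

def volume_score(scores):
    # a volume is as anomalous as its most anomalous slice
    return max(scores)

# One scalar feature per slice of a toy volume; slice 3 deviates.
features = [0.1, -0.2, 0.0, 3.5, 0.05]
scores = slice_scores(features, mu=0.0, var=0.25)
```

Aggregating 2D slice scores into a volume-level score is what lets the method avoid 3D convolutions and their memory cost.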

Development and validation of a combined clinical and MRI-based biomarker model to differentiate mild cognitive impairment from mild Alzheimer's disease.

Hosseini Z, Mohebbi A, Kiani I, Taghilou A, Mohammadjafari A, Aghamollaii V

pubmed logopapersJun 1 2025
Two of the most common conditions seen in neurology clinics are Alzheimer's disease (AD) and mild cognitive impairment (MCI), which present with similar symptoms. The aim of this study was to develop and internally validate the diagnostic value of combined neurological and radiological predictors in differentiating mild AD from MCI, a distinction that can support earlier intervention against progression to AD. A cross-sectional study of 161 participants was conducted in a general healthcare setting, including 30 controls, 71 with mild AD, and 60 with MCI. Binary logistic regression was used to identify predictors of interest, with collinearity assessed prior to model development. Model performance was assessed through calibration, shrinkage, and decision-curve analyses. Finally, the combined clinical and radiological model was compared to models utilizing only clinical or radiological predictors. The final model included age, sex, education status, Montreal Cognitive Assessment, Global Cerebral Atrophy Index, Medial Temporal Atrophy Scale, mean hippocampal volume, and Posterior Parietal Atrophy Index, with an area under the curve of 0.978 (0.934-0.996). Internal validation did not show a substantial reduction in diagnostic performance. The combined model showed higher diagnostic performance than the clinical and radiological models alone. Decision curve analysis highlighted the usefulness of this model for differentiation across all probability levels. A combined clinical-radiological model has excellent diagnostic performance in differentiating mild AD from MCI. Notably, the model leveraged straightforward neuroimaging markers, which are relatively simple to measure and interpret, suggesting that they could be integrated into practical, formula-driven diagnostic workflows without requiring computationally intensive deep learning models.
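At inference time, a formula-driven logistic model like the one the authors envision reduces to a weighted sum passed through a sigmoid. The coefficients and feature values below are hypothetical placeholders, not the paper's fitted values:

```python
import math

def predict_mild_ad(features, weights, intercept):
    """Probability of mild AD (vs. MCI) from a fitted binary logistic
    regression: sigmoid of the linear predictor."""
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical standardized predictors, e.g. age, MoCA, GCA, MTA,
# hippocampal volume; all weights are made-up illustrations.
weights = [0.4, -0.9, 0.6, 0.7, -0.8]
patient = [0.5, -1.2, 0.3, 1.0, -0.6]
p = predict_mild_ad(patient, weights, intercept=-0.2)
```

Because the formula is a closed-form expression over a handful of routine measurements, it can be applied on paper or in a spreadsheet, which is the practical point the conclusion makes.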

Improving predictability, reliability, and generalizability of brain-wide associations for cognitive abilities via multimodal stacking.

Tetereva A, Knodt AR, Melzer TR, van der Vliet W, Gibson B, Hariri AR, Whitman ET, Li J, Lal Khakpoor F, Deng J, Ireland D, Ramrakha S, Pat N

pubmed logopapersJun 1 2025
Brain-wide association studies (BWASs) have attempted to relate cognitive abilities with brain phenotypes, but have been challenged by issues such as predictability, test-retest reliability, and cross-cohort generalizability. To tackle these challenges, we proposed a machine learning "stacking" approach that draws information from whole-brain MRI across different modalities, from task-functional MRI (fMRI) contrasts and functional connectivity during tasks and rest to structural measures, into one prediction model. We benchmarked the benefits of stacking using the Human Connectome Projects: Young Adults (<i>n</i> = 873, 22-35 years old) and Human Connectome Projects-Aging (<i>n</i> = 504, 35-100 years old) and the Dunedin Multidisciplinary Health and Development Study (Dunedin Study, <i>n</i> = 754, 45 years old). For predictability, stacked models led to out-of-sample <i>r</i>∼0.5-0.6 when predicting cognitive abilities at the time of scanning, primarily driven by task-fMRI contrasts. Notably, using the Dunedin Study, we were able to predict participants' cognitive abilities at ages 7, 9, and 11 years using their multimodal MRI at age 45 years, with an out-of-sample <i>r</i> of 0.52. For test-retest reliability, stacked models reached an excellent level of reliability (intraclass correlation > 0.75), even when we stacked only task-fMRI contrasts together. For generalizability, a stacked model with nontask MRI built from one dataset significantly predicted cognitive abilities in other datasets. Altogether, stacking is a viable approach to addressing the three challenges of BWASs for cognitive abilities.
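The core of stacking is a meta-learner trained on base models' held-out predictions. A toy sketch where the second-level model is a no-intercept ordinary least-squares fit over two base models; the data, modality labels, and weights are invented, and a real pipeline would use out-of-fold predictions and regularization:

```python
def fit_stacking_weights(pred_a, pred_b, target):
    """Solve the 2x2 normal equations for a no-intercept OLS
    meta-learner over two base models' held-out predictions."""
    saa = sum(x * x for x in pred_a)
    sbb = sum(y * y for y in pred_b)
    sab = sum(x * y for x, y in zip(pred_a, pred_b))
    sat = sum(x * t for x, t in zip(pred_a, target))
    sbt = sum(y * t for y, t in zip(pred_b, target))
    det = saa * sbb - sab * sab
    return ((sat * sbb - sbt * sab) / det,
            (sbt * saa - sat * sab) / det)

def stack_predict(wa, wb, pred_a, pred_b):
    """Combine base predictions with the learned meta-weights."""
    return [wa * x + wb * y for x, y in zip(pred_a, pred_b)]

# e.g., a task-fMRI model and a structural-MRI model predicting a
# cognitive score; here the target is exactly a 0.7/0.3 blend, so the
# meta-learner should recover those weights.
pa = [1.0, 2.0, 3.0, 4.0]
pb = [2.0, 1.0, 4.0, 3.0]
target = [0.7 * x + 0.3 * y for x, y in zip(pa, pb)]
wa, wb = fit_stacking_weights(pa, pb, target)
```

The meta-weights show which modality carries the signal, which is how the paper can attribute most predictability to task-fMRI contrasts.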

Implementation costs and cost-effectiveness of ultraportable chest X-ray with artificial intelligence in active case finding for tuberculosis in Nigeria.

Garg T, John S, Abdulkarim S, Ahmed AD, Kirubi B, Rahman MT, Ubochioma E, Creswell J

pubmed logopapersJun 1 2025
Availability of ultraportable chest X-ray (CXR) and advancements in artificial intelligence (AI)-enabled CXR interpretation are promising developments in tuberculosis (TB) active case finding (ACF), but costing and cost-effectiveness analyses are limited. We provide implementation cost and cost-effectiveness estimates for different screening algorithms using symptoms, CXR, and AI in Nigeria. People 15 years and older were screened for TB symptoms and offered a CXR with AI-enabled interpretation using qXR v3 (Qure.ai) at lung health camps. Sputum samples were tested on Xpert MTB/RIF for individuals reporting symptoms or with qXR abnormality scores ≥0.30. We conducted a retrospective costing using a combination of top-down and bottom-up approaches, utilizing itemized expense data from a health system perspective. We estimated costs in five screening scenarios: abnormality score ≥0.30; abnormality score ≥0.50; cough ≥2 weeks; any symptom; and abnormality score ≥0.30 or any symptom. We calculated total implementation costs and cost per bacteriologically confirmed case detected, and assessed cost-effectiveness using the incremental cost-effectiveness ratio (ICER), defined as additional cost per additional case detected. Overall, 3205 people with presumptive TB were identified, 1021 were tested, and 85 people with bacteriologically confirmed TB were detected. The abnormality ≥0.30 or any symptom algorithm had the highest total cost (US$65,704), while cough ≥2 weeks had the lowest (US$40,740). The cost per case was US$1198 for cough ≥2 weeks and lowest for any symptom (US$635). Compared to the baseline strategy of cough ≥2 weeks, the ICER was US$191 per additional case detected for any symptom and US$2096 for the abnormality ≥0.30 or any symptom algorithm. Using CXR and AI had a lower cost per case detected than symptom-based screening criteria when asymptomatic TB accounted for more than 30% of all bacteriologically confirmed TB detected. Compared to traditional symptom screening, using CXR and AI in combination with symptoms detects more cases at a lower cost per case detected and is cost-effective. TB programs should explore adoption of CXR and AI for screening in ACF.
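The ICER arithmetic is simple enough to show directly. The baseline total cost below is taken from the abstract, but the case counts and the expanded algorithm's cost are hypothetical round numbers for illustration only:

```python
def cost_per_case(total_cost, cases_detected):
    """Average implementation cost per bacteriologically confirmed case."""
    return total_cost / cases_detected

def icer(cost_new, cases_new, cost_base, cases_base):
    """Incremental cost-effectiveness ratio: additional cost per
    additional case detected versus the baseline algorithm."""
    return (cost_new - cost_base) / (cases_new - cases_base)

# Baseline: cough >= 2 weeks (total cost from the abstract; case count
# assumed). Expanded: hypothetical broader screening algorithm.
base_cost, base_cases = 40740.0, 34
new_cost, new_cases = 45000.0, 55
delta = icer(new_cost, new_cases, base_cost, base_cases)
```

Note the two metrics answer different questions: cost per case compares algorithms in isolation, while the ICER prices the marginal cases gained by switching from the baseline.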
