
Deep learning-quantified body composition from positron emission tomography/computed tomography and cardiovascular outcomes: a multicentre study.

Miller RJH, Yi J, Shanbhag A, Marcinkiewicz A, Patel KK, Lemley M, Ramirez G, Geers J, Chareonthaitawee P, Wopperer S, Berman DS, Di Carli M, Dey D, Slomka PJ

PubMed · Jun 23, 2025
Positron emission tomography (PET)/computed tomography (CT) myocardial perfusion imaging (MPI) is a vital diagnostic tool, especially in patients with cardiometabolic syndrome. Low-dose CT scans are routinely performed with PET for attenuation correction and potentially contain valuable data about body tissue composition. Deep learning and image processing were combined to automatically quantify skeletal muscle (SM), bone, and adipose tissue from these scans and to evaluate their associations with death or myocardial infarction (MI). In PET MPI from three sites, deep learning quantified SM, bone, epicardial adipose tissue (EAT), subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and intermuscular adipose tissue (IMAT). Sex-specific thresholds for abnormal values were established. Associations with death or MI were evaluated using unadjusted and multivariable models adjusted for clinical and imaging factors. The study included 10 085 patients, with a median age of 68 years (interquartile range 59-76); 5767 (57%) were male. Body tissue segmentations were completed in 102 ± 4 s. Higher VAT density was associated with an increased risk of death or MI in both unadjusted [hazard ratio (HR) 1.40, 95% confidence interval (CI) 1.37-1.43] and adjusted (HR 1.24, 95% CI 1.19-1.28) analyses, with similar findings for IMAT, SAT, and EAT. Patients with elevated VAT density and reduced myocardial flow reserve had a significantly increased risk of death or MI (adjusted HR 2.49, 95% CI 2.23-2.77). Volumetric body tissue composition can be obtained rapidly and automatically from standard cardiac PET/CT, providing physicians with a detailed, quantitative assessment of sarcopenia and cardiometabolic health.
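
As a rough illustration of the survival analysis described above, the sketch below fits a Cox proportional hazards model with the lifelines package. The file name and column names (vat_density, age, sex, time_to_event, event) are hypothetical placeholders; the study adjusted for a broader set of clinical and imaging covariates.

```python
# Sketch: estimating an adjusted hazard ratio for VAT density with lifelines.
# All file and column names below are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("pet_ct_body_composition.csv")  # hypothetical dataset

cph = CoxPHFitter()
cph.fit(
    df[["time_to_event", "event", "vat_density", "age", "sex"]],
    duration_col="time_to_event",  # follow-up time
    event_col="event",             # 1 = death or MI, 0 = censored; sex coded 0/1
)
# Hazard ratios with 95% CIs, analogous to the reported HR 1.24 (1.19-1.28)
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```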

Cost-effectiveness of a novel AI technology to quantify coronary inflammation and cardiovascular risk in patients undergoing routine coronary computed tomography angiography.

Tsiachristas A, Chan K, Wahome E, Kearns B, Patel P, Lyasheva M, Syed N, Fry S, Halborg T, West H, Nicol E, Adlam D, Modi B, Kardos A, Greenwood JP, Sabharwal N, De Maria GL, Munir S, McAlindon E, Sohan Y, Tomlins P, Siddique M, Shirodaria C, Blankstein R, Desai M, Neubauer S, Channon KM, Deanfield J, Akehurst R, Antoniades C

PubMed · Jun 23, 2025
Coronary computed tomography angiography (CCTA) is a first-line investigation for chest pain in patients with suspected obstructive coronary artery disease (CAD). However, many acute cardiac events occur in the absence of obstructive CAD. We assessed the lifetime cost-effectiveness of integrating a novel artificial intelligence-enhanced image analysis algorithm (AI-Risk) that stratifies the risk of cardiac events by quantifying coronary inflammation, combined with the extent of coronary artery plaque and clinical risk factors, from routine CCTA images. A hybrid decision-tree and population-cohort Markov model was developed from 3393 consecutive patients who underwent routine CCTA for suspected obstructive CAD and were followed up for major adverse cardiac events over a median (interquartile range) of 7.7 (6.4-9.1) years. In a prospective real-world evaluation survey of 744 consecutive patients undergoing CCTA for chest pain investigation, the availability of AI-Risk assessment led to treatment initiation or intensification in 45% of patients. In a further prospective study of 1214 consecutive patients with extensive guideline-recommended cardiovascular risk profiling, AI-Risk stratification led to treatment initiation or intensification in 39% of patients beyond current clinical guideline recommendations. Treatment guided by AI-Risk, modelled over a lifetime horizon, could lead to fewer cardiac events (relative reductions of 11%, 4%, 4%, and 12% for myocardial infarction, ischaemic stroke, heart failure, and cardiac death, respectively). Implementing AI-Risk classification in routine interpretation of CCTA is highly likely to be cost-effective (incremental cost-effectiveness ratio £1371-3244), both under current guideline compliance and when applied only to patients without obstructive CAD. Compared with standard care, the addition of AI-Risk assessment to routine CCTA interpretation is cost-effective by refining risk-guided medical management.
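
To make the hybrid decision-analytic structure concrete, here is a toy Markov cohort model computing an incremental cost-effectiveness ratio (ICER) for two strategies. Every transition probability, cost, and QALY weight below is invented for illustration and bears no relation to the study's calibrated inputs.

```python
import numpy as np

# Toy 3-state Markov cohort model (Well, Post-MACE, Dead) over a lifetime
# horizon, comparing standard care vs AI-Risk-guided care. All inputs are
# invented for illustration only - NOT the study's calibrated values.
def run_cohort(p_mace, cycles=40, discount=0.035):
    P = np.array([[1 - p_mace - 0.01, p_mace, 0.01],   # Well
                  [0.0,               0.93,   0.07],   # Post-MACE
                  [0.0,               0.0,    1.0]])   # Dead (absorbing)
    state = np.array([1.0, 0.0, 0.0])                  # cohort starts Well
    cost_per_state = np.array([100.0, 2000.0, 0.0])    # annual costs (GBP)
    qaly_per_state = np.array([0.85, 0.65, 0.0])
    cost = qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t                # discounting per cycle
        cost += d * state @ cost_per_state
        qaly += d * state @ qaly_per_state
        state = state @ P                              # advance one cycle
    return cost, qaly

c_std, q_std = run_cohort(p_mace=0.020)                # standard care
c_ai, q_ai = run_cohort(p_mace=0.018)                  # AI-Risk-guided care
c_ai += 80.0                                           # one-off AI-Risk fee (invented)
icer = (c_ai - c_std) / (q_ai - q_std)
print(f"ICER: {icer:,.0f} GBP per QALY gained")
```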

Machine Learning Models Based on CT Enterography for Differentiating Between Ulcerative Colitis and Colonic Crohn's Disease Using Intestinal Wall, Mesenteric Fat, and Visceral Fat Features.

Wang X, Wang X, Lei J, Rong C, Zheng X, Li S, Gao Y, Wu X

PubMed · Jun 23, 2025
This study aimed to develop radiomic-based machine learning models using computed tomography enterography (CTE) features derived from the intestinal wall, mesenteric fat, and visceral fat to differentiate between ulcerative colitis (UC) and colonic Crohn's disease (CD). Clinical and imaging data from 116 patients with inflammatory bowel disease (IBD) (68 with UC and 48 with colonic CD) were retrospectively collected. Radiomic features were extracted from venous-phase CTE images. Feature selection was performed via the intraclass correlation coefficient (ICC), correlation analysis, SelectKBest, and least absolute shrinkage and selection operator (LASSO) regression. Support vector machine models were constructed using features from individual and combined regions, with model performance evaluated using the area under the ROC curve (AUC). The combined radiomic model, integrating features from all three regions, exhibited superior classification performance (AUC = 0.857, 95% CI 0.732-0.982), with a sensitivity of 0.762 (95% CI 0.547-0.903) and specificity of 0.857 (95% CI 0.601-0.960) in the testing cohort. The models based on features from the intestinal wall, mesenteric fat, and visceral fat achieved AUCs of 0.847 (95% CI 0.710-0.984), 0.707 (95% CI 0.526-0.889), and 0.731 (95% CI 0.553-0.910), respectively, in the testing cohort. The intestinal wall model demonstrated the best calibration. This study demonstrated the feasibility of constructing machine learning models based on radiomic features of the intestinal wall, mesenteric fat, and visceral fat to distinguish between UC and colonic CD.
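
A minimal sketch of the selection-then-classify pipeline the abstract describes (univariate screening, LASSO-style sparsification, linear SVM, ROC-AUC evaluation), using scikit-learn on synthetic data sized like the study cohort; the ICC and correlation screening steps are assumed to have been applied upstream, and all hyperparameters are illustrative.

```python
# Synthetic stand-in for the 116-patient radiomic feature matrix
# (y: 0 = UC, 1 = colonic CD).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=116, n_features=200, n_informative=10,
                           random_state=0)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=50),                      # univariate screen
    SelectFromModel(                                   # LASSO-style sparsity
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    ),
    SVC(kernel="linear"),                              # final SVM classifier
)
aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {aucs.mean():.3f} +/- {aucs.std():.3f}")
```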

Ensemble-based Convolutional Neural Networks for brain tumor classification in MRI: Enhancing accuracy and interpretability using explainable AI.

Sánchez-Moreno L, Perez-Peña A, Duran-Lopez L, Dominguez-Morales JP

PubMed · Jun 23, 2025
Accurate and efficient classification of brain tumors, including gliomas, meningiomas, and pituitary adenomas, is critical for early diagnosis and treatment planning. Magnetic resonance imaging (MRI) is a key diagnostic tool, and deep learning models have shown promise in automating tumor classification. However, challenges remain in achieving high accuracy while maintaining interpretability for clinical use. This study explores the use of transfer learning with pre-trained architectures, including VGG16, DenseNet121, and Inception-ResNet-v2, to classify brain tumors from MRI images. An ensemble-based classifier was developed using a majority voting strategy to improve robustness. To enhance clinical applicability, explainability techniques such as Grad-CAM++ and Integrated Gradients were employed, allowing visualization of model decision-making. The ensemble model outperformed individual Convolutional Neural Network (CNN) architectures, achieving an accuracy of 86.17% in distinguishing gliomas, meningiomas, pituitary adenomas, and benign cases. Interpretability techniques provided heatmaps that identified key regions influencing model predictions, aligning with radiological features and enhancing trust in the results. The proposed ensemble-based deep learning framework improves the accuracy and interpretability of brain tumor classification from MRI images. By combining multiple CNN architectures and integrating explainability methods, this approach offers a more reliable and transparent diagnostic tool to support medical professionals in clinical decision-making.
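
A minimal sketch of the majority-voting step, assuming three backbones already fine-tuned for the 4-class task (glioma / meningioma / pituitary adenoma / benign). Model construction via timm is illustrative (Inception-ResNet-v2 is not available in torchvision), and the random batch stands in for preprocessed MRI slices.

```python
import torch
import timm

# Three backbones; in practice each would be fine-tuned before voting.
models = [
    timm.create_model("vgg16", pretrained=True, num_classes=4),
    timm.create_model("densenet121", pretrained=True, num_classes=4),
    timm.create_model("inception_resnet_v2", pretrained=True, num_classes=4),
]
for m in models:
    m.eval()

@torch.no_grad()
def ensemble_predict(batch):
    # Each model votes with its argmax class; torch.mode takes the majority
    # (ties resolve to the smallest class index).
    votes = torch.stack([m(batch).argmax(dim=1) for m in models])  # (3, B)
    majority, _ = torch.mode(votes, dim=0)                         # (B,)
    return majority

x = torch.randn(8, 3, 224, 224)  # dummy batch standing in for MRI slices
print(ensemble_predict(x))
```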

MRI Radiomics and Automated Habitat Analysis Enhance Machine Learning Prediction of Bone Metastasis and High-Grade Gleason Scores in Prostate Cancer.

Yang Y, Zheng B, Zou B, Liu R, Yang R, Chen Q, Guo Y, Yu S, Chen B

PubMed · Jun 23, 2025
To explore the value of machine learning models based on MRI radiomics and automated habitat analysis in predicting bone metastasis and high-grade pathological Gleason scores in prostate cancer. This retrospective study enrolled 214 patients with pathologically diagnosed prostate cancer from May 2013 to January 2025, including 93 cases with bone metastasis and 159 cases with high-grade Gleason scores. Clinical, pathological, and MRI data were collected. An nnUNet model automatically segmented the prostate in MRI scans, and K-means clustering identified habitat subregions within the whole prostate on T2-FS images. Senior radiologists manually segmented regions of interest (ROIs) in prostate lesions. Radiomics features were extracted from the habitat subregions and lesion ROIs and, combined with clinical features, used to build multiple machine learning classifiers to predict bone metastasis and high-grade Gleason scores. Finally, the models underwent interpretable analysis based on feature importance. The nnUNet model achieved a mean Dice coefficient of 0.970 for segmentation. Habitat analysis using 2 clusters yielded the highest average silhouette coefficient (0.57). Machine learning models combining lesion radiomics, habitat radiomics, and clinical features achieved the best performance in both prediction tasks: the Extra Trees Classifier achieved the highest AUC (0.900) for predicting bone metastasis, while the CatBoost Classifier performed best (AUC 0.895) for predicting high-grade Gleason scores. Interpretability analysis of the optimal models showed that the clinical feature PSA was crucial for predictions, while both habitat radiomics and lesion radiomics also played important roles. The study proposed an automated prostate habitat analysis for prostate cancer, enabling a comprehensive analysis of tumor heterogeneity. The machine learning models developed achieved excellent performance in predicting the risk of bone metastasis and high-grade Gleason scores in prostate cancer. This approach overcomes the limitations of manual feature extraction and the inadequate analysis of heterogeneity often encountered in traditional radiomics, thereby improving model performance.
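
A minimal sketch of the habitat step: K-means clustering of voxel intensities inside the segmented prostate, with the silhouette coefficient used to compare cluster counts as in the abstract. The volume and mask below are synthetic stand-ins for a T2-FS image and an nnUNet segmentation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
t2_image = rng.normal(size=(64, 64, 32))          # stand-in T2-FS volume
prostate_mask = np.zeros((64, 64, 32), bool)
prostate_mask[16:48, 16:48, 8:24] = True          # stand-in segmentation

voxels = t2_image[prostate_mask].reshape(-1, 1)   # (n_voxels, 1) intensities

for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(voxels)
    # Subsample for silhouette: it is O(n^2) in the number of voxels.
    idx = rng.choice(len(voxels), size=min(5000, len(voxels)), replace=False)
    s = silhouette_score(voxels[idx], labels[idx])
    print(f"k={k}: silhouette={s:.2f}")           # abstract reports 0.57 at k=2

habitats = np.zeros(t2_image.shape, int)          # 0 = background
habitats[prostate_mask] = 1 + KMeans(2, n_init=10, random_state=0).fit_predict(voxels)
```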

Towards a comprehensive characterization of arteries and veins in retinal imaging.

Andreini P, Bonechi S

PubMed · Jun 23, 2025
Retinal fundus imaging is crucial for diagnosing and monitoring eye diseases, which are often linked to systemic health conditions such as diabetes and hypertension. Current deep learning techniques often narrowly focus on segmenting retinal blood vessels, lacking a more comprehensive analysis and characterization of the retinal vascular system. This study fills this gap by proposing a novel, integrated approach that leverages multiple stages to accurately determine vessel paths and extract informative features from them. The segmentation of veins and arteries, achieved through a deep semantic segmentation network, is used by a newly designed algorithm to reconstruct individual vessel paths. The reconstruction process begins at the optic disc, identified by a localization network, and uses a recurrent neural network to predict the vessel paths at various junctions. The different stages of the proposed approach are validated both qualitatively and quantitatively, demonstrating robust performance. The proposed approach enables the extraction of critical features at the individual vessel level, such as vessel tortuosity and diameter. This work lays the foundation for a comprehensive retinal image evaluation, going beyond isolated tasks like vessel segmentation, with significant potential for clinical diagnosis.
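
Once individual vessel paths are reconstructed, per-vessel features follow directly. The sketch below computes one common tortuosity measure (arc length over chord length, one of several definitions in use) from a hypothetical ordered centerline path.

```python
import numpy as np

def tortuosity(path: np.ndarray) -> float:
    """Arc-length / chord-length ratio for an ordered (N, 2) centerline path."""
    steps = np.diff(path, axis=0)
    arc_length = np.linalg.norm(steps, axis=1).sum()
    chord_length = np.linalg.norm(path[-1] - path[0])
    return arc_length / chord_length   # 1.0 = perfectly straight

# Example: a gently sinusoidal vessel path
t = np.linspace(0, np.pi, 100)
path = np.stack([t * 50, 5 * np.sin(t)], axis=1)
print(f"tortuosity = {tortuosity(path):.3f}")
```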

From BERT to generative AI - Comparing encoder-only vs. large language models in a cohort of lung cancer patients for named entity recognition in unstructured medical reports.

Arzideh K, Schäfer H, Allende-Cid H, Baldini G, Hilser T, Idrissi-Yaghir A, Laue K, Chakraborty N, Doll N, Antweiler D, Klug K, Beck N, Giesselbach S, Friedrich CM, Nensa F, Schuler M, Hosch R

PubMed · Jun 23, 2025
Extracting clinical entities from unstructured medical documents is critical for improving clinical decision support and documentation workflows. This study examines the performance of various encoder and decoder models trained for Named Entity Recognition (NER) of clinical parameters in pathology and radiology reports, highlighting the applicability of Large Language Models (LLMs) for this task. Three NER methods were evaluated: (1) flat NER using transformer-based models, (2) nested NER with a multi-task learning setup, and (3) instruction-based NER utilizing LLMs. A dataset of 2013 pathology reports and 413 radiology reports, annotated by medical students, was used for training and testing. The performance of encoder-based NER models (flat and nested) was superior to that of LLM-based approaches. The best-performing flat NER models achieved F1-scores of 0.87-0.88 on pathology reports and up to 0.78 on radiology reports, while nested NER models performed slightly lower. In contrast, multiple LLMs, despite achieving high precision, yielded significantly lower F1-scores (ranging from 0.18 to 0.30) due to poor recall. A contributing factor appears to be that these LLMs produce fewer but more accurate entities, suggesting they become overly conservative when generating outputs. LLMs in their current form are unsuitable for comprehensive entity extraction tasks in clinical domains, particularly when faced with a high number of entity types per document, though instructing them to return more entities in subsequent refinements may improve recall. Additionally, their computational overhead does not provide proportional performance gains. Encoder-based NER models, particularly those pre-trained on biomedical data, remain the preferred choice for extracting information from unstructured medical documents.
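
The recall collapse explains the low F1-scores: F1 is the harmonic mean of precision and recall, so the weaker term dominates. A quick check with illustrative numbers (not the paper's exact per-model values):

```python
# Why high precision alone cannot rescue F1: the harmonic mean is
# dominated by the weaker of precision and recall.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(f"{f1(0.88, 0.78):.2f}")  # balanced encoder model -> 0.83
print(f"{f1(0.90, 0.15):.2f}")  # conservative LLM       -> 0.26
```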

VHU-Net: Variational Hadamard U-Net for Body MRI Bias Field Correction

Xin Zhu

arXiv preprint · Jun 23, 2025
Bias field artifacts in magnetic resonance imaging (MRI) scans introduce spatially smooth intensity inhomogeneities that degrade image quality and hinder downstream analysis. To address this challenge, we propose a novel variational Hadamard U-Net (VHU-Net) for effective body MRI bias field correction. The encoder comprises multiple convolutional Hadamard transform blocks (ConvHTBlocks), each integrating convolutional layers with a Hadamard transform (HT) layer. Specifically, the HT layer performs channel-wise frequency decomposition to isolate low-frequency components, while a subsequent scaling layer and semi-soft thresholding mechanism suppress redundant high-frequency noise. To compensate for the HT layer's inability to model inter-channel dependencies, the decoder incorporates an inverse HT-reconstructed transformer block, enabling global, frequency-aware attention for the recovery of spatially consistent bias fields. The stacked decoder ConvHTBlocks further enhance the capacity to reconstruct the underlying ground-truth bias field. Building on the principles of variational inference, we formulate a new evidence lower bound (ELBO) as the training objective, promoting sparsity in the latent space while ensuring accurate bias field estimation. Comprehensive experiments on abdominal and prostate MRI datasets demonstrate the superiority of VHU-Net over existing state-of-the-art methods in terms of intensity uniformity, signal fidelity, and tissue contrast. Moreover, the corrected images yield substantial downstream improvements in segmentation accuracy. Our framework offers computational efficiency, interpretability, and robust performance across multi-center datasets, making it suitable for clinical deployment.
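
A minimal sketch of what a channel-wise Hadamard transform layer with learned scaling and thresholding might look like in PyTorch. The semi-soft threshold rule used here is an assumption reconstructed from the abstract alone, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
from scipy.linalg import hadamard

class HadamardThreshold(nn.Module):
    """Channel-wise Hadamard transform + learned scaling + thresholding.

    Illustrative reconstruction from the abstract, not the paper's layer."""
    def __init__(self, channels: int, tau: float = 0.1):
        super().__init__()
        assert channels & (channels - 1) == 0, "channels must be a power of 2"
        H = torch.tensor(hadamard(channels), dtype=torch.float32)
        self.register_buffer("H", H / channels ** 0.5)   # orthonormal HT matrix
        self.scale = nn.Parameter(torch.ones(channels))  # per-band scaling
        self.tau = tau

    def forward(self, x):                                # x: (B, C, H, W)
        z = torch.einsum("ij,bjhw->bihw", self.H, x)     # channel-wise HT
        z = z * self.scale.view(1, -1, 1, 1)
        # Semi-soft-style threshold (assumed form): zero small coefficients,
        # shrink the rest less aggressively than pure soft thresholding.
        shrunk = z - self.tau * torch.tanh(z / self.tau)
        z = torch.where(z.abs() > self.tau, shrunk, torch.zeros_like(z))
        return torch.einsum("ij,bjhw->bihw", self.H, z)  # inverse HT (orthonormal)

x = torch.randn(2, 16, 32, 32)
print(HadamardThreshold(16)(x).shape)  # torch.Size([2, 16, 32, 32])
```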

MOSCARD -- Causal Reasoning and De-confounding for Multimodal Opportunistic Screening of Cardiovascular Adverse Events

Jialu Pi, Juan Maria Farina, Rimita Lahiri, Jiwoong Jeong, Archana Gurudu, Hyung-Bok Park, Chieh-Ju Chao, Chadi Ayoub, Reza Arsanjani, Imon Banerjee

arXiv preprint · Jun 23, 2025
Major Adverse Cardiovascular Events (MACE) remain the leading cause of mortality globally, as reported in the Global Disease Burden Study 2021. Opportunistic screening leverages data collected from routine health check-ups, and multimodal data can play a key role in identifying at-risk individuals. Chest X-rays (CXR) provide insights into chronic conditions contributing to MACE, while the 12-lead electrocardiogram (ECG) directly assesses cardiac electrical activity and structural abnormalities. Integrating CXR and ECG could offer a more comprehensive risk assessment than conventional models, which rely on clinical scores, computed tomography (CT) measurements, or biomarkers and may be limited by sampling bias and single-modality constraints. We propose MOSCARD, a novel predictive modeling framework that uses multimodal causal reasoning with co-attention to align the two modalities while mitigating bias and confounders in opportunistic risk estimation. Primary technical contributions are: (i) multimodal alignment of CXR with ECG guidance; (ii) integration of causal reasoning; (iii) a dual back-propagation graph for de-confounding. Evaluated on internal data, shifted emergency department (ED) data, and the external MIMIC dataset, our model outperformed single-modality and state-of-the-art foundation models (AUC 0.75, 0.83, and 0.71, respectively). The proposed cost-effective opportunistic screening enables early intervention, improving patient outcomes and reducing disparities.
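
A minimal sketch of co-attention alignment between CXR and ECG token embeddings, the fusion mechanism the abstract names. The dimensions, tokenizations, and single fusion block are illustrative assumptions; the causal-reasoning and de-confounding branches are not reproduced here.

```python
import torch
import torch.nn as nn

class CoAttentionFusion(nn.Module):
    """Each modality queries the other ("CXR with ECG guidance"), then the
    pooled attended features are fused for a MACE risk logit."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cxr_to_ecg = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ecg_to_cxr = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, 1)

    def forward(self, cxr_tokens, ecg_tokens):
        cxr_attn, _ = self.cxr_to_ecg(cxr_tokens, ecg_tokens, ecg_tokens)
        ecg_attn, _ = self.ecg_to_cxr(ecg_tokens, cxr_tokens, cxr_tokens)
        fused = torch.cat([cxr_attn.mean(1), ecg_attn.mean(1)], dim=-1)
        return self.head(fused)            # (B, 1) risk logit

cxr = torch.randn(4, 196, 256)  # e.g. 14x14 ViT patch tokens (assumed)
ecg = torch.randn(4, 12, 256)   # e.g. one token per ECG lead (assumed)
print(CoAttentionFusion()(cxr, ecg).shape)  # torch.Size([4, 1])
```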

A Deep Learning Based Method for Fast Registration of Cardiac Magnetic Resonance Images

Benjamin Graham

arXiv preprint · Jun 23, 2025
Image registration is used in many medical image analysis applications, such as tracking the motion of tissue in cardiac images, where cardiac kinematics can be an indicator of tissue health. Registration is a challenging problem for deep learning algorithms because ground-truth transformations are not feasible to create and because multiple transformations can produce images that appear to match the target. Unsupervised methods have been proposed to learn to predict effective transformations, but these methods take significantly longer to predict than established baseline methods. For a deep learning method to see adoption in wider research and clinical settings, it should be designed to run in a reasonable time on common, mid-level hardware. Fast methods have been proposed for the task of image registration but often use patch-based approaches, which can affect registration accuracy for a highly dynamic organ such as the heart. In this thesis, a fast, volumetric registration model is proposed for quantifying cardiac strain. The proposed Deep Learning Neural Network (DLNN) is built around an architecture that computes convolutions highly efficiently, allowing the model to achieve registration fidelity similar to other state-of-the-art models in a fraction of the inference time. The proposed fast and lightweight registration (FLIR) model is used to predict tissue motion, which is then used to quantify the non-uniform strain experienced by the tissue. For acquisitions taken from the same patient at approximately the same time, strain values measured between the acquisitions would be expected to differ only minimally. By this metric, strain values computed using the FLIR method are shown to be highly consistent.
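
A minimal sketch of how a predicted displacement field can be turned into strain, the quantity FLIR is used to measure. The Green-Lagrange strain E = 0.5 * (F^T F - I) is one standard choice; the displacement field below is synthetic and voxel spacing is assumed isotropic.

```python
import numpy as np

def green_lagrange_strain(disp):
    """Green-Lagrange strain from a displacement field disp: (3, D, H, W)."""
    # grads[i, j] = du_i/dx_j at every voxel (unit voxel spacing assumed)
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)])
    # Deformation gradient F = I + du/dx: shape (3, 3, D, H, W)
    F = grads + np.eye(3)[:, :, None, None, None]
    E = 0.5 * (np.einsum("kixyz,kjxyz->ijxyz", F, F)
               - np.eye(3)[:, :, None, None, None])
    return E   # symmetric strain tensor at every voxel

disp = np.random.default_rng(0).normal(0, 0.05, size=(3, 16, 16, 16))
E = green_lagrange_strain(disp)
print(E.shape)  # (3, 3, 16, 16, 16)
```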
