Tsiachristas A, Chan K, Wahome E, Kearns B, Patel P, Lyasheva M, Syed N, Fry S, Halborg T, West H, Nicol E, Adlam D, Modi B, Kardos A, Greenwood JP, Sabharwal N, De Maria GL, Munir S, McAlindon E, Sohan Y, Tomlins P, Siddique M, Shirodaria C, Blankstein R, Desai M, Neubauer S, Channon KM, Deanfield J, Akehurst R, Antoniades C

pubmed · Jun 23 2025
Coronary computed tomography angiography (CCTA) is a first-line investigation for chest pain in patients with suspected obstructive coronary artery disease (CAD). However, many acute cardiac events occur in the absence of obstructive CAD. We assessed the lifetime cost-effectiveness of integrating a novel artificial intelligence-enhanced image analysis algorithm (AI-Risk) that stratifies the risk of cardiac events by quantifying coronary inflammation, combined with the extent of coronary artery plaque and clinical risk factors, by analysing images from routine CCTA. A hybrid decision-tree and population-cohort Markov model was developed from 3393 consecutive patients who underwent routine CCTA for suspected obstructive CAD and were followed up for major adverse cardiac events over a median (interquartile range) of 7.7 (6.4-9.1) years. In a prospective real-world evaluation survey of 744 consecutive patients undergoing CCTA for chest pain investigation, the availability of AI-Risk assessment led to treatment initiation or intensification in 45% of patients. In a further prospective study of 1214 consecutive patients with extensive guideline-recommended cardiovascular risk profiling, AI-Risk stratification led to treatment initiation or intensification in 39% of patients beyond current clinical guideline recommendations. Treatment guided by AI-Risk, modelled over a lifetime horizon, could lead to fewer cardiac events (relative reductions of 11%, 4%, 4%, and 12% for myocardial infarction, ischaemic stroke, heart failure, and cardiac death, respectively). Implementing AI-Risk classification in routine interpretation of CCTA is highly likely to be cost-effective (incremental cost-effectiveness ratio £1371-3244), both under current guideline compliance and when applied only to patients without obstructive CAD. Compared with standard care, the addition of AI-Risk assessment to routine CCTA interpretation is cost-effective, refining risk-guided medical management.
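The headline metric here is the incremental cost-effectiveness ratio (ICER): the extra cost of the AI-Risk strategy divided by the extra quality-adjusted life years (QALYs) it yields versus standard care. A minimal sketch of the arithmetic, with hypothetical placeholder numbers rather than the study's inputs:

```python
# Minimal ICER sketch. All inputs are hypothetical placeholders,
# not values from the study.
def icer(cost_new: float, qaly_new: float,
         cost_std: float, qaly_std: float) -> float:
    """Incremental cost per quality-adjusted life year (QALY) gained."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

# Hypothetical lifetime discounted totals per patient.
standard = {"cost": 12_000.0, "qalys": 9.10}
ai_risk = {"cost": 12_450.0, "qalys": 9.32}

ratio = icer(ai_risk["cost"], ai_risk["qalys"],
             standard["cost"], standard["qalys"])
print(f"ICER: £{ratio:,.0f} per QALY gained")  # ~£2,045 per QALY
```

An ICER of £1371-3244 sits far below the £20,000-30,000 per QALY willingness-to-pay range commonly applied by NICE, which is why the strategy reads as "highly likely to be cost-effective."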

Wang X, Wang X, Lei J, Rong C, Zheng X, Li S, Gao Y, Wu X

pubmed · Jun 23 2025
This study aimed to develop radiomic-based machine learning models using computed tomography enterography (CTE) features derived from the intestinal wall, mesenteric fat, and visceral fat to differentiate between ulcerative colitis (UC) and colonic Crohn's disease (CD). Clinical and imaging data from 116 patients with inflammatory bowel disease (IBD) (68 with UC and 48 with colonic CD) were retrospectively collected. Radiomic features were extracted from venous-phase CTE images. Feature selection was performed via the intraclass correlation coefficient (ICC), correlation analysis, SelectKBest, and least absolute shrinkage and selection operator (LASSO) regression. Support vector machine models were constructed using features from individual and combined regions, with model performance evaluated using the area under the ROC curve (AUC). The combined radiomic model, integrating features from all three regions, exhibited superior classification performance (AUC = 0.857; 95% CI 0.732-0.982), with a sensitivity of 0.762 (95% CI 0.547-0.903) and specificity of 0.857 (95% CI 0.601-0.960) in the testing cohort. The models based on features from the intestinal wall, mesenteric fat, and visceral fat alone achieved AUCs of 0.847 (95% CI 0.710-0.984), 0.707 (95% CI 0.526-0.889), and 0.731 (95% CI 0.553-0.910), respectively, in the testing cohort. The intestinal wall model demonstrated the best calibration. This study demonstrated the feasibility of constructing machine learning models based on radiomic features of the intestinal wall, mesenteric fat, and visceral fat to distinguish between UC and colonic CD.
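For readers unfamiliar with the pipeline shape, here is a hedged sketch of the selection-then-classify workflow described above (the ICC and correlation screening steps are omitted), using scikit-learn stand-ins on a placeholder feature matrix rather than the authors' code:

```python
# Sketch of a LASSO -> SVM radiomics pipeline with AUC evaluation,
# assuming a precomputed feature matrix X (patients x radiomic
# features) and binary labels y (0 = UC, 1 = colonic CD).
import numpy as np
from sklearn.feature_selection import SelectFromModel, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(116, 300))      # placeholder radiomic features
y = rng.integers(0, 2, size=116)     # placeholder UC/CD labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=50),                     # univariate screen
    SelectFromModel(LogisticRegression(penalty="l1",  # LASSO-style sparsity
                                       solver="liblinear", C=0.1)),
    SVC(kernel="rbf", probability=True),              # final classifier
)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")
```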

Sánchez-Moreno L, Perez-Peña A, Duran-Lopez L, Dominguez-Morales JP

pubmed · Jun 23 2025
Accurate and efficient classification of brain tumors, including gliomas, meningiomas, and pituitary adenomas, is critical for early diagnosis and treatment planning. Magnetic resonance imaging (MRI) is a key diagnostic tool, and deep learning models have shown promise in automating tumor classification. However, challenges remain in achieving high accuracy while maintaining interpretability for clinical use. This study explores the use of transfer learning with pre-trained architectures, including VGG16, DenseNet121, and Inception-ResNet-v2, to classify brain tumors from MRI images. An ensemble-based classifier was developed using a majority voting strategy to improve robustness. To enhance clinical applicability, explainability techniques such as Grad-CAM++ and Integrated Gradients were employed, allowing visualization of model decision-making. The ensemble model outperformed individual Convolutional Neural Network (CNN) architectures, achieving an accuracy of 86.17% in distinguishing gliomas, meningiomas, pituitary adenomas, and benign cases. Interpretability techniques provided heatmaps that identified key regions influencing model predictions, aligning with radiological features and enhancing trust in the results. The proposed ensemble-based deep learning framework improves the accuracy and interpretability of brain tumor classification from MRI images. By combining multiple CNN architectures and integrating explainability methods, this approach offers a more reliable and transparent diagnostic tool to support medical professionals in clinical decision-making.
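Majority voting over the backbone predictions is the ensemble mechanism described. A minimal sketch of hard voting, with placeholder per-model predictions (the backbone attributions in the comments are illustrative):

```python
# Sketch of hard majority voting over three CNN backbones' predictions
# for a 4-class problem (glioma, meningioma, pituitary, no tumor).
# The per-model predictions below are placeholders, not real outputs.
import numpy as np
from scipy.stats import mode

classes = ["glioma", "meningioma", "pituitary", "no_tumor"]

# predicted class indices per model, shape (n_models, n_images)
preds = np.array([
    [0, 1, 2, 3, 0],   # e.g. VGG16
    [0, 1, 1, 3, 2],   # e.g. DenseNet121
    [0, 2, 2, 3, 0],   # e.g. Inception-ResNet-v2
])

# majority vote across models (axis 0); ties resolve to the lowest index
ensemble = mode(preds, axis=0, keepdims=False).mode
print([classes[i] for i in ensemble])
# ['glioma', 'meningioma', 'pituitary', 'no_tumor', 'glioma']
```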

Yang Y, Zheng B, Zou B, Liu R, Yang R, Chen Q, Guo Y, Yu S, Chen B

pubmed · Jun 23 2025
To explore the value of machine learning models based on MRI radiomics and automated habitat analysis in predicting bone metastasis and high-grade pathological Gleason scores in prostate cancer. This retrospective study enrolled 214 patients with pathologically diagnosed prostate cancer from May 2013 to January 2025, including 93 cases with bone metastasis and 159 cases with high-grade Gleason scores. Clinical, pathological, and MRI data were collected. An nnUNet model automatically segmented the prostate in MRI scans, and K-means clustering identified habitat subregions within the whole prostate in T2-FS images. Senior radiologists manually segmented regions of interest (ROIs) in prostate lesions. Radiomics features were extracted from the habitat subregions and lesion ROIs and, combined with clinical features, were used to build multiple machine learning classifiers to predict bone metastasis and high-grade Gleason scores. Finally, the models underwent interpretability analysis based on feature importance. The nnUNet model achieved a mean Dice coefficient of 0.970 for segmentation. Habitat analysis using 2 clusters yielded the highest average silhouette coefficient (0.57). Machine learning models based on a combination of lesion radiomics, habitat radiomics, and clinical features achieved the best performance in both prediction tasks: the Extra Trees classifier achieved the highest AUC (0.900) for predicting bone metastasis, while the CatBoost classifier performed best (AUC 0.895) for predicting high-grade Gleason scores. The interpretability analysis of the optimal models showed that the clinical PSA feature was crucial for predictions, while both habitat radiomics and lesion radiomics also played important roles. The study proposes an automated prostate habitat analysis for prostate cancer, enabling a comprehensive analysis of tumor heterogeneity. The machine learning models developed achieved excellent performance in predicting the risk of bone metastasis and high-grade Gleason scores. This approach overcomes the limitations of manual feature extraction and the inadequate analysis of heterogeneity often encountered in traditional radiomics, thereby improving model performance.
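The habitat step selects the number of K-means clusters by the mean silhouette coefficient (here the 2-cluster solution scored 0.57). A hedged sketch of that selection loop on placeholder per-voxel features:

```python
# Sketch of habitat clustering: pick the K-means cluster count that
# maximizes the mean silhouette coefficient, as in the described
# habitat analysis. Voxel features here are random placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
voxels = rng.normal(size=(2000, 4))   # placeholder per-voxel intensities

best_k, best_score = None, -1.0
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(voxels)
    score = silhouette_score(voxels, labels)
    print(f"k={k}: mean silhouette={score:.2f}")
    if score > best_score:
        best_k, best_score = k, score

print(f"selected {best_k} habitat subregions")
```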

Andreini P, Bonechi S

pubmed · Jun 23 2025
Retinal fundus imaging is crucial for diagnosing and monitoring eye diseases, which are often linked to systemic health conditions such as diabetes and hypertension. Current deep learning techniques often narrowly focus on segmenting retinal blood vessels, lacking a more comprehensive analysis and characterization of the retinal vascular system. This study fills this gap by proposing a novel, integrated approach that leverages multiple stages to accurately determine vessel paths and extract informative features from them. The segmentation of veins and arteries, achieved through a deep semantic segmentation network, is used by a newly designed algorithm to reconstruct individual vessel paths. The reconstruction process begins at the optic disc, identified by a localization network, and uses a recurrent neural network to predict the vessel paths at various junctions. The different stages of the proposed approach are validated both qualitatively and quantitatively, demonstrating robust performance. The proposed approach enables the extraction of critical features at the individual vessel level, such as vessel tortuosity and diameter. This work lays the foundation for a comprehensive retinal image evaluation, going beyond isolated tasks like vessel segmentation, with significant potential for clinical diagnosis.
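Once individual vessel paths are reconstructed, features such as tortuosity follow directly from the centerline geometry. A sketch using the common arc-length-to-chord-length definition (the paper's exact feature definitions may differ):

```python
# Sketch of vessel tortuosity as the arc-length / chord-length ratio
# of a reconstructed centerline. The sample path is a placeholder.
import numpy as np

def tortuosity(path: np.ndarray) -> float:
    """path: (n_points, 2) ordered centerline coordinates."""
    arc = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    chord = np.linalg.norm(path[-1] - path[0])
    return arc / chord

t = np.linspace(0, np.pi, 50)
wavy = np.column_stack([t, 0.2 * np.sin(4 * t)])   # placeholder vessel path
print(f"tortuosity: {tortuosity(wavy):.3f}")        # > 1.0 for a curved path
```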

Arzideh K, Schäfer H, Allende-Cid H, Baldini G, Hilser T, Idrissi-Yaghir A, Laue K, Chakraborty N, Doll N, Antweiler D, Klug K, Beck N, Giesselbach S, Friedrich CM, Nensa F, Schuler M, Hosch R

pubmed · Jun 23 2025
Extracting clinical entities from unstructured medical documents is critical for improving clinical decision support and documentation workflows. This study examines the performance of various encoder and decoder models trained for Named Entity Recognition (NER) of clinical parameters in pathology and radiology reports, highlighting the applicability of Large Language Models (LLMs) for this task. Three NER methods were evaluated: (1) flat NER using transformer-based models, (2) nested NER with a multi-task learning setup, and (3) instruction-based NER utilizing LLMs. A dataset of 2013 pathology reports and 413 radiology reports, annotated by medical students, was used for training and testing. The performance of encoder-based NER models (flat and nested) was superior to that of LLM-based approaches. The best-performing flat NER models achieved F1-scores of 0.87-0.88 on pathology reports and up to 0.78 on radiology reports, while nested NER models performed slightly lower. In contrast, multiple LLMs, despite achieving high precision, yielded significantly lower F1-scores (ranging from 0.18 to 0.30) due to poor recall. A contributing factor appears to be that these LLMs produce fewer but more accurate entities, suggesting they become overly conservative when generating outputs. LLMs in their current form are unsuitable for comprehensive entity extraction tasks in clinical domains, particularly when faced with a high number of entity types per document, though instructing them to return more entities in subsequent refinements may improve recall. Additionally, their computational overhead does not provide proportional performance gains. Encoder-based NER models, particularly those pre-trained on biomedical data, remain the preferred choice for extracting information from unstructured medical documents.
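The precision-recall gap reported for the LLMs is easiest to see in the entity-level F1 arithmetic: a model that emits few but accurate entities scores high precision yet low recall, dragging F1 down. A toy sketch with hypothetical (start, end, label) entities:

```python
# Sketch of entity-level precision/recall/F1 for NER, illustrating how
# a conservative model (few but accurate entities) ends up with low F1.
# Entities are toy (start, end, label) tuples, not real annotations.
def prf1(gold: set, pred: set) -> tuple[float, float, float]:
    tp = len(gold & pred)                      # exact-match true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {(0, 5, "DIAG"), (10, 14, "MEAS"), (20, 28, "DIAG"), (30, 33, "MEAS")}
pred = {(0, 5, "DIAG")}                        # conservative LLM-style output

p, r, f = prf1(gold, pred)
print(f"precision={p:.2f} recall={r:.2f} F1={f:.2f}")
# precision=1.00 recall=0.25 F1=0.40
```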

Xin Zhu

arxiv preprint · Jun 23 2025
Bias field artifacts in magnetic resonance imaging (MRI) scans introduce spatially smooth intensity inhomogeneities that degrade image quality and hinder downstream analysis. To address this challenge, we propose a novel variational Hadamard U-Net (VHU-Net) for effective body MRI bias field correction. The encoder comprises multiple convolutional Hadamard transform blocks (ConvHTBlocks), each integrating convolutional layers with a Hadamard transform (HT) layer. Specifically, the HT layer performs channel-wise frequency decomposition to isolate low-frequency components, while a subsequent scaling layer and semi-soft thresholding mechanism suppress redundant high-frequency noise. To compensate for the HT layer's inability to model inter-channel dependencies, the decoder incorporates an inverse HT-reconstructed transformer block, enabling global, frequency-aware attention for the recovery of spatially consistent bias fields. The stacked decoder ConvHTBlocks further enhance the capacity to reconstruct the underlying ground-truth bias field. Building on the principles of variational inference, we formulate a new evidence lower bound (ELBO) as the training objective, promoting sparsity in the latent space while ensuring accurate bias field estimation. Comprehensive experiments on abdominal and prostate MRI datasets demonstrate the superiority of VHU-Net over existing state-of-the-art methods in terms of intensity uniformity, signal fidelity, and tissue contrast. Moreover, the corrected images yield substantial downstream improvements in segmentation accuracy. Our framework offers computational efficiency, interpretability, and robust performance across multi-center datasets, making it suitable for clinical deployment.
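As a rough illustration of the block's core operation, the sketch below applies a fast Walsh-Hadamard transform along one axis, soft-thresholds the coefficients, and inverts the transform; it is a reconstruction of the general idea only, not the authors' VHU-Net code (which uses a learned scaling layer, semi-soft thresholding, and attention):

```python
# Sketch of the ConvHTBlock idea: channel-wise fast Walsh-Hadamard
# transform (FWHT), thresholding of small coefficients, then the
# inverse transform. Plain soft thresholding stands in for the
# paper's semi-soft variant.
import torch

def fwht(x: torch.Tensor) -> torch.Tensor:
    """FWHT along the last dimension (length must be a power of 2)."""
    n = x.shape[-1]
    y = x.clone()
    h = 1
    while h < n:
        # butterfly over pairs (j, j + h) within blocks of size 2h
        y = y.view(*x.shape[:-1], n // (2 * h), 2, h)
        a, b = y[..., 0, :], y[..., 1, :]
        y = torch.stack((a + b, a - b), dim=-2).reshape(*x.shape[:-1], n)
        h *= 2
    return y

def soft_threshold(c: torch.Tensor, tau: float) -> torch.Tensor:
    return torch.sign(c) * torch.clamp(c.abs() - tau, min=0.0)

x = torch.randn(2, 8, 16)                 # (batch, channels, length)
coeffs = fwht(x)                          # frequency decomposition
coeffs = soft_threshold(coeffs, tau=0.5)  # suppress small (noisy) coeffs
recon = fwht(coeffs) / x.shape[-1]        # FWHT is its own inverse up to 1/n
```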

Jialu Pi, Juan Maria Farina, Rimita Lahiri, Jiwoong Jeong, Archana Gurudu, Hyung-Bok Park, Chieh-Ju Chao, Chadi Ayoub, Reza Arsanjani, Imon Banerjee

arxiv preprint · Jun 23 2025
Major Adverse Cardiovascular Events (MACE) remain the leading cause of mortality globally, as reported in the Global Disease Burden Study 2021. Opportunistic screening leverages data collected during routine health check-ups, and multimodal data can play a key role in identifying at-risk individuals. Chest X-rays (CXR) provide insights into chronic conditions contributing to MACE, while the 12-lead electrocardiogram (ECG) directly assesses cardiac electrical activity and structural abnormalities. Integrating CXR and ECG could offer a more comprehensive risk assessment than conventional models, which rely on clinical scores, computed tomography (CT) measurements, or biomarkers and may be limited by sampling bias and single-modality constraints. We propose a novel predictive modeling framework, MOSCARD, multimodal causal reasoning with co-attention to align two distinct modalities and simultaneously mitigate bias and confounders in opportunistic risk estimation. The primary technical contributions are (i) multimodal alignment of CXR with ECG guidance; (ii) integration of causal reasoning; and (iii) a dual back-propagation graph for de-confounding. Evaluated on internal data, shift data from the emergency department (ED), and the external MIMIC dataset, our model outperformed single-modality and state-of-the-art foundational models, with AUCs of 0.75, 0.83, and 0.71, respectively. The proposed cost-effective opportunistic screening enables early intervention, improving patient outcomes and reducing disparities.
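Co-attention here means each modality's tokens attend to the other's. A hedged sketch of that alignment step with standard PyTorch multi-head attention (dimensions and token counts are placeholders; this is illustrative of the idea, not the MOSCARD architecture):

```python
# Sketch of co-attention between two modalities: CXR tokens attend to
# ECG tokens and vice versa, using standard multi-head attention.
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.cxr_to_ecg = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ecg_to_cxr = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, cxr: torch.Tensor, ecg: torch.Tensor):
        # each modality queries the other: (query, key, value)
        cxr_aligned, _ = self.cxr_to_ecg(cxr, ecg, ecg)
        ecg_aligned, _ = self.ecg_to_cxr(ecg, cxr, cxr)
        return cxr_aligned, ecg_aligned

cxr_tokens = torch.randn(8, 49, 256)   # placeholder CXR patch embeddings
ecg_tokens = torch.randn(8, 12, 256)   # placeholder per-lead ECG embeddings
cxr_a, ecg_a = CoAttention()(cxr_tokens, ecg_tokens)
print(cxr_a.shape, ecg_a.shape)        # [8, 49, 256], [8, 12, 256]
```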

Benjamin Graham

arxiv preprint · Jun 23 2025
Image registration is used in many medical image analysis applications, such as tracking the motion of tissue in cardiac images, where cardiac kinematics can be an indicator of tissue health. Registration is a challenging problem for deep learning algorithms because ground-truth transformations are not feasible to create, and because there are potentially multiple transformations that can produce images that appear well matched to the target. Unsupervised methods have been proposed to learn to predict effective transformations, but these methods take significantly longer to predict than established baseline methods. For a deep learning method to see adoption in wider research and clinical settings, it should be designed to run in a reasonable time on common, mid-level hardware. Fast methods have been proposed for the task of image registration but often use patch-based approaches, which can affect registration accuracy for a highly dynamic organ such as the heart. In this thesis, a fast, volumetric registration model is proposed for quantifying cardiac strain. The proposed deep learning neural network (DLNN) is designed to use an architecture that can compute convolutions highly efficiently, allowing the model to achieve registration fidelity similar to other state-of-the-art models while taking a fraction of the time to perform inference. The proposed fast and lightweight registration (FLIR) model is used to predict tissue motion, which is then used to quantify the non-uniform strain experienced by the tissue. For acquisitions taken from the same patient at approximately the same time, strain values measured between the acquisitions would be expected to have very small differences. Using this metric, strain values computed with the FLIR method are shown to be very consistent.
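The strain quantification step turns a predicted displacement field into a strain tensor. A sketch of the standard Green-Lagrange computation, E = 0.5 * (F^T F - I) with F = I + grad(u), on a placeholder 2-D field (not FLIR output):

```python
# Sketch of Green-Lagrange strain from a 2-D displacement field u,
# as produced by a registration model. Field values are placeholders.
import numpy as np

u = np.random.default_rng(0).normal(scale=0.05, size=(2, 64, 64))

# spatial gradients of each displacement component (unit grid spacing)
du0_dy, du0_dx = np.gradient(u[0])
du1_dy, du1_dx = np.gradient(u[1])
grad_u = np.stack([[du0_dx, du0_dy],
                   [du1_dx, du1_dy]])        # (2, 2, H, W)

I = np.eye(2)[..., None, None]               # identity, broadcast per pixel
F = I + grad_u                               # deformation gradient
E = 0.5 * (np.einsum("ji...,jk...->ik...", F, F) - I)  # Green-Lagrange

print("mean |E_xx|:", np.abs(E[0, 0]).mean())
```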

Jung, J., Kim, H., Bae, S., Park, J. Y.

medrxiv preprint · Jun 23 2025
Background: Generative Pre-trained Transformer 4 (GPT-4) has demonstrated strong performance in standardized medical examinations but has limitations in real-world clinical settings. The newly released multimodal GPT-4o model, which integrates text and image inputs to enhance diagnostic capabilities, and the multimodal o1 model, which incorporates advanced reasoning, may address these limitations. Objective: This study aimed to compare the performance of GPT-4o and o1 against clinicians in real-world clinical case challenges. Methods: This retrospective, cross-sectional study used Medscape case challenge questions from May 2011 to June 2024 (n = 1,426). Each case included text and images of patient history, physical examination findings, diagnostic test results, and imaging studies. Clinicians were required to choose one answer from among multiple options, with the most frequent response defined as the clinicians' decision. Model-based decisions were made using GPT models (3.5 Turbo, 4 Turbo, 4 Omni, and o1) to interpret the text and images, followed by a process to provide a formatted answer. We compared the performance of the clinicians and GPT models using mixed-effects logistic regression analysis. Results: Of the 1,426 questions, clinicians achieved an overall accuracy of 85.0%, whereas GPT-4o and o1 demonstrated higher accuracies of 88.4% and 94.3% (mean difference 3.4%; P = .005 and mean difference 9.3%; P < .001), respectively. In the multimodal performance analysis, which included cases involving images (n = 917), GPT-4o achieved an accuracy of 88.3% and o1 achieved 93.9%, both significantly outperforming clinicians (mean difference 4.2%; P = .005 and mean difference 9.8%; P < .001). o1 showed the highest accuracy across all question categories, achieving 92.6% in diagnosis (mean difference 14.5%; P < .001), 97.0% in disease characteristics (mean difference 7.2%; P < .001), 92.6% in examination (mean difference 7.3%; P = .002), and 94.8% in treatment (mean difference 4.3%; P = .005), consistently outperforming clinicians. In terms of medical specialty, o1 achieved 93.6% accuracy in internal medicine (mean difference 10.3%; P < .001), 96.6% in major surgery (mean difference 9.2%; P = .030), 97.3% in psychiatry (mean difference 10.6%; P = .030), and 95.4% in minor specialties (mean difference 10.0%; P < .001), significantly surpassing clinicians. Across five trials, GPT-4o and o1 provided the correct answer 5/5 times in 86.2% and 90.7% of the cases, respectively. Conclusions: The GPT-4o and o1 models achieved higher accuracy than clinicians in clinical case challenge questions, particularly in disease diagnosis, and could serve as valuable tools to assist healthcare professionals in clinical settings.
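The comparison hinges on a mixed-effects logistic regression with a random effect per case, since each case is answered by both clinicians and models. A hedged sketch of one way to fit such a model in Python, on simulated data, using statsmodels' Bayesian mixed GLM as a stand-in for the study's exact specification:

```python
# Sketch of a mixed-effects logistic regression comparing responder
# accuracy with a random intercept per case. Data are simulated;
# this is not the study's analysis code.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)
n_cases = 100
case_effect = rng.normal(scale=1.0, size=n_cases)   # case difficulty

rows = []
for case in range(n_cases):
    for responder, base in [("clinician", 1.7), ("o1", 2.8)]:
        p = 1 / (1 + np.exp(-(base + case_effect[case])))
        rows.append({"case_id": case, "responder": responder,
                     "correct": rng.binomial(1, p)})
df = pd.DataFrame(rows)

model = BinomialBayesMixedGLM.from_formula(
    "correct ~ responder",        # fixed effect: who answered
    {"case": "0 + C(case_id)"},   # random intercept per case
    df)
result = model.fit_vb()           # variational Bayes fit
print(result.summary())
```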