Page 15 of 58575 results

Multi-modal Risk Stratification in Heart Failure with Preserved Ejection Fraction Using Clinical and CMR-derived Features: An Approach Incorporating Model Explainability.

Zhang S, Lin Y, Han D, Pan Y, Geng T, Ge H, Zhao J

PubMed · Jul 17 2025
Heart failure with preserved ejection fraction (HFpEF) poses significant diagnostic and prognostic challenges due to its clinical heterogeneity. This study proposes a multi-modal, explainable machine learning framework that integrates clinical variables and cardiac magnetic resonance (CMR)-derived features, particularly epicardial adipose tissue (EAT) volume, to improve risk stratification and outcome prediction in patients with HFpEF. A retrospective cohort of 301 participants (171 in the HFpEF group and 130 in the control group) was analyzed. Baseline characteristics, CMR-derived EAT volume, and laboratory biomarkers were integrated into machine learning models. Model performance was evaluated using accuracy, precision, recall, and F1-score. Additionally, receiver operating characteristic area under the curve (ROC-AUC) and precision-recall area under the curve (PR-AUC) were employed to assess discriminative power across varying decision thresholds. Hyperparameter optimization and ensemble techniques were applied to enhance predictive performance. HFpEF patients exhibited significantly higher EAT volume (70.9±27.3 vs. 41.9±18.3 mL, p<0.001) and NT-proBNP levels (1574 [963,2722] vs. 33 [10,100] pg/mL, p<0.001), along with a greater prevalence of comorbidities. The voting classifier demonstrated the highest accuracy for HFpEF diagnosis (0.94), with a precision of 0.96, recall of 0.94, and an F1-score of 0.95. For prognostic tasks, AdaBoost, XGBoost, and Random Forest yielded superior performance in predicting adverse clinical outcomes, including rehospitalization and all-cause mortality (accuracy: 0.95). Key predictive features identified included EAT volume, right atrioventricular groove (Right AVG), tricuspid regurgitation velocity (TRV), and metabolic syndrome. Explainable models combining clinical and CMR-derived features, especially EAT volume, strengthen support for HFpEF diagnosis and outcome prediction.
These findings highlight the value of a data-driven, interpretable approach to characterizing HFpEF phenotypes and may facilitate individualized risk assessment in selected populations.
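The voting-based diagnosis and the reported metrics (accuracy, precision, recall, F1) can be sketched minimally in pure Python. The base-model predictions and labels below are toy illustrations, not the study's data, and `majority_vote`/`binary_metrics` are hypothetical helper names:

```python
# Minimal sketch of a hard-voting ensemble plus the evaluation metrics used
# in the study. Toy data only; not the study's cohort or models.

def majority_vote(predictions):
    """Combine per-model binary predictions by simple majority."""
    return [1 if sum(votes) * 2 > len(votes) else 0 for votes in zip(*predictions)]

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from paired binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Three hypothetical base models voting on five cases (1 = HFpEF).
model_preds = [
    [1, 0, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
]
y_true = [1, 0, 1, 1, 0]
y_pred = majority_vote(model_preds)
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
```

A soft-voting variant, which the study's classifier may well use, would average predicted probabilities across base models instead of counting hard votes.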

Evolving techniques in the endoscopic evaluation and management of pancreas cystic lesions.

Maloof T, Karaisz F, Abdelbaki A, Perumal KD, Krishna SG

PubMed · Jul 17 2025
Accurate diagnosis of pancreatic cystic lesions (PCLs) is essential to guide appropriate management and reduce unnecessary surgeries. Despite multiple guidelines in PCL management, a substantial proportion of patients still undergo major resections for benign cysts, and a majority of resected intraductal papillary mucinous neoplasms (IPMNs) show only low-grade dysplasia, leading to significant clinical, financial, and psychological burdens. This review highlights emerging endoscopic approaches that enhance diagnostic accuracy and support organ-sparing, minimally invasive management of PCLs. Recent studies suggest that endoscopic ultrasound (EUS) and its accessory techniques, such as contrast-enhanced EUS and needle-based confocal laser endomicroscopy, as well as next-generation sequencing analysis of cyst fluid, not only accurately characterize PCLs but are also well tolerated and cost-effective. Additionally, emerging therapeutics such as EUS-guided radiofrequency ablation (RFA) and EUS-chemoablation are promising as minimally invasive treatments for high-risk mucinous PCLs in patients who are not candidates for surgery. Accurate diagnosis of PCLs remains challenging, leading to many patients undergoing unnecessary surgery. Emerging endoscopic imaging biomarkers, artificial intelligence analysis, and molecular biomarkers enhance diagnostic precision. Additionally, novel endoscopic ablative therapies offer safe, minimally invasive, organ-sparing treatment options, thereby reducing the healthcare resource burdens associated with overtreatment.

Multimodal Large Language Model With Knowledge Retrieval Using Flowchart Embedding for Forming Follow-Up Recommendations for Pancreatic Cystic Lesions.

Zhu Z, Liu J, Hong CW, Houshmand S, Wang K, Yang Y

PubMed · Jul 16 2025
BACKGROUND. The American College of Radiology (ACR) Incidental Findings Committee (IFC) algorithm provides guidance for pancreatic cystic lesion (PCL) management. Its implementation using plain-text large language model (LLM) solutions is challenging given that key components include multimodal data (e.g., figures and tables). OBJECTIVE. The purpose of the study is to evaluate a multimodal LLM approach incorporating knowledge retrieval using flowchart embedding for forming follow-up recommendations for PCL management. METHODS. This retrospective study included patients who underwent abdominal CT or MRI from September 1, 2023, to September 1, 2024, and whose report mentioned a PCL. The reports' Findings sections were inputted to a multimodal LLM (GPT-4o). For task 1 (198 patients: mean age, 69.0 ± 13.0 [SD] years; 110 women, 88 men), the LLM assessed PCL features (presence of PCL, PCL size and location, presence of main pancreatic duct communication, presence of worrisome features or high-risk stigmata) and formed a follow-up recommendation using three knowledge retrieval methods (default knowledge, plain-text retrieval-augmented generation [RAG] from the ACR IFC algorithm PDF document, and flowchart embedding using the LLM's image-to-text conversion for in-context integration of the document's flowcharts and tables). For task 2 (85 patients: mean initial age, 69.2 ± 10.8 years; 48 women, 37 men), an additional relevant prior report was inputted; the LLM assessed for interval PCL change and provided an adjusted follow-up schedule accounting for prior imaging using flowchart embedding. Three radiologists assessed LLM accuracy in task 1 for PCL findings in consensus and follow-up recommendations independently; one radiologist assessed accuracy in task 2. RESULTS. For task 1, the LLM with flowchart embedding had accuracy for PCL features of 98.0-99.0%. The accuracy of the LLM follow-up recommendations based on default knowledge, plain-text RAG, and flowchart embedding for radiologist 1 was 42.4%, 23.7%, and 89.9% (p < .001), respectively; for radiologist 2 was 39.9%, 24.2%, and 91.9% (p < .001); and for radiologist 3 was 40.9%, 25.3%, and 91.9% (p < .001). For task 2, the LLM using flowchart embedding showed an accuracy for interval PCL change of 96.5% and for adjusted follow-up schedules of 81.2%. CONCLUSION. Multimodal flowchart embedding aided the LLM's automated provision of follow-up recommendations adherent to a clinical guidance document. CLINICAL IMPACT. The framework could be extended to other incidental findings through the use of other clinical guidance documents as the model input.
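The "flowchart embedding" retrieval method passes the guidance document's flowcharts and tables to the multimodal LLM as in-context images rather than as extracted plain text. A minimal sketch of how such a request payload might be assembled, using the common chat-completions message convention; the field layout, prompt wording, and `build_flowchart_prompt` helper are illustrative assumptions, not the authors' code:

```python
# Sketch: embed a guideline flowchart image alongside report text in a
# multimodal chat payload. No API call is made here; we only build the dict.

import base64

def build_flowchart_prompt(findings_text, flowchart_png_bytes):
    """Pack report findings plus a flowchart image into one chat request."""
    flowchart_b64 = base64.b64encode(flowchart_png_bytes).decode("ascii")
    return {
        "model": "gpt-4o",  # the model used in the study
        "messages": [
            {"role": "system",
             "content": "Follow the attached ACR IFC flowchart to recommend "
                        "follow-up for the pancreatic cystic lesion described."},
            {"role": "user",
             "content": [
                 {"type": "text", "text": findings_text},
                 {"type": "image_url",
                  "image_url": {"url": f"data:image/png;base64,{flowchart_b64}"}},
             ]},
        ],
    }

payload = build_flowchart_prompt(
    "2.1-cm cyst in the pancreatic body, no main duct communication.",
    b"\x89PNG...")  # placeholder bytes standing in for the flowchart image
```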

Cross-Modal Conditional Latent Diffusion Model for Brain MRI to Ultrasound Image Translation.

Jiang S, Wang L, Li Y, Yang Z, Zhou Z, Li B

PubMed · Jul 16 2025
Intraoperative brain ultrasound (US) provides real-time information on lesions and tissues, making it crucial for brain tumor resection. However, due to limitations such as imaging angles and operator techniques, US data is limited in size and difficult to annotate, hindering advancements in intelligent image processing. In contrast, Magnetic Resonance Imaging (MRI) data is more abundant and easier to annotate. If MRI data and models can be effectively transferred to the US domain, generating high-quality US data would greatly enhance US image processing and improve intraoperative US readability.
Approach. We propose a Cross-Modal Conditional Latent Diffusion Model (CCLD) for brain MRI-to-US image translation. We employ a noise mask restoration strategy to pretrain an efficient encoder-decoder, enhancing feature extraction, compression, and reconstruction capabilities while reducing computational costs. Furthermore, CCLD integrates the Frequency-Decomposed Feature Optimization Module (FFOM) and the Adaptive Multi-Frequency Feature Fusion Module (AMFM) to effectively leverage MRI structural information and US texture characteristics, ensuring structural accuracy while enhancing texture details in the synthetic US images.
Main results. Compared with state-of-the-art methods, our approach achieves superior performance on the ReMIND dataset, obtaining the best Learned Perceptual Image Patch Similarity (LPIPS) score of 19.1% and Mean Absolute Error (MAE) of 4.21%, as well as the highest Peak Signal-to-Noise Ratio (PSNR) of 25.36 dB and Structural Similarity Index (SSIM) of 86.91%.
Significance. Experimental results demonstrate that CCLD effectively improves the quality and realism of synthetic ultrasound images, offering a new research direction for the generation of high-quality US datasets and the enhancement of ultrasound image readability.
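The MAE and PSNR figures reported above follow standard definitions. A minimal pure-Python sketch of both metrics over flattened 8-bit pixel lists; the toy arrays are illustrative, not ReMIND data:

```python
# MAE and PSNR for image-translation quality, computed over flattened
# pixel lists. Toy values only.

import math

def mae(a, b):
    """Mean absolute error between two equal-length pixel sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images by default."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [100, 120, 130, 140]  # reference ultrasound pixels (toy)
gen = [102, 118, 133, 139]  # synthetic pixels (toy)
```

SSIM and LPIPS, the study's other two metrics, are structural and learned-perceptual measures that need windowed statistics and a pretrained network respectively, so they are omitted from this sketch.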

Illuminating radiogenomic signatures in pediatric-type diffuse gliomas: insights into molecular, clinical, and imaging correlations. Part II: low-grade group.

Kurokawa R, Hagiwara A, Ito R, Ueda D, Saida T, Sakata A, Nishioka K, Sugawara S, Takumi K, Watabe T, Ide S, Kawamura M, Sofue K, Hirata K, Honda M, Yanagawa M, Oda S, Iima M, Naganawa S

PubMed · Jul 16 2025
The fifth edition of the World Health Organization classification of central nervous system tumors represents a significant advancement in the molecular-genetic classification of pediatric-type diffuse gliomas. This article comprehensively summarizes the clinical, molecular, and radiological imaging features of pediatric-type low-grade gliomas (pLGGs), including MYB- or MYBL1-altered tumors, polymorphous low-grade neuroepithelial tumor of the young (PLNTY), and diffuse low-grade glioma, MAPK pathway-altered. Most pLGGs harbor alterations in the RAS/MAPK pathway, functioning as a "one-pathway disease". Specific magnetic resonance imaging features, such as the T2-fluid-attenuated inversion recovery (FLAIR) mismatch sign in MYB- or MYBL1-altered tumors and the transmantle-like sign in PLNTYs, may serve as non-invasive biomarkers for underlying molecular alterations. Recent advances in radiogenomics have enabled the differentiation of BRAF fusion from BRAF V600E mutant tumors based on magnetic resonance imaging characteristics. Machine learning approaches have further enhanced our ability to predict molecular subtypes from imaging features. These radiology-molecular correlations offer potential clinical utility in treatment planning and prognostication, especially as targeted therapies against the MAPK pathway emerge. Continued research is needed to refine our understanding of genotype-phenotype correlations in less common molecular alterations and to validate these imaging biomarkers in larger cohorts.

Late gadolinium enhancement imaging and sudden cardiac death.

Prasad SK, Akbari T, Bishop MJ, Halliday BP, Leyva-Leon F, Marchlinski F

PubMed · Jul 16 2025
The prediction and management of sudden cardiac death risk continue to pose significant challenges in cardiovascular care despite advances in therapies over the last two decades. Late gadolinium enhancement (LGE) on cardiac magnetic resonance-a marker of myocardial fibrosis-is a powerful non-invasive tool with the potential to aid the prediction of sudden death and direct the use of preventative therapies in several cardiovascular conditions. In this state-of-the-art review, we provide a critical appraisal of the current evidence base underpinning the utility of LGE in both ischaemic and non-ischaemic cardiomyopathies together with a focus on future perspectives and the role for machine learning and digital twin technologies.

An end-to-end interpretable machine-learning-based framework for early-stage diagnosis of gallbladder cancer using multi-modality medical data.

Zhao H, Miao C, Zhu Y, Shu Y, Wu X, Yin Z, Deng X, Gong W, Yang Z, Zou W

PubMed · Jul 16 2025
The accurate early-stage diagnosis of gallbladder cancer (GBC) is regarded as one of the major challenges in the field of oncology. However, few studies have focused on the comprehensive classification of GBC based on multiple modalities. This study aims to develop a comprehensive diagnostic framework for GBC based on both imaging and non-imaging medical data. This retrospective study reviewed 298 participants (patients with gallbladder disease or volunteers) imaged on two devices. A novel end-to-end interpretable diagnostic framework for GBC is proposed to handle multiple medical modalities, including CT imaging, demographics, tumor markers, coagulation function tests, and routine blood tests. To achieve better feature extraction and fusion of the imaging modality, a novel global-hybrid-local network, namely GHL-Net, has also been developed. An ensemble learning strategy is employed to fuse multi-modality data and obtain the final classification result. In addition, two interpretability methods are applied to help clinicians understand the model-based decisions. Model performance was evaluated through accuracy, precision, specificity, sensitivity, F1-score, area under the curve (AUC), and Matthews correlation coefficient (MCC). In both binary and multi-class classification scenarios, the proposed method outperformed the comparison methods on both datasets. In the binary classification scenario in particular, the proposed method achieved the highest accuracy, sensitivity, specificity, precision, F1-score, ROC-AUC, PR-AUC, and MCC of 95.24%, 93.55%, 96.87%, 96.67%, 95.08%, 0.9591, 0.9636, and 0.9051, respectively. The visualizations produced by the interpretability methods also demonstrated high clinical relevance of the intermediate decision-making processes. Ablation studies then provided an in-depth understanding of our methodology.
The machine learning-based framework can effectively improve the accuracy of GBC diagnosis and is expected to have a more significant impact in other cancer diagnosis scenarios.

From Referral to Reporting: The Potential of Large Language Models in the Radiological Workflow.

Fink A, Rau S, Kästingschäfer K, Weiß J, Bamberg F, Russe MF

PubMed · Jul 16 2025
Large language models (LLMs) hold great promise for optimizing and supporting radiology workflows amidst rising workloads. This review examines potential applications in daily radiology practice, as well as remaining challenges and potential solutions.
Presentation of potential applications and challenges, illustrated with practical examples and concrete optimization suggestions.
LLM-based assistance systems have potential applications in almost all language-based process steps of the radiological workflow. Significant progress has been made in areas such as report generation, particularly with retrieval-augmented generation (RAG) and multi-step reasoning approaches. However, challenges related to hallucinations, reproducibility, and data protection, as well as ethical concerns, need to be addressed before widespread implementation.
LLMs have immense potential in radiology, particularly for supporting language-based process steps, with technological advances such as RAG and cloud-based approaches potentially accelerating clinical implementation.
· LLMs can optimize reporting and other language-based processes in radiology with technologies such as RAG and multi-step reasoning approaches.
· Challenges such as hallucinations, reproducibility, privacy, and ethical concerns must be addressed before widespread adoption.
· RAG and cloud-based approaches could help overcome these challenges and advance the clinical implementation of LLMs.
· Fink A, Rau S, Kästingschäfer K et al. From Referral to Reporting: The Potential of Large Language Models in the Radiological Workflow. Rofo 2025; DOI 10.1055/a-2641-3059.
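Retrieval-augmented generation, highlighted in this review as a driver of progress in report generation, grounds the LLM by prepending retrieved guideline text to the prompt. A toy keyword-overlap retriever sketch; real systems use embedding similarity, and the guideline passages here are invented placeholders:

```python
# Toy RAG retrieval step: pick the guideline passage with the largest
# word overlap with the query, then splice it into the prompt.

def retrieve(query, passages, k=1):
    """Return the k passages sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(passages,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

guideline_passages = [
    "Incidental adrenal nodule under 1 cm needs no follow-up.",
    "Pulmonary nodule 6-8 mm: CT follow-up at 6-12 months.",
]
context = retrieve("follow-up for 7 mm pulmonary nodule", guideline_passages)
prompt = "Context:\n" + "\n".join(context) + "\n\nDraft the recommendation."
```

Grounding the model in retrieved source text in this way is one of the main mitigations the review mentions for hallucination.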

LRMR: LLM-Driven Relational Multi-node Ranking for Lymph Node Metastasis Assessment in Rectal Cancer

Yaoxian Dong, Yifan Gao, Haoyue Li, Yanfen Cui, Xin Gao

arXiv preprint · Jul 15 2025
Accurate preoperative assessment of lymph node (LN) metastasis in rectal cancer guides treatment decisions, yet conventional MRI evaluation based on morphological criteria shows limited diagnostic performance. While some artificial intelligence models have been developed, they often operate as black boxes, lacking the interpretability needed for clinical trust. Moreover, these models typically evaluate nodes in isolation, overlooking the patient-level context. To address these limitations, we introduce LRMR, an LLM-Driven Relational Multi-node Ranking framework. This approach reframes the diagnostic task from a direct classification problem into a structured reasoning and ranking process. The LRMR framework operates in two stages. First, a multimodal large language model (LLM) analyzes a composite montage image of all LNs from a patient, generating a structured report that details ten distinct radiological features. Second, a text-based LLM performs pairwise comparisons of these reports between different patients, establishing a relative risk ranking based on the severity and number of adverse features. We evaluated our method on a retrospective cohort of 117 rectal cancer patients. LRMR achieved an area under the curve (AUC) of 0.7917 and an F1-score of 0.7200, outperforming a range of deep learning baselines, including ResNet50 (AUC 0.7708). Ablation studies confirmed the value of our two main contributions: removing the relational ranking stage or the structured prompting stage led to a significant performance drop, with AUCs falling to 0.6875 and 0.6458, respectively. Our work demonstrates that decoupling visual perception from cognitive reasoning through a two-stage LLM framework offers a powerful, interpretable, and effective new paradigm for assessing lymph node metastasis in rectal cancer.
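The second LRMR stage turns pairwise LLM comparisons into a patient-level risk ranking. One simple aggregation, counting pairwise "higher-risk" wins, can be sketched as follows; `compare` stands in for the text-based LLM judge, and the toy feature counts and win-counting rule are illustrative assumptions, not the paper's exact aggregation:

```python
# Aggregate pairwise comparisons into a ranking by counting wins.

from itertools import combinations

def rank_by_pairwise_wins(patients, compare):
    """compare(a, b) returns the patient judged higher risk of the pair."""
    wins = {p: 0 for p in patients}
    for a, b in combinations(patients, 2):
        wins[compare(a, b)] += 1
    return sorted(patients, key=lambda p: wins[p], reverse=True)

# Toy stand-in for the LLM judge: risk encoded as a count of adverse
# radiological features per patient.
features = {"pt1": 2, "pt2": 5, "pt3": 0}
ranking = rank_by_pairwise_wins(
    list(features),
    lambda a, b: a if features[a] >= features[b] else b)
```

For n patients this needs n(n-1)/2 comparisons, which is tractable for cohort-scale ranking but motivates caching or tournament-style schemes at larger n.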

Generative AI enables medical image segmentation in ultra low-data regimes.

Zhang L, Jindal B, Alaa A, Weinreb R, Wilson D, Segal E, Zou J, Xie P

PubMed · Jul 14 2025
Semantic segmentation of medical images is pivotal in applications like disease diagnosis and treatment planning. While deep learning automates this task effectively, it struggles in ultra low-data regimes due to the scarcity of annotated segmentation masks. To address this, we propose a generative deep learning framework that produces high-quality image-mask pairs as auxiliary training data. Unlike traditional generative models that separate data generation from model training, ours uses multi-level optimization for end-to-end data generation. This allows segmentation performance to guide the generation process, producing data tailored to improve segmentation outcomes. Our method demonstrates strong generalization across 11 medical image segmentation tasks and 19 datasets, covering various diseases, organs, and modalities. It improves performance by 10-20% (absolute) in both same- and out-of-domain settings and requires 8-20 times less training data than existing approaches. This greatly enhances the feasibility and cost-effectiveness of deep learning in data-limited medical imaging scenarios.
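The multi-level optimization idea — letting downstream segmentation performance steer the data generator — can be illustrated with a toy scalar version: an outer loop adjusts a single generator "knob" to minimize the validation loss returned by a stand-in inner training routine. All functions and constants below are illustrative assumptions, not the paper's algorithm:

```python
# Toy bilevel loop: the outer variable (a generator setting) is updated by
# gradient descent on the loss produced after "inner training".

def inner_train(aug_strength):
    """Stand-in for training a segmenter on generated data; returns val loss.
    Pretend the best generator setting is 0.6."""
    return (aug_strength - 0.6) ** 2 + 0.1

def outer_optimize(lr=0.1, steps=50, eps=1e-3):
    """Descend the outer loss via a finite-difference gradient estimate."""
    knob = 0.0
    for _ in range(steps):
        grad = (inner_train(knob + eps) - inner_train(knob - eps)) / (2 * eps)
        knob -= lr * grad
    return knob

best = outer_optimize()  # converges toward the loss-minimizing setting, 0.6
```

In the actual framework the "knob" is the generative model's parameters and the inner loop is full segmenter training, with gradients propagated through the nested optimization rather than estimated by finite differences.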
