A Pan-Organ Vision-Language Model for Generalizable 3D CT Representations.

Beeche C, Kim J, Tavolinejad H, Zhao B, Sharma R, Duda J, Gee J, Dako F, Verma A, Morse C, Hou B, Shen L, Sagreiya H, Davatzikos C, Damrauer S, Ritchie MD, Rader D, Long Q, Chen T, Kahn CE, Chirinos J, Witschey WR

PubMed · Jul 3 2025
Generalizable foundation models for computed tomographic (CT) medical imaging data are emerging AI tools anticipated to vastly improve clinical workflow efficiency. However, existing models are typically trained within narrow imaging contexts, including limited anatomical coverage, contrast settings, and clinical indications. These constraints reduce their ability to generalize across the broad spectrum of real-world presentations encountered in volumetric CT imaging data. We introduce Percival, a vision-language foundation model trained on over 400,000 CT volumes and paired radiology reports from more than 50,000 participants enrolled in the Penn Medicine BioBank. Percival employs a dual-encoder architecture with a transformer-based image encoder and a BERT-style language encoder, aligned via symmetric contrastive learning. Percival was validated on imaging data from over 20,000 participants, encompassing over 100,000 CT volumes. In image-text recall tasks, Percival outperforms models trained on limited anatomical windows. To assess Percival's clinical knowledge, we evaluated its biologic, phenotypic, and prognostic relevance using laboratory-wide and phenome-wide association studies and survival analyses, uncovering a rich latent structure aligned with physiological measurements and disease phenotypes.
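
Symmetric contrastive alignment of a dual-encoder model is the objective popularized by CLIP. Purely as an illustration of that technique, not Percival's published code, the sketch below assumes precomputed image and text embeddings and an arbitrary temperature of 0.07:

```python
# Minimal sketch of symmetric contrastive alignment between image and text
# embeddings, CLIP-style. Shapes and the temperature are illustrative
# assumptions, not details taken from the paper.
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(img_emb: torch.Tensor,
                               txt_emb: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    # L2-normalize so dot products are cosine similarities.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    # Pairwise similarity matrix: logits[i, j] = sim(image_i, text_j).
    logits = img_emb @ txt_emb.t() / temperature
    # Matched image-report pairs sit on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    # Cross-entropy in both directions (image->text and text->image).
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Example: a batch of 8 paired CT-volume and report embeddings (dim 512).
loss = symmetric_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```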

Differentiated thyroid cancer and positron emission computed tomography: when, how and why?

Coca Pelaz A, Rodrigo JP, Zafereo M, Nixon I, Guntinas-Lichius O, Randolph G, Civantos FJ, Pace-Asciak P, Jara MA, Kuker R, Ferlito A

PubMed · Jul 3 2025
Fluorodeoxyglucose positron emission tomography/computed tomography (FDG PET/CT) has become an indispensable tool in oncology, offering both metabolic and anatomical insights into tumor behavior. Most differentiated thyroid carcinomas (DTC) are indolent, and FDG PET/CT is therefore not routinely incorporated into management. However, in biologically aggressive DTCs, FDG PET/CT plays a crucial role in detecting recurrence and metastases. This narrative review, based on articles from the last 25 years in the PubMed database, explores the evolving role of FDG PET/CT, focusing on its utility in recurrence detection, staging, and follow-up of radioactive iodine (RAI)-refractory cases. Current guidelines recommend FDG PET/CT primarily for high-risk patients with elevated thyroglobulin levels and negative RAI scans (TENIS syndrome). We also examine advancements in PET imaging, novel radiotracers, and theragnostic approaches that enhance diagnostic accuracy and treatment monitoring. While FDG PET/CT has proven valuable in biologically aggressive DTC, its routine use remains limited by cost, accessibility, and concerns regarding radiation exposure in younger patients requiring repeated imaging studies. Future developments in molecular imaging, including novel tracers and artificial intelligence-driven analysis, are expected to refine its role, leading to more personalized and effective management, though economic and reimbursement challenges remain important considerations for broader adoption.

Interpretable and generalizable deep learning model for preoperative assessment of microvascular invasion and outcome in hepatocellular carcinoma based on MRI: a multicenter study.

Dong X, Jia X, Zhang W, Zhang J, Xu H, Xu L, Ma C, Hu H, Luo J, Zhang J, Wang Z, Ji W, Yang D, Yang Z

PubMed · Jul 3 2025
This study aimed to develop an interpretable, domain-generalizable deep learning model for microvascular invasion (MVI) assessment in hepatocellular carcinoma (HCC). Utilizing a retrospective dataset of 546 HCC patients from five centers, we developed and validated a clinical-radiological model and deep learning models for MVI prediction. The models were developed on a dataset of 263 cases from three centers, internally validated on a set of 66 patients, and externally tested on two independent sets. An adversarial network-based deep learning (AD-DL) model was developed to learn domain-invariant features from the multiple centers within the training set. The area under the receiver operating characteristic curve (AUC) was calculated against pathological MVI status. With the best-performing model, early recurrence-free survival (ERFS) stratification was validated on the external test set by the log-rank test, and the differentially expressed genes (DEGs) associated with MVI status were examined through RNA-sequencing analysis of the Cancer Imaging Archive. The AD-DL model demonstrated the highest diagnostic performance and generalizability, with an AUC of 0.793 in the internal test set, 0.801 in external test set 1, and 0.773 in external test set 2. The model's prediction of MVI status also demonstrated a significant correlation with ERFS (p = 0.048). DEGs associated with MVI status were primarily enriched in metabolic processes, the Wnt signaling pathway, and the epithelial-mesenchymal transition process. The AD-DL model allows preoperative MVI prediction and ERFS stratification in HCC patients, with good generalizability and biological interpretability. The adversarial network-based deep learning model predicts MVI status well in HCC patients and demonstrates good generalizability. By integrating bioinformatics analysis of the model's predictions, it achieves biological interpretability, facilitating its clinical translation. Current MVI assessment models for HCC lack interpretability and generalizability. The adversarial network-based model's performance surpassed the clinical-radiological and squeeze-and-excitation network-based models. Biological function analysis was employed to enhance the interpretability and clinical translatability of the adversarial network-based model.
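
The abstract does not spell out the adversarial training mechanics. One standard way to learn domain-invariant (here, center-invariant) features is gradient reversal (Ganin & Lempitsky, 2015); the sketch below shows that generic technique with invented layer sizes and input dimensions, not the published AD-DL architecture:

```python
# Sketch of adversarial domain-invariant feature learning via gradient
# reversal. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the features.
        return -ctx.lam * grad_output, None

class AdversarialMVIModel(nn.Module):
    def __init__(self, in_dim=512, feat_dim=128, n_centers=3, lam=1.0):
        super().__init__()
        self.lam = lam
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.mvi_head = nn.Linear(feat_dim, 1)             # MVI present/absent
        self.center_head = nn.Linear(feat_dim, n_centers)  # which center

    def forward(self, x):
        f = self.features(x)
        mvi_logit = self.mvi_head(f)
        # The center classifier trains normally, but the reversed gradient
        # pushes the feature extractor toward center-invariant features.
        center_logit = self.center_head(GradReverse.apply(f, self.lam))
        return mvi_logit, center_logit
```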

CareAssist-GPT improves patient user experience with a patient-centered approach to computer-aided diagnosis.

Algarni A

PubMed · Jul 2 2025
The rapid integration of artificial intelligence (AI) into healthcare has enhanced diagnostic accuracy; however, patient engagement and satisfaction remain significant challenges that hinder the widespread acceptance and effectiveness of AI-driven clinical tools. This study introduces CareAssist-GPT, a novel AI-assisted diagnostic model designed to improve both diagnostic accuracy and the patient experience through real-time, understandable, and empathetic communication. CareAssist-GPT combines high-resolution X-ray images, real-time physiological vital signs, and clinical notes within a unified predictive framework using deep learning. Feature extraction is performed using convolutional neural networks (CNNs), gated recurrent units (GRUs), and transformer-based NLP modules. Model performance was evaluated in terms of accuracy, precision, recall, specificity, and response time, alongside patient satisfaction through a structured user feedback survey. CareAssist-GPT achieved a diagnostic accuracy of 95.8%, improving by 2.4% over conventional models. It reported high precision (94.3%), recall (93.8%), and specificity (92.7%), with an AUC-ROC of 0.97. The system responded within 500 ms (23.1% faster than existing tools) and achieved a patient satisfaction score of 9.3 out of 10, demonstrating its real-time usability and communicative effectiveness. CareAssist-GPT significantly enhances the diagnostic process by improving accuracy and fostering patient trust through transparent, real-time explanations. These findings position it as a promising patient-centered AI solution capable of transforming healthcare delivery by bridging the gap between advanced diagnostics and human-centered communication.
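
One plausible reading of the described architecture is late fusion: a CNN over X-ray images, a GRU over vital-sign series, and a transformer encoder over tokenized notes, concatenated before a classification head. Everything below (dimensions, module sizes, and the fusion-by-concatenation choice) is an assumption rather than the published CareAssist-GPT design:

```python
# Hedged sketch of a tri-modal fusion model (image CNN + vitals GRU +
# text transformer encoder), combined by concatenation.
import torch
import torch.nn as nn

class TriModalDiagnosisModel(nn.Module):
    def __init__(self, vocab_size=30000, n_classes=2):
        super().__init__()
        # CNN branch for single-channel X-ray images.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())            # -> (B, 32)
        # GRU branch for vital-sign time series (3 signals per timestep,
        # e.g. heart rate, blood pressure, SpO2 -- an assumption).
        self.gru = nn.GRU(input_size=3, hidden_size=32, batch_first=True)
        # Transformer branch for tokenized clinical notes.
        self.embed = nn.Embedding(vocab_size, 64)
        layer = nn.TransformerEncoderLayer(d_model=64, nhead=4,
                                           batch_first=True)
        self.text_enc = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(32 + 32 + 64, n_classes)

    def forward(self, image, vitals, tokens):
        img_f = self.cnn(image)                               # (B, 32)
        _, h = self.gru(vitals)                               # (1, B, 32)
        vit_f = h.squeeze(0)                                  # (B, 32)
        txt_f = self.text_enc(self.embed(tokens)).mean(dim=1) # (B, 64)
        return self.head(torch.cat([img_f, vit_f, txt_f], dim=-1))
```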

Artificial Intelligence-Driven Cancer Diagnostics: Enhancing Radiology and Pathology through Reproducibility, Explainability, and Multimodality.

Khosravi P, Fuchs TJ, Ho DJ

PubMed · Jul 2 2025
The integration of artificial intelligence (AI) in cancer research has significantly advanced radiology, pathology, and multimodal approaches, offering unprecedented capabilities in image analysis, diagnosis, and treatment planning. AI techniques provide standardized assistance for diagnostic and predictive tasks that clinicians otherwise conduct manually, a practice that suffers from low reproducibility. These AI methods can additionally provide explainability to help clinicians make the best decisions for patient care. This review explores state-of-the-art AI methods, focusing on their application in image classification, image segmentation, multiple instance learning, generative models, and self-supervised learning. In radiology, AI enhances tumor detection, diagnosis, and treatment planning through advanced imaging modalities and real-time applications. In pathology, AI-driven image analysis improves cancer detection, biomarker discovery, and diagnostic consistency. Multimodal AI approaches can integrate data from radiology, pathology, and genomics to provide comprehensive diagnostic insights. Emerging trends, challenges, and future directions in AI-driven cancer research are discussed, emphasizing the transformative potential of these technologies in improving patient outcomes and advancing cancer care. This article is part of a special series: Driving Cancer Discoveries with Computational Research, Data Science, and Machine Learning/AI.
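
Of the method families named above, multiple instance learning is perhaps the least self-explanatory: one slide-level label supervises a whole bag of unlabeled patch embeddings. A minimal attention-pooling sketch in the style of Ilse et al. (2018), with illustrative dimensions, follows:

```python
# Illustrative attention-based multiple instance learning (MIL) pooling:
# a whole-slide label is predicted from a bag of patch embeddings.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))                    # one score per patch
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patches):                      # (n_patches, feat_dim)
        weights = torch.softmax(self.attn(patches), dim=0)   # (n, 1)
        slide_emb = (weights * patches).sum(dim=0)           # (feat_dim,)
        # Attention weights double as patch-level explanations.
        return self.classifier(slide_emb), weights

# Example: 1000 patch embeddings from one slide.
logits, attn = AttentionMIL()(torch.randn(1000, 512))
```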

A multi-modal graph-based framework for Alzheimer's disease detection.

Mashhadi N, Marinescu R

PubMed · Jul 2 2025
We propose a compositional graph-based Machine Learning (ML) framework for Alzheimer's disease (AD) detection that constructs complex ML predictors from modular components. In our directed computational graph, datasets are represented as nodes and deep learning (DL) models as directed edges, allowing us to model complex image-processing pipelines, i.e., directed paths through the graph, as end-to-end DL predictors. Each directed path in the graph functions as a DL predictor, supporting both forward propagation for transforming data representations and backpropagation for model fine-tuning, saliency map computation, and input data optimization. We demonstrate our model on Alzheimer's disease prediction, a complex problem that requires integrating multimodal data containing scans of different modalities and contrasts, genetic data, and cognitive tests. We built a graph of 11 nodes (data) and 14 edges (ML models), where each model has been trained to handle a specific task (e.g. skull-stripping MRI scans, AD detection, image-to-image translation, ...). By using a modular and adaptive approach, our framework effectively integrates diverse data types, handles distribution shifts, scales to arbitrary complexity, and remains accurate even when modalities are missing, offering a practical tool for advancing Alzheimer's disease diagnosis and potentially other complex medical prediction tasks.
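
To make the compositional idea concrete, here is a toy sketch in which callables attached to graph edges are composed along a path. The node names, edge functions, and use of networkx are invented for illustration; identity and constant functions stand in for trained networks:

```python
# Toy sketch: data representations are nodes, models are directed edges,
# and any path through the graph composes into a single predictor.
import networkx as nx

graph = nx.DiGraph()
# Edges carry callables (stand-ins for trained DL models).
graph.add_edge("raw_mri", "skullstripped_mri", model=lambda x: x)
graph.add_edge("skullstripped_mri", "ad_score", model=lambda x: 0.5)

def predict(graph, source, target, x):
    """Run the input through every model along a source-to-target path."""
    path = nx.shortest_path(graph, source, target)
    for u, v in zip(path, path[1:]):
        x = graph.edges[u, v]["model"](x)
    return x

score = predict(graph, "raw_mri", "ad_score", x="volume_placeholder")
```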

Robust brain age estimation from structural MRI with contrastive learning

Carlo Alberto Barbano, Benoit Dufumier, Edouard Duchesnay, Marco Grangetto, Pietro Gori

arXiv preprint · Jul 2 2025
Estimating brain age from structural MRI has emerged as a powerful tool for characterizing normative and pathological aging. In this work, we explore contrastive learning as a scalable and robust alternative to supervised approaches for brain age estimation. We introduce a novel contrastive loss function, $\mathcal{L}^{exp}$, and evaluate it across multiple public neuroimaging datasets comprising over 20,000 scans. Our experiments reveal four key findings. First, scaling pre-training on diverse, multi-site data consistently improves generalization performance, cutting external mean absolute error (MAE) nearly in half. Second, $\mathcal{L}^{exp}$ is robust to site-related confounds, maintaining low scanner-predictability as training size increases. Third, contrastive models reliably capture accelerated aging in patients with cognitive impairment and Alzheimer's disease, as shown through brain age gap analysis, ROC curves, and longitudinal trends. Lastly, unlike supervised baselines, $\mathcal{L}^{exp}$ maintains a strong correlation between brain age accuracy and downstream diagnostic performance, supporting its potential as a foundation model for neuroimaging. These results position contrastive learning as a promising direction for building generalizable and clinically meaningful brain representations.
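
The exact form of $\mathcal{L}^{exp}$ is defined in the paper. Purely to illustrate the broader idea of age-aware contrastive learning, the sketch below softly treats pairs with similar ages as positives via an exponential kernel; the kernel, bandwidth, and temperature are assumptions, not the paper's loss:

```python
# Illustrative age-aware contrastive loss: pairs are weighted by an
# exponential kernel on age difference (an assumption for this sketch).
import torch
import torch.nn.functional as F

def age_aware_contrastive(emb, ages, sigma=2.0, temperature=0.1):
    emb = F.normalize(emb, dim=-1)
    sim = emb @ emb.t() / temperature                   # (B, B)
    # Kernel: pairs with close ages count (softly) as positives.
    w = torch.exp(-(ages[:, None] - ages[None, :]).abs() / sigma)
    eye = torch.eye(len(ages), dtype=torch.bool, device=emb.device)
    w = w.masked_fill(eye, 0.0)                         # drop self-pairs
    log_p = sim - torch.logsumexp(sim.masked_fill(eye, float("-inf")),
                                  dim=1, keepdim=True)
    return -(w * log_p).sum(dim=1).div(w.sum(dim=1).clamp_min(1e-8)).mean()

# Example: 16 scan embeddings with ages between 0 and 80 years.
loss = age_aware_contrastive(torch.randn(16, 128), torch.rand(16) * 80)
```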

Retrieval-augmented generation elevates local LLM quality in radiology contrast media consultation.

Wada A, Tanaka Y, Nishizawa M, Yamamoto A, Akashi T, Hagiwara A, Hayakawa Y, Kikuta J, Shimoji K, Sano K, Kamagata K, Nakanishi A, Aoki S

PubMed · Jul 2 2025
Large language models (LLMs) demonstrate significant potential in healthcare applications, but clinical deployment is limited by privacy concerns and insufficient medical domain training. This study investigated whether retrieval-augmented generation (RAG) can improve a locally deployable LLM for radiology contrast media consultation. In 100 synthetic iodinated contrast media consultations, we compared Llama 3.2-11B (baseline and RAG) with three cloud-based models: GPT-4o mini, Gemini 2.0 Flash, and Claude 3.5 Haiku. A blinded radiologist ranked the five replies per case, and three LLM-based judges scored accuracy, safety, structure, tone, applicability, and latency. Under controlled conditions, RAG eliminated hallucinations (0% vs 8%; Yates-corrected χ² = 6.38, p = 0.012) and improved mean rank by 1.3 (Z = -4.82, p < 0.001), though performance gaps with cloud models persist. The RAG-enhanced model remained faster (2.6 s vs 4.9-7.3 s), and the LLM-based judges preferred it over GPT-4o mini, though the radiologist ranked GPT-4o mini higher. RAG thus provides meaningful improvements for local clinical LLMs while maintaining the privacy benefits of on-premise deployment.
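
The RAG pattern under evaluation reduces to three steps: embed the query, retrieve the most similar reference passages, and prepend them to the prompt sent to the local model. The sketch below uses a hypothetical stand-in embedding function and a two-document toy corpus; none of it is the study's implementation:

```python
# Minimal sketch of retrieval-augmented generation for contrast media
# consultation. embed() is a hypothetical stand-in for a real encoder.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding (deterministic per run; replace with a model)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

corpus = [
    "Iodinated contrast is contraindicated after a prior severe reaction.",
    "Check eGFR before giving contrast to patients with renal impairment.",
]
index = np.stack([embed(doc) for doc in corpus])

def rag_prompt(question: str, k: int = 1) -> str:
    scores = index @ embed(question)          # cosine similarity
    top = np.argsort(scores)[::-1][:k]        # k best-matching passages
    context = "\n".join(corpus[i] for i in top)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# The assembled prompt is then sent to the local LLM (e.g. Llama 3.2-11B).
print(rag_prompt("Can we give contrast to a patient with low eGFR?"))
```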

[AI-based applications in medical image computing].

Kepp T, Uzunova H, Ehrhardt J, Handels H

PubMed · Jul 2 2025
The processing of medical images plays a central role in modern diagnostics and therapy. Automated processing and analysis of medical images can efficiently accelerate clinical workflows and open new opportunities for improved patient care. However, the high variability, complexity, and varying quality of medical image data pose significant challenges. In recent years, the greatest progress in medical image analysis has been achieved through artificial intelligence (AI), particularly by using deep neural networks in the context of deep learning. These methods are successfully applied in medical image analysis, including segmentation, registration, and image synthesis. AI-based segmentation allows for the precise delineation of organs, tissues, or pathological changes. The application of AI-based image registration supports the accelerated creation of 3D planning models for complex surgeries by aligning relevant anatomical structures from different imaging modalities (e.g., CT, MRI, and PET) or time points. Generative AI methods can be used to generate additional image data for improved training of AI models, thereby expanding the potential applications of deep learning methods in medicine. Examples from radiology, ophthalmology, dermatology, and surgery are described to illustrate their practical relevance and the potential of AI in image-based diagnostics and therapy.

Large language model trained on clinical oncology data predicts cancer progression.

Zhu M, Lin H, Jiang J, Jinia AJ, Jee J, Pichotta K, Waters M, Rose D, Schultz N, Chalise S, Valleru L, Morin O, Moran J, Deasy JO, Pilai S, Nichols C, Riely G, Braunstein LZ, Li A

PubMed · Jul 2 2025
Subspecialty knowledge barriers have limited the adoption of large language models (LLMs) in oncology. We introduce Woollie, an open-source, oncology-specific LLM trained on real-world data from Memorial Sloan Kettering Cancer Center (MSK) across lung, breast, prostate, pancreatic, and colorectal cancers, with external validation using University of California, San Francisco (UCSF) data. Woollie surpasses ChatGPT in medical benchmarks and excels in eight non-medical benchmarks. Analyzing 39,319 radiology impression notes from 4002 patients, it achieved an overall area under the receiver operating characteristic curve (AUROC) of 0.97 for cancer progression prediction on MSK data, including a notable 0.98 AUROC for pancreatic cancer. On UCSF data, it achieved an overall AUROC of 0.88, excelling in lung cancer detection with an AUROC of 0.95. As the first oncology-specific LLM validated across institutions, Woollie demonstrates high accuracy and consistency across cancer types, underscoring its potential to enhance cancer progression analysis.
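
For readers less familiar with the headline metric, AUROC measures how well predicted probabilities rank true progressors above non-progressors. A minimal computation on toy placeholder values (not study data) might look like this:

```python
# Toy AUROC computation for binary progression labels; all values are
# fabricated placeholders for illustration only.
from sklearn.metrics import roc_auc_score

# 1 = progression documented in the impression note, 0 = no progression.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
# Probabilities a classifier might assign to "progression".
y_prob = [0.92, 0.10, 0.85, 0.70, 0.30, 0.05, 0.60, 0.20]

print(f"AUROC: {roc_auc_score(y_true, y_prob):.2f}")
```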