
Dieuwertje Alblas, Patryk Rygiel, Julian Suk, Kaj O. Kappe, Marieke Hofman, Christoph Brune, Kak Khee Yeung, Jelmer M. Wolterink

arXiv preprint · Jun 10 2025
Abdominal aortic aneurysms (AAAs) are progressive focal dilatations of the abdominal aorta. AAAs may rupture, with a survival rate of only 20%. Current clinical guidelines recommend elective surgical repair when the maximum AAA diameter exceeds 55 mm in men or 50 mm in women. Patients who do not meet these criteria are periodically monitored, with surveillance intervals based on the maximum AAA diameter. However, this diameter does not take into account the complex relation between the 3D AAA shape and its growth, making standardized intervals potentially unsuitable. Personalized AAA growth predictions could improve monitoring strategies. We propose an SE(3)-symmetric transformer model that predicts AAA growth directly on the vascular model surface enriched with local, multi-physical features. In contrast to other works that parameterize the AAA shape, this representation preserves the vascular surface's anatomical structure and geometric fidelity. We train our model on a longitudinal dataset of 113 computed tomography angiography (CTA) scans of 24 AAA patients acquired at irregularly sampled intervals. After training, our model predicts AAA growth to the next scan moment with a median diameter error of 1.18 mm. We further demonstrate our model's utility for identifying whether a patient will become eligible for elective repair within two years (acc = 0.93). Finally, we evaluate our model's generalization on an external validation set of 25 CTAs from 7 AAA patients from a different hospital. Our results show that local directional AAA growth prediction from the vascular surface is feasible and may contribute to personalized surveillance strategies.
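A minimal sketch of the surveillance decision described in this abstract: given a predicted maximum-diameter trajectory, flag whether the sex-specific elective-repair threshold (55 mm for men, 50 mm for women) is crossed within a two-year horizon. The growth model itself (the SE(3)-symmetric transformer) is not reproduced; the function name, trajectory values, and scan interval are illustrative assumptions.

```python
# Hypothetical eligibility check, not the paper's implementation.
# Thresholds (55 mm men, 50 mm women) are from the clinical guidelines
# cited in the abstract; everything else is illustrative.

THRESHOLD_MM = {"male": 55.0, "female": 50.0}

def eligible_within_horizon(predicted_diameters_mm, sex,
                            horizon_years=2.0, scan_interval_years=0.5):
    """True if any predicted max diameter within the horizon reaches
    the sex-specific elective-repair threshold."""
    threshold = THRESHOLD_MM[sex]
    n_scans = int(horizon_years / scan_interval_years)
    return any(d >= threshold for d in predicted_diameters_mm[:n_scans])

# Toy predicted trajectory (mm) at 6-month intervals for a male patient.
trajectory = [51.2, 52.9, 54.1, 55.6]
print(eligible_within_horizon(trajectory, "male"))  # True: 55.6 mm >= 55 mm
```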

Vivien van Veldhuizen, Vanessa Botha, Chunyao Lu, Melis Erdal Cesur, Kevin Groot Lipman, Edwin D. de Jong, Hugo Horlings, Clárisa I. Sanchez, Cees G. M. Snoek, Lodewyk Wessels, Ritse Mann, Eric Marcus, Jonas Teuwen

arXiv preprint · Jun 10 2025
Foundation models (FMs) are changing the way medical images are analyzed by learning from large collections of unlabeled data. Instead of relying on manually annotated examples, FMs are pre-trained to learn general-purpose visual features that can later be adapted to specific clinical tasks with little additional supervision. In this review, we examine how FMs are being developed and applied in pathology, radiology, and ophthalmology, drawing on evidence from over 150 studies. We explain the core components of FM pipelines, including model architectures, self-supervised learning methods, and strategies for downstream adaptation. We also review how FMs are being used in each imaging domain and compare design choices across applications. Finally, we discuss key challenges and open questions to guide future research.

Shivang Chopra, Gabriela Sanchez-Rodriguez, Lingchao Mao, Andrew J Feola, Jing Li, Zsolt Kira

arXiv preprint · Jun 10 2025
Different medical imaging modalities capture diagnostic information at varying spatial resolutions, from coarse global patterns to fine-grained localized structures. However, most existing vision-language frameworks in the medical domain apply a uniform strategy for local feature extraction, overlooking the modality-specific demands. In this work, we present MedMoE, a modular and extensible vision-language processing framework that dynamically adapts visual representation based on the diagnostic context. MedMoE incorporates a Mixture-of-Experts (MoE) module conditioned on the report type, which routes multi-scale image features through specialized expert branches trained to capture modality-specific visual semantics. These experts operate over feature pyramids derived from a Swin Transformer backbone, enabling spatially adaptive attention to clinically relevant regions. This framework produces localized visual representations aligned with textual descriptions, without requiring modality-specific supervision at inference. Empirical results on diverse medical benchmarks demonstrate that MedMoE improves alignment and retrieval performance across imaging modalities, underscoring the value of modality-specialized visual representations in clinical vision-language systems.
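A minimal sketch of the Mixture-of-Experts routing idea this abstract describes: a gate conditioned on the report type weights a set of expert branches, and the output is the gate-weighted sum of expert outputs. The dimensions, linear experts, and report-type embedding below are illustrative assumptions, not the MedMoE implementation.

```python
import numpy as np

# Toy MoE layer: each expert is a linear map; a softmax gate over a
# report-type embedding decides how to combine expert outputs.
rng = np.random.default_rng(0)
D, E = 8, 3                      # feature dim, number of experts

experts = [rng.standard_normal((D, D)) for _ in range(E)]
gate_w = rng.standard_normal((4, E))   # gate over a 4-dim report-type code

def moe_forward(features, report_type_emb):
    logits = report_type_emb @ gate_w            # (E,) gating logits
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                     # softmax over experts
    outputs = np.stack([features @ W for W in experts])  # (E, D)
    return weights @ outputs                     # gate-weighted combination

x = rng.standard_normal(D)
report_emb = np.array([1.0, 0.0, 0.0, 0.0])      # e.g. a one-hot "X-ray" code
y = moe_forward(x, report_emb)
print(y.shape)  # (8,)
```

Conditioning the gate on the report type (rather than on the image alone) is what lets each modality settle on its own experts.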

Alshomrani F

PubMed paper · Jun 10 2025
Accurate brain tumor classification is essential in neuro-oncology, as it directly informs treatment strategies and influences patient outcomes. This review comprehensively explores machine learning (ML) and deep learning (DL) models that enhance the accuracy and efficiency of brain tumor classification using medical imaging data, particularly Magnetic Resonance Imaging (MRI). As a noninvasive imaging technique, MRI plays a central role in detecting, segmenting, and characterizing brain tumors by providing detailed anatomical views that help distinguish various tumor types, including gliomas, meningiomas, and metastatic brain lesions. The review presents a detailed analysis of diverse ML approaches, from classical algorithms such as Support Vector Machines (SVM) and Decision Trees to advanced DL models, including Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and hybrid architectures that combine multiple techniques for improved performance. Through comparative analysis of recent studies across various datasets, the review evaluates these methods using metrics such as accuracy, sensitivity, specificity, and AUC-ROC, offering insights into their effectiveness and limitations. Significant challenges in the field are examined, including the scarcity of annotated datasets, computational complexity requirements, model interpretability issues, and barriers to clinical integration. The review proposes future directions to address these challenges, highlighting the potential of multi-modal imaging that combines MRI with other imaging modalities, explainable AI frameworks for enhanced model transparency, and privacy-preserving techniques for securing sensitive patient data. 
This comprehensive analysis demonstrates the transformative potential of ML and DL in advancing brain tumor diagnosis while emphasizing the necessity for continued research and innovation to overcome current limitations and ensure successful clinical implementation for improved patient care.
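The review above evaluates classifiers with accuracy, sensitivity, and specificity. A minimal sketch of those metrics from binary confusion-matrix counts (tumor present = positive class); the counts below are illustrative, not taken from any study in the review.

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate (recall)
        "specificity": tn / (tn + fp),   # true negative rate
    }

m = binary_metrics(tp=90, fp=5, tn=95, fn=10)
print(m)  # {'accuracy': 0.925, 'sensitivity': 0.9, 'specificity': 0.95}
```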

Niu X, Li Y, Wang L, Xu G

PubMed paper · Jun 10 2025
Biochemical recurrence (BCR) following radical prostatectomy (RP) for prostate cancer (PCa) is not rare, and early detection and management of BCR after surgery have been reported to improve survival in PCa. This study aimed to develop a nomogram integrating deep learning-based radiomic features and clinical parameters to predict 1-year BCR after RP and to examine the associations between radiomic scores and the tumor microenvironment (TME). In this retrospective multicenter study, two independent cohorts of patients (n = 349) who underwent RP after multiparametric magnetic resonance imaging (mpMRI) between January 2015 and January 2022 were included in the analysis. Single-cell RNA sequencing data from four prospectively enrolled participants were used to investigate the radiomic score-related TME. A 3D U-Net was trained and optimized for prostate cancer segmentation on diffusion-weighted imaging (DWI), and radiomic features of the target lesion were extracted. Predictive nomograms were developed via multivariate Cox proportional hazards regression analysis. The nomograms were assessed for discrimination, calibration, and clinical usefulness. In the development cohort, the clinical-radiomic nomogram had an AUC of 0.892 (95% confidence interval: 0.783-0.939), considerably greater than those of the radiomic signature and the clinical model. The Hosmer-Lemeshow test demonstrated that the clinical-radiomic model was well calibrated in both the development (P = 0.461) and validation (P = 0.722) cohorts. Decision curve analysis revealed that the clinical-radiomic nomogram displayed better clinical predictive usefulness than the clinical or radiomic signature alone in both cohorts. Radiomic scores were associated with significantly different TME patterns. Our study demonstrated the feasibility of a DWI-based clinical-radiomic nomogram combined with deep learning for the prediction of 1-year BCR.
The findings revealed that the radiomic score was associated with a distinctive tumor microenvironment.
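The nomogram above is assessed with decision curve analysis (DCA). A minimal sketch of the standard net-benefit calculation DCA rests on: at threshold probability p_t, net benefit = TP/N - (FP/N) * p_t / (1 - p_t). The predicted probabilities and outcomes below are illustrative, not the study's data.

```python
def net_benefit(pred_probs, outcomes, p_t):
    """Net benefit of treating all patients whose predicted risk >= p_t."""
    n = len(outcomes)
    treated = [p >= p_t for p in pred_probs]
    tp = sum(1 for t, y in zip(treated, outcomes) if t and y == 1)
    fp = sum(1 for t, y in zip(treated, outcomes) if t and y == 0)
    return tp / n - (fp / n) * p_t / (1 - p_t)

probs = [0.9, 0.7, 0.4, 0.2, 0.1]   # toy model outputs
labels = [1, 1, 0, 1, 0]            # toy observed 1-year BCR
print(round(net_benefit(probs, labels, p_t=0.3), 3))  # 0.314
```

Plotting net benefit across a range of p_t, against the "treat all" and "treat none" strategies, gives the decision curves compared in the study.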

Herwald SE, Shah P, Johnston A, Olsen C, Delbrouck JB, Langlotz CP

PubMed paper · Jun 10 2025
The Cures Act Final Rule requires that patients have real-time access to their radiology reports, which contain technical language. Our objective was to use a novel system called RadGPT, which integrates concept extraction and a large language model (LLM), to help patients understand their radiology reports. RadGPT generated 150 concept explanations and 390 question-and-answer pairs from 30 radiology report impressions dated between 2012 and 2020. The extracted concepts were used to create concept-based explanations as well as concept-based question-and-answer pairs, with questions generated using either a fixed template or an LLM. Additionally, report-based question-and-answer pairs were generated directly from the impression using an LLM without concept extraction. One board-certified radiologist and 4 radiology residents rated the material quality using a standardized rubric. Concept-based LLM-generated questions were of significantly higher quality than concept-based template-generated questions (p < 0.001). Excluding the template-based question-and-answer pairs from further analysis, nearly all (> 95%) of the RadGPT-generated materials were rated highly, with at least 50% receiving the highest possible rating from all 5 raters. No answers or explanations were rated as likely to affect the safety or effectiveness of patient care. Report-level LLM-based questions and answers were rated particularly highly, with 92% of report-level LLM-based questions and 61% of the corresponding report-level answers receiving the highest rating from all raters. The educational tool RadGPT generated high-quality explanations and question-and-answer pairs that were personalized to each radiology report, were unlikely to produce harmful explanations, and are likely to enhance patient understanding of radiology information.

Yan J, Zhou X, Zheng Q, Wang K, Gao Y, Liu F, Pan L

PubMed paper · Jun 9 2025
The present study aims to construct a random forest (RF) model based on ultrasound radiomics and elastography, offering a new approach for the differentiation of thyroid nodules (TNs). We retrospectively analyzed 152 TNs from 127 patients and developed four machine learning models. Examinations were performed using the Resona 9Pro equipped with a 15-4 MHz linear array probe, and the region of interest (ROI) was delineated with 3D Slicer. Using the RF algorithm, four models were developed based on sound touch elastography (STE) parameters, strain elastography (SE) parameters, and the selected radiomic features: the STE model, the SE model, the radiomics model, and the combined model. Decision curve analysis (DCA) was employed to assess the clinical benefit of each model, and the DeLong test was used to determine whether differences in the area under the curve (AUC) between models were statistically significant. A total of 1396 radiomic features were extracted using the PyRadiomics package; after screening, 7 radiomic features were ultimately included in the model. The AUCs of the STE, SE, radiomics, and combined models were 0.699 (95% CI: 0.570-0.828), 0.812 (95% CI: 0.683-0.941), 0.851 (95% CI: 0.739-0.964), and 0.911 (95% CI: 0.806-1.000), respectively, with the combined and radiomics models performing best. The combined model, integrating elastography and radiomics, demonstrates superior predictive accuracy compared to single models, offering a promising approach for the diagnosis of TNs.
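The models above are compared by AUC. A minimal sketch of the AUC as the Mann-Whitney U statistic: the probability that a randomly chosen malignant nodule scores higher than a randomly chosen benign one, with ties counting 0.5. The scores and labels below are illustrative, not the study's data.

```python
def auc(scores, labels):
    """AUC via pairwise comparison of positive vs. negative scores."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.95, 0.80, 0.70, 0.60, 0.40, 0.30]  # toy model outputs
labels = [1, 1, 0, 1, 0, 0]                    # toy malignant/benign labels
print(auc(scores, labels))  # 8 of 9 pairs correctly ordered: ~0.889
```

The DeLong test the study uses goes one step further, estimating the variance of this statistic so paired AUCs can be compared.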

Stephan D, Bertsch AS, Schumacher S, Puladi B, Burwinkel M, Al-Nawas B, Kämmerer PW, Thiem DG

PubMed paper · Jun 9 2025
Medical reports, particularly radiology findings, are often written for professional communication, making them difficult for patients to understand. This communication barrier can reduce patient engagement and lead to misinterpretation. Artificial intelligence (AI), especially large language models such as ChatGPT, offers new opportunities for simplifying medical documentation to improve patient comprehension. We aimed to evaluate whether AI-generated radiology reports simplified by ChatGPT improve patient understanding, readability, and communication quality compared to original AI-generated reports. In total, 3 versions of radiology reports were created using ChatGPT: an original AI-generated version (text 1), a patient-friendly, simplified version (text 2), and a further simplified and accessibility-optimized version (text 3). A total of 300 patients (n=100, 33.3% per group), excluding patients with medical education, were randomly assigned to review one text version and complete a standardized questionnaire. Readability was assessed using the Flesch Reading Ease (FRE) score and LIX indices. Both simplified texts showed significantly higher readability scores (text 1: FRE score=51.1; text 2: FRE score=55.0; and text 3: FRE score=56.4; P<.001) and lower LIX scores, indicating enhanced clarity. Text 3 had the shortest sentences, had the fewest long words, and scored best on all patient-rated dimensions. Questionnaire results revealed significantly higher ratings for texts 2 and 3 across clarity (P<.001), tone (P<.001), structure, and patient engagement. For example, patients rated the ability to understand findings without help highest for text 3 (mean 1.5, SD 0.7) and lowest for text 1 (mean 3.1, SD 1.4). Both simplified texts significantly improved patients' ability to prepare for clinical conversations and promoted shared decision-making. AI-generated simplification of radiology reports significantly enhances patient comprehension and engagement. 
These findings highlight the potential of ChatGPT as a tool to improve patient-centered communication. While promising, future research should focus on ensuring clinical accuracy and exploring applications across diverse patient populations to support equitable and effective integration of AI in health care communication.
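The study above scores readability with the Flesch Reading Ease (FRE) score and the LIX index. A minimal sketch of LIX, which needs only word and sentence counts: LIX = words/sentences + 100 * long words/words, where long words have more than six letters. (FRE additionally requires syllable counts, which are hard to sketch reliably, so it is omitted here; the sample sentence is illustrative.)

```python
import re

def lix(text):
    """LIX readability index: avg sentence length plus % of long words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    long_words = [w for w in words if len(w) > 6]
    return len(words) / len(sentences) + 100 * len(long_words) / len(words)

sample = "The scan is normal. No tumor was found."
print(lix(sample))  # 4.0: 8 words / 2 sentences, no words over 6 letters
```

Lower LIX means easier text, which is why the simplified report versions in the study score lower than the original.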