
A Deep Learning Model for Identifying the Risk of Mesenteric Malperfusion in Acute Aortic Dissection Using Initial Diagnostic Data: Algorithm Development and Validation.

Jin Z, Dong J, Li C, Jiang Y, Yang J, Xu L, Li P, Xie Z, Li Y, Wang D, Ji Z

PubMed · Jun 10, 2025
Mesenteric malperfusion (MMP) is an uncommon but devastating complication of acute aortic dissection (AAD) that combines 2 life-threatening conditions: aortic dissection and acute mesenteric ischemia. The complex pathophysiology of MMP poses substantial diagnostic and management challenges. Currently, delayed diagnosis remains a critical contributor to poor outcomes because of the absence of reliable individualized risk assessment tools. This study aims to develop and validate a deep learning-based model that integrates multimodal data to identify patients with AAD at high risk of MMP. This multicenter retrospective study included 525 patients with AAD from 2 hospitals. The training and internal validation cohort consisted of 450 patients from Beijing Anzhen Hospital, whereas the external validation cohort comprised 75 patients from Nanjing Drum Tower Hospital. Three machine learning models were developed: the benchmark model using laboratory parameters, the multiorgan feature-based AAD complicating MMP (MAM) model based on computed tomography angiography images, and the integrated model combining both data modalities. Model performance was assessed using the area under the curve, accuracy, sensitivity, specificity, and Brier score. To improve interpretability, gradient-weighted class activation mapping was used to identify and visualize discriminative imaging features. Univariate and multivariate regression analyses were used to evaluate the prognostic significance of the risk score generated by the optimal model. In the external validation cohort, the integrated model demonstrated superior performance, with an area under the curve of 0.780 (95% CI 0.777-0.785), which was significantly greater than those of the benchmark model (0.586, 95% CI 0.574-0.586) and the MAM model (0.732, 95% CI 0.724-0.734). This highlights the benefits of multimodal integration over single-modality approaches.
Additional classification metrics revealed that the integrated model had an accuracy of 0.760 (95% CI 0.758-0.764), a sensitivity of 0.667 (95% CI 0.659-0.675), a specificity of 0.783 (95% CI 0.781-0.788), and a Brier score of 0.143 (95% CI 0.143-0.145). Moreover, gradient-weighted class activation mapping visualizations of the MAM model revealed that during positive predictions, the model focused more on key anatomical areas, particularly the superior mesenteric artery origin and intestinal regions with characteristic gas or fluid accumulation. Univariate and multivariate analyses also revealed that the risk score derived from the integrated model was independently associated with in-hospital mortality risk among patients with AAD undergoing endovascular or surgical treatment (odds ratio 1.030, 95% CI 1.004-1.056; P=.02). Our findings demonstrate that compared with unimodal approaches, an integrated deep learning model incorporating both imaging and clinical data has greater diagnostic accuracy for MMP in patients with AAD. This model may serve as a valuable tool for early risk identification, facilitating timely therapeutic decision-making. Further prospective validation is warranted to confirm its clinical utility. Chinese Clinical Registry Center ChiCTR2400086050; http://www.chictr.org.cn/showproj.html?proj=226129.
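The classification metrics reported above (accuracy, sensitivity, specificity, Brier score) all have standard definitions over predicted probabilities and binary labels. A minimal illustrative sketch, not the authors' code:

```python
import numpy as np

def classification_metrics(y_true, y_prob, threshold=0.5):
    """Standard binary classification metrics from probabilities and labels.
    The 0.5 threshold is a common default, not necessarily the study's choice."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    y_pred = (y_prob >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),             # true positive rate
        "specificity": tn / (tn + fp),             # true negative rate
        "brier": np.mean((y_prob - y_true) ** 2),  # mean squared error of probabilities; lower is better
    }

m = classification_metrics([1, 0, 1, 0], [0.9, 0.2, 0.4, 0.1])
# accuracy 0.75, sensitivity 0.5, specificity 1.0, brier 0.105
```

The Brier score rewards well-calibrated probabilities rather than just correct hard labels, which is why the abstract reports it alongside discrimination metrics.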

Geometric deep learning for local growth prediction on abdominal aortic aneurysm surfaces

Dieuwertje Alblas, Patryk Rygiel, Julian Suk, Kaj O. Kappe, Marieke Hofman, Christoph Brune, Kak Khee Yeung, Jelmer M. Wolterink

arXiv preprint · Jun 10, 2025
Abdominal aortic aneurysms (AAAs) are progressive focal dilatations of the abdominal aorta. AAAs may rupture, with a survival rate of only 20%. Current clinical guidelines recommend elective surgical repair when the maximum AAA diameter exceeds 55 mm in men or 50 mm in women. Patients that do not meet these criteria are periodically monitored, with surveillance intervals based on the maximum AAA diameter. However, this diameter does not take into account the complex relation between the 3D AAA shape and its growth, making standardized intervals potentially unfit. Personalized AAA growth predictions could improve monitoring strategies. We propose to use an SE(3)-symmetric transformer model to predict AAA growth directly on the vascular model surface enriched with local, multi-physical features. In contrast to other works which have parameterized the AAA shape, this representation preserves the vascular surface's anatomical structure and geometric fidelity. We train our model using a longitudinal dataset of 113 computed tomography angiography (CTA) scans of 24 AAA patients at irregularly sampled intervals. After training, our model predicts AAA growth to the next scan moment with a median diameter error of 1.18 mm. We further demonstrate our model's utility to identify whether a patient will become eligible for elective repair within two years (acc = 0.93). Finally, we evaluate our model's generalization on an external validation set consisting of 25 CTAs from 7 AAA patients from a different hospital. Our results show that local directional AAA growth prediction from the vascular surface is feasible and may contribute to personalized surveillance strategies.

optiGAN: A Deep Learning-Based Alternative to Optical Photon Tracking in Python-Based GATE (10+).

Mummaneni G, Trigila C, Krah N, Sarrut D, Roncali E

PubMed · Jun 9, 2025
Objective: To accelerate optical photon transport simulations in the GATE medical physics framework using a Generative Adversarial Network (GAN), while ensuring high modeling accuracy. Traditionally, detailed optical Monte Carlo methods have been the gold standard for modeling photon interactions in detectors, but their high computational cost remains a challenge. This study explores the integration of optiGAN, a GAN-based model, into GATE 10, the new Python-based version of the GATE medical physics simulation framework released in November 2024.
Approach: The goal of optiGAN is to accelerate optical photon transport simulations while maintaining modeling accuracy. The optiGAN model, based on a GAN architecture, was integrated into GATE 10 as a computationally efficient alternative to traditional optical Monte Carlo simulations. To ensure consistency, optical photon transport modules were implemented in GATE 10 and validated against GATE v9.3 under identical simulation conditions. Subsequently, simulations using full Monte Carlo tracking in GATE 10 were compared to those using GATE 10-optiGAN.
Main results: Validation studies confirmed that GATE 10 produces results consistent with GATE v9.3. Simulations using GATE 10-optiGAN showed over 92% similarity to Monte Carlo-based GATE 10 results, based on the Jensen-Shannon distance across multiple photon transport parameters. optiGAN successfully captured multimodal distributions of photon position, direction, and energy at the photodetector face. Simulation time analysis revealed a reduction of approximately 50% in execution time with GATE 10-optiGAN compared to full Monte Carlo simulations.
Significance: The study confirms both the fidelity of optical photon transport modeling in GATE 10 and the effective integration of deep learning-based acceleration through optiGAN. This advancement enables large-scale, high-fidelity optical simulations with significantly reduced computational cost, supporting broader applications in medical imaging and detector design.
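The Jensen-Shannon distance used above to quantify agreement between optiGAN and full Monte Carlo outputs is a symmetric, bounded divergence between two discrete distributions. A minimal numpy sketch (binning of the continuous photon parameters into histograms is assumed and left out):

```python
import numpy as np

def jensen_shannon_distance(p, q, base=2.0):
    """Jensen-Shannon distance between two discrete distributions.
    0 means identical; 1 (in base 2) means fully disjoint support."""
    p = np.asarray(p, dtype=float) / np.sum(p)   # normalize to probabilities
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)                            # mixture distribution

    def kl(a, b):
        mask = a > 0                             # 0 * log(0) treated as 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask])) / np.log(base)

    # JS distance is the square root of the JS divergence
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))
```

A "similarity over 92%" reading would correspond to a distance below 0.08 on this bounded scale, though the paper's exact conversion is not specified here.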

Addressing Limited Generalizability in Artificial Intelligence-Based Brain Aneurysm Detection for Computed Tomography Angiography: Development of an Externally Validated Artificial Intelligence Screening Platform.

Pettersson SD, Filo J, Liaw P, Skrzypkowska P, Klepinowski T, Szmuda T, Fodor TB, Ramirez-Velandia F, Zieliński P, Chang YM, Taussky P, Ogilvy CS

PubMed · Jun 9, 2025
Brain aneurysm detection models, both in the literature and in industry, continue to lack generalizability during external validation, limiting clinical adoption. This challenge is largely due to extensive exclusion criteria during training data selection. The authors developed the first model to achieve generalizability using novel methodological approaches. Computed tomography angiography (CTA) scans from 2004 to 2023 at the study institution were used for model training, including untreated unruptured intracranial aneurysms without extensive cerebrovascular disease. External validation used digital subtraction angiography-verified CTAs from an international center, while prospective validation occurred at the internal institution over 9 months. A public web platform was created for further model validation. A total of 2194 CTA scans were used for this study. One thousand five hundred eighty-seven patients and 1920 aneurysms with a mean size of 5.3 ± 3.7 mm were included in the training cohort. The mean age of the patients was 69.7 ± 14.9 years, and 1203 (75.8%) were female. The model achieved a training Dice score of 0.88 and a validation Dice score of 0.76. Prospective internal validation on 304 scans yielded a lesion-level (LL) sensitivity of 82.5% (95% CI: 75.5-87.9) and specificity of 89.6% (95% CI: 84.5-93.2). External validation on 303 scans demonstrated an on-par LL sensitivity and specificity of 83.5% (95% CI: 75.1-89.4) and 92.9% (95% CI: 88.8-95.6), respectively. Radiologist LL sensitivity from the external center was 84.5% (95% CI: 76.2-90.2), and 87.5% of the missed aneurysms were detected by the model. The authors developed the first publicly testable artificial intelligence model for aneurysm detection on CTA scans, demonstrating generalizability and state-of-the-art performance in external validation. The model addresses key limitations of previous efforts and enables broader validation through a web-based platform.
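The Dice scores reported for training and validation measure voxel overlap between predicted and reference segmentations. A minimal sketch over binary masks, illustrative rather than the authors' implementation:

```python
import numpy as np

def dice_score(pred_mask, true_mask):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) over binary masks.
    Returns 1.0 for two empty masks by convention."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return 2.0 * intersection / denom if denom else 1.0
```

Dice weights the intersection twice, so it is more forgiving of small boundary disagreements than plain intersection-over-union on small lesions such as aneurysms.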

Radiomics-based machine learning for atherosclerotic carotid artery disease in ultrasound: systematic review with meta-analysis of RQS.

Vacca S, Scicolone R, Pisu F, Cau R, Yang Q, Annoni A, Pontone G, Costa F, Paraskevas KI, Nicolaides A, Suri JS, Saba L

PubMed · Jun 9, 2025
Stroke, a leading global cause of mortality and neurological disability, is often associated with atherosclerotic carotid artery disease. Distinguishing between symptomatic and asymptomatic carotid artery disease is crucial for appropriate treatment decisions. Radiomics, a quantitative image analysis technique, and machine learning (ML) have emerged as promising tools in ultrasound (US) imaging, potentially providing a helpful tool in the screening of such lesions. PubMed, Web of Science and Scopus databases were searched for relevant studies published from January 2005 to May 2023. The Radiomics Quality Score (RQS) was used to assess methodological quality of studies included in the review. The Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) assessed the risk of bias. Sensitivity, specificity, and logarithmic diagnostic odds ratio (logDOR) meta-analyses have been conducted, alongside an influence analysis. RQS assessed methodological quality, revealing an overall low score and consistent findings with other radiology domains. QUADAS-2 indicated an overall low risk, except for two studies with high bias. The meta-analysis demonstrated that radiomics-based ML models for predicting culprit plaques on US had a satisfactory performance, with a sensitivity of 0.84 and specificity of 0.82. The logDOR analysis confirmed the positive results, yielding a pooled logDOR of 3.54. The summary ROC curve provided an AUC of 0.887. Radiomics combined with ML provide high sensitivity and low false positive rate for carotid plaque vulnerability assessment on US. However, current evidence is not definitive, given the low overall study quality and high inter-study heterogeneity. High quality, prospective studies are needed to confirm the potential of these promising techniques.
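The pooled logDOR of 3.54 comes from a meta-analytic model, but the relationship between summary sensitivity, specificity, and the log diagnostic odds ratio has a simple closed form. An illustrative sketch (the pooled estimate in the review is not derived this way, since pooling happens per study):

```python
import math

def log_dor(sensitivity, specificity):
    """Log diagnostic odds ratio from summary sensitivity and specificity.
    DOR = (TPR / FNR) / (FPR / TNR); log DOR of 0 means no discrimination."""
    dor = (sensitivity * specificity) / ((1.0 - sensitivity) * (1.0 - specificity))
    return math.log(dor)
```

Plugging in the summary sensitivity 0.84 and specificity 0.82 gives a logDOR of roughly 3.17; the reported pooled value of 3.54 differs because each study contributes its own odds ratio to the pooling, not the summary operating point.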

Improving Patient Communication by Simplifying AI-Generated Dental Radiology Reports With ChatGPT: Comparative Study.

Stephan D, Bertsch AS, Schumacher S, Puladi B, Burwinkel M, Al-Nawas B, Kämmerer PW, Thiem DG

PubMed · Jun 9, 2025
Medical reports, particularly radiology findings, are often written for professional communication, making them difficult for patients to understand. This communication barrier can reduce patient engagement and lead to misinterpretation. Artificial intelligence (AI), especially large language models such as ChatGPT, offers new opportunities for simplifying medical documentation to improve patient comprehension. We aimed to evaluate whether AI-generated radiology reports simplified by ChatGPT improve patient understanding, readability, and communication quality compared to original AI-generated reports. In total, 3 versions of radiology reports were created using ChatGPT: an original AI-generated version (text 1), a patient-friendly, simplified version (text 2), and a further simplified and accessibility-optimized version (text 3). A total of 300 patients (n=100, 33.3% per group), excluding patients with medical education, were randomly assigned to review one text version and complete a standardized questionnaire. Readability was assessed using the Flesch Reading Ease (FRE) score and LIX indices. Both simplified texts showed significantly higher readability scores (text 1: FRE score=51.1; text 2: FRE score=55.0; and text 3: FRE score=56.4; P<.001) and lower LIX scores, indicating enhanced clarity. Text 3 had the shortest sentences, had the fewest long words, and scored best on all patient-rated dimensions. Questionnaire results revealed significantly higher ratings for texts 2 and 3 across clarity (P<.001), tone (P<.001), structure, and patient engagement. For example, patients rated the ability to understand findings without help highest for text 3 (mean 1.5, SD 0.7) and lowest for text 1 (mean 3.1, SD 1.4). Both simplified texts significantly improved patients' ability to prepare for clinical conversations and promoted shared decision-making. AI-generated simplification of radiology reports significantly enhances patient comprehension and engagement. 
These findings highlight the potential of ChatGPT as a tool to improve patient-centered communication. While promising, future research should focus on ensuring clinical accuracy and exploring applications across diverse patient populations to support equitable and effective integration of AI in health care communication.
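Of the two readability measures used in the study above, the LIX index has a particularly simple closed form: average sentence length plus the percentage of long words. A minimal sketch (the regex tokenization is an assumption; published implementations differ in tokenizer details):

```python
import re

def lix(text):
    """LIX readability index: mean words per sentence plus the percentage
    of words longer than 6 letters. Higher scores indicate harder text."""
    words = re.findall(r"[A-Za-zÀ-ÿ]+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    long_words = [w for w in words if len(w) > 6]
    return len(words) / len(sentences) + 100.0 * len(long_words) / len(words)
```

Unlike the Flesch Reading Ease score, LIX needs no syllable counting, which makes it popular for languages where syllabification is hard to automate.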

APTOS-2024 challenge report: Generation of synthetic 3D OCT images from fundus photographs

Bowen Liu, Weiyi Zhang, Peranut Chotcomwongse, Xiaolan Chen, Ruoyu Chen, Pawin Pakaymaskul, Niracha Arjkongharn, Nattaporn Vongsa, Xuelian Cheng, Zongyuan Ge, Kun Huang, Xiaohui Li, Yiru Duan, Zhenbang Wang, BaoYe Xie, Qiang Chen, Huazhu Fu, Michael A. Mahr, Jiaqi Qu, Wangyiyang Chen, Shiye Wang, Yubo Tan, Yongjie Li, Mingguang He, Danli Shi, Paisan Ruamviboonsuk

arXiv preprint · Jun 9, 2025
Optical Coherence Tomography (OCT) provides high-resolution, 3D, and non-invasive visualization of retinal layers in vivo, serving as a critical tool for lesion localization and disease diagnosis. However, its widespread adoption is limited by equipment costs and the need for specialized operators. In comparison, 2D color fundus photography offers faster acquisition and greater accessibility with less dependence on expensive devices. Although generative artificial intelligence has demonstrated promising results in medical image synthesis, translating 2D fundus images into 3D OCT images presents unique challenges due to inherent differences in data dimensionality and biological information between modalities. To advance generative models in the fundus-to-3D-OCT setting, the Asia Pacific Tele-Ophthalmology Society (APTOS-2024) organized a challenge titled Artificial Intelligence-based OCT Generation from Fundus Images. This paper details the challenge framework (referred to as APTOS-2024 Challenge), including: the benchmark dataset, evaluation methodology featuring two fidelity metrics: image-based distance (pixel-level OCT B-scan similarity) and video-based distance (semantic-level volumetric consistency), and analysis of top-performing solutions. The challenge attracted 342 participating teams, with 42 preliminary submissions and 9 finalists. Leading methodologies incorporated innovations in hybrid data preprocessing or augmentation (cross-modality collaborative paradigms), pre-training on external ophthalmic imaging datasets, integration of vision foundation models, and model architecture improvement. The APTOS-2024 Challenge is the first benchmark demonstrating the feasibility of fundus-to-3D-OCT synthesis as a potential solution for improving ophthalmic care accessibility in under-resourced healthcare settings, while helping to expedite medical research and clinical applications.

A Narrative Review on Large AI Models in Lung Cancer Screening, Diagnosis, and Treatment Planning

Jiachen Zhong, Yiting Wang, Di Zhu, Ziwei Wang

arXiv preprint · Jun 8, 2025
Lung cancer remains one of the most prevalent and fatal diseases worldwide, demanding accurate and timely diagnosis and treatment. Recent advancements in large AI models have significantly enhanced medical image understanding and clinical decision-making. This review systematically surveys the state-of-the-art in applying large AI models to lung cancer screening, diagnosis, prognosis, and treatment. We categorize existing models into modality-specific encoders, encoder-decoder frameworks, and joint encoder architectures, highlighting key examples such as CLIP, BLIP, Flamingo, BioViL-T, and GLoRIA. We further examine their performance in multimodal learning tasks using benchmark datasets like LIDC-IDRI, NLST, and MIMIC-CXR. Applications span pulmonary nodule detection, gene mutation prediction, multi-omics integration, and personalized treatment planning, with emerging evidence of clinical deployment and validation. Finally, we discuss current limitations in generalizability, interpretability, and regulatory compliance, proposing future directions for building scalable, explainable, and clinically integrated AI systems. Our review underscores the transformative potential of large AI models to personalize and optimize lung cancer care.

NeXtBrain: Combining local and global feature learning for brain tumor classification.

Pacal I, Akhan O, Deveci RT, Deveci M

PubMed · Jun 7, 2025
The accurate and timely diagnosis of brain tumors is of paramount clinical significance for effective treatment planning and improved patient outcomes. While deep learning has advanced medical image analysis, concurrently achieving high classification accuracy, robust generalization, and computational efficiency remains a formidable challenge. This is often due to the difficulty in optimally capturing both fine-grained local tumor features and their broader global contextual cues without incurring substantial computational costs. This paper introduces NeXtBrain, a novel hybrid architecture meticulously designed to overcome these limitations. NeXtBrain's core innovations, the NeXt Convolutional Block (NCB) and the NeXt Transformer Block (NTB), synergistically enhance feature learning: NCB leverages Multi-Head Convolutional Attention and a SwiGLU-based MLP to precisely extract subtle local tumor morphologies and detailed textures, while NTB integrates self-attention with convolutional attention and a SwiGLU MLP to effectively model long-range spatial dependencies and global contextual relationships, crucial for differentiating complex tumor characteristics. Evaluated on two publicly available benchmark datasets, Figshare and Kaggle, NeXtBrain was rigorously compared against 17 state-of-the-art (SOTA) models. On Figshare, it achieved 99.78% accuracy and a 99.77% F1-score. On Kaggle, it attained 99.78% accuracy and a 99.81% F1-score, surpassing leading SOTA ViT, CNN, and hybrid models. Critically, NeXtBrain demonstrates exceptional computational efficiency, achieving these SOTA results with only 23.91 million parameters, requiring just 10.32 GFLOPs, and exhibiting a rapid inference time of 0.007 ms. This efficiency allows it to outperform significantly larger models such as DeiT3-Base (85.82 M parameters) and Swin-Base (86.75 M parameters) in both accuracy and computational demand.
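Both NCB and NTB use a SwiGLU-based MLP, in which a SiLU-activated gate branch multiplicatively modulates a linear branch before the down-projection. A minimal numpy sketch of the standard SwiGLU formulation (dimensions and weight names are illustrative, biases are omitted, and NeXtBrain's exact variant may differ):

```python
import numpy as np

def silu(z):
    """SiLU / Swish activation: z * sigmoid(z)."""
    return z / (1.0 + np.exp(-z))

def swiglu_mlp(x, w_gate, w_up, w_down):
    """SwiGLU MLP: the SiLU-activated gate branch elementwise-gates the
    linear 'up' branch, then the result is projected back down."""
    return (silu(x @ w_gate) * (x @ w_up)) @ w_down

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))        # 2 tokens, model dim 8 (illustrative)
w_gate = rng.standard_normal((8, 16))  # hidden dim 16 is an arbitrary choice
w_up = rng.standard_normal((8, 16))
w_down = rng.standard_normal((16, 8))
y = swiglu_mlp(x, w_gate, w_up, w_down)  # shape (2, 8)
```

The multiplicative gate is what distinguishes SwiGLU from a plain two-layer MLP: it lets the network learn input-dependent feature suppression at negligible extra cost.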

Diagnostic accuracy of radiomics in risk stratification of gastrointestinal stromal tumors: A systematic review and meta-analysis.

Salimi M, Mohammadi H, Ghahramani S, Nemati M, Ashari A, Imani A, Imani MH

PubMed · Jun 7, 2025
This systematic review and meta-analysis aimed to assess the diagnostic accuracy of radiomics in risk stratification of gastrointestinal stromal tumors (GISTs). It focused on evaluating radiomic models as a non-invasive tool in clinical practice. A comprehensive search was conducted across PubMed, Web of Science, EMBASE, Scopus, and Cochrane Library up to May 17, 2025. Studies involving preoperative imaging and radiomics-based risk stratification of GISTs were included. Quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool and Radiomics Quality Score (RQS). Pooled sensitivity, specificity, and area under the curve (AUC) were calculated using bivariate random-effects models. Meta-regression and subgroup analyses were performed to explore heterogeneity. A total of 29 studies were included, with 22 (76%) based on computed tomography scans, while 2 (7%) were based on endoscopic ultrasound, 3 (10%) on magnetic resonance imaging, and 2 (7%) on ultrasound. Of these, 18 studies provided sufficient data for meta-analysis. Pooled sensitivity, specificity, and AUC for radiomics-based GIST risk stratification were 0.84, 0.86, and 0.90 for training cohorts, and 0.84, 0.80, and 0.89 for validation cohorts. QUADAS-2 indicated some bias due to insufficient pre-specified thresholds. The mean RQS score was 13.14 ± 3.19. Radiomics holds promise for non-invasive GIST risk stratification, particularly with advanced imaging techniques. However, radiomic models are still in the early stages of clinical adoption. Further research is needed to improve diagnostic accuracy and validate their role alongside conventional methods like biopsy or surgery.
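The bivariate random-effects pooling used above jointly models sensitivity and specificity and is involved to implement in full, but the underlying idea of inverse-variance weighting on the logit scale can be sketched with a simplified fixed-effect, univariate stand-in (illustrative only; the review's actual model also accounts for between-study variance and the sensitivity-specificity correlation):

```python
import math

def pooled_sensitivity(tp_fn_pairs):
    """Fixed-effect inverse-variance pooling of per-study sensitivities
    on the logit scale, then back-transformed to a proportion."""
    num = den = 0.0
    for tp, fn in tp_fn_pairs:
        sens = tp / (tp + fn)
        logit = math.log(sens / (1.0 - sens))
        var = 1.0 / tp + 1.0 / fn      # approximate variance of the logit
        weight = 1.0 / var             # precise studies get more weight
        num += weight * logit
        den += weight
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))
```

Working on the logit scale keeps the pooled estimate inside (0, 1) and makes the normal approximation behind the weighting more defensible than pooling raw proportions.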