
Role of Artificial Intelligence in Lung Transplantation: Current State, Challenges, and Future Directions.

Duncheskie RP, Omari OA, Anjum F

pubmed · Sep 16, 2025
Lung transplantation remains a critical treatment for end-stage lung diseases, yet it continues to have one of the lowest survival rates among solid organ transplants. Despite its life-saving potential, the field faces several challenges, including organ shortages, suboptimal donor matching, and post-transplant complications. The rapidly advancing field of artificial intelligence (AI) offers significant promise in addressing these challenges. Traditional statistical models, such as linear and logistic regression, have been used to predict post-transplant outcomes but struggle to adapt to new trends and evolving data. In contrast, machine learning algorithms can evolve with new data, offering dynamic and updated predictions. AI holds the potential to enhance lung transplantation at multiple stages. In the pre-transplant phase, AI can optimize waitlist management, refine donor selection, improve donor-recipient matching, and enhance diagnostic imaging by harnessing vast datasets. Post-transplant, AI can help predict allograft rejection, improve immunosuppressive management, and better forecast long-term patient outcomes, including quality of life. However, the integration of AI in lung transplantation also presents challenges, including data privacy concerns, algorithmic bias, and the need for external clinical validation. This review explores the current state of AI in lung transplantation, summarizes key findings from recent studies, and discusses the potential benefits, challenges, and ethical considerations in this rapidly evolving field, highlighting future research directions.

CT-Based deep learning platform combined with clinical parameters for predicting different discharge outcome in spontaneous intracerebral hemorrhage.

Wu TC, Chan MH, Lin KH, Liu CF, Chen JH, Chang RF

pubmed · Sep 16, 2025
This study aims to enhance the prognostic prediction of spontaneous intracerebral hemorrhage (sICH) by comparing the accuracy of three models: a CT-based deep learning model, a clinical variable-based machine learning model, and a hybrid model that integrates both approaches. The goal is to evaluate their performance across different outcome thresholds, including poor outcome (mRS 3-6), loss of independence (mRS 4-6), and severe disability or death (mRS 5-6). A retrospective analysis was conducted on 1,853 sICH patients from a stroke center database (2008-2021). Patients were divided into two datasets: Dataset A (958 patients) for training/testing the clinical and hybrid models, and Dataset B (895 patients) for training the deep learning model. The imaging model used a 3D ResNet-50 architecture with attention modules, while the clinical model incorporated 19 clinical variables. The hybrid model combined clinical data with the prediction probability from the imaging model. Performance metrics were compared using the DeLong test. The hybrid model consistently outperformed the other models across all outcome thresholds. For predicting severe disability and death, loss of independence, and poor outcome, the hybrid model achieved accuracies of 82.6%, 79.5%, and 80.6%, with AUC values of 0.897, 0.871, and 0.873, respectively. GCS scores and the imaging model's prediction probability were the most significant predictors. The hybrid model, combining CT-based deep learning with clinical variables, offers superior prognostic prediction for sICH outcomes. This integrated approach shows promise for improving clinical decision-making, though further validation in prospective studies is needed. Trial registration: not applicable, as this is a retrospective study rather than a clinical trial.
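
A minimal sketch of the hybrid design described above: the imaging model's predicted probability is appended to the clinical feature vector and a second-stage classifier is trained on the combined input. The data are synthetic stand-ins, and the logistic-regression learner is an assumption for illustration; the abstract does not name the hybrid model's final classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 958                                  # Dataset A size in the study
clinical = rng.normal(size=(n, 19))      # 19 clinical variables (synthetic)
img_prob = rng.uniform(size=(n, 1))      # 3D CNN output probability (placeholder)
y = rng.integers(0, 2, size=n)           # binary outcome, e.g. mRS 5-6 vs. not

X = np.hstack([clinical, img_prob])      # hybrid feature vector
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```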

A Computational Pipeline for Patient-Specific Modeling of Thoracic Aortic Aneurysm: From Medical Image to Finite Element Analysis

Jiasong Chen, Linchen Qian, Ruonan Gong, Christina Sun, Tongran Qin, Thuy Pham, Caitlin Martin, Mohammad Zafar, John Elefteriades, Wei Sun, Liang Liang

arXiv preprint · Sep 16, 2025
The aorta is the body's largest arterial vessel, serving as the primary pathway for oxygenated blood within the systemic circulation. Aortic aneurysms consistently rank among the top twenty causes of mortality in the United States. Thoracic aortic aneurysm (TAA) arises from abnormal dilation of the thoracic aorta and remains a clinically significant disease, ranking as one of the leading causes of death in adults. A thoracic aortic aneurysm ruptures when the integrity of all aortic wall layers is compromised due to elevated blood pressure. Currently, three-dimensional computed tomography (3D CT) is considered the gold standard for diagnosing TAA. The geometric characteristics of the aorta, which can be quantified from medical imaging, and stresses on the aortic wall, which can be obtained by finite element analysis (FEA), are critical in evaluating the risk of rupture and dissection. Deep learning-based image segmentation has emerged as a reliable method for extracting anatomical regions of interest from medical images. Voxel-based segmentation masks of anatomical structures are typically converted into structured mesh representations to enable accurate simulation. Hexahedral meshes are commonly used in finite element simulations of the aorta due to their computational efficiency and superior simulation accuracy. Due to anatomical variability, patient-specific modeling enables detailed assessment of individual anatomical and biomechanical behavior, supporting precise simulations, accurate diagnoses, and personalized treatment strategies. Finite element (FE) simulations provide valuable insights into the biomechanical behaviors of tissues and organs in clinical studies. Developing accurate FE models represents a crucial initial step in establishing a patient-specific, biomechanically based framework for predicting the risk of TAA.
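
The image-to-mesh hand-off such a pipeline rests on can be sketched as follows. The snippet uses a synthetic tube-shaped mask and scikit-image's marching cubes as a stand-in for the paper's segmentation output; hexahedral meshing and the FE solve require dedicated tools and are only indicated by comments.

```python
import numpy as np
from skimage import measure

# Synthetic "aorta" mask: a tube-like binary volume standing in for a
# deep-learning segmentation output.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
mask = ((y - 32) ** 2 + (x - 32) ** 2 < 10 ** 2).astype(np.uint8)

# Voxel mask -> triangulated surface (vertices in voxel coordinates).
verts, faces, normals, values = measure.marching_cubes(mask, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} faces")

# Next steps (outside this sketch): scale verts by the voxel spacing, build
# a structured hexahedral mesh from the surface, assign wall material
# properties, and run the FE analysis to estimate wall stresses.
```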

More performant and scalable: Rethinking contrastive vision-language pre-training of radiology in the LLM era

Yingtai Li, Haoran Lai, Xiaoqian Zhou, Shuai Ming, Wenxin Ma, Wei Wei, Shaohua Kevin Zhou

arXiv preprint · Sep 16, 2025
The emergence of Large Language Models (LLMs) presents unprecedented opportunities to revolutionize medical contrastive vision-language pre-training. In this paper, we show how LLMs can facilitate large-scale supervised pre-training, thereby advancing vision-language alignment. We begin by demonstrating that modern LLMs can automatically extract diagnostic labels from radiology reports with remarkable precision (>96% AUC in our experiments) without complex prompt engineering, enabling the creation of large-scale "silver-standard" datasets at minimal cost (~$3 for 50k CT image-report pairs). Further, we find that a vision encoder trained on this "silver-standard" dataset achieves performance comparable to encoders trained on labels extracted by specialized BERT-based models, thereby democratizing access to large-scale supervised pre-training. Building on this foundation, we proceed to reveal that supervised pre-training fundamentally improves contrastive vision-language alignment. Our approach achieves state-of-the-art performance using only a 3D ResNet-18 with vanilla CLIP training, including 83.8% AUC for zero-shot diagnosis on CT-RATE, 77.3% AUC on RAD-ChestCT, and substantial improvements in cross-modal retrieval (MAP@50 = 53.7% for image-image, Recall@100 = 52.2% for report-image). These results demonstrate the potential of utilizing LLMs to facilitate more performant and scalable medical AI systems. Our code is available at https://github.com/SadVoxel/More-performant-and-scalable.
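
A hedged sketch of the label-extraction step: ask an LLM to map a free-text report to a fixed label vocabulary and parse the answer. `call_llm` is a hypothetical stand-in for whatever chat-completion client is available, and the prompt and label set are illustrative, not the paper's.

```python
import json

LABELS = ["atelectasis", "pleural effusion", "pneumothorax", "consolidation"]

PROMPT = (
    "Given the radiology report below, answer with a JSON object mapping "
    f"each of these findings to true/false: {LABELS}.\n\nReport:\n{{report}}"
)

def call_llm(prompt: str) -> str:
    # Placeholder: substitute a real chat-completion call here.
    return json.dumps({label: False for label in LABELS})

def extract_labels(report: str) -> dict:
    raw = call_llm(PROMPT.format(report=report))
    parsed = json.loads(raw)
    # Keep only the expected keys; anything else from the model is dropped.
    return {k: bool(parsed.get(k, False)) for k in LABELS}

print(extract_labels("No pneumothorax. Small left pleural effusion."))
```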

Cross-Distribution Diffusion Priors-Driven Iterative Reconstruction for Sparse-View CT

Haodong Li, Shuo Han, Haiyang Mao, Yu Shi, Changsheng Fang, Jianjia Zhang, Weiwen Wu, Hengyong Yu

arXiv preprint · Sep 16, 2025
Sparse-View CT (SVCT) reconstruction enhances temporal resolution and reduces radiation dose, yet its clinical use is hindered by artifacts due to view reduction and domain shifts from scanner, protocol, or anatomical variations, leading to performance degradation in out-of-distribution (OOD) scenarios. In this work, we propose a Cross-Distribution Diffusion Priors-Driven Iterative Reconstruction (CDPIR) framework to tackle the OOD problem in SVCT. CDPIR integrates cross-distribution diffusion priors, derived from a Scalable Interpolant Transformer (SiT), with model-based iterative reconstruction methods. Specifically, we train a SiT backbone, an extension of the Diffusion Transformer (DiT) architecture, to establish a unified stochastic interpolant framework, leveraging Classifier-Free Guidance (CFG) across multiple datasets. By randomly dropping the conditioning with a null embedding during training, the model learns both domain-specific and domain-invariant priors, enhancing generalizability. During sampling, the globally sensitive transformer-based diffusion model exploits the cross-distribution prior within the unified stochastic interpolant framework, enabling flexible and stable control over multi-distribution-to-noise interpolation paths and decoupled sampling strategies, thereby improving adaptation to OOD reconstruction. By alternating between data fidelity and sampling updates, our model achieves state-of-the-art performance with superior detail preservation in SVCT reconstructions. Extensive experiments demonstrate that CDPIR significantly outperforms existing approaches, particularly under OOD conditions, highlighting its robustness and potential clinical value in challenging imaging scenarios.
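
The null-embedding conditioning dropout described above is the standard classifier-free guidance training trick; a minimal PyTorch sketch, with toy shapes rather than the CDPIR architecture, looks like this. At sampling time, guidance would blend the two estimates, e.g. eps = eps_uncond + w * (eps_cond - eps_uncond).

```python
import torch
import torch.nn as nn

class ToyConditionalDenoiser(nn.Module):
    def __init__(self, dim=128, n_domains=4, p_uncond=0.1):
        super().__init__()
        self.cond_emb = nn.Embedding(n_domains, dim)   # per-dataset condition
        self.null_emb = nn.Parameter(torch.zeros(dim)) # learned null token
        self.p_uncond = p_uncond
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, x, domain_id):
        c = self.cond_emb(domain_id)
        if self.training:  # randomly drop conditioning -> unconditional prior
            drop = torch.rand(x.shape[0], device=x.device) < self.p_uncond
            c = torch.where(drop[:, None], self.null_emb.expand_as(c), c)
        return self.net(torch.cat([x, c], dim=-1))

model = ToyConditionalDenoiser()
x = torch.randn(8, 128)                    # toy noisy latents
domains = torch.randint(0, 4, (8,))        # which dataset each sample is from
out = model(x, domains)                    # training pass with conditioning dropout
print(out.shape)
```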

Head-to-Head Comparison of Two AI Computer-Aided Triage Solutions for Detecting Intracranial Hemorrhage on Non-Contrast Head CT.

Garcia GM, Young P, Dawood L, Elshikh M

pubmed · Sep 16, 2025
This study aims to provide a comprehensive comparison of the performance and reproducibility of two commercially available artificial intelligence (AI) computer-aided triage and notification software solutions, Vendor A (Aidoc) and Vendor B (Viz.ai), for the detection of intracranial hemorrhage (ICH) on non-contrast enhanced head CT (NCHCT) scans performed within a single academic institution. The retrospective analysis was conducted on a large patient cohort from multiple healthcare settings within a single academic institution, utilizing standardized scanning protocols. Sensitivity, specificity, false positive, and false negative rates were evaluated for both vendors. Outputs assessed included AI-generated case-level classification. Among 4,081 scans, 595 were positive for ICH. Vendor A demonstrated a sensitivity of 94.4%, specificity of 97.4%, PPV of 85.9%, and NPV of 99.1%. Vendor B showed a sensitivity of 59.5%, specificity of 99.0%, PPV of 90.0%, and NPV of 92.6%. Vendor A had 20 false negatives, which primarily involved subdural and intraparenchymal hemorrhages, and 97 false positives, which appeared to be related to motion artifact. Vendor B had 145 false negatives, largely comprising subdural and subarachnoid hemorrhages, and 36 false positives, which appeared to be related to motion artifact and calcified or dense lesions. Concordantly, 18 cases were false negatives and 11 cases were false positives for both AI solutions. The findings of this study provide valuable information for clinicians and healthcare institutions considering the implementation of AI software for computer-aided triage and notification in the detection of intracranial hemorrhage. The discussion encompasses the implications of the results, the importance of evaluating AI findings in context (especially in the absence of explainability tools), potential areas for improvement, and the relevance of standardized scanning protocols in ensuring the reliability of AI-based diagnostic tools in clinical practice. ICH = Intracranial Hemorrhage; NCHCT = Non-contrast Enhanced Head CT; AI = Artificial Intelligence; SDH = Subdural Hemorrhage; SAH = Subarachnoid Hemorrhage; IPH = Intraparenchymal Hemorrhage; IVH = Intraventricular Hemorrhage; PPV = Positive Predictive Value; NPV = Negative Predictive Value; CADt = Computer-Aided Triage; PACS = Picture Archiving and Communication System; FN = False Negative; FP = False Positive; CI = Confidence Interval.
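
For readers comparing such triage tools, the reported metrics all derive from the 2x2 confusion matrix in the usual way. A generic helper, with placeholder counts rather than the study's exact numbers:

```python
def triage_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard case-level triage metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # recall on positive (ICH) cases
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # precision
        "npv": tn / (tn + fn),
    }

# Placeholder counts for illustration only.
print(triage_metrics(tp=560, fp=90, tn=3400, fn=35))
```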

Fractal-driven self-supervised learning enhances early-stage lung cancer GTV segmentation: a novel transfer learning framework.

Tozuka R, Kadoya N, Yasunaga A, Saito M, Komiyama T, Nemoto H, Ando H, Onishi H, Jingu K

pubmed · Sep 15, 2025
To develop and evaluate a novel deep learning strategy for automated early-stage lung cancer gross tumor volume (GTV) segmentation, utilizing pre-training with mathematically generated non-natural fractal images. This retrospective study included 104 patients (36-91 years old; 81 males; 23 females) with peripheral early-stage non-small cell lung cancer who underwent radiotherapy at our institution from December 2017 to March 2025. First, we utilized encoders from a Convolutional Neural Network and a Vision Transformer (ViT), pre-trained with four learning strategies: from scratch, ImageNet-1K (1,000 classes of natural images), FractalDB-1K (1,000 classes of fractal images), and FractalDB-10K (10,000 classes of fractal images), with the latter three utilizing publicly available models. Second, the models were fine-tuned using CT images and physician-created contour data. Model accuracy was then evaluated using the volumetric Dice Similarity Coefficient (vDSC), surface Dice Similarity Coefficient (sDSC), and 95th percentile Hausdorff Distance (HD95) between the predicted and ground truth GTV contours, averaged across the fourfold cross-validation. Additionally, segmentation accuracy was compared between simple and complex groups, categorized by the surface-to-volume ratio, to assess the impact of GTV shape complexity. Pre-training with FractalDB-10K yielded the best segmentation accuracy across all metrics. For the ViT model, the vDSC, sDSC, and HD95 results were 0.800 ± 0.079, 0.732 ± 0.152, and 2.04 ± 1.59 mm for FractalDB-10K; 0.779 ± 0.093, 0.688 ± 0.156, and 2.72 ± 3.12 mm for FractalDB-1K; and 0.764 ± 0.102, 0.660 ± 0.156, and 3.03 ± 3.47 mm for ImageNet-1K, respectively. Between the FractalDB-1K and ImageNet-1K conditions, there was no significant difference in the simple group, whereas the complex group showed a significantly higher vDSC (0.743 ± 0.095 vs 0.714 ± 0.104, p = 0.006). Pre-training with fractal structures achieved comparable or superior accuracy to ImageNet pre-training for early-stage lung cancer GTV auto-segmentation.
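
A sketch of the transfer-learning recipe under discussion, using the CNN variant of the study's encoders and a deliberately simple segmentation head. The checkpoint filename is hypothetical and the one-class upsampling decoder is not the paper's; the point is only the pattern of swapping the pre-training source while keeping the fine-tuning step fixed.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

encoder = resnet50(weights=None)
# Optionally initialize from a FractalDB-pretrained checkpoint
# (hypothetical filename; classifier-head keys may not match):
# state = torch.load("fractaldb10k_resnet50.pth", map_location="cpu")
# encoder.load_state_dict(state, strict=False)
backbone = nn.Sequential(*list(encoder.children())[:-2])  # drop avgpool + fc

seg_head = nn.Sequential(                  # minimal 1-class GTV head
    nn.Conv2d(2048, 256, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
    nn.Conv2d(256, 1, 1),
)

x = torch.randn(2, 3, 224, 224)            # CT slices replicated to 3 channels
logits = seg_head(backbone(x))             # (2, 1, 224, 224) GTV logits
print(logits.shape)
```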

Enhanced value of chest computed tomography radiomics features in breast density classification.

Zhou W, Yang Q, Zhang H

pubmed · Sep 15, 2025
This study investigates the correlation between chest computed tomography (CT) radiomics features and breast density classification, aiming to develop an automated radiomics model for breast density assessment using chest CT images. The diagnostic performance was evaluated to establish a CT-based alternative for breast density classification in clinical practice. A retrospective analysis was conducted on patients who underwent both mammography and chest CT scans. The breast density classification results based on mammography images were used to guide the development of CT-based breast density classification models. Radiomic features were extracted from breast regions of interest (ROIs) segmented on chest CT images. Following dimensionality reduction and selection of dominant radiomic features, four four-class classification models were established: ① Extreme Gradient Boosting (XGBoost), ② One-vs-Rest Logistic Regression, ③ Gradient Boosting, and ④ Random Forest. The performance of these models in classifying breast density using CT images was then evaluated. A total of 330 patients, aged 23-79 years, were included for analysis. The breast ROIs were automatically segmented using a U-net neural network model and subsequently refined and calibrated manually. A total of 1427 radiomic features were extracted; after dimensionality reduction and feature selection, 28 dominant features closely associated with breast density classification were retained to construct the four classification models. Among these, the XGBoost model achieved the best performance, with a classification accuracy of 86.6%. Analysis of the receiver operating characteristic curves showed Area Under the Curve (AUC) values of 1.00, 0.93, 0.93, and 0.99 for the four breast density categories, along with a micro-averaged AUC of 0.97 and a macro-averaged AUC of 0.96. Chest CT scans, combined with radiomics models, can accurately classify breast density, providing valuable information related to breast cancer risk stratification. The proposed classification model offers a promising tool for automated breast density assessment, which could enhance personalized breast cancer screening and clinical decision-making.
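
A minimal sketch of the best-performing configuration, a four-class XGBoost classifier with per-class and averaged AUCs. The feature matrix here is a synthetic stand-in for the 28 selected radiomic features; only the evaluation pattern mirrors the abstract.

```python
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(330, 28))            # 28 selected radiomic features (synthetic)
y = rng.integers(0, 4, size=330)          # four breast density categories

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = XGBClassifier(objective="multi:softprob", eval_metric="mlogloss")
clf.fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)           # (n, 4) class probabilities

macro = roc_auc_score(y_te, proba, multi_class="ovr", average="macro")
micro = roc_auc_score(label_binarize(y_te, classes=[0, 1, 2, 3]), proba, average="micro")
print(f"macro AUC={macro:.2f}, micro AUC={micro:.2f}")  # ~0.5 on random labels
```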

End-to-End Learning of Multi-Organ Implicit Surfaces from 3D Medical Imaging Data

Farahdiba Zarin, Nicolas Padoy, Jérémy Dana, Vinkle Srivastav

arXiv preprint · Sep 15, 2025
The fine-grained surface reconstruction of different organs from 3D medical imaging can provide advanced diagnostic support and improved surgical planning. However, the representation of the organs is often limited by the image resolution, with higher resolution requiring a larger memory and compute footprint. Implicit representations of objects have been proposed to alleviate this problem in general computer vision by providing compact and differentiable functions to represent 3D object shapes. However, architectural and data-related differences prevent the direct application of these methods to medical images. This work introduces ImplMORe, an end-to-end deep learning method using implicit surface representations for multi-organ reconstruction from 3D medical images. ImplMORe incorporates local features using a 3D CNN encoder and performs multi-scale interpolation to learn features in the continuous domain using occupancy functions. We apply our method to single and multiple organ reconstructions using the TotalSegmentator dataset. By leveraging the continuous nature of occupancy functions, our approach outperforms surface reconstruction approaches based on discrete explicit representations, providing fine-grained surface details of the organ at a resolution higher than that of the given input image. The source code will be made publicly available at: https://github.com/CAMMA-public/ImplMORe
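
The occupancy-function mechanism this abstract relies on can be sketched in a few lines of PyTorch: a 3D CNN produces a feature grid, features at arbitrary query points are obtained by trilinear interpolation, and an MLP maps (feature, coordinate) to an occupancy probability. Layer sizes are illustrative assumptions, not ImplMORe's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyOccupancyNet(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.mlp = nn.Sequential(nn.Linear(feat + 3, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, vol, pts):
        # vol: (B, 1, D, H, W); pts: (B, N, 3) in [-1, 1] normalized coords
        grid = self.encoder(vol)                                  # (B, C, D', H', W')
        sample = pts.view(pts.shape[0], 1, 1, -1, 3)              # grid_sample layout
        feats = F.grid_sample(grid, sample, align_corners=True)   # (B, C, 1, 1, N)
        feats = feats.squeeze(2).squeeze(2).transpose(1, 2)       # (B, N, C)
        return torch.sigmoid(self.mlp(torch.cat([feats, pts], dim=-1)))  # (B, N, 1)

net = ToyOccupancyNet()
vol = torch.randn(2, 1, 32, 32, 32)
pts = torch.rand(2, 500, 3) * 2 - 1       # arbitrary query points: resolution-free
occ = net(vol, pts)
print(occ.shape)
```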

Artificial Intelligence-Derived Intramuscular Adipose Tissue Assessment Predicts Perineal Wound Complications Following Abdominoperineal Resection.

Besson A, Cao K, Kokelaar R, Hajdarevic E, Wirth L, Yeung J, Yeung JM

pubmed · Sep 15, 2025
Perineal wound complications following abdominoperineal resection (APR) significantly impact patient morbidity. Despite various closure techniques, no method has proven superior. Body composition is a key factor influencing postoperative outcomes. AI-assisted CT scan analysis is an accurate and efficient approach to assessing body composition. This study aimed to evaluate whether body composition characteristics can predict perineal wound complications following APR. A retrospective cohort study of APR patients from 2012 to 2024 was conducted, comparing primary closure and inferior gluteal artery myocutaneous (IGAM) flap closure outcomes. Preoperative CT scans were analyzed using a validated AI model to measure lumbosacral skeletal muscle (SM), intramuscular adipose tissue (IMAT), visceral adipose tissue, and subcutaneous adipose tissue. Greater IMAT volume correlated with increased wound dehiscence in males undergoing IGAM closure (40% vs. 4.8%, p = 0.027). A lower SM-to-IMAT volume ratio was associated with higher wound infection rates (60% vs. 19%, p = 0.04). Closure technique did not significantly impact wound infection or dehiscence rates. This study is the first to use AI-derived 3D body composition analysis to assess perineal wound complications after APR. IMAT volume significantly influences wound healing in male patients undergoing IGAM reconstruction.