
Machine learning models for diagnosing lymph node recurrence in postoperative PTC patients: a radiomic analysis.

Pang F, Wu L, Qiu J, Guo Y, Xie L, Zhuang S, Du M, Liu D, Tan C, Liu T

PubMed · Aug 12, 2025
Postoperative papillary thyroid cancer (PTC) patients often have enlarged cervical lymph nodes due to inflammation or hyperplasia, which complicates the assessment of recurrence or metastasis. This study aimed to explore the diagnostic capabilities of computed tomography (CT) imaging and radiomic analysis to distinguish recurrent cervical lymph nodes in postoperative PTC patients. A retrospective analysis of 194 PTC patients who underwent total thyroidectomy was conducted, with 98 cases of cervical lymph node recurrence and 96 cases without recurrence. Using 3D Slicer software, regions of interest (ROIs) were delineated on enhanced venous-phase CT images, analyzing 302 positive and 391 negative lymph nodes. These nodes were randomly divided into training and validation sets in a 3:2 ratio. Python was used to extract radiomic features from the ROIs and to develop radiomic models. Univariate and multivariate analyses identified statistically significant risk factors for cervical lymph node recurrence from clinical data, which, when combined with radiomic scores, formed a nomogram to predict recurrence risk. The diagnostic efficacy and clinical utility of the models were assessed using ROC curves, calibration curves, and decision curve analysis (DCA). This study analyzed 693 lymph nodes (302 positive and 391 negative) and identified 35 significant radiomic features through dimensionality reduction and selection. Three machine learning radiomics models were developed: Lasso regression, support vector machine (SVM), and random forest (RF).
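The radiomic features such pipelines extract from an ROI start with simple first-order statistics. The sketch below is illustrative only, not the authors' pipeline; the function name `first_order_features` and the 16-bin histogram are assumptions:

```python
import math

def first_order_features(roi):
    """Compute a few first-order radiomic features from a list of ROI pixel intensities."""
    n = len(roi)
    mean = sum(roi) / n
    variance = sum((x - mean) ** 2 for x in roi) / n
    # Shannon entropy over a coarse 16-bin intensity histogram
    lo, hi = min(roi), max(roi)
    width = (hi - lo) / 16 or 1.0  # guard against a constant ROI
    counts = [0] * 16
    for x in roi:
        counts[min(int((x - lo) / width), 15)] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
    return {"mean": mean, "variance": variance, "entropy": entropy}
```

In a full pipeline these would be joined by shape and texture (GLCM/GLRLM-style) features before dimensionality reduction.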

Dynamic Survival Prediction using Longitudinal Images based on Transformer

Bingfan Liu, Haolun Shi, Jiguo Cao

arXiv preprint · Aug 12, 2025
Survival analysis utilizing multiple longitudinal medical images plays a pivotal role in the early detection and prognosis of diseases by providing insight beyond single-image evaluations. However, current methodologies often inadequately utilize censored data, overlook correlations among longitudinal images measured over multiple time points, and lack interpretability. We introduce SurLonFormer, a novel Transformer-based neural network that integrates longitudinal medical imaging with structured data for survival prediction. Our architecture comprises three key components: a Vision Encoder for extracting spatial features, a Sequence Encoder for aggregating temporal information, and a Survival Encoder based on the Cox proportional hazards model. This framework effectively incorporates censored data, addresses scalability issues, and enhances interpretability through occlusion sensitivity analysis and dynamic survival prediction. Extensive simulations and a real-world application in Alzheimer's disease analysis demonstrate that SurLonFormer achieves superior predictive performance and successfully identifies disease-related imaging biomarkers.
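The Cox proportional hazards head that such a survival encoder optimizes is trained on the negative log partial likelihood, which naturally handles censored subjects. A minimal scalar sketch (Breslow form, no tie handling), not the authors' implementation:

```python
import math

def cox_neg_log_partial_likelihood(risk_scores, times, events):
    """Negative log partial likelihood of the Cox model.
    risk_scores: model outputs (log hazard ratios); events: 1 = event observed, 0 = censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    nll = 0.0
    for i in order:
        if events[i]:
            # risk set: everyone still under observation at the event time
            denom = sum(math.exp(risk_scores[j])
                        for j in range(len(times)) if times[j] >= times[i])
            nll -= risk_scores[i] - math.log(denom)
    return nll
```

Censored subjects contribute only through the risk sets of earlier events, which is how censoring is "effectively incorporated" in Cox-based heads.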

Lung-DDPM+: Efficient Thoracic CT Image Synthesis using Diffusion Probabilistic Model

Yifan Jiang, Ahmad Shariftabrizi, Venkata SK. Manem

arXiv preprint · Aug 12, 2025
Generative artificial intelligence (AI) has been playing an important role in various domains. Leveraging its capability to generate high-fidelity and diverse synthetic data, generative AI is widely applied in diagnostic tasks, such as lung cancer diagnosis using computed tomography (CT). However, existing generative models for lung cancer diagnosis suffer from low efficiency and anatomical imprecision, which limit their clinical applicability. To address these drawbacks, we propose Lung-DDPM+, an improved version of our previous model, Lung-DDPM. This novel approach is a denoising diffusion probabilistic model (DDPM) guided by nodule semantic layouts and accelerated by a pulmonary DPM-solver, enabling the method to focus on lesion areas while achieving a better trade-off between sampling efficiency and quality. Evaluation results on the public LIDC-IDRI dataset suggest that the proposed method achieves 8× fewer FLOPs (floating point operations), 6.8× lower GPU memory consumption, and 14× faster sampling compared with Lung-DDPM. Moreover, it maintains comparable sample quality to both Lung-DDPM and other state-of-the-art (SOTA) generative models in two downstream segmentation tasks. We also conducted a Visual Turing Test with an experienced radiologist, showing the advanced quality and fidelity of synthetic samples generated by the proposed method. These experimental results demonstrate that Lung-DDPM+ can effectively generate high-quality thoracic CT images with lung nodules, highlighting its potential for broader applications, such as general tumor synthesis and lesion generation in medical imaging. The code and pretrained models are available at https://github.com/Manem-Lab/Lung-DDPM-PLUS.
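The sampling loop a DPM-solver accelerates is built from ancestral DDPM reverse steps. A scalar sketch of one such step (Ho et al. parameterization), for intuition only; `eps_pred` stands in for the trained noise-prediction network, and fast solvers replace this per-step recursion entirely:

```python
import math

def ddpm_reverse_step(x_t, eps_pred, alpha_t, alpha_bar_t, z=0.0):
    """One ancestral DDPM reverse step on a scalar.
    eps_pred: the network's noise estimate at step t.
    z: standard-normal noise (0.0 for the final, deterministic step)."""
    mean = (x_t - (1 - alpha_t) / math.sqrt(1 - alpha_bar_t) * eps_pred) / math.sqrt(alpha_t)
    sigma = math.sqrt(1 - alpha_t)  # simple variance choice sigma_t^2 = beta_t
    return mean + sigma * z
```

Running thousands of these steps is what makes vanilla DDPM sampling slow; solver-based accelerators take far fewer, larger steps along the same probability-flow trajectory.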

Fully Automatic Volume Segmentation Using Deep Learning Approaches to Assess the Thoracic Aorta, Visceral Abdominal Aorta, and Visceral Vasculature.

Pouncey AL, Charles E, Bicknell C, Bérard X, Ducasse E, Caradu C

PubMed · Aug 12, 2025
Computed tomography angiography (CTA) imaging is essential to evaluate and analyse complex abdominal and thoraco-abdominal aortic aneurysms. However, CTA analyses are labour intensive, time consuming, and prone to interphysician variability. Fully automatic volume segmentation (FAVS) using artificial intelligence with deep learning has been validated for infrarenal aorta imaging but requires further testing for thoracic and visceral aorta segmentation. This study assessed FAVS accuracy against physician controlled manual segmentation (PCMS) in the descending thoracic aorta, visceral abdominal aorta, and visceral vasculature. This was a retrospective, multicentre, observational cohort study. Fifty pre-operative CTAs of patients with abdominal aortic aneurysm were randomly selected. Comparisons between FAVS and PCMS and assessment of inter- and intra-observer reliability of PCMS were performed. Volumetric segmentation performance was evaluated using sensitivity, specificity, Dice similarity coefficient (DSC), and Jaccard index (JI). Visceral vessel identification was compared by analysing branchpoint coordinates. Bland-Altman limits of agreement (BA-LoA) were calculated for proximal visceral diameters (excluding duplicate renals). FAVS demonstrated performance comparable with PCMS for volumetric segmentation, with a median DSC of 0.93 (interquartile range [IQR] 0.03), JI of 0.87 (IQR 0.05), sensitivity of 0.99 (IQR 0.01), and specificity of 1.00 (IQR 0.00). These metrics are similar to interphysician comparisons: median DSC 0.93 (IQR 0.07), JI 0.87 (IQR 0.12), sensitivity 0.90 (IQR 0.08), and specificity 1.00 (IQR 0.00). FAVS correctly identified 99.5% (183/184) of visceral vessels. Branchpoint coordinates for FAVS and PCMS were within the limits of CTA spatial resolution (Δx -0.33 [IQR 2.82], Δy 0.61 [IQR 4.85], Δz 2.10 [IQR 4.69] mm). BA-LoA for proximal visceral diameter measurements showed reasonable agreement: FAVS vs. PCMS mean difference -0.11 ± 5.23 mm compared with interphysician variability of 0.03 ± 5.27 mm. FAVS provides accurate, efficient segmentation of the thoracic and visceral aorta, delivering performance comparable with manual segmentation by expert physicians. This technology may enhance clinical workflows for monitoring and planning treatments for complex abdominal and thoraco-abdominal aortic aneurysms.
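The two overlap metrics reported here, DSC and JI, are computed from the same intersection and are related by DSC = 2·JI/(1 + JI). A minimal sketch over voxel index sets, for reference (not the study's evaluation code):

```python
def dice_jaccard(pred, truth):
    """Dice similarity coefficient and Jaccard index for two collections of voxel indices."""
    pred, truth = set(pred), set(truth)
    inter = len(pred & truth)
    union = len(pred | truth)
    dsc = 2 * inter / (len(pred) + len(truth)) if (pred or truth) else 1.0
    ji = inter / union if union else 1.0
    return dsc, ji
```

Because of the fixed relation between them, the pair (DSC 0.93, JI 0.87) is internally consistent: 2 × 0.87 / 1.87 ≈ 0.93.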

Switchable Deep Beamformer for High-quality and Real-time Passive Acoustic Mapping.

Zeng Y, Li J, Zhu H, Lu S, Li J, Cai X

PubMed · Aug 12, 2025
Passive acoustic mapping (PAM) is a promising tool for monitoring acoustic cavitation activity in ultrasound therapy applications. Data-adaptive beamformers for PAM yield better image quality than time exposure acoustics (TEA) algorithms. However, the computational cost of data-adaptive beamformers is considerably higher. In this work, we develop a deep beamformer based on a generative adversarial network that can switch between different transducer arrays and reconstruct high-quality PAM images directly from radiofrequency ultrasound signals with low computational cost. The deep beamformer was trained on a dataset consisting of simulated and experimental cavitation signals of single and multiple microbubble clouds measured by different (linear and phased) arrays covering 1-15 MHz. We compared the performance of the deep beamformer to TEA and three different data-adaptive beamformers using simulated and experimental test datasets. Compared with TEA, the deep beamformer reduced the energy spread area by 27.3%-77.8% and improved the image signal-to-noise ratio by 13.9-25.1 dB on average for the different arrays in our data. Compared with the data-adaptive beamformers, the deep beamformer reduced the computational cost by three orders of magnitude, achieving a 10.5 ms image reconstruction speed in our data, while the image quality was as good as that of the data-adaptive beamformers. These results demonstrate the potential of the deep beamformer for high-resolution monitoring of microbubble cavitation activity in ultrasound therapy.
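The TEA baseline the deep beamformer is compared against amounts to delay-and-sum beamforming followed by time integration of the squared trace at each pixel. A toy sketch with integer sample delays, purely for intuition (real TEA uses physical time-of-flight delays per channel and pixel):

```python
def tea_pixel_intensity(signals, delays_samples):
    """Time exposure acoustics for one pixel: delay-and-sum the channel
    signals, then integrate the squared beamformed trace over time.
    signals: per-channel RF sample lists; delays_samples: per-channel integer delays."""
    n = len(signals[0])
    energy = 0.0
    for t in range(n):
        s = 0.0
        for ch, d in zip(signals, delays_samples):
            idx = t + d
            if 0 <= idx < len(ch):
                s += ch[idx]
        energy += s * s
    return energy
```

Repeating this per pixel over a grid produces the PAM image; the deep beamformer replaces this (and its far costlier data-adaptive variants) with a single network pass.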

Multimodal Deep Learning for ARDS Detection

Broecker, S., Adams, J. Y., Kumar, G., Callcut, R., Ni, Y., Strohmer, T.

medRxiv preprint · Aug 12, 2025
Objective: Poor outcomes in acute respiratory distress syndrome (ARDS) can be alleviated with tools that support early diagnosis. Current machine learning methods for detecting ARDS do not take full advantage of the multimodality of ARDS pathophysiology. We developed a multimodal deep learning model that uses imaging data, continuously collected ventilation data, and tabular data derived from a patient's electronic health record (EHR) to make ARDS predictions. Materials and Methods: A chest radiograph (x-ray), at least two hours of ventilator waveform (VWD) data within the first 24 hours of intubation, and EHR-derived tabular data were used from 220 patients admitted to the ICU to train a deep learning model. The model uses pretrained encoders for the x-rays and ventilation data and trains a feature extractor on the tabular data. Encoded features for a patient are combined to make a single ARDS prediction. Ablation studies for each modality assessed its effect on the model's predictive capability. Results: The trimodal model achieved an area under the receiver operating characteristic curve (AUROC) of 0.86 with a 95% confidence interval of 0.01. This was a statistically significant improvement (p < 0.05) over single-modality models and over bimodal models trained on VWD+tabular and VWD+x-ray data. Discussion and Conclusion: Our results demonstrate the potential utility of deep learning for addressing complex conditions with heterogeneous data. More work is needed to determine the additive effect of modalities on ARDS detection. Our framework can serve as a blueprint for building performant multimodal deep learning models for conditions with small, heterogeneous datasets.
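The late-fusion step described above, combining per-modality embeddings into one prediction, can be sketched as concatenation followed by a linear classification head. All names and dimensions here are illustrative, not the paper's architecture:

```python
import math

def fuse_modalities(xray_feat, vwd_feat, ehr_feat, weights, bias):
    """Late fusion sketch: concatenate the three modality embeddings,
    then apply a linear layer + sigmoid for a single ARDS probability."""
    fused = xray_feat + vwd_feat + ehr_feat  # list concatenation
    z = sum(w * f for w, f in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

An ablation in this setup corresponds to dropping one embedding from the concatenation and retraining the head, which is how per-modality contributions are isolated.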

ChatRadio-Valuer: A Chat Large Language Model for Generalizable Radiology Impression Generation on Multi-institution and Multi-system Data.

Zhong T, Zhao W, Zhang Y, Pan Y, Dong P, Jiang Z, Jiang H, Zhou Y, Kui X, Shang Y, Zhao L, Yang L, Wei Y, Li Z, Zhang J, Yang L, Chen H, Zhao H, Liu Y, Zhu N, Li Y, Wang Y, Yao J, Wang J, Zeng Y, He L, Zheng C, Zhang Z, Li M, Liu Z, Dai H, Wu Z, Zhang L, Zhang S, Cai X, Hu X, Zhao S, Jiang X, Zhang X, Liu W, Li X, Zhu D, Guo L, Shen D, Han J, Liu T, Liu J, Zhang T

PubMed · Aug 11, 2025
Achieving clinical-level performance and widespread deployment for generating radiology impressions poses a major challenge for conventional artificial intelligence models tailored to specific diseases and organs. With the increasing accessibility of radiology reports and advances in modern general-purpose AI techniques, the emergence and potential of deployable radiology AI have been bolstered. Here, we present ChatRadio-Valuer, the first general radiology diagnosis large language model designed for localized deployment within hospitals and approaching clinical use for multi-institution and multi-system diseases. ChatRadio-Valuer achieved 15 state-of-the-art results across five human systems and six institutions in clinical-level events (n = 332,673) through rigorous and full-spectrum assessment, including engineering metrics, clinical validation, and efficiency evaluation. Notably, it exceeded OpenAI's GPT-3.5 and GPT-4 models, achieving superior performance in comprehensive disease diagnosis compared with the average level of radiology experts. Moreover, ChatRadio-Valuer supports zero-shot transfer learning, greatly boosting its effectiveness as a radiology assistant, while adhering to privacy standards and remaining readily usable for large-scale patient populations. Our findings suggest that the development of localized LLMs will become an imperative avenue in hospital applications.

Construction and validation of a urinary stone composition prediction model based on machine learning.

Guo J, Zhang J, Zhang J, Xu C, Wang X, Liu C

PubMed · Aug 11, 2025
The composition of urinary calculi serves as a critical determinant for personalized surgical strategies; however, such compositional data are often unavailable preoperatively. This study aims to develop a machine learning-based preoperative prediction model for stone composition and evaluate its clinical utility. A retrospective cohort study design was employed to include patients with urinary calculi admitted to the Department of Urology at the Second Affiliated Hospital of Zhengzhou University from 2019 to 2024. Feature selection was performed using least absolute shrinkage and selection operator (LASSO) regression combined with multivariate logistic regression, and a binary prediction model for urinary calculi was subsequently constructed. Model validation was conducted using metrics such as the area under the curve (AUC), while Shapley Additive Explanations (SHAP) values were applied to interpret the predictive outcomes. Among 708 eligible patients, distinct prediction models were established for four stone types: calcium oxalate stones, for which logistic regression achieved optimal performance (AUC = 0.845), with maximum stone CT value, 24-hour urinary oxalate, and stone size as top predictors (SHAP-ranked); infection stones, for which logistic regression (AUC = 0.864) prioritized stone size, urinary pH, and recurrence history; uric acid stones, for which a LASSO-ridge-elastic net model demonstrated exceptional accuracy (AUC = 0.961), driven by maximum CT value, 24-hour oxalate, and urinary calcium; and calcium-containing stones, for which logistic regression attained strong prediction (AUC = 0.953), relying on CT value, 24-hour calcium, and stone size. This study developed a machine learning prediction model based on multi-algorithm integration, achieving accurate preoperative discrimination of urinary stone composition. The integration of key imaging features with metabolic indicators enhanced the model's predictive performance.
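The AUC values used to validate each model have a simple rank interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one (ties count half). A self-contained sketch of that Mann-Whitney formulation, not the study's code:

```python
def auroc(scores, labels):
    """Area under the ROC curve via pairwise rank comparison.
    scores: predicted probabilities; labels: 1 = positive class, 0 = negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

So an AUC of 0.961 for the uric acid model means that roughly 96 times in 100, a uric acid stone receives a higher predicted score than a non-uric-acid stone.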

Ratio of visceral-to-subcutaneous fat area improves long-term mortality prediction over either measure alone: automated CT-based AI measures with longitudinal follow-up in a large adult cohort.

Liu D, Kuchnia AJ, Blake GM, Lee MH, Garrett JW, Pickhardt PJ

PubMed · Aug 11, 2025
Fully automated AI-based algorithms can quantify adipose tissue on abdominal CT images. The aim of this study was to investigate the clinical value of these biomarkers by determining the association between adipose tissue measures and all-cause mortality. This retrospective study included 151,141 patients who underwent abdominal CT for any reason between 2000 and 2021. A validated AI-based algorithm quantified subcutaneous (SAT) and visceral (VAT) adipose tissue cross-sectional area. A visceral-to-subcutaneous adipose tissue area ratio (VSR) was calculated. Clinical data (age at the time of CT, sex, date of death, date of last contact) were obtained from a database search of the electronic health record. Hazard ratios (HR) and Kaplan-Meier curves assessed the relationship between adipose tissue measures and mortality. The endpoint of interest was all-cause mortality, with additional subgroup analysis by age and sex. 138,169 patients were included in the final analysis. Higher VSR was associated with increased mortality; this association was strongest in younger women (highest compared to lowest risk quartile HR 3.32 in 18-39y). Lower SAT was associated with increased mortality regardless of sex or age group (HR up to 1.63 in 18-39y). Higher VAT was associated with increased mortality in younger age groups, with the trend weakening and reversing with age; this association was stronger in women. AI-based CT measures of SAT, VAT, and VSR are predictive of mortality, with VSR being the highest performing fat area biomarker overall. These metrics tended to perform better for women and younger patients. Incorporating AI tools can augment patient assessment and management, improving outcomes.
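The hazard ratios above compare risk quartiles of the VSR biomarker. A sketch of how a patient's VSR and quartile assignment would be derived from the two AI-measured areas; the cut points in the example are hypothetical, not the study's cohort values:

```python
def vsr_risk_quartile(vat_cm2, sat_cm2, quartile_cuts):
    """Visceral-to-subcutaneous fat area ratio and its risk quartile (1 = lowest VSR).
    quartile_cuts: the cohort's Q1/Q2/Q3 VSR cut points, in ascending order."""
    vsr = vat_cm2 / sat_cm2
    quartile = 1 + sum(vsr > c for c in quartile_cuts)
    return vsr, quartile
```

The HR 3.32 reported for young women is the mortality hazard in quartile 4 relative to quartile 1 under this kind of binning.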

Leveraging an Image-Enhanced Cross-Modal Fusion Network for Radiology Report Generation.

Guo Y, Hou X, Liu Z, Zhang Y

PubMed · Aug 11, 2025
Radiology report generation (RRG) tasks leverage computer-aided technology to automatically produce descriptive text reports for medical images, aiming to ease radiologists' workload, reduce misdiagnosis rates, and lessen the pressure on medical resources. However, previous works have yet to focus on enhancing feature extraction of low-quality images, incorporating cross-modal interaction information, and mitigating latency in report generation. We propose an Image-Enhanced Cross-Modal Fusion Network (IFNet) for automatic RRG to tackle these challenges. IFNet includes three key components. First, the image enhancement module enhances the detailed representation of typical and atypical structures in X-ray images, thereby boosting detection success rates. Second, the cross-modal fusion networks efficiently and comprehensively capture the interactions of cross-modal features. Finally, a more efficient transformer report generation module is designed to optimize report generation efficiency while being suitable for low-resource devices. Experimental results on public datasets IU X-ray and MIMIC-CXR demonstrate that IFNet significantly outperforms the current state-of-the-art methods.
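Cross-modal fusion in report generators of this kind is typically built on cross attention, where text-side queries attend over image-region keys and values. A single-head, pure-Python sketch of scaled dot-product cross attention, illustrative only and not IFNet's fusion module:

```python
import math

def cross_attention(queries, keys, values):
    """Scaled dot-product cross attention: each query (e.g. a report token)
    produces a softmax-weighted average of the values (e.g. image-region features)."""
    d = len(keys[0])
    out = []
    for q in queries:
        logits = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        m = max(logits)                      # shift for numerical stability
        exps = [math.exp(l - m) for l in logits]
        s = sum(exps)
        weights = [e / s for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

Stacking such layers in both directions (text→image and image→text) is one common way to "comprehensively capture the interactions of cross-modal features."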