Page 83 of 6346332 results

Hongslo, A., Gupta, A., Nguyen, Q., Caldwell, J., Choi, B., Harvey, C. J., Thompson, J., Mazzotti, D., Yao, Z., Noheria, A.

medRxiv preprint, Oct 7 2025
Background: Large language models (LLMs) are being explored for multiple applications in medical research, including medical text classification. We evaluate the performance of 5 off-the-shelf LLMs for classifying free-text CT angiography reports for pulmonary embolism (PE)-related diagnostic labels. Methods: We assessed 1,025 manually labeled CT reports using 5 LLMs (ChatGPT-4o, Llama 3.3 70b, Llama 3.1 8b, Llama 3.2 3b, Mixtral 8x7b) with zero-shot prefix prompts. Labels included acute PE, bilateral PE, and large PE. Voting ensemble models combining multiple LLM outputs were also tested. Results: Llama 3.3 70b and ChatGPT-4o outperformed smaller models for all classification tasks. Highest accuracies were 96.6% (acute PE), 92.7% (bilateral PE), and 82.6% (large PE). Voting ensemble models offered no or minimal improvement in classification performance. Conclusions: Off-the-shelf LLMs, particularly larger ones, can classify free-text reports with high accuracy using simple prompts. Further work is needed to optimize prompting strategies and evaluate hybrid approaches.
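The voting ensemble evaluated above can be sketched as a simple majority vote over per-model labels. The five model names come from the abstract, but the vote logic, the label values, and the data structures below are illustrative assumptions, not the authors' code:

```python
from collections import Counter

def majority_vote(labels):
    """Return the most frequent label among the model outputs."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical per-model outputs for one report's "acute PE" label
votes = {
    "ChatGPT-4o":    "positive",
    "Llama 3.3 70b": "positive",
    "Llama 3.1 8b":  "negative",
    "Llama 3.2 3b":  "negative",
    "Mixtral 8x7b":  "positive",
}
print(majority_vote(votes.values()))  # positive (3 of 5 models agree)
```

A vote like this can only help when model errors are uncorrelated, which may explain why the abstract reports little gain: the smaller models likely fail on the same hard reports.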

Quintas, I., Bontempi, D., Bors, S., Trofimova, O., Boettger, L., Iuliani, I., Ortin Vela, S., Bergmann, S., Presby, D. M.

medRxiv preprint, Oct 7 2025
Cardiorespiratory fitness (CRF) is a powerful predictor of cardiovascular events and overall mortality, often surpassing traditional risk factors in prognostic value. However, its clinical use remains limited because current assessments rely on specialized equipment, trained personnel, and lengthy procedures that are often impractical for broad or routine application, especially in at-risk populations. Because CRF is closely tied to vascular health, surrogate measures that capture vascular features may offer a practical alternative for its estimation. Retinal Color Fundus Images (CFIs) provide a non-invasive window into systemic vascular health and have already demonstrated their utility in predicting cardiovascular risk factors and diseases, yet they have not been explored for their potential to predict CRF. In this study, we develop RetFit, a novel CRF estimator derived from CFIs by leveraging state-of-the-art vision transformers. RetFit provides a non-invasive, easy-to-acquire CRF proxy, addressing some of the limitations inherent to standard CRF measures and linking retinal imaging to the cardiovascular system. We evaluated RetFit's clinical relevance by analyzing its associations with cardiovascular risk factors and disease outcomes and by exploring its genetic architecture, benchmarking it against a submaximal-exercise-test CRF (SETCRF) estimate. We find that RetFit is prognostic of both cardiovascular events (hazard ratios as low as 0.878, 95% CI 0.856 to 0.901, p < 0.001) and overall mortality (hazard ratios as low as 0.780, 95% CI 0.754 to 0.801, p < 0.001) and significantly associates with the majority of disease states and risk factors explored in our analysis. Although RetFit and SETCRF shared a moderate phenotypic correlation (r = 0.45), their significant genetic associations were disjoint.
Interpretability analyses revealed that RetFit's predictions are driven by the retinal vasculature, with the number of arterial bifurcations showing the strongest association with RetFit (β = 0.287, 95% CI 0.263 to 0.311, p < 0.001). These findings highlight the potential of retinal imaging as a scalable, cost-effective, and accessible alternative for CRF estimation, supporting its use in large-scale screening and risk stratification in both clinical and public health contexts.
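The hazard ratios quoted above are exponentiated Cox regression coefficients with Wald confidence intervals. As a minimal sketch of that conversion (the coefficient is back-calculated from the reported 0.878, and the standard error is an invented illustration, not a value from the paper):

```python
import math

def hazard_ratio(beta, se, z=1.96):
    """Exponentiate a Cox log-hazard coefficient and its Wald CI bounds."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# beta = ln(0.878) ~ -0.130 recovers an HR near the reported value;
# se = 0.013 is a made-up illustration of a tight interval.
hr, lo, hi = hazard_ratio(beta=math.log(0.878), se=0.013)
print(f"HR {hr:.3f} (95% CI {lo:.3f} to {hi:.3f})")  # HR 0.878 (95% CI 0.856 to 0.901)
```

An HR below 1 per unit increase of the predictor, as here, indicates a protective association: higher RetFit, lower event hazard.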

Ambekar, A., Roohian, M., Liu, Q., Wang, B., Fan, F., Cassol, C., Lafata, K., Holzman, L., Mariani, L., Hodgin, J., Zee, J., Janowczyk, A., Barisoni, L.

medRxiv preprint, Oct 7 2025
Background: Conventional assessment of Focal Segmental Glomerulosclerosis and Minimal Change Disease focuses on the presence/extent of segmental (SS) and global (GS) glomerulosclerosis. While SS and GS represent ongoing and terminal processes, respectively, non-SS/GS glomeruli encode prognostic information that can be extracted before structural changes are visually discernible. This study applies computational image analysis to (a) automate the segmentation and classification of glomeruli into GS, SS, and non-GS/SS, (b) extract subvisual pathomic characteristics from non-GS/SS glomeruli, and (c) assess their clinical relevance. Methods: Leveraging the NEPTUNE/CureGN Periodic acid-Schiff-stained whole slide images, we (i) developed deep learning (DL) models for the segmentation and classification of glomeruli into GS, SS, and non-GS/SS; (ii) compared the association with disease progression and proteinuria remission of DL-derived percentages of GS and SS vs. human scoring; (iii) extracted pathomic features from non-GS/SS glomeruli; (iv) assessed their prognostic value using ridge-penalized Cox regression, with pathomic features ranked by the Maximum Relevance Minimum Redundancy algorithm; and (v) estimated associations between selected pathomic features and clinical outcomes using Cox proportional hazards models. Results: Agreement between computer-aided and visual scoring was good for %GS (ICC = 0.889) and moderate for %SS (ICC = 0.592). The prognostic performance of Cox models based on computer-aided and visual scoring was comparable (iAUCs 0.779 vs. 0.776 for disease progression and 0.811 vs. 0.817 for complete proteinuria remission, respectively). For non-GS/SS glomeruli, 3 and 4 pathomic features were selected and demonstrated modest prognostic performance for disease progression (iAUC = 0.684) and proteinuria remission (iAUC = 0.661), respectively. After adjusting for demographics, clinical characteristics, %GS, and %SS, 2 pathomic features remained statistically significantly associated with proteinuria remission. Conclusion: Computational pathology allows automatic quantification of SS/GS glomeruli that is comparable to manual assessment for outcome prediction, and uncovers previously under-recognized, clinically useful information from non-GS/SS glomeruli.
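The Maximum Relevance Minimum Redundancy (mRMR) ranking used in step (iv) greedily picks features that associate strongly with the outcome while penalizing similarity to features already selected. A pure-Python sketch using Pearson correlation as both the relevance and redundancy measure (the toy feature data are invented, and the paper's exact mRMR variant may differ):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def mrmr_rank(features, target, k):
    """Greedy mRMR: at each step pick the feature maximizing
    |corr(f, target)| minus mean |corr(f, already-selected)|."""
    remaining = list(features)
    selected = []
    while remaining and len(selected) < k:
        def score(name):
            rel = abs(pearson(features[name], target))
            if not selected:
                return rel
            red = sum(abs(pearson(features[name], features[s]))
                      for s in selected) / len(selected)
            return rel - red
        best = max(remaining, key=score)
        remaining.remove(best)
        selected.append(best)
    return selected

target = [1, 2, 3, 4, 5]
features = {
    "f1": [2, 1, 4, 3, 5],   # relevant (r = 0.8 with target)
    "f2": [2, 1, 4, 3, 5],   # exact duplicate of f1 (fully redundant)
    "f3": [3, 1, 2, 5, 4],   # moderately relevant, less redundant
}
print(mrmr_rank(features, target, k=2))  # ['f1', 'f3']
```

The duplicate f2 is skipped despite its high relevance, which is the whole point of the redundancy penalty.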

Zeshan Khan

arXiv preprint, Oct 7 2025
Within the domain of medical image analysis, three distinct methodologies have demonstrated commendable accuracy: neural networks, decision trees, and ensemble-based learning algorithms, particularly in the specialized context of gastrointestinal tract abnormality detection. These approaches exhibit efficacy in disease detection scenarios where a substantial volume of data is available. However, the prevalent challenge in medical image analysis pertains to limited data availability and data confidence. This paper introduces TreeNet, a novel layered decision ensemble learning methodology tailored for medical image analysis. Constructed by integrating pivotal features from neural networks, ensemble learning, and tree-based decision models, TreeNet emerges as a potent and adaptable model capable of delivering superior performance across diverse and intricate machine learning tasks. Furthermore, its interpretability and insightful decision-making process enhance its applicability in complex medical scenarios. Evaluation of the proposed approach encompasses key metrics including accuracy, precision, recall, and training and evaluation time. The methodology achieved an F1-score of up to 0.85 when using the complete training data, and 0.77 when utilizing 50% of the training data: a reduction in F1-score of only 0.08 for a 50% reduction in training data and training time. The method also runs at 32 frames per second, which is usable for real-time applications. This comprehensive assessment underscores the efficiency and usability of TreeNet in the demanding landscape of medical image analysis, especially real-time analysis.
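The metrics quoted above are standard and easy to recompute. A quick sketch (the confusion-matrix counts are invented solely to reproduce an F1 of 0.85, and the frame counts are likewise illustrative, not from the paper):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def frames_per_second(n_frames, elapsed_seconds):
    """Throughput metric used to judge real-time suitability."""
    return n_frames / elapsed_seconds

# Hypothetical counts where precision = recall = 0.85, giving F1 = 0.85
print(round(f1_score(tp=85, fp=15, fn=15), 2))   # 0.85
print(frames_per_second(320, 10.0))              # 32.0
```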

Zhong X, Zhu G, Chen L, Zhang Y, Feng Q, Ji X, Chen Y

PubMed, Oct 6 2025
High-pitch helical computed tomography (CT) scanning significantly reduces radiation dose while improving temporal resolution, offering substantial clinical benefits. However, the incomplete scanning data commonly lead to artifacts in the reconstructed images, degrading image quality and potentially affecting clinical diagnosis. Existing high-pitch reconstruction methods primarily operate within the image domain or combine image-domain networks with traditional iterative algorithms, yet their performance remains limited. To address these limitations, we propose Delta-Net, a deep dual-domain alternating iterative optimization network for high-pitch helical CT reconstruction. We introduce a novel optimization objective and develop an alternating iterative optimization framework, where each sub-iteration consists of projection-domain correction and image-domain refinement. To enhance generalization and robustness, deep neural networks are employed to learn domain-specific priors, which are incorporated as regularization terms, with all hyper-parameters automatically optimized during training. Specifically, the image-domain residual refinement network (IRN) and the projection-domain consistency-enhanced network (PCN) regularize the intermediate results across both domains. Additionally, to improve artifact suppression and structure restoration, a structure-aware joint loss is tailored for the optimization of Delta-Net. Quantitative and qualitative evaluations on clinical datasets demonstrate that Delta-Net outperforms other competitive methods in artifact suppression, fine-structure recovery, and generalization.
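The alternating framework can be illustrated on a toy quadratic problem: minimize ||Ax - y||² + λ||x - p||² by alternating a projection-domain data-consistency gradient step with an image-domain proximal pull toward a prior p. Everything below is an illustrative assumption standing in, very loosely, for the learned PCN and IRN regularizers, not the authors' network:

```python
def alternating_recon(y, A, AT, x0, prior, lam=0.5, step=0.1, iters=200):
    """Each sub-iteration: (1) projection-domain correction via a gradient
    step on ||A x - y||^2; (2) image-domain refinement via a proximal step
    toward the prior."""
    x = x0
    for _ in range(iters):
        x = x - step * AT(A(x) - y)                      # data consistency
        x = (x + step * lam * prior) / (1 + step * lam)  # prior refinement
    return x

# Scalar toy: A multiplies by 2, measurement y = 4 (data alone says x = 2),
# and the prior prefers 2.1; the iteration settles between the two:
# fixed point = (2*4 + 0.5*2.1) / (2*2 + 0.5) = 9.05/4.5 ~ 2.011
x_hat = alternating_recon(4.0, lambda v: 2 * v, lambda v: 2 * v, 0.0, 2.1)
```

In the actual method the gradient and proximal maps would be replaced by learned networks and A by the CT forward projector, but the alternation pattern is the same.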

Shi Y, Xia W, Niu C, Wiedeman C, Wang G

PubMed, Oct 6 2025
Deep learning methods have impacted almost every research field, demonstrating notable successes in medical imaging tasks such as denoising and super-resolution. However, the prerequisite for deep learning is data at scale, and data sharing is both expensive and at risk of privacy leakage. As cutting-edge generative AI models, diffusion models have become dominant because of their rigorous foundation and unprecedented outcomes. Here we propose a latent diffusion approach for data synthesis without compromising patient privacy. In our exemplary case studies, we develop a latent diffusion model to generate medical CT, MRI, and PET images using publicly available datasets. We demonstrate that state-of-the-art deep learning-based denoising/super-resolution networks can be trained on our synthetic data to achieve image quality with no significant difference from what the same network achieves after being trained on the original data. In our advanced diffusion model, we specifically embed a safeguard mechanism to protect patient privacy effectively and efficiently. Our approach allows privacy-proof public sharing of diverse big datasets for the development of deep models, potentially enabling federated learning at the level of input data instead of local network weights.
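The forward (noising) process of a diffusion model has a closed form: x_t = sqrt(ᾱ_t)·x₀ + sqrt(1 - ᾱ_t)·ε, where ᾱ_t is the cumulative product of (1 - β_s). A minimal scalar sketch of that identity (the schedule is a toy, not the paper's, and this is the generic DDPM formulation rather than the authors' latent-space model):

```python
import math
import random

def alpha_bar(betas, t):
    """Cumulative product of (1 - beta_s) for s <= t (DDPM notation)."""
    prod = 1.0
    for beta in betas[: t + 1]:
        prod *= 1.0 - beta
    return prod

def forward_noise(x0, a_bar, eps):
    """Closed-form forward process: x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps."""
    return math.sqrt(a_bar) * x0 + math.sqrt(1.0 - a_bar) * eps

betas = [0.01] * 100          # toy constant schedule
ab = alpha_bar(betas, 99)     # ~0.366 after 100 steps
xt = forward_noise(0.8, ab, random.gauss(0.0, 1.0))
```

Training teaches a network to invert this process step by step; sampling then runs the learned reverse chain from pure noise, which is what lets synthetic images be generated without re-exposing any training patient.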

Oda M

PubMed, Oct 6 2025
In recent years, generative AI has attracted significant public attention, and its use has been rapidly expanding across a wide range of domains. From creative tasks such as text summarization, idea generation, and source code generation, to the streamlining of medical support tasks like diagnostic report generation and summarization, AI is now deeply involved in many areas. Today's breadth of AI applications is clearly distinct from what was seen before generative AI gained widespread recognition. Representative generative AI services include DALL·E 3 (OpenAI, California, USA) and Stable Diffusion (Stability AI, London, UK) for image generation, and ChatGPT (OpenAI, California, USA) and Gemini (Google, California, USA) for text generation. The rise of generative AI has been driven by advances in deep learning models and by the scaling up of data, models, and computational resources in line with scaling laws. Moreover, the emergence of foundation models, which are trained on large-scale datasets and possess general-purpose knowledge applicable to various downstream tasks, is creating a new paradigm in AI development. These shifts brought about by generative AI and foundation models also profoundly impact medical image processing, fundamentally changing the framework for AI development in healthcare. This paper provides an overview of diffusion models used in image-generation AI and large language models (LLMs) used in text-generation AI, and introduces their applications in medical support. It also discusses foundation models, which are gaining attention alongside generative AI, including their construction methods and applications in the medical field. Finally, it explores how to develop foundation models and high-performance AI for medical support by fully utilizing national data and computational resources.

Jain A, Dhande RP, Parihar PH, Kashikar S, Raj N, Toshniwal A

PubMed, Oct 6 2025
Fetal growth restriction (FGR), preeclampsia, and other placental disorders are leading contributors to perinatal morbidity and mortality, primarily due to impaired uteroplacental perfusion. Existing imaging modalities, such as Doppler ultrasound and fetal MRI, provide indirect or limited functional insights into placental and fetal perfusion, constraining timely clinical intervention. This review evaluates contrast-enhanced ultrasound (CEUS) as a promising, safe, and real-time tool for assessing placental perfusion and its potential application in maternal-fetal medicine, through comprehensive analysis of methodological parameters, safety profiles, and emerging computational techniques. A comprehensive synthesis of preclinical and clinical studies was conducted, focusing on the safety, efficacy, and current use of CEUS in pregnancy. Key findings were drawn from animal models (rats, sheep, macaques) and human studies involving 256 pregnant individuals, with detailed analysis of imaging protocols, contrast agent characteristics, and quantification methods. CEUS utilizes intravascular microbubble contrast agents (1-8 μm diameter) that do not cross the placental barrier, enabling safe maternal imaging. However, size distribution analysis reveals sub-micron populations (8-20% by number) requiring careful evaluation. Preclinical models confirm the ability of CEUS to detect placental perfusion changes, with a 54% reduction in perfusion index following uterine artery ligation (p < 0.001). Human studies demonstrate zero clinically significant adverse events among 256 cases, though critical gaps exist, including absent biomarker monitoring and long-term follow-up. Emerging AI-enhanced analysis achieves 73-86% diagnostic accuracy using ensemble deep learning architectures. Current limitations include significant protocol heterogeneity (MI 0.05-0.19, frequency 2-9 MHz) and the absence of standardization.
CEUS presents a compelling solution for perfusion imaging in pregnancy, offering functional, bedside imaging without fetal exposure to contrast agents. However, methodological limitations, knowledge gaps regarding long-term outcomes, and the distinction between conventional microbubbles and emerging nanobubble formulations demand systematic research investment. Clinical translation requires standardized protocols, comprehensive safety monitoring including biomarker assessment, ethical oversight, and long-term outcome studies to support integration into routine obstetric care.

Tanyildizi-Kokkulunk H, Alcin G, Cavdar I, Akyel R, Yigit S, Ciftci-Kusbeci T, Caliskan G

PubMed, Oct 6 2025
Accurate differentiation between non-cancerous, benign, and malignant lung lesions remains a diagnostic challenge due to overlapping clinical and imaging characteristics. This study proposes a multimodal machine learning (ML) framework integrating positron emission tomography/computed tomography (PET/CT) anatomic-metabolic parameters, sarcopenia markers, and inflammatory biomarkers to enhance classification performance in lung cancer. A retrospective dataset of 222 patients was analyzed, including demographic variables, functional and morphometric sarcopenia indices, hematological inflammation markers, and PET/CT-derived parameters such as maximum and mean standardized uptake value (SUVmax, SUVmean), metabolic tumor volume (MTV), and total lesion glycolysis (TLG). Five ML algorithms (Logistic Regression, Multi-Layer Perceptron, Support Vector Machine, Extreme Gradient Boosting, and Random Forest) were evaluated using standardized performance metrics. The Synthetic Minority Oversampling Technique (SMOTE) was applied to balance class distributions. Feature importance analysis was conducted using the optimal model, and classification was repeated using the top 15 features. Among the models, Random Forest demonstrated superior predictive performance, with a test accuracy of 96%, precision, recall, and F1-score of 0.96, and an average AUC of 0.99. Feature importance analysis revealed SUVmax, SUVmean, total lesion glycolysis, and skeletal muscle index as the leading predictors. A secondary classification using only the top 15 features yielded even higher test accuracy (97%). These findings underscore the potential of integrating metabolic imaging, physical function, and biochemical inflammation markers in a non-invasive ML-based diagnostic pipeline. The proposed framework demonstrates high accuracy and generalizability and may serve as an effective clinical decision support tool in early lung cancer diagnosis and risk stratification.
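The SMOTE step used to balance the classes synthesizes new minority samples by interpolating between a real minority sample and one of its minority-class neighbors. A pure-Python sketch of that interpolation (the data points are invented; standard implementations pick among the k nearest neighbors rather than only the single nearest):

```python
def nearest_neighbor(x, others):
    """Euclidean nearest neighbor of x among the other minority samples."""
    return min(others, key=lambda o: sum((a - b) ** 2 for a, b in zip(x, o)))

def smote_point(x, minority, u):
    """SMOTE-style synthesis: a new sample a fraction u (0 <= u <= 1) of the
    way along the segment from x to its nearest minority-class neighbor."""
    nn = nearest_neighbor(x, [m for m in minority if m != x])
    return [a + u * (b - a) for a, b in zip(x, nn)]

minority = [[0.0, 0.0], [2.0, 2.0], [10.0, 10.0]]
print(smote_point([0.0, 0.0], minority, u=0.5))  # [1.0, 1.0]
```

In practice u is drawn uniformly at random per synthetic sample, so the oversampled class fills in the space between its existing points rather than duplicating them.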

Wang T, Cao Y, Lu Z, Huang Y, Lu J, Fan F, Shan H, Zhang Y

PubMed, Oct 6 2025
Metal artifacts in computed tomography (CT) imaging significantly hinder diagnostic accuracy and clinical decision-making. While deep learning-based metal artifact reduction (MAR) methods have demonstrated promising progress, their clinical application is still constrained by three major challenges: (1) balancing metal artifact reduction with the preservation of critical anatomical structures, (2) effectively capturing the clinical priors of metal artifacts, and (3) dynamically adapting to polychromatic spectral variations. To address these limitations, in this paper we propose a Self-supervised MAR method for computed Tomography (SMART) that leverages range-null space decomposition (RND) to model metal and tissue linear attenuation coefficients (LACs) separately, and employs implicit neural representation (INR) to learn their respective clinical characteristics without explicit supervision. Specifically, RND decouples metal and tissue LACs into a residual range component for metal LAC modeling, which captures metal artifacts and thus facilitates their reduction, and a null component for tissue LAC modeling, which focuses on preserving tissue details. To deal with the lack of paired data in clinical settings, we utilize INR to learn the clinical characteristics of these components in a self-supervised manner. Furthermore, SMART incorporates polychromatic spectra into the implicit representation, allowing dynamic adaptation to spectral variations across different imaging conditions. Extensive experiments on one synthetic and two clinical datasets demonstrate the strong potential of SMART in real-world scenarios. By flexibly adapting to spectral variations, it achieves superior generalizability to out-of-distribution clinical data.
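The range-null space decomposition at the heart of methods like SMART splits any image x into A⁺Ax, the range component pinned down by the measurements, and (I - A⁺A)x, the null component that is invisible to the forward operator and therefore free for a learned prior to shape. A deliberately tiny sketch with a 1×2 "projection" operator (purely illustrative; in the paper A would be the CT forward projector):

```python
# Toy forward operator A(x) = x[0]; its pseudoinverse lifts a
# measurement y back to (y, 0).
def A(x):
    return x[0]

def A_pinv(y):
    return (y, 0.0)

def rnd_decompose(x):
    """Split x into a range component A^+ A x (fixed by the measurement)
    and a null component (I - A^+ A) x (free for the prior to shape)."""
    r = A_pinv(A(x))
    n = (x[0] - r[0], x[1] - r[1])
    return r, n

x = (3.0, 5.0)
r, n = rnd_decompose(x)
# r = (3.0, 0.0): fully determined by the measurement A(x) = 3.0
# n = (0.0, 5.0): invisible to A, where a learned prior can act
```

Changing the null component never changes A(x), which is exactly why the null part can be regularized aggressively without breaking data consistency.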
