
PET Image Reconstruction Using Deep Diffusion Image Prior

Fumio Hashimoto, Kuang Gong

arXiv preprint · Jul 20 2025
Diffusion models have shown great promise in medical image denoising and reconstruction, but their application to Positron Emission Tomography (PET) imaging remains limited by tracer-specific contrast variability and high computational demands. In this work, we proposed an anatomical prior-guided PET image reconstruction method based on diffusion models, inspired by the deep diffusion image prior (DDIP) framework. The proposed method alternated between diffusion sampling and model fine-tuning guided by the PET sinogram, enabling the reconstruction of high-quality images from various PET tracers using a score function pretrained on a dataset of another tracer. To improve computational efficiency, the half-quadratic splitting (HQS) algorithm was adopted to decouple network optimization from iterative PET reconstruction. The proposed method was evaluated using one simulation and two clinical datasets. For the simulation study, a model pretrained on [$^{18}$F]FDG data was tested on amyloid-negative PET data to assess out-of-distribution (OOD) performance. For the clinical-data validation, ten low-dose [$^{18}$F]FDG datasets and one [$^{18}$F]Florbetapir dataset were tested on a model pretrained on data from another tracer. Experimental results show that the proposed PET reconstruction method can generalize robustly across tracer distributions and scanner types, providing an efficient and versatile reconstruction framework for low-dose PET imaging.
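The HQS idea the abstract mentions can be illustrated with a toy sketch: the objective is split with an auxiliary variable so that a data-fit step and a prior step alternate. Everything below (the system matrix, the soft-threshold prior standing in for the pretrained diffusion network) is an invented stand-in, not the authors' actual algorithm.

```python
import numpy as np

# Toy half-quadratic splitting (HQS): min_x ||A x - y||^2 + lam * R(x),
# split as min_{x,z} ||A x - y||^2 + mu ||x - z||^2 + lam * R(z).
# In the paper's setting the z-step would be the diffusion/network update
# and the x-step the sinogram data fit; here both are simple stand-ins.
rng = np.random.default_rng(0)
n = 32
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # toy system matrix
x_true = np.zeros(n); x_true[::8] = 1.0              # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(n)

lam, mu = 0.05, 1.0
x = np.zeros(n); z = np.zeros(n)
AtA, Aty = A.T @ A, A.T @ y
for _ in range(50):
    # x-step: quadratic data-fit solve (stands in for iterative PET recon)
    x = np.linalg.solve(AtA + mu * np.eye(n), Aty + mu * z)
    # z-step: soft-thresholding prox (stands in for the pretrained denoiser)
    z = np.sign(x) * np.maximum(np.abs(x) - lam / mu, 0.0)

err = np.linalg.norm(z - x_true) / np.linalg.norm(x_true)
```

The decoupling is the point: each sub-problem is cheap on its own, which is why HQS helps when one step (the network update) is expensive.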

OpenBreastUS: Benchmarking Neural Operators for Wave Imaging Using Breast Ultrasound Computed Tomography

Zhijun Zeng, Youjia Zheng, Hao Hu, Zeyuan Dong, Yihang Zheng, Xinliang Liu, Jinzhuo Wang, Zuoqiang Shi, Linfeng Zhang, Yubing Li, He Sun

arXiv preprint · Jul 20 2025
Accurate and efficient simulation of wave equations is crucial in computational wave imaging applications, such as ultrasound computed tomography (USCT), which reconstructs tissue material properties from observed scattered waves. Traditional numerical solvers for wave equations are computationally intensive and often unstable, limiting their practical applications for quasi-real-time image reconstruction. Neural operators offer an innovative approach by accelerating PDE solving using neural networks; however, their effectiveness in realistic imaging is limited because existing datasets oversimplify real-world complexity. In this paper, we present OpenBreastUS, a large-scale wave equation dataset designed to bridge the gap between theoretical equations and practical imaging applications. OpenBreastUS includes 8,000 anatomically realistic human breast phantoms and over 16 million frequency-domain wave simulations using real USCT configurations. It enables a comprehensive benchmarking of popular neural operators for both forward simulation and inverse imaging tasks, allowing analysis of their performance, scalability, and generalization capabilities. By offering a realistic and extensive dataset, OpenBreastUS not only serves as a platform for developing innovative neural PDE solvers but also facilitates their deployment in real-world medical imaging problems. For the first time, we demonstrate efficient in vivo imaging of the human breast using neural operator solvers.
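The neural operators benchmarked here typically act in the frequency domain; a minimal sketch of the spectral-convolution core (as in Fourier-type operators) is below. The field, mode count, and weights are illustrative assumptions, not taken from the paper or dataset.

```python
import numpy as np

# Minimal sketch of a spectral convolution, the core of Fourier-type
# neural operator layers: FFT the field, weight a few low-frequency
# modes, zero the rest, and transform back.
def spectral_conv_1d(u, weights, n_modes):
    """u: real 1-D field; weights: complex weights for the lowest modes."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights  # learned in practice
    return np.fft.irfft(out_hat, n=u.shape[-1])

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(3 * x) + 0.1 * rng.standard_normal(128)
w = rng.standard_normal(8) + 1j * rng.standard_normal(8)  # 8 kept modes
v = spectral_conv_1d(u, w, n_modes=8)
```

A full operator stacks such layers with pointwise nonlinearities; the appeal for wave problems is that one forward pass replaces an iterative PDE solve.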

Results from a Swedish model-based analysis of the cost-effectiveness of AI-assisted digital mammography.

Lyth J, Gialias P, Husberg M, Bernfort L, Bjerner T, Wiberg MK, Levin LÅ, Gustafsson H

PubMed · Jul 19 2025
To evaluate the cost-effectiveness of AI-assisted digital mammography (AI-DM) compared to conventional biennial breast cancer digital mammography screening (cDM) with double reading of screening mammograms, and to investigate the change in cost-effectiveness based on four different sub-strategies of AI-DM. A decision-analytic state-transition Markov model was used to analyse the decision of whether to use cDM or AI-DM in breast cancer screening. In this Markov model, one-year cycles were used, and the analysis was performed from a healthcare perspective with a lifetime horizon. In the model, we analysed 1000 hypothetical individuals attending mammography screenings assessed with AI-DM compared with 1000 hypothetical individuals assessed with cDM. The total costs, including both screening-related costs and breast cancer-related costs, were €3,468,967 and €3,528,288 for AI-DM and cDM, respectively. AI-DM resulted in a cost saving of €59,320 compared to cDM. Per 1000 individuals, AI-DM gained 10.8 quality-adjusted life years (QALYs) compared to cDM. Gained QALYs at a lower cost mean that the AI-DM screening strategy was dominant compared to cDM. Break-even occurred at the second screening at age 42 years. This analysis showed that AI-assisted mammography for biennial breast cancer screening in a Swedish population of women aged 40-74 years is a cost-saving strategy compared to a conventional strategy using double human screen reading. Further clinical studies are needed, as scenario analyses showed that other strategies, more dependent on AI, are also cost-saving. Question: Is AI-DM cost-effective in comparison to conventional biennial breast cancer digital mammography screening? Findings: AI-DM is cost-effective, and the break-even point occurred at the second screening at age 42 years. Clinical relevance: The implementation of AI is clearly cost-effective, as it reduces the total cost for the healthcare system while simultaneously yielding a gain in QALYs.
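The dominance logic of a state-transition Markov cohort model can be sketched in a few lines. The states, transition probabilities, costs, and utilities below are invented for illustration; they are not the Swedish model's inputs.

```python
import numpy as np

# Toy two-strategy Markov cohort model with states (well, cancer, dead),
# one-year cycles, and per-state cost/utility. All numbers are illustrative.
def run_cohort(P, cost, utility, n_cycles=35, n=1000):
    dist = np.array([n, 0.0, 0.0])       # everyone starts in 'well'
    total_cost = total_qaly = 0.0
    for _ in range(n_cycles):
        total_cost += dist @ cost        # accumulate cycle costs
        total_qaly += dist @ utility     # accumulate QALYs
        dist = dist @ P                  # advance one 1-year cycle
    return total_cost, total_qaly

# AI strategy: slightly cheaper screening cycle, earlier detection modeled
# here (hypothetically) as a lower well->cancer transition probability.
P_ai  = np.array([[0.985, 0.005, 0.010], [0.0, 0.90, 0.10], [0.0, 0.0, 1.0]])
P_cdm = np.array([[0.983, 0.007, 0.010], [0.0, 0.90, 0.10], [0.0, 0.0, 1.0]])
util = np.array([0.92, 0.70, 0.0])
cost_ai,  qaly_ai  = run_cohort(P_ai,  np.array([90.0, 5000.0, 0.0]), util)
cost_cdm, qaly_cdm = run_cohort(P_cdm, np.array([100.0, 5000.0, 0.0]), util)
dominant = cost_ai < cost_cdm and qaly_ai > qaly_cdm
```

A strategy is "dominant" exactly when it is both cheaper and yields more QALYs, which is the comparison the study reports for AI-DM versus cDM.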

Medical radiology report generation: A systematic review of current deep learning methods, trends, and future directions.

Izhar A, Idris N, Japar N

PubMed · Jul 19 2025
Medical radiology reports play a crucial role in diagnosing various diseases, yet generating them manually is time-consuming and burdens clinical workflows. Medical radiology report generation aims to automate this process using deep learning to assist radiologists and reduce patient wait times. This study presents the most comprehensive systematic review to date on deep learning-based MRRG, encompassing recent advances that span traditional architectures to large language models. We focus on available datasets, modeling approaches, and evaluation practices. Following PRISMA guidelines, we retrieved 323 articles from major academic databases and included 78 studies after eligibility screening. We critically analyze key components such as model architectures, loss functions, datasets, evaluation metrics, and optimizers - identifying 22 widely used datasets, 14 evaluation metrics, around 20 loss functions, over 25 visual backbones, and more than 30 textual backbones. To support reproducibility and accelerate future research, we also compile links to modern models, toolkits, and pretrained resources. Our findings provide technical insights and outline future directions to address current limitations, promoting collaboration at the intersection of medical imaging, natural language processing, and deep learning to advance trustworthy AI systems in radiology.

Depthwise-Dilated Convolutional Adapters for Medical Object Tracking and Segmentation Using the Segment Anything Model 2

Guoping Xu, Christopher Kabat, You Zhang

arXiv preprint · Jul 19 2025
Recent advances in medical image segmentation have been driven by deep learning; however, most existing methods remain limited by modality-specific designs and exhibit poor adaptability to dynamic medical imaging scenarios. The Segment Anything Model 2 (SAM2) and its related variants, which introduce a streaming memory mechanism for real-time video segmentation, present new opportunities for prompt-based, generalizable solutions. Nevertheless, adapting these models to medical video scenarios typically requires large-scale datasets for retraining or transfer learning, leading to high computational costs and the risk of catastrophic forgetting. To address these challenges, we propose DD-SAM2, an efficient adaptation framework for SAM2 that incorporates a Depthwise-Dilated Adapter (DD-Adapter) to enhance multi-scale feature extraction with minimal parameter overhead. This design enables effective fine-tuning of SAM2 on medical videos with limited training data. Unlike existing adapter-based methods focused solely on static images, DD-SAM2 fully exploits SAM2's streaming memory for medical video object tracking and segmentation. Comprehensive evaluations on TrackRad2025 (tumor segmentation) and EchoNet-Dynamic (left ventricle tracking) datasets demonstrate superior performance, achieving Dice scores of 0.93 and 0.97, respectively. To the best of our knowledge, this work provides an initial attempt at systematically exploring adapter-based SAM2 fine-tuning for medical video segmentation and tracking. Code, datasets, and models will be publicly available at https://github.com/apple1986/DD-SAM2.
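The building block named in the title, a depthwise convolution with dilation, can be sketched directly. The kernel sizes, dilation rates, and the 1-D setting below are illustrative simplifications, not the DD-Adapter's actual configuration.

```python
import numpy as np

# Sketch of a depthwise *dilated* convolution: each channel is filtered by
# its own kernel (no cross-channel mixing), with gaps of size `dilation`
# between taps to enlarge the receptive field at no extra parameter cost.
def depthwise_dilated_conv1d(x, kernels, dilation):
    """x: (C, L) feature map; kernels: (C, K), one kernel per channel."""
    C, L = x.shape
    K = kernels.shape[1]
    pad = dilation * (K - 1) // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    out = np.zeros_like(x)
    for c in range(C):                    # depthwise: channels independent
        for k in range(K):
            out[c] += kernels[c, k] * xp[c, k * dilation : k * dilation + L]
    return out

rng = np.random.default_rng(2)
x = rng.standard_normal((4, 16))
w = rng.standard_normal((4, 3))
# Multi-scale feature extraction: sum branches with different dilations.
y = depthwise_dilated_conv1d(x, w, 1) + depthwise_dilated_conv1d(x, w, 2)
```

Stacking branches at several dilation rates is what gives such an adapter multi-scale context with minimal parameter overhead.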

Latent Class Analysis Identifies Distinct Patient Phenotypes Associated With Mistaken Treatment Decisions and Adverse Outcomes in Coronary Artery Disease.

Qi J, Wang Z, Ma X, Wang Z, Li Y, Yang L, Shi D, Zhou Y

PubMed · Jul 19 2025
This study aimed to identify patient characteristics linked to mistaken treatments and major adverse cardiovascular events (MACE) in percutaneous coronary intervention (PCI) for coronary artery disease (CAD) using deep learning-based fractional flow reserve (DEEPVESSEL-FFR, DVFFR). A retrospective cohort of 3,840 PCI patients was analyzed using latent class analysis (LCA) based on eight factors. Mistaken treatment was defined as negative DVFFR patients undergoing revascularization or positive DVFFR patients not receiving it. MACE included all-cause mortality, rehospitalization for unstable angina, and non-fatal myocardial infarction. Patients were classified into comorbidities (Class 1), smoking-drinking (Class 2), and relatively healthy (Class 3) groups. Mistaken treatment was highest in Class 2 (15.4% vs. 6.7%, P < .001), while MACE was highest in Class 1 (7.0% vs. 4.8%, P < .001). Adjusted analyses showed increased mistaken treatment risk in Class 1 (OR 1.96; 95% CI 1.49-2.57) and Class 2 (OR 1.69; 95% CI 1.28-2.25) compared with Class 3. Class 1 also had higher MACE risk (HR 1.53; 95% CI 1.10-2.12). In conclusion, comorbidities and smoking-drinking classes had higher mistaken treatment and MACE risks compared with the relatively healthy class.
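Latent class analysis on binary factors is, mechanically, EM for a mixture of independent Bernoulli indicators. The sketch below uses two classes and simulated data (the study used eight factors and found three classes in a real PCI cohort); every number here is synthetic.

```python
import numpy as np

# Toy latent class analysis: EM for a 2-class mixture of independent
# Bernoulli items. X holds binary indicators (e.g. comorbidity yes/no).
rng = np.random.default_rng(3)
n, d = 500, 8
true_p = np.array([[0.8] * d, [0.2] * d])       # class-conditional probs
z = rng.integers(0, 2, n)                        # hidden class labels
X = (rng.random((n, d)) < true_p[z]).astype(float)

pi = np.array([0.5, 0.5])                        # mixing weights
p = rng.uniform(0.3, 0.7, (2, d))                # item probabilities
for _ in range(100):
    # E-step: posterior class responsibilities for each patient
    log_lik = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(pi)
    log_lik -= log_lik.max(axis=1, keepdims=True)
    r = np.exp(log_lik); r /= r.sum(axis=1, keepdims=True)
    # M-step: update mixing weights and item probabilities
    pi = r.mean(axis=0)
    p = np.clip((r.T @ X) / r.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)

assign = r.argmax(axis=1)                        # hard class assignment
```

Once classes are recovered, downstream outcomes (here, mistaken treatment and MACE rates) are compared across the assigned classes, as the study does.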

Emerging Role of MRI-Based Artificial Intelligence in Individualized Treatment Strategies for Hepatocellular Carcinoma: A Narrative Review.

Che F, Zhu J, Li Q, Jiang H, Wei Y, Song B

PubMed · Jul 19 2025
Hepatocellular carcinoma (HCC) is the most common subtype of primary liver cancer, with significant variability in patient outcomes even within the same stage according to the Barcelona Clinic Liver Cancer staging system. Accurately predicting patient prognosis and potential treatment response prior to therapy initiation is crucial for personalized clinical decision-making. This review focuses on the application of artificial intelligence (AI) in magnetic resonance imaging for guiding individualized treatment strategies in HCC management. Specifically, we emphasize AI-based tools for pre-treatment prediction of therapeutic response and prognosis. AI techniques such as radiomics and deep learning have shown strong potential in extracting high-dimensional imaging features to characterize tumors and liver parenchyma, predict treatment outcomes, and support prognostic stratification. These advances contribute to more individualized and precise treatment planning. However, challenges remain in model generalizability, interpretability, and clinical integration, highlighting the need for standardized imaging datasets and multi-omics fusion to fully realize the potential of AI in personalized HCC care. Evidence level: 5. Technical efficacy: 4.

Automated Quantitative Evaluation of Age-Related Thymic Involution on Plain Chest CT.

Okamura YT, Endo K, Toriihara A, Fukuda I, Isogai J, Sato Y, Yasuoka K, Kagami SI

PubMed · Jul 19 2025
The thymus is an important immune organ involved in T-cell generation. Age-related involution of the thymus has been linked to various age-related pathologies in recent studies. However, there has been no method proposed to quantify age-related thymic involution based on a clinical image. The purpose of this study was to establish an objective and automatic method to quantify age-related thymic involution based on plain chest computed tomography (CT) images. We newly defined the thymic region for quantification (TRQ) as the target anatomical region. We manually segmented the TRQ in 135 CT studies, followed by construction of segmentation neural network (NN) models using the data. We developed the estimator of thymic volume (ETV), a quantitative indicator of the thymic tissue volume inside the segmented TRQ, based on simple mathematical modeling. The Hounsfield unit (HU) value and volume of the NN-segmented TRQ were measured, and the ETV was calculated in each CT study from 853 healthy subjects. We investigated how these measures were related to age and sex using quantile additive regression models. A significant correlation between the NN-segmented and manually segmented TRQ was seen for both the HU value and volume (r = 0.996 and r = 0.986, respectively). ETV declined exponentially with age (p < 0.001), consistent with age-related decline in the thymic tissue volume. In conclusion, our method enabled robust quantification of age-related thymic involution. Our method may aid in the prediction and risk classification of pathologies related to thymic involution.
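The reported exponential decline of thymic volume with age is commonly fitted by a log-linear least squares, i.e. V(age) ≈ V0·exp(-k·age). The sketch below fits simulated data; the sample and parameter values are invented, not the study's ETV measurements.

```python
import numpy as np

# Fit an exponential age decline V(age) = V0 * exp(-k * age) by ordinary
# least squares on the log-transformed volumes: log V = log V0 - k * age.
rng = np.random.default_rng(4)
age = rng.uniform(20, 80, 200)                      # simulated ages
v0_true, k_true = 30.0, 0.045                       # hypothetical parameters
vol = v0_true * np.exp(-k_true * age) \
      * np.exp(0.1 * rng.standard_normal(200))      # multiplicative noise

coef = np.polyfit(age, np.log(vol), 1)              # slope, intercept
k_hat, v0_hat = -coef[0], np.exp(coef[1])
```

The log transform turns multiplicative scatter into additive noise, which is why the linear fit recovers the decay rate well.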

A novel hybrid convolutional and transformer network for lymphoma classification.

Sikkandar MY, Sundaram SG, Almeshari MN, Begum SS, Sankari ES, Alduraywish YA, Obidallah WJ, Alotaibi FM

PubMed · Jul 19 2025
Lymphoma poses a critical health challenge worldwide, demanding computer-aided solutions for diagnosis, treatment, and research to significantly enhance patient outcomes and combat this pervasive disease. Accurate classification of lymphoma subtypes from Whole Slide Images (WSIs) remains a complex challenge due to morphological similarities among subtypes and the limitations of models that fail to jointly capture local and global features. Traditional diagnostic methods, limited by subjectivity and inconsistencies, highlight the need for advanced, Artificial Intelligence (AI)-driven solutions. This study proposes a hybrid deep learning framework-Hybrid Convolutional and Transformer Network for Lymphoma Classification (HCTN-LC)-designed to enhance the precision and interpretability of lymphoma subtype classification. The model employs a dual-pathway architecture that combines a lightweight SqueezeNet for local feature extraction with a Vision Transformer (ViT) for capturing global context. A Feature Fusion and Enhancement Module (FFEM) is introduced to dynamically integrate features from both pathways. The model is trained and evaluated on a large WSI dataset encompassing three lymphoma subtypes: chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). HCTN-LC achieves superior performance with an overall accuracy of 99.87%, sensitivity of 99.87%, specificity of 99.93%, and AUC of 0.9991, outperforming several recent hybrid models. Grad-CAM visualizations confirm the model's focus on diagnostically relevant regions. The proposed HCTN-LC demonstrates strong potential for real-time and low-resource clinical deployment, offering a robust and interpretable AI tool for hematopathological diagnosis.
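One common way to "dynamically integrate" a local (CNN-style) and a global (transformer-style) feature vector is a learned sigmoid gate; the sketch below shows that pattern. The dimensions, weights, and the gating form itself are assumptions in the spirit of the FFEM, not its published definition.

```python
import numpy as np

# Gated fusion of two feature vectors: a per-dimension gate g in (0, 1)
# decides how much of the local vs. global pathway to keep.
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def gated_fusion(f_local, f_global, Wg, bg):
    g = sigmoid(np.concatenate([f_local, f_global]) @ Wg + bg)
    return g * f_local + (1.0 - g) * f_global   # convex combination per dim

rng = np.random.default_rng(5)
d = 64
f_loc, f_glob = rng.standard_normal(d), rng.standard_normal(d)
Wg = 0.1 * rng.standard_normal((2 * d, d))       # hypothetical gate weights
bg = np.zeros(d)
fused = gated_fusion(f_loc, f_glob, Wg, bg)
```

Because the gate is a convex combination per dimension, the fused vector always stays between the two pathway features, which keeps fusion stable during training.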

Enhancing cardiac disease detection via a fusion of machine learning and medical imaging.

Yu T, Chen K

PubMed · Jul 19 2025
Cardiovascular illnesses continue to be a predominant cause of mortality globally, underscoring the necessity for prompt and precise diagnosis to mitigate consequences and healthcare expenditures. This work presents a complete hybrid methodology that integrates machine learning techniques with medical image analysis to improve the identification of cardiovascular diseases. This research integrates many imaging modalities such as echocardiography, cardiac MRI, and chest radiographs with patient health records, enhancing diagnosis accuracy beyond standard techniques that depend exclusively on numerical clinical data. During the preprocessing phase, essential visual features are extracted from medical images using image processing methods and convolutional neural networks (CNNs). These are subsequently integrated with clinical characteristics and input into various machine learning classifiers, including Support Vector Machines (SVM), Random Forest (RF), XGBoost, and Deep Neural Networks (DNNs), to differentiate between healthy persons and patients with cardiovascular illnesses. The proposed method attained a remarkable diagnostic accuracy of up to 96%, exceeding models reliant exclusively on clinical data. This study highlights the capability of integrating artificial intelligence with medical imaging to create a highly accurate and non-invasive diagnostic instrument for cardiovascular disease.
