
A Study of Anatomical Priors for Deep Learning-Based Segmentation of Pheochromocytoma in Abdominal CT

Tanjin Taher Toma, Tejas Sudharshan Mathai, Bikash Santra, Pritam Mukherjee, Jianfei Liu, Wesley Jong, Darwish Alabyad, Vivek Batheja, Abhishek Jha, Mayank Patel, Darko Pucar, Jaydira del Rivero, Karel Pacak, Ronald M. Summers

arXiv preprint · Jul 21, 2025
Accurate segmentation of pheochromocytoma (PCC) in abdominal CT scans is essential for tumor burden estimation, prognosis, and treatment planning. It may also help infer genetic clusters, reducing reliance on expensive testing. This study systematically evaluates anatomical priors to identify configurations that improve deep learning-based PCC segmentation. We employed the nnU-Net framework to evaluate eleven annotation strategies for accurate 3D segmentation of pheochromocytoma, introducing a set of novel multi-class schemes based on organ-specific anatomical priors. These priors were derived from adjacent organs commonly surrounding adrenal tumors (e.g., liver, spleen, kidney, aorta, adrenal gland, and pancreas), and were compared against a broad body-region prior used in previous work. The framework was trained and tested on 105 contrast-enhanced CT scans from 91 patients at the NIH Clinical Center. Performance was measured using Dice Similarity Coefficient (DSC), Normalized Surface Distance (NSD), and instance-wise F1 score. Among all strategies, the Tumor + Kidney + Aorta (TKA) annotation achieved the highest segmentation accuracy, significantly outperforming the previously used Tumor + Body (TB) annotation across DSC (p = 0.0097), NSD (p = 0.0110), and F1 score (25.84% improvement at an IoU threshold of 0.5), measured on a 70-30 train-test split. The TKA model also showed superior tumor burden quantification (R^2 = 0.968) and strong segmentation across all genetic subtypes. In five-fold cross-validation, TKA consistently outperformed TB across IoU thresholds (0.1 to 0.5), reinforcing its robustness and generalizability. These findings highlight the value of incorporating relevant anatomical context into deep learning models to achieve precise PCC segmentation, offering a valuable tool to support clinical assessment and longitudinal disease monitoring in PCC patients.
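
For context, two of the reported metrics can be computed directly from binary masks. The sketch below (NumPy, invented shapes) shows the Dice Similarity Coefficient and one common convention for instance-wise F1 via greedy IoU matching at a threshold; the matching rule is an assumption, not necessarily the paper's exact protocol, and NSD is omitted since it requires surface-distance computation.

```python
import numpy as np

def dice(pred, gt):
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def instance_f1(pred_instances, gt_instances, iou_thr=0.5):
    """Instance-wise F1: greedily match predicted and reference lesions by IoU."""
    matched, tp = set(), 0
    for p in pred_instances:
        for i, g in enumerate(gt_instances):
            if i in matched:
                continue
            inter = np.logical_and(p, g).sum()
            union = np.logical_or(p, g).sum()
            if union and inter / union >= iou_thr:
                matched.add(i)
                tp += 1
                break
    fp = len(pred_instances) - tp
    fn = len(gt_instances) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
```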

[A multi-feature fusion-based model for fetal orientation classification from intrapartum ultrasound videos].

Zheng Z, Yang X, Wu S, Zhang S, Lyu G, Liu P, Wang J, He S

PubMed · Jul 20, 2025
To construct an intelligent analysis model based on multi-feature fusion for classifying fetal orientation in intrapartum ultrasound videos. The proposed model consists of Input, Backbone Network, and Classification Head modules. The Input module performs data augmentation to improve sample quality and the generalization ability of the model. The Backbone Network performs feature extraction based on YOLOv8 combined with the CBAM, ECA, and PSA attention mechanisms and the AIFI feature-interaction module. The Classification Head consists of a convolutional layer and a softmax function that outputs the final probability for each class. Key structures (the eyes, face, head, thalamus, and spine) were annotated with bounding boxes by physicians for model training to improve classification accuracy for the occiput anterior, occiput posterior, and occiput transverse positions. The experimental results showed that the proposed model performed excellently on the fetal orientation classification task, with a classification accuracy of 0.984, an area under the PR curve (average precision) of 0.993, an area under the ROC curve of 0.984, and a kappa consistency score of 0.974. The model's predictions were highly consistent with the actual classifications. The multi-feature fusion model proposed in this study can efficiently and accurately classify fetal orientation in intrapartum ultrasound videos.
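
A minimal PyTorch sketch of such a head, assuming 256-channel backbone feature maps and the three occiput positions as classes (both assumptions, since the abstract gives no sizes):

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Convolutional layer plus softmax mapping feature maps to class probabilities."""
    def __init__(self, in_channels=256, n_classes=3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, n_classes, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)              # collapse spatial dims

    def forward(self, feats):                            # feats: (B, C, H, W)
        logits = self.pool(self.conv(feats)).flatten(1)  # (B, n_classes)
        return torch.softmax(logits, dim=1)              # per-class probabilities

probs = ClassificationHead()(torch.randn(2, 256, 20, 20))  # e.g. OA/OP/OT scores
```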

PET Image Reconstruction Using Deep Diffusion Image Prior

Fumio Hashimoto, Kuang Gong

arXiv preprint · Jul 20, 2025
Diffusion models have shown great promise in medical image denoising and reconstruction, but their application to Positron Emission Tomography (PET) imaging remains limited by tracer-specific contrast variability and high computational demands. In this work, we proposed an anatomical prior-guided PET image reconstruction method based on diffusion models, inspired by the deep diffusion image prior (DDIP) framework. The proposed method alternates between diffusion sampling and model fine-tuning guided by the PET sinogram, enabling the reconstruction of high-quality images from various PET tracers using a score function pretrained on a dataset of another tracer. To improve computational efficiency, the half-quadratic splitting (HQS) algorithm was adopted to decouple network optimization from iterative PET reconstruction. The proposed method was evaluated using one simulation and two clinical datasets. For the simulation study, a model pretrained on [$^{18}$F]FDG data was tested on amyloid-negative PET data to assess out-of-distribution (OOD) performance. For the clinical validation, ten low-dose [$^{18}$F]FDG datasets and one [$^{18}$F]Florbetapir dataset were used to test a model pretrained on data from another tracer. Experimental results show that the proposed PET reconstruction method can generalize robustly across tracer distributions and scanner types, providing an efficient and versatile reconstruction framework for low-dose PET imaging.
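
A toy NumPy illustration of the HQS decoupling described above: the prior half (here a trivial smoother standing in for diffusion sampling and fine-tuning) alternates with a closed-form penalized least-squares data-fit update. The operator, data, and penalty weight are all invented for the sketch.

```python
import numpy as np

# minimize ||y - A x||^2 + (beta/2) ||x - x_prior||^2 over x, with x_prior
# refreshed each outer iteration by a stand-in for the diffusion model.
rng = np.random.default_rng(0)
n_pix, n_bins = 64, 128
A = rng.random((n_bins, n_pix)) / n_bins            # stand-in projection operator
x_true = np.abs(rng.normal(1.0, 0.3, n_pix))
y = A @ x_true + 0.01 * rng.normal(size=n_bins)     # noisy "sinogram"

def prior_step(x):
    # Placeholder for diffusion sampling + fine-tuning; here a simple smoother.
    return 0.5 * x + 0.5 * x.mean()

beta, x = 1.0, np.ones(n_pix)
for _ in range(20):
    x_prior = prior_step(x)                         # "network" half of HQS
    # data-fit half: exact minimizer of the penalized least-squares objective
    x = np.linalg.solve(A.T @ A + (beta / 2) * np.eye(n_pix),
                        A.T @ y + (beta / 2) * x_prior)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```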

OpenBreastUS: Benchmarking Neural Operators for Wave Imaging Using Breast Ultrasound Computed Tomography

Zhijun Zeng, Youjia Zheng, Hao Hu, Zeyuan Dong, Yihang Zheng, Xinliang Liu, Jinzhuo Wang, Zuoqiang Shi, Linfeng Zhang, Yubing Li, He Sun

arXiv preprint · Jul 20, 2025
Accurate and efficient simulation of wave equations is crucial in computational wave imaging applications, such as ultrasound computed tomography (USCT), which reconstructs tissue material properties from observed scattered waves. Traditional numerical solvers for wave equations are computationally intensive and often unstable, limiting their practical applications for quasi-real-time image reconstruction. Neural operators offer an innovative approach by accelerating PDE solving using neural networks; however, their effectiveness in realistic imaging is limited because existing datasets oversimplify real-world complexity. In this paper, we present OpenBreastUS, a large-scale wave equation dataset designed to bridge the gap between theoretical equations and practical imaging applications. OpenBreastUS includes 8,000 anatomically realistic human breast phantoms and over 16 million frequency-domain wave simulations using real USCT configurations. It enables comprehensive benchmarking of popular neural operators for both forward simulation and inverse imaging tasks, allowing analysis of their performance, scalability, and generalization capabilities. By offering a realistic and extensive dataset, OpenBreastUS not only serves as a platform for developing innovative neural PDE solvers but also facilitates their deployment in real-world medical imaging problems. For the first time, we demonstrate efficient in vivo imaging of the human breast using neural operator solvers.
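
As a concrete instance of the operator family being benchmarked, the sketch below implements a single FNO-style Fourier layer in PyTorch: it applies a learned linear map to the lowest frequency modes and transforms back. It is illustrative only (truncating to a low-mode corner, invented sizes), not the dataset's reference implementation.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """One Fourier layer: FFT, learned mixing of low-frequency modes, inverse FFT."""
    def __init__(self, channels=32, modes=12):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes,
                                dtype=torch.cfloat))

    def forward(self, x):                      # x: (B, C, H, W)
        x_ft = torch.fft.rfft2(x)              # complex spectrum (B, C, H, W//2+1)
        out_ft = torch.zeros_like(x_ft)
        m = self.modes
        out_ft[:, :, :m, :m] = torch.einsum(   # mix channels on retained modes
            "bixy,ioxy->boxy", x_ft[:, :, :m, :m], self.weight)
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])

y = SpectralConv2d()(torch.randn(1, 32, 64, 64))
```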

Results from a Swedish model-based analysis of the cost-effectiveness of AI-assisted digital mammography.

Lyth J, Gialias P, Husberg M, Bernfort L, Bjerner T, Wiberg MK, Levin LÅ, Gustafsson H

PubMed · Jul 19, 2025
To evaluate the cost-effectiveness of AI-assisted digital mammography (AI-DM) compared to conventional biennial breast cancer digital mammography screening (cDM) with double reading of screening mammograms, and to investigate the change in cost-effectiveness across four different sub-strategies of AI-DM. A decision-analytic state-transition Markov model was used to analyse the decision of whether to use cDM or AI-DM in breast cancer screening. In this Markov model, one-year cycles were used, and the analysis was performed from a healthcare perspective with a lifetime horizon. In the model, we analysed 1000 hypothetical individuals attending mammography screening assessed with AI-DM and compared them with 1000 hypothetical individuals assessed with cDM. The total costs, including both screening-related and breast cancer-related costs, were €3,468,967 for AI-DM and €3,528,288 for cDM; AI-DM thus resulted in a cost saving of €59,320 compared to cDM. Per 1000 individuals, AI-DM gained 10.8 quality-adjusted life years (QALYs) compared to cDM. Gaining QALYs at a lower cost means that the AI-DM screening strategy was dominant compared to cDM. Break-even occurred at the second screening, at age 42 years. This analysis showed that AI-assisted mammography for biennial breast cancer screening in a Swedish population of women aged 40-74 years is a cost-saving strategy compared to a conventional strategy using double human screen reading. Further clinical studies are needed, as scenario analyses showed that other strategies, more dependent on AI, are also cost-saving. Question Is AI-DM cost-effective compared with conventional biennial cDM screening with double reading? Findings AI-DM is cost-effective, and the break-even point occurred at the second screening at age 42 years. Clinical relevance The implementation of AI is clearly cost-effective as it reduces the total cost for the healthcare system and simultaneously results in a gain in QALYs.
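
The mechanics of such a state-transition cohort model are compact. The sketch below uses invented states, transition probabilities, costs, and utilities (not the paper's calibrated Swedish inputs) to show how per-cycle costs and QALYs accumulate with discounting under two screening strategies.

```python
import numpy as np

states = ["well", "cancer", "dead"]            # hypothetical health states
P = {"cDM":   np.array([[0.995, 0.004, 0.001],
                        [0.000, 0.900, 0.100],
                        [0.000, 0.000, 1.000]]),
     "AI-DM": np.array([[0.995, 0.004, 0.001],
                        [0.000, 0.920, 0.080],   # assumed earlier detection
                        [0.000, 0.000, 1.000]])}
cost = {"cDM":   np.array([20.0, 8000.0, 0.0]),  # per-cycle cost per state (EUR)
        "AI-DM": np.array([15.0, 8000.0, 0.0])}  # single reading + AI assumed cheaper
utility = np.array([1.0, 0.75, 0.0])             # QALY weight per state

def run(strategy, cycles=35, disc=0.03):         # one-year cycles, lifetime horizon
    dist = np.array([1.0, 0.0, 0.0])             # cohort starts in "well"
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + disc) ** t              # discount factor
        total_cost += d * dist @ cost[strategy]
        total_qaly += d * dist @ utility
        dist = dist @ P[strategy]                # advance the cohort one cycle
    return total_cost, total_qaly

(c0, q0), (c1, q1) = run("cDM"), run("AI-DM")
print(f"incremental cost {c1 - c0:.0f} EUR, incremental QALYs {q1 - q0:.3f}")
```

A strategy that gains QALYs at lower cost, as AI-DM does in the paper, is dominant and needs no incremental cost-effectiveness ratio.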

Medical radiology report generation: A systematic review of current deep learning methods, trends, and future directions.

Izhar A, Idris N, Japar N

PubMed · Jul 19, 2025
Medical radiology reports play a crucial role in diagnosing various diseases, yet generating them manually is time-consuming and burdens clinical workflows. Medical radiology report generation (MRRG) aims to automate this process using deep learning to assist radiologists and reduce patient wait times. This study presents the most comprehensive systematic review to date on deep learning-based MRRG, encompassing recent advances that span traditional architectures to large language models. We focus on available datasets, modeling approaches, and evaluation practices. Following PRISMA guidelines, we retrieved 323 articles from major academic databases and included 78 studies after eligibility screening. We critically analyze key components such as model architectures, loss functions, datasets, evaluation metrics, and optimizers, identifying 22 widely used datasets, 14 evaluation metrics, around 20 loss functions, over 25 visual backbones, and more than 30 textual backbones. To support reproducibility and accelerate future research, we also compile links to modern models, toolkits, and pretrained resources. Our findings provide technical insights and outline future directions to address current limitations, promoting collaboration at the intersection of medical imaging, natural language processing, and deep learning to advance trustworthy AI systems in radiology.
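
As a small example from the evaluation side, n-gram overlap metrics such as BLEU are among those commonly reported for generated reports; the snippet below scores an invented report pair with NLTK.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Illustrative only: both "reports" are invented single sentences.
reference = "no focal consolidation pleural effusion or pneumothorax".split()
candidate = "no focal consolidation or pleural effusion".split()
bleu4 = sentence_bleu([reference], candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4: {bleu4:.3f}")
```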

Depthwise-Dilated Convolutional Adapters for Medical Object Tracking and Segmentation Using the Segment Anything Model 2

Guoping Xu, Christopher Kabat, You Zhang

arXiv preprint · Jul 19, 2025
Recent advances in medical image segmentation have been driven by deep learning; however, most existing methods remain limited by modality-specific designs and exhibit poor adaptability to dynamic medical imaging scenarios. The Segment Anything Model 2 (SAM2) and its related variants, which introduce a streaming memory mechanism for real-time video segmentation, present new opportunities for prompt-based, generalizable solutions. Nevertheless, adapting these models to medical video scenarios typically requires large-scale datasets for retraining or transfer learning, leading to high computational costs and the risk of catastrophic forgetting. To address these challenges, we propose DD-SAM2, an efficient adaptation framework for SAM2 that incorporates a Depthwise-Dilated Adapter (DD-Adapter) to enhance multi-scale feature extraction with minimal parameter overhead. This design enables effective fine-tuning of SAM2 on medical videos with limited training data. Unlike existing adapter-based methods focused solely on static images, DD-SAM2 fully exploits SAM2's streaming memory for medical video object tracking and segmentation. Comprehensive evaluations on TrackRad2025 (tumor segmentation) and EchoNet-Dynamic (left ventricle tracking) datasets demonstrate superior performance, achieving Dice scores of 0.93 and 0.97, respectively. To the best of our knowledge, this work provides an initial attempt at systematically exploring adapter-based SAM2 fine-tuning for medical video segmentation and tracking. Code, datasets, and models will be publicly available at https://github.com/apple1986/DD-SAM2.
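
A hedged PyTorch sketch of the adapter idea as described: parallel depthwise convolutions at increasing dilation rates capture multi-scale context with few parameters, a pointwise convolution mixes channels, and a residual connection preserves the frozen backbone's features. Sizes and dilation rates are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DepthwiseDilatedAdapter(nn.Module):
    """Multi-scale depthwise-dilated branches + pointwise mixing + residual."""
    def __init__(self, channels=256, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d,
                      groups=channels, bias=False)        # depthwise conv
            for d in dilations)
        self.pointwise = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):                                 # x: (B, C, H, W)
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.pointwise(multi_scale)            # residual adapter

out = DepthwiseDilatedAdapter()(torch.randn(1, 256, 32, 32))
```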

Latent Class Analysis Identifies Distinct Patient Phenotypes Associated With Mistaken Treatment Decisions and Adverse Outcomes in Coronary Artery Disease.

Qi J, Wang Z, Ma X, Wang Z, Li Y, Yang L, Shi D, Zhou Y

PubMed · Jul 19, 2025
This study aimed to identify patient characteristics linked to mistaken treatments and major adverse cardiovascular events (MACE) in percutaneous coronary intervention (PCI) for coronary artery disease (CAD) using deep learning-based fractional flow reserve (DEEPVESSEL-FFR, DVFFR). A retrospective cohort of 3,840 PCI patients was analyzed using latent class analysis (LCA) based on eight factors. Mistaken treatment was defined as negative DVFFR patients undergoing revascularization or positive DVFFR patients not receiving it. MACE included all-cause mortality, rehospitalization for unstable angina, and non-fatal myocardial infarction. Patients were classified into comorbidities (Class 1), smoking-drinking (Class 2), and relatively healthy (Class 3) groups. Mistaken treatment was highest in Class 2 (15.4% vs. 6.7%, P < .001), while MACE was highest in Class 1 (7.0% vs. 4.8%, P < .001). Adjusted analyses showed increased mistaken treatment risk in Class 1 (OR 1.96; 95% CI 1.49-2.57) and Class 2 (OR 1.69; 95% CI 1.28-2.25) compared with Class 3. Class 1 also had higher MACE risk (HR 1.53; 95% CI 1.10-2.12). In conclusion, the comorbidities and smoking-drinking classes had higher mistaken treatment and MACE risks compared with the relatively healthy class.
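
For readers unfamiliar with LCA, the sketch below illustrates recovering unobserved subgroups from multivariate patient data. It substitutes a Gaussian mixture (scikit-learn) on synthetic continuous features for a proper categorical-indicator LCA, so it is a stand-in for the idea, not the study's model; only the eight-factor count mirrors the abstract.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (300, 8)),    # "relatively healthy"
               rng.normal(1.5, 1.0, (250, 8)),    # "comorbidities"
               rng.normal(-1.5, 1.0, (200, 8))])  # "smoking-drinking"
model = GaussianMixture(n_components=3, n_init=5, random_state=0).fit(X)
classes = model.predict(X)                        # latent class assignment
print("class sizes:", np.bincount(classes), " BIC:", round(model.bic(X)))
```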

Emerging Role of MRI-Based Artificial Intelligence in Individualized Treatment Strategies for Hepatocellular Carcinoma: A Narrative Review.

Che F, Zhu J, Li Q, Jiang H, Wei Y, Song B

PubMed · Jul 19, 2025
Hepatocellular carcinoma (HCC) is the most common subtype of primary liver cancer, with significant variability in patient outcomes even within the same stage according to the Barcelona Clinic Liver Cancer staging system. Accurately predicting patient prognosis and potential treatment response prior to therapy initiation is crucial for personalized clinical decision-making. This review focuses on the application of artificial intelligence (AI) in magnetic resonance imaging for guiding individualized treatment strategies in HCC management. Specifically, we emphasize AI-based tools for pre-treatment prediction of therapeutic response and prognosis. AI techniques such as radiomics and deep learning have shown strong potential in extracting high-dimensional imaging features to characterize tumors and liver parenchyma, predict treatment outcomes, and support prognostic stratification. These advances contribute to more individualized and precise treatment planning. However, challenges remain in model generalizability, interpretability, and clinical integration, highlighting the need for standardized imaging datasets and multi-omics fusion to fully realize the potential of AI in personalized HCC care. Evidence level: 5. Technical efficacy: 4.

Automated Quantitative Evaluation of Age-Related Thymic Involution on Plain Chest CT.

Okamura YT, Endo K, Toriihara A, Fukuda I, Isogai J, Sato Y, Yasuoka K, Kagami SI

PubMed · Jul 19, 2025
The thymus is an important immune organ involved in T-cell generation. Age-related involution of the thymus has been linked to various age-related pathologies in recent studies. However, there has been no method proposed to quantify age-related thymic involution based on a clinical image. The purpose of this study was to establish an objective and automatic method to quantify age-related thymic involution based on plain chest computed tomography (CT) images. We newly defined the thymic region for quantification (TRQ) as the target anatomical region. We manually segmented the TRQ in 135 CT studies, followed by construction of segmentation neural network (NN) models using the data. We developed the estimator of thymic volume (ETV), a quantitative indicator of the thymic tissue volume inside the segmented TRQ, based on simple mathematical modeling. The Hounsfield unit (HU) value and volume of the NN-segmented TRQ were measured, and the ETV was calculated in each CT study from 853 healthy subjects. We investigated how these measures were related to age and sex using quantile additive regression models. A significant correlation between the NN-segmented and manually segmented TRQ was seen for both the HU value and volume (r = 0.996 and r = 0.986, respectively). ETV declined exponentially with age (p < 0.001), consistent with age-related decline in the thymic tissue volume. In conclusion, our method enabled robust quantification of age-related thymic involution. Our method may aid in the prediction and risk classification of pathologies related to thymic involution.
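
One plausible reading of the ETV idea, stated here as an assumption since the abstract does not give the formula, treats the segmented region as a two-compartment fat/thymus mixture whose mean HU fixes the tissue fraction by linear interpolation:

```python
import numpy as np

def estimate_thymic_volume(mean_hu, region_volume_ml,
                           hu_fat=-100.0, hu_thymus=40.0):
    """Hypothetical ETV: tissue fraction from linear HU mixing of fat and
    thymic tissue; the reference HU values are assumptions, not the paper's
    calibrated constants."""
    fraction = np.clip((mean_hu - hu_fat) / (hu_thymus - hu_fat), 0.0, 1.0)
    return fraction * region_volume_ml

# e.g., a 30 mL region averaging -60 HU -> ~8.6 mL of residual thymic tissue
print(estimate_thymic_volume(-60.0, 30.0))
```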