
Unified and Semantically Grounded Domain Adaptation for Medical Image Segmentation

Xin Wang, Yin Guo, Jiamin Xia, Kaiyu Zhang, Niranjan Balu, Mahmud Mossa-Basha, Linda Shapiro, Chun Yuan

arXiv preprint · Aug 12, 2025
Most prior unsupervised domain adaptation approaches for medical image segmentation are narrowly tailored to either the source-accessible setting, where adaptation is guided by source-target alignment, or the source-free setting, which typically resorts to implicit supervision mechanisms such as pseudo-labeling and model distillation. This substantial divergence in methodological designs between the two settings reveals an inherent flaw: the lack of an explicit, structured construction of anatomical knowledge that naturally generalizes across domains and settings. To bridge this longstanding divide, we introduce a unified, semantically grounded framework that supports both source-accessible and source-free adaptation. Fundamentally distinct from all prior works, our framework's adaptability emerges naturally as a direct consequence of the model architecture, without the need for any handcrafted adaptation strategies. Specifically, our model learns a domain-agnostic probabilistic manifold as a global space of anatomical regularities, mirroring how humans establish visual understanding. Thus, the structural content in each image can be interpreted as a canonical anatomy retrieved from the manifold and a spatial transformation capturing individual-specific geometry. This disentangled, interpretable formulation enables semantically meaningful prediction with intrinsic adaptability. Extensive experiments on challenging cardiac and abdominal datasets show that our framework achieves state-of-the-art results in both settings, with source-free performance closely approaching its source-accessible counterpart, a level of consistency rarely observed in prior works. Beyond quantitative improvement, we demonstrate strong interpretability of the proposed framework via manifold traversal for smooth shape manipulation.
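
For readers who want a concrete picture of the disentangled formulation described in the abstract, the following minimal PyTorch sketch illustrates the general idea of explaining an image as a canonical anatomy decoded from a learned latent manifold plus an individual-specific spatial transformation. All module names, layer sizes, and the sampling scheme are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): an image is explained as a
# canonical anatomy decoded from a domain-agnostic latent manifold, warped by an
# image-specific spatial transformation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnatomyManifoldSegmenter(nn.Module):
    def __init__(self, latent_dim=64, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(             # image -> manifold statistics
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 2 * latent_dim),
        )
        self.anatomy_decoder = nn.Sequential(      # latent -> canonical label map
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, n_classes, 3, padding=1),
        )
        self.warp_head = nn.Sequential(            # image -> dense displacement field
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)    # sample the manifold
        canonical = self.anatomy_decoder(z)                        # canonical anatomy logits
        flow = self.warp_head(x)                                   # individual-specific geometry
        grid = self._identity_grid(x) + flow.permute(0, 2, 3, 1)
        seg = F.grid_sample(canonical, grid, align_corners=False)  # warp to the subject
        return seg, (mu, logvar, flow)

    @staticmethod
    def _identity_grid(x):
        n, _, h, w = x.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=x.device),
                                torch.linspace(-1, 1, w, device=x.device), indexing="ij")
        return torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
```

Traversing the latent `z` while holding the warp fixed is, under these assumptions, what enables the smooth shape manipulation mentioned at the end of the abstract.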

MMIF-AMIN: Adaptive Loss-Driven Multi-Scale Invertible Dense Network for Multimodal Medical Image Fusion

Tao Luo, Weihua Xu

arXiv preprint · Aug 12, 2025
Multimodal medical image fusion (MMIF) aims to integrate images from different modalities to produce a comprehensive image that enhances medical diagnosis by accurately depicting organ structures, tissue textures, and metabolic information. Capturing both the unique and complementary information across multiple modalities simultaneously is a key research challenge in MMIF. To address this challenge, this paper proposes a novel image fusion method, MMIF-AMIN, which features a new architecture that can effectively extract these unique and complementary features. Specifically, an Invertible Dense Network (IDN) is employed for lossless feature extraction from individual modalities. To extract complementary information between modalities, a Multi-scale Complementary Feature Extraction Module (MCFEM) is designed, which incorporates a hybrid attention mechanism, convolutional layers of varying sizes, and Transformers. An adaptive loss function is introduced to guide model learning, addressing the limitations of traditional manually-designed loss functions and enhancing the depth of data mining. Extensive experiments demonstrate that MMIF-AMIN outperforms nine state-of-the-art MMIF methods, delivering superior results in both quantitative and qualitative analyses. Ablation experiments confirm the effectiveness of each component of the proposed method. Additionally, extending MMIF-AMIN to other image fusion tasks also achieves promising performance.
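
The "lossless feature extraction" of the Invertible Dense Network relies on invertible transforms. The sketch below shows a standard affine coupling block, a common building block for invertible networks, as an assumed illustration of the principle rather than the MMIF-AMIN code.

```python
# Illustrative sketch (not the released MMIF-AMIN code): an affine-coupling
# block, a standard way to build an exactly invertible feature extractor so
# that no information is lost between input and features.
import torch
import torch.nn as nn

class InvertibleCouplingBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        assert channels % 2 == 0
        half = channels // 2
        self.scale_net = nn.Sequential(nn.Conv2d(half, half, 3, padding=1), nn.Tanh())
        self.shift_net = nn.Conv2d(half, half, 3, padding=1)

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        y2 = x2 * torch.exp(self.scale_net(x1)) + self.shift_net(x1)
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        x2 = (y2 - self.shift_net(y1)) * torch.exp(-self.scale_net(y1))
        return torch.cat([y1, x2], dim=1)

# Invertibility check on random features
block = InvertibleCouplingBlock(channels=8)
x = torch.randn(1, 8, 32, 32)
assert torch.allclose(block.inverse(block(x)), x, atol=1e-5)
```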

Artificial Intelligence quantified prostate specific membrane antigen imaging in metastatic castrate-resistant prostate cancer patients treated with Lutetium-177-PSMA-617

Yu, S. L., Wang, X., Wen, S., Holler, S., Bodkin, M., Kolodney, J., Najeeb, S., Hogan, T.

medRxiv preprint · Aug 12, 2025
PURPOSE: The VISION study [1] found that Lutetium-177 (177Lu)-PSMA-617 ("Lu-177") improved overall survival in metastatic castrate-resistant prostate cancer (mCRPC). We assessed whether artificial intelligence-enhanced PSMA imaging in mCRPC patients starting Lu-177 could identify those with better treatment outcomes. PATIENTS AND METHODS: We conducted a single-site, tertiary-center, retrospective cohort study of 51 consecutive mCRPC patients treated with Lu-177 from 2022 to 2024. These patients had received most standard treatments, with disease progression. Planned treatment was Lu-177 every 6 weeks while continuing androgen deprivation therapy. Before starting treatment, PSMA images were analyzed for SUVmax and quantified tumor volume using artificial intelligence software (aPROMISE, Exinni Inc.). RESULTS: Fifty-one mCRPC patients were treated with Lu-177; 33 (65%) received 4 or more treatment cycles, and these 33 had a Kaplan-Meier median overall survival (OS) of 19.3 months, with 23 (70%) surviving at the 24-month data analysis. At the first Lu-177 cycle, these 33 had significantly more favorable levels of serum albumin, alkaline phosphatase, calcium, glucose, prostate-specific antigen (PSA), ECOG performance status, and F18 PSMA imaging SUV-maximum values, reflecting PSMA "target expression". In a "protocol-eligibility" analysis, 30 of the 51 patients (59%) were considered "protocol-eligible" and 21 (41%) "protocol-ineligible" based on initial clinical parameters, as defined in Methods. "Protocol-eligible" patients had an OS of 14.6 months and 63% survival at 24 months. AI-enhanced F18 PSMA quantified imaging found "protocol-eligible" tumor volume in mL to be only 39% of the volume in "ineligible" patients. CONCLUSION: In this cohort of mCRPC patients receiving Lu-177, pre-treatment AI-assisted F18 PSMA imaging showing higher PSMA SUV and lower tumor volume was associated with the patients' ability to receive four or more treatment cycles, protocol eligibility, and better overall survival. KEY POINTS: Question: In mCRPC patients initiating Lu-177 therapy, can AI-assisted F18 PSMA imaging identify patients who have better treatment outcomes? Findings: 33 (65%) of a cohort of 51 consecutive mCRPC patients were able to receive 4 or more cycles of Lu-177. These patients had significantly more favorable serum albumin, alkaline phosphatase, calcium, glucose, PSA, and performance status, and higher AI-PSMA scan SUV-maximum values, with a trend toward lower PSMA tumor volumes in mL. They had a Kaplan-Meier median OS of 19.3 months, and 70% survived at 24 months. AI-enhanced PSMA tumor volumes (mL) in "protocol-eligible" patients were significantly lower, only 40% of the tumor volumes of "protocol-ineligible" patients. Meaning: In this cohort of mCRPC patients receiving Lu-177, pre-treatment AI-assisted F18 PSMA imaging showing higher PSMA SUV and lower tumor volume was associated with the patients' ability to receive four or more treatment cycles, protocol eligibility, and better overall survival.

Predicting coronary artery abnormalities in Kawasaki disease: Model development and external validation

Wang, Q., Kimura, Y., Oba, J., Ishikawa, T., Ohnishi, T., Akahoshi, S., Iio, K., Morikawa, Y., Sakurada, K., Kobayashi, T., Miura, M.

medRxiv preprint · Aug 12, 2025
Background: Kawasaki disease (KD) is an acute pediatric vasculitis associated with coronary artery abnormality (CAA) development. Echocardiography at month 1 post-diagnosis remains the standard for CAA surveillance despite limitations, including patient distress and increased healthcare burden. With declining CAA incidence due to improved treatment, the need for routine follow-up imaging is being reconsidered. This study aimed to develop and externally validate models for predicting CAA development and guiding the need for echocardiography. Methods: This study used two prospective multicenter Japanese registries: PEACOCK for model development and internal validation, and Post-RAISE for external validation. The primary outcome was CAA at the month-1 follow-up, defined as a maximum coronary artery Z score (Zmax) ≥ 2. Twenty-nine clinical, laboratory, echocardiographic, and treatment-related variables obtained within one week of diagnosis were selected as predictors. The models included simple models using the previous Zmax as a single predictor, logistic regression models, and machine learning models (LightGBM and XGBoost). Their discrimination, calibration, and clinical utility were assessed. Results: After excluding patients without outcome data, 4,973 and 2,438 patients from PEACOCK and Post-RAISE, respectively, were included. The CAA incidence at month 1 was 5.5% and 6.8% for the respective groups. On external validation, a simple model using the Zmax at week 1 produced an area under the curve of 0.79, which improved by no more than 0.02 when other variables were added or more complex models were used. Even the best-performing models with a highly sensitive threshold failed to reduce the need for echocardiography at month 1 by more than 30% while keeping the number of undiagnosed CAA cases below ten. The predictive performance declined considerably when the Zmax was omitted from the multivariable models. Conclusions: The Zmax at week 1 was the strongest predictor of CAA at month 1 post-diagnosis. Even advanced models incorporating additional variables failed to achieve a clinically acceptable trade-off between reducing the need for echocardiography and limiting the number of undiagnosed CAA cases. Until superior predictors are identified, echocardiography at month 1 should remain standard practice. Clinical Perspective: What Is New? (1) The maximum Z score on echocardiography one week after diagnosis was the strongest of 29 variables for predicting coronary artery abnormalities (CAA) in patients with Kawasaki disease. (2) Even the most sensitive models had a suboptimal ability to predict CAA development and reduce the need for imaging studies, suggesting they have limited utility in clinical decision-making. What Are the Clinical Implications? Until more accurate predictors are found or imaging strategies are optimized, performing echocardiography at one-month follow-up should remain the standard of care.
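
To make the modeling setup concrete, the sketch below fits a single-predictor logistic model on a synthetic stand-in for the week-1 Zmax and then evaluates discrimination and a highly sensitive operating point, in the spirit of the "simple model" analysis; the data, coefficients, and thresholds are purely illustrative and do not reproduce the study.

```python
# Illustrative sketch (synthetic data, not the study's code): a "simple model"
# predicting month-1 CAA from the week-1 Zmax alone, with AUC and a highly
# sensitive threshold used to estimate how many echocardiograms could be spared.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
zmax_week1 = rng.normal(1.0, 1.0, n)                                   # stand-in for week-1 Zmax
caa_month1 = (zmax_week1 + rng.normal(0, 1.2, n) > 2.5).astype(int)    # synthetic outcome

simple = LogisticRegression().fit(zmax_week1.reshape(-1, 1), caa_month1)
pred = simple.predict_proba(zmax_week1.reshape(-1, 1))[:, 1]
print("AUC of the single-predictor model:", round(roc_auc_score(caa_month1, pred), 3))

# Highly sensitive operating point: the largest threshold keeping sensitivity >= 0.95,
# and the fraction of month-1 echocardiograms it would avoid.
spared = 0.0
for t in np.sort(np.unique(pred)):
    flagged = pred >= t
    sensitivity = (flagged & (caa_month1 == 1)).sum() / max(caa_month1.sum(), 1)
    if sensitivity < 0.95:
        break
    spared = 1 - flagged.mean()
print("Fraction of month-1 echos potentially avoided at ~95% sensitivity:", round(spared, 3))
```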

Multimodal Deep Learning for ARDS Detection

Broecker, S., Adams, J. Y., Kumar, G., Callcut, R., Ni, Y., Strohmer, T.

medRxiv preprint · Aug 12, 2025
Objective: Poor outcomes in acute respiratory distress syndrome (ARDS) can be alleviated with tools that support early diagnosis. Current machine learning methods for detecting ARDS do not take full advantage of the multimodality of ARDS pathophysiology. We developed a multimodal deep learning model that uses imaging data, continuously collected ventilation data, and tabular data derived from a patient's electronic health record (EHR) to make ARDS predictions. Materials and Methods: A chest radiograph (x-ray), at least two hours of ventilator waveform (VWD) data within the first 24 hours of intubation, and EHR-derived tabular data from 220 patients admitted to the ICU were used to train a deep learning model. The model uses pretrained encoders for the x-rays and ventilation data and trains a feature extractor on the tabular data. Encoded features for a patient are combined to make a single ARDS prediction. Ablation studies assessed each modality's effect on the model's predictive capability. Results: The trimodal model achieved an area under the receiver operating characteristic curve (AUROC) of 0.86 with a 95% confidence interval of 0.01. This was a statistically significant improvement (p<0.05) over single-modality models and over bimodal models trained on VWD+tabular and VWD+x-ray data. Discussion and Conclusion: Our results demonstrate the potential utility of deep learning for addressing complex conditions with heterogeneous data. More work is needed to determine the additive effect of modalities on ARDS detection. Our framework can serve as a blueprint for building performant multimodal deep learning models for conditions with small, heterogeneous datasets.
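
A plausible shape for the trimodal architecture described above (pretrained image and waveform encoders, a trained tabular feature extractor, and late fusion into a single prediction) is sketched below; the encoder interfaces, embedding sizes, and fusion head are assumptions, not the authors' model.

```python
# Illustrative sketch (assumptions, not the authors' model): frozen pretrained
# encoders for the chest x-ray and ventilator-waveform data, a small tabular
# feature extractor for the EHR variables, and a fusion head producing one
# ARDS probability per patient.
import torch
import torch.nn as nn

class TrimodalARDSClassifier(nn.Module):
    def __init__(self, xray_encoder, vwd_encoder, n_tabular, emb_dim=128):
        super().__init__()
        self.xray_encoder = xray_encoder.eval()    # pretrained, kept frozen
        self.vwd_encoder = vwd_encoder.eval()      # pretrained, kept frozen
        for p in list(self.xray_encoder.parameters()) + list(self.vwd_encoder.parameters()):
            p.requires_grad = False
        self.tabular_net = nn.Sequential(          # trained from scratch on EHR variables
            nn.Linear(n_tabular, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim)
        )
        self.head = nn.Sequential(                 # fused embedding -> ARDS logit
            nn.Linear(3 * emb_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, 1)
        )

    def forward(self, xray, vwd, tabular):
        with torch.no_grad():
            z_img = self.xray_encoder(xray)        # assumed to return (batch, emb_dim)
            z_vwd = self.vwd_encoder(vwd)          # assumed to return (batch, emb_dim)
        z_tab = self.tabular_net(tabular)
        fused = torch.cat([z_img, z_vwd, z_tab], dim=1)
        return torch.sigmoid(self.head(fused)).squeeze(1)   # probability of ARDS
```

Dropping one of the three inputs (and shrinking the head to 2 * emb_dim) gives the bimodal ablations the abstract compares against.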

PADReg: Physics-Aware Deformable Registration Guided by Contact Force for Ultrasound Sequences

Yimeng Geng, Mingyang Zhao, Fan Xu, Guanglin Cao, Gaofeng Meng, Hongbin Liu

arXiv preprint · Aug 12, 2025
Ultrasound deformable registration estimates spatial transformations between pairs of deformed ultrasound images, which is crucial for capturing biomechanical properties and enhancing diagnostic accuracy in diseases such as thyroid nodules and breast cancer. However, ultrasound deformable registration remains highly challenging, especially under large deformation. The inherently low contrast, heavy noise, and ambiguous tissue boundaries in ultrasound images severely hinder reliable feature extraction and correspondence matching. Existing methods often suffer from poor anatomical alignment and lack physical interpretability. To address these problems, we propose PADReg, a physics-aware deformable registration framework guided by contact force. PADReg leverages synchronized contact force measured by robotic ultrasound systems as a physical prior to constrain the registration. Specifically, instead of directly predicting deformation fields, we first construct a pixel-wise stiffness map utilizing the multi-modal information from contact force and ultrasound images. The stiffness map is then combined with force data to estimate a dense deformation field through a lightweight physics-aware module inspired by Hooke's law. This design enables PADReg to achieve physically plausible registration with better anatomical alignment than previous methods relying solely on image similarity. Experiments on in-vivo datasets demonstrate that it attains an HD95 of 12.90, which is 21.34% better than state-of-the-art methods. The source code is available at https://github.com/evelynskip/PADReg.
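
The following sketch illustrates one way the Hooke's-law intuition could map force and image information to a deformation prior: a network predicts a positive pixel-wise stiffness map, and the measured contact force divided by that stiffness yields an axial displacement estimate. This is an assumption-laden illustration, not the released PADReg module.

```python
# Illustrative sketch (an assumption about the idea, not the PADReg release):
# a pixel-wise stiffness map is predicted from the image pair plus the contact
# force, and a Hooke's-law-style relation (displacement ~ force / stiffness)
# converts the measured force into a dense axial deformation prior.
import torch
import torch.nn as nn

class PhysicsAwareDeformation(nn.Module):
    def __init__(self):
        super().__init__()
        self.stiffness_net = nn.Sequential(        # image pair + force -> stiffness map k(x)
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),   # stiffness must stay positive
        )

    def forward(self, img_fixed, img_moving, force):
        # force: (batch, 1) contact force from the robotic probe, broadcast to a channel
        b, _, h, w = img_fixed.shape
        force_map = force.view(b, 1, 1, 1).expand(b, 1, h, w)
        x = torch.cat([img_fixed, img_moving, force_map], dim=1)
        stiffness = self.stiffness_net(x) + 1e-3             # avoid division by zero
        axial_displacement = force_map / stiffness            # Hooke's law: u = F / k
        return axial_displacement, stiffness
```

In a full registration pipeline such a force-derived displacement would serve only as a physical prior or initialization; the paper's actual module and its coupling to the image-similarity objective are not reproduced here.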

Switchable Deep Beamformer for High-quality and Real-time Passive Acoustic Mapping.

Zeng Y, Li J, Zhu H, Lu S, Li J, Cai X

PubMed paper · Aug 12, 2025
Passive acoustic mapping (PAM) is a promising tool for monitoring acoustic cavitation activity in ultrasound therapy applications. Data-adaptive beamformers for PAM provide better image quality than time exposure acoustics (TEA) algorithms. However, the computational cost of data-adaptive beamformers is considerable. In this work, we develop a deep beamformer based on a generative adversarial network that can switch between different transducer arrays and reconstruct high-quality PAM images directly from radiofrequency ultrasound signals at low computational cost. The deep beamformer was trained on a dataset consisting of simulated and experimental cavitation signals of single and multiple microbubble clouds measured by different (linear and phased) arrays covering 1-15 MHz. We compared the performance of the deep beamformer to TEA and three different data-adaptive beamformers using simulated and experimental test datasets. Compared with TEA, the deep beamformer reduced the energy spread area by 27.3%-77.8% and improved the image signal-to-noise ratio by 13.9-25.1 dB on average for the different arrays in our data. Compared with the data-adaptive beamformers, the deep beamformer reduced the computational cost by three orders of magnitude, achieving a 10.5 ms image reconstruction time in our data, while the image quality was as good as that of the data-adaptive beamformers. These results demonstrate the potential of the deep beamformer for high-resolution monitoring of microbubble cavitation activity in ultrasound therapy.
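
For context, the conventional time exposure acoustics (TEA) baseline that the deep beamformer is compared against can be written as delay-and-sum beamforming followed by time-integrated energy at each pixel; the sketch below is a textbook-style illustration with assumed array geometry, units, and interpolation details, not code from the paper.

```python
# Illustrative sketch (not from the paper): the conventional time exposure
# acoustics (TEA) beamformer -- delay each channel to a candidate source pixel,
# sum across elements, then integrate the squared signal over time.
import numpy as np

def tea_pam(rf, element_x, grid_x, grid_z, fs, c=1540.0):
    """rf: (n_elements, n_samples) passively received RF data.
    element_x: lateral element positions (m); grid_x, grid_z: image grid (m); fs: sampling rate (Hz)."""
    n_elem, n_samp = rf.shape
    t = np.arange(n_samp) / fs
    pam = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            delays = np.sqrt((element_x - x) ** 2 + z ** 2) / c   # pixel-to-element travel times
            summed = np.zeros(n_samp)
            for ie in range(n_elem):
                # evaluate channel ie at t + delay, i.e. back-project to emission time
                summed += np.interp(t, t - delays[ie], rf[ie], left=0.0, right=0.0)
            pam[iz, ix] = np.sum(summed ** 2)                     # time-integrated energy
    return pam
```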

Dynamic Survival Prediction using Longitudinal Images based on Transformer

Bingfan Liu, Haolun Shi, Jiguo Cao

arXiv preprint · Aug 12, 2025
Survival analysis utilizing multiple longitudinal medical images plays a pivotal role in the early detection and prognosis of diseases by providing insight beyond single-image evaluations. However, current methodologies often inadequately utilize censored data, overlook correlations among longitudinal images measured over multiple time points, and lack interpretability. We introduce SurLonFormer, a novel Transformer-based neural network that integrates longitudinal medical imaging with structured data for survival prediction. Our architecture comprises three key components: a Vision Encoder for extracting spatial features, a Sequence Encoder for aggregating temporal information, and a Survival Encoder based on the Cox proportional hazards model. This framework effectively incorporates censored data, addresses scalability issues, and enhances interpretability through occlusion sensitivity analysis and dynamic survival prediction. Extensive simulations and a real-world application in Alzheimer's disease analysis demonstrate that SurLonFormer achieves superior predictive performance and successfully identifies disease-related imaging biomarkers.
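
The three-component design (vision encoder, Transformer sequence encoder, Cox-based survival encoder) can be illustrated with the sketch below, which also includes the negative Cox partial log-likelihood commonly used to train such heads; the dimensions, pooling, and layer counts are assumptions rather than the published architecture.

```python
# Illustrative sketch (assumptions, not the authors' implementation): per-visit
# image embeddings, a Transformer over the longitudinal sequence, and a Cox-style
# risk head trained with the negative partial log-likelihood.
import torch
import torch.nn as nn

class SurvivalTransformer(nn.Module):
    def __init__(self, img_feat_dim=256, d_model=128, n_tabular=10):
        super().__init__()
        self.proj = nn.Linear(img_feat_dim, d_model)             # vision features -> tokens
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.sequence_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.risk_head = nn.Linear(d_model + n_tabular, 1)       # Cox linear predictor

    def forward(self, image_feats, tabular):
        # image_feats: (batch, n_visits, img_feat_dim); tabular: (batch, n_tabular)
        tokens = self.sequence_encoder(self.proj(image_feats))
        pooled = tokens.mean(dim=1)                               # aggregate over visits
        return self.risk_head(torch.cat([pooled, tabular], dim=1)).squeeze(1)

def neg_cox_partial_log_likelihood(risk, time, event):
    """risk: predicted log-risk scores; time: follow-up times; event: 1.0 if the
    event was observed, 0.0 if censored. Censored subjects contribute only to
    the risk sets of earlier events, which is how censoring is incorporated."""
    order = torch.argsort(time, descending=True)                  # longest follow-up first
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)                  # log of risk-set sums
    return -torch.sum((risk - log_cumsum) * event) / event.sum().clamp(min=1)
```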

Lung-DDPM+: Efficient Thoracic CT Image Synthesis using Diffusion Probabilistic Model

Yifan Jiang, Ahmad Shariftabrizi, Venkata SK. Manem

arXiv preprint · Aug 12, 2025
Generative artificial intelligence (AI) has been playing an important role in various domains. Leveraging its high capability to generate high-fidelity and diverse synthetic data, generative AI is widely applied in diagnostic tasks, such as lung cancer diagnosis using computed tomography (CT). However, existing generative models for lung cancer diagnosis suffer from low efficiency and anatomical imprecision, which limit their clinical applicability. To address these drawbacks, we propose Lung-DDPM+, an improved version of our previous model, Lung-DDPM. This novel approach is a denoising diffusion probabilistic model (DDPM) guided by nodule semantic layouts and accelerated by a pulmonary DPM-solver, enabling the method to focus on lesion areas while achieving a better trade-off between sampling efficiency and quality. Evaluation results on the public LIDC-IDRI dataset suggest that the proposed method achieves 8× fewer FLOPs (floating-point operations), 6.8× lower GPU memory consumption, and 14× faster sampling compared to Lung-DDPM. Moreover, it maintains sample quality comparable to both Lung-DDPM and other state-of-the-art (SOTA) generative models in two downstream segmentation tasks. We also conducted a Visual Turing Test with an experienced radiologist, showing the advanced quality and fidelity of synthetic samples generated by the proposed method. These experimental results demonstrate that Lung-DDPM+ can effectively generate high-quality thoracic CT images with lung nodules, highlighting its potential for broader applications, such as general tumor synthesis and lesion generation in medical imaging. The code and pretrained models are available at https://github.com/Manem-Lab/Lung-DDPM-PLUS.
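
To illustrate what "guided by nodule semantic layouts" typically means in a DDPM, the sketch below shows a single reverse ancestral sampling step with the layout concatenated to the noisy volume as conditioning; `model` is a hypothetical conditional noise-prediction network, and an accelerated solver such as DPM-Solver would replace this step with larger ODE-based updates. This is an illustrative assumption, not the Lung-DDPM+ release.

```python
# Illustrative sketch (not the Lung-DDPM+ code): one reverse DDPM step in which
# the denoiser is conditioned on the nodule semantic layout by channel-wise
# concatenation with the noisy sample.
import torch

def ddpm_reverse_step(model, x_t, layout, t, betas):
    """x_t: noisy CT sample at step t; layout: nodule semantic layout map;
    t: integer timestep index; betas: (T,) noise schedule."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    eps = model(torch.cat([x_t, layout], dim=1), t)               # predicted noise
    coef = (1 - alphas[t]) / torch.sqrt(1 - alpha_bar[t])
    mean = (x_t - coef * eps) / torch.sqrt(alphas[t])
    if t > 0:
        return mean + torch.sqrt(betas[t]) * torch.randn_like(x_t)
    return mean
```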