Lu Z, Liang H, Lu M, Martin D, Hardy BM, Dawant BM, Wang X, Yan X, Huo Y

PubMed · Sep 25, 2025
Accurate and individualized human head models are becoming increasingly important for electromagnetic (EM) simulations. These simulations depend on precise anatomical representations to realistically model electric and magnetic field distributions, particularly when evaluating the Specific Absorption Rate (SAR) against safety guidelines. State-of-the-art simulations rely on the Virtual Population because public resources are limited and manually annotating patient data at scale is impractical. This paper introduces Personalized Head-based Automatic Simulation for EM properties (PHASE), an automated open-source toolbox that generates high-resolution, patient-specific head models with 14 tissue labels for EM simulations from paired T1-weighted (T1w) magnetic resonance imaging (MRI) and computed tomography (CT) scans. To evaluate the performance of PHASE models, we conduct semi-automated segmentation and EM simulations on 15 real human patients, which serve as the gold-standard reference. The PHASE model achieved comparable global SAR and localized SAR averaged over 10 grams of tissue (SAR-10g), demonstrating its promise as a tool for generating large-scale human model datasets in the future. The code and models of the PHASE toolbox are publicly available: https://github.com/hrlblab/PHASE.
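
For context on the quantities above: local SAR is conventionally sigma * |E_rms|^2 / rho, and SAR-10g averages it over roughly 10 grams of tissue. The snippet below is a minimal NumPy/SciPy sketch of that definition with a fixed-cube averaging approximation; it is not part of the PHASE toolbox, and the IEC 62704-1 procedure it loosely imitates grows the averaging cube per voxel rather than using a fixed filter size.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def point_sar(sigma, e_rms, rho):
    # Local (point) SAR in W/kg: conductivity (S/m) * |E_rms|^2 (V/m)^2 / density (kg/m^3)
    return sigma * e_rms ** 2 / rho

def sar_10g_approx(sar, rho, voxel_mm, target_g=10.0):
    # Very rough mass-averaged SAR: average over a fixed cube whose nominal mass
    # is ~target_g, assuming roughly uniform density. This only illustrates the
    # averaging idea and is not a standards-compliant implementation.
    voxel_mass_g = float(rho.mean()) * (voxel_mm * 1e-3) ** 3 * 1e3
    side = max(1, round((target_g / voxel_mass_g) ** (1.0 / 3.0)))
    return uniform_filter(sar, size=side, mode="nearest")

# Toy 3D grid standing in for a segmented head model with per-voxel properties
rng = np.random.default_rng(0)
sigma = rng.uniform(0.1, 2.0, (64, 64, 64))      # S/m
rho = rng.uniform(900.0, 1100.0, (64, 64, 64))   # kg/m^3
e_rms = rng.uniform(0.0, 50.0, (64, 64, 64))     # V/m
sar = point_sar(sigma, e_rms, rho)
print(sar_10g_approx(sar, rho, voxel_mm=2.0).max())
```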

Mafalda Malafaia, Peter A. N. Bosman, Coen Rasch, Tanja Alderliesten

arXiv preprint · Sep 25, 2025
Accurate and interpretable survival analysis remains a core challenge in oncology. The growing availability of multimodal data and the clinical need for transparent models that support validation and trust add to this complexity. We propose an interpretable multimodal AI framework to automate survival analysis by integrating clinical variables and computed tomography imaging. Our MultiFIX-based framework uses deep learning to infer survival-relevant features that are then explained: imaging features are interpreted via Grad-CAM, while clinical variables are modeled as symbolic expressions through genetic programming. Risk estimation employs a transparent Cox regression, enabling stratification into groups with distinct survival outcomes. Using the open-source RADCURE dataset for head and neck cancer, MultiFIX achieves a C-index of 0.838 (prediction) and 0.826 (stratification), outperforming clinical and academic baseline approaches and aligning with known prognostic markers. These results highlight the promise of interpretable multimodal AI for precision oncology with MultiFIX.
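
As an illustration of the transparent risk-estimation step described above, the sketch below fits a Cox proportional-hazards model on toy features, computes a concordance index, and splits patients at the median predicted risk. The feature names, the toy data, and the lifelines-based workflow are assumptions for illustration, not the MultiFIX implementation.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Toy data standing in for fused imaging + clinical features
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "feat_img": rng.normal(size=200),        # e.g. an imaging-derived score
    "feat_clin": rng.normal(size=200),       # e.g. a symbolic clinical expression
    "time": rng.exponential(24, size=200),   # months to event or censoring
    "event": rng.integers(0, 2, size=200),   # 1 = event observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")

risk = cph.predict_partial_hazard(df)        # higher = worse predicted prognosis
c_index = concordance_index(df["time"], -risk, df["event"])
high_risk = risk > risk.median()             # two-group stratification
print(f"C-index: {c_index:.3f}, high-risk fraction: {high_risk.mean():.2f}")
```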

Rohan Sanda, Asad Aali, Andrew Johnston, Eduardo Reis, Jonathan Singh, Gordon Wetzstein, Sara Fridovich-Keil

arXiv preprint · Sep 25, 2025
Magnetic resonance imaging (MRI) requires long acquisition times, raising costs, reducing accessibility, and making scans more susceptible to motion artifacts. Diffusion probabilistic models that learn data-driven priors can potentially assist in reducing acquisition time. However, they typically require large training datasets that can be prohibitively expensive to collect. Patch-based diffusion models have shown promise in learning effective data-driven priors over small real-valued datasets, but have not yet demonstrated clinical value in MRI. We extend the Patch-based Diffusion Inverse Solver (PaDIS) to complex-valued, multi-coil MRI reconstruction, and compare it against a state-of-the-art whole-image diffusion baseline (FastMRI-EDM) for 7x undersampled MRI reconstruction on the FastMRI brain dataset. We show that PaDIS-MRI models trained on small datasets of as few as 25 k-space images outperform FastMRI-EDM on image quality metrics (PSNR, SSIM, NRMSE), pixel-level uncertainty, cross-contrast generalization, and robustness to severe k-space undersampling. In a blinded study with three radiologists, PaDIS-MRI reconstructions were chosen as diagnostically superior in 91.7% of cases, compared to baselines (i) FastMRI-EDM and (ii) classical convex reconstruction with wavelet sparsity. These findings highlight the potential of patch-based diffusion priors for high-fidelity MRI reconstruction in data-scarce clinical settings where diagnostic confidence matters.
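
To make the acquisition setting concrete, the sketch below retrospectively undersamples multi-coil Cartesian k-space at a nominal acceleration factor and forms a zero-filled root-sum-of-squares baseline image. It is a generic illustration with assumed array shapes and sampling parameters, not the FastMRI sampling code or the PaDIS solver.

```python
import numpy as np

def undersample_kspace(kspace, accel=7, center_frac=0.04, seed=0):
    # kspace: complex array of shape (coils, ky, kx). Keep a fully sampled
    # centre band plus random phase-encode lines so that roughly 1/accel of
    # the ky lines survive, then form a zero-filled RSS baseline image.
    rng = np.random.default_rng(seed)
    n_coils, ny, nx = kspace.shape
    mask = np.zeros(ny, dtype=bool)
    n_center = max(1, int(center_frac * ny))
    mask[ny // 2 - n_center // 2: ny // 2 + n_center // 2 + 1] = True
    n_random = max(0, ny // accel - int(mask.sum()))
    mask[rng.choice(np.where(~mask)[0], size=n_random, replace=False)] = True
    masked = kspace * mask[None, :, None]
    coil_imgs = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(masked, axes=(-2, -1))), axes=(-2, -1))
    zero_filled = np.sqrt((np.abs(coil_imgs) ** 2).sum(axis=0))
    return masked, mask, zero_filled

# Toy usage with synthetic 8-coil k-space
kspace = np.random.randn(8, 256, 256) + 1j * np.random.randn(8, 256, 256)
masked, mask, zf = undersample_kspace(kspace, accel=7)
print(mask.mean(), zf.shape)
```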

Xie H, Huang Z, Zuo Y, Ju Y, Leung FHF, Law NF, Lam KM, Zheng YP, Ling SH

PubMed · Sep 25, 2025
Spine segmentation based on ultrasound volume projection imaging (VPI) plays a vital role in intelligent scoliosis diagnosis in clinical applications. However, this task faces several significant challenges. First, the global contextual knowledge of spines may not be well learned if the high spatial correlation among different bone features is neglected. Second, the spine bones contain rich structural knowledge about their shapes and positions, which deserves to be encoded into the segmentation process. To address these challenges, we propose a novel scale-adaptive structure-aware network (SA²Net) for effective spine segmentation. First, we propose a scale-adaptive complementary strategy to learn cross-dimensional long-distance correlation features for spinal images. Second, motivated by the consistency between multi-head self-attention in Transformers and semantic-level affinity, we propose a structure-affinity transformation that transforms semantic features with class-specific affinity and combines it with a Transformer decoder for structure-aware reasoning. In addition, we adopt a feature mixing loss aggregation method to enhance model training, which improves the robustness and accuracy of the segmentation process. The experimental results demonstrate that SA²Net achieves superior segmentation performance compared to other state-of-the-art methods. Moreover, the adaptability of SA²Net to various backbones enhances its potential as a promising tool for advanced scoliosis diagnosis using intelligent spinal image analysis.
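
The idea of treating self-attention as semantic affinity can be illustrated with a small, self-contained example: build a pixel-to-pixel affinity matrix from feature similarity (as attention does) and use it to propagate class scores so that structurally related pixels receive consistent predictions. This is a loose, generic sketch under assumed tensor shapes, not the SA²Net structure-affinity transformation itself.

```python
import torch

def affinity_refine(class_logits, features):
    # class_logits: (B, C, H, W) raw per-class scores; features: (B, D, H, W)
    b, c, h, w = class_logits.shape
    d = features.shape[1]
    q = features.flatten(2).transpose(1, 2)                              # (B, HW, D)
    affinity = torch.softmax(q @ q.transpose(1, 2) / d ** 0.5, dim=-1)   # (B, HW, HW)
    scores = class_logits.flatten(2).transpose(1, 2)                     # (B, HW, C)
    refined = affinity @ scores          # propagate class scores along the affinity
    return refined.transpose(1, 2).reshape(b, c, h, w)

# Toy usage on a small map to keep the HW x HW affinity matrix manageable
logits = torch.randn(1, 5, 32, 32)
feats = torch.randn(1, 64, 32, 32)
print(affinity_refine(logits, feats).shape)   # torch.Size([1, 5, 32, 32])
```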

Merve Gülle, Junno Yun, Yaşar Utku Alçalar, Mehmet Akçakaya

arXiv preprint · Sep 25, 2025
Diffusion models have found extensive use in solving numerous inverse problems. Such diffusion inverse problem solvers aim to sample from the posterior distribution of the data given the measurements, using a combination of the unconditional score function and an approximation of the posterior related to the forward process. Recently, consistency models (CMs) have been proposed to directly predict the final output from any point on the diffusion ODE trajectory, enabling high-quality sampling in just a few neural function evaluations (NFEs). CMs have also been utilized for inverse problems, but existing CM-based solvers either require additional task-specific training or rely on data fidelity operations with slow convergence, making them unsuitable for large-scale problems. In this work, we reinterpret CMs as proximal operators of a prior, enabling their integration into plug-and-play (PnP) frameworks. We propose a solver based on PnP-ADMM, which lets us leverage the fast convergence of the conjugate gradient method. We further accelerate this with noise injection and momentum, dubbed PnP-CM, and show that it maintains the convergence properties of the baseline PnP-ADMM. We evaluate our approach on a variety of inverse problems, including inpainting, super-resolution, Gaussian deblurring, and magnetic resonance imaging (MRI) reconstruction. To the best of our knowledge, this is the first CM trained on MRI datasets. Our results show that PnP-CM achieves high-quality reconstructions in as few as 4 NFEs and can produce meaningful results in 2 steps, highlighting its effectiveness in real-world inverse problems while outperforming comparable CM-based approaches.
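
A minimal sketch of the PnP-ADMM structure referenced above: the data-consistency step solves a regularized normal equation with conjugate gradient, and the prior's proximal operator is replaced by a denoiser callable (in the paper's setting, a consistency model would play that role). The noise injection and momentum of PnP-CM are omitted, and the operator and denoiser interfaces are assumptions, not the authors' code.

```python
import numpy as np

def cg(Hop, b, x0, n_iters=10):
    # Conjugate gradient for Hop(x) = b, with Hop symmetric positive definite.
    x, r = x0.copy(), b - Hop(x0)
    p, rs = r.copy(), np.vdot(r, r)
    for _ in range(n_iters):
        Hp = Hop(p)
        alpha = rs / np.vdot(p, Hp)
        x, r = x + alpha * p, r - alpha * Hp
        rs_new = np.vdot(r, r)
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def pnp_admm(y, A, At, denoiser, rho=1.0, n_iters=20, cg_iters=10):
    # Minimal PnP-ADMM for y = A(x) + noise.
    # x-update: solve (A^T A + rho I) x = A^T y + rho (z - u) with CG.
    # z-update: apply a denoiser in place of the prior's proximal operator.
    x = At(y)
    z, u = x.copy(), np.zeros_like(x)
    Aty = At(y)
    for _ in range(n_iters):
        x = cg(lambda v: At(A(v)) + rho * v, Aty + rho * (z - u), x, cg_iters)
        z = denoiser(x + u)
        u = u + x - z
    return z
```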

Rajamohan, H. R., Xu, Y., Zhu, W., Kijowski, R., Cho, K., Geras, K., Razavian, N., Deniz, C. M.

medRxiv preprint · Sep 25, 2025
Accurate disease prognosis is essential for patient care but is often hindered by the lack of long-term data. This study explores deep learning training strategies that utilize large, accessible diagnostic datasets to pretrain models aimed at predicting future disease progression in knee osteoarthritis (OA), Alzheimer's disease (AD), and breast cancer (BC). While diagnostic pretraining improves prognostic task performance, naive fine-tuning for prognosis can cause catastrophic forgetting, where the model's original diagnostic accuracy degrades, a significant patient safety concern in real-world settings. To address this, we propose a sequential learning strategy with experience replay. We used cohorts with knee radiographs, brain MRIs, and digital mammograms to predict 4-year structural worsening in OA, 2-year cognitive decline in AD, and 5-year cancer diagnosis in BC. Our results showed that diagnostic pretraining on larger datasets improved prognosis model performance compared to standard baselines, boosting both the Area Under the Receiver Operating Characteristic curve (AUROC) (e.g., knee OA external: 0.77 vs. 0.747; breast cancer: 0.874 vs. 0.848) and the Area Under the Precision-Recall Curve (AUPRC) (e.g., Alzheimer's disease: 0.752 vs. 0.683). Additionally, a sequential learning approach with experience replay achieved prognostic performance comparable to dedicated single-task models (e.g., breast cancer AUROC 0.876 vs. 0.874) while also preserving diagnostic ability. This method maintained high diagnostic accuracy (e.g., breast cancer balanced accuracy 50.4% vs. 50.9% for a dedicated diagnostic model), unlike simpler multitask methods prone to catastrophic forgetting (e.g., 37.7%). Our findings show that leveraging large diagnostic datasets is a reliable and data-efficient way to enhance prognostic models while maintaining essential diagnostic skills.

Author summary: In our research, we addressed a common problem in medical AI: how to accurately predict the future course of a disease when long-term patient data is rare. We focused on knee osteoarthritis, Alzheimer's disease, and breast cancer. We found that we could significantly improve a model's ability to predict disease progression by first training it on a much larger, more common type of data: diagnostic images used to assess a patient's current disease state. We then developed a specialized training method that allows a single AI model to perform both diagnosis and prognosis tasks effectively. A key challenge is that models often "forget" their original diagnostic skills when they learn a new prognostic task. In a clinical setting, this poses a safety risk, as it could lead to missed diagnoses. We utilize experience replay to overcome this by continually refreshing the model's diagnostic knowledge. This creates a more robust and efficient model that mirrors a clinician's workflow, offering the potential to improve patient care with a limited amount of hard-to-obtain longitudinal data.
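
A minimal sketch of the experience-replay idea described above: while fine-tuning on the new prognosis task, batches from a stored diagnostic buffer are periodically replayed so the shared model retains its diagnostic skill. The task-conditioned model call, the buffer format, and the replay probability are illustrative assumptions, not the paper's implementation.

```python
import random

def train_with_replay(model, prog_loader, diag_buffer, optimizer, loss_fn,
                      replay_prob=0.5, epochs=5):
    # PyTorch-style loop: learn the prognosis task while interleaving batches
    # replayed from a buffer of diagnostic examples to mitigate catastrophic
    # forgetting of the original diagnostic task.
    model.train()
    for _ in range(epochs):
        for x_prog, y_prog in prog_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x_prog, task="prognosis"), y_prog)
            if diag_buffer and random.random() < replay_prob:
                x_diag, y_diag = random.choice(diag_buffer)   # replayed diagnostic batch
                loss = loss + loss_fn(model(x_diag, task="diagnosis"), y_diag)
            loss.backward()
            optimizer.step()
```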

Li X, Li L, Li M, Yan P, Feng T, Luo H, Zhao Y, Yin S

PubMed · Sep 25, 2025
Knowledge Distillation (KD) is a technique for transferring knowledge from a complex model to a simpler one. It has been widely used in natural language processing and computer vision and has achieved advanced results. Recently, research on KD in medical image analysis has grown rapidly. The definition of knowledge has been further expanded through its combination with the medical field, and its role is no longer limited to model simplification. This paper attempts to comprehensively review the development and application of KD in the medical imaging field. Specifically, we first introduce the basic principles, explaining the definition of knowledge and the classical teacher-student network framework. Then, research progress in medical image classification, segmentation, detection, reconstruction, registration, radiology report generation, privacy protection, and other application scenarios is presented. In particular, the application scenarios are organized according to the role KD plays. We summarize eight main roles of KD techniques in medical image analysis, including model compression, semi-supervised learning, weakly supervised learning, and class balancing, and analyze the performance of these roles across all application scenarios. Finally, we discuss the challenges in this field and propose potential solutions. KD is still developing rapidly in the medical imaging field; we outline five potential development directions and research hotspots. A comprehensive literature list for this survey is available at https://github.com/XiangQA-Q/KD-in-MIA.
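
As background for the teacher-student framework the survey builds on, below is a minimal sketch of the classic distillation objective: a temperature-softened KL divergence between teacher and student logits blended with cross-entropy on the ground-truth labels. The temperature and weighting values are illustrative defaults, not figures from the survey.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft term: match temperature-softened teacher and student distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard term: standard supervised cross-entropy on ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage
student_logits = torch.randn(8, 5, requires_grad=True)
teacher_logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```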

Wang H, Zou W, Wang J, Li J, Zhang B

PubMed · Sep 24, 2025
Objective. Integrated PET/CT imaging plays a vital role in tumor diagnosis by offering both anatomical and functional information. However, the high cost and limited accessibility of PET imaging, together with concerns about cumulative radiation exposure from repeated scans, may restrict its clinical use. This study aims to develop a cross-modal medical image synthesis method for generating PET images from CT scans, with a particular focus on accurately synthesizing lesion regions. Approach. We propose a two-stage Generative Adversarial Network termed MMF-PAE-GAN (Multi-modal Fusion Pre-trained AutoEncoder GAN) that integrates a pre-GAN and a post-GAN through a Pre-trained AutoEncoder (PAE). The pre-GAN produces an initial pseudo-PET image and provides the post-GAN with PET-related multi-scale features. Unlike a traditional Sample Adaptive Encoder (SAE), the PAE enhances sample-specific representation by extracting multi-scale contextual features. To capture both lesion-related and non-lesion-related anatomical information, two CT scans processed under different window settings are fed into the post-GAN. Furthermore, a Multi-modal Weighted Feature Fusion Module (MMWFFM) is introduced to dynamically highlight informative cross-modal features while suppressing redundancies. A Perceptual Loss (PL), computed from the PAE, is also used to impose feature-space constraints and improve the fidelity of lesion synthesis. Main results. On the AutoPET dataset, our method achieved a PSNR of 29.1781 dB, an MAE of 0.0094, an SSIM of 0.9217, and an NMSE of 0.3651 for pixel-level metrics, along with a sensitivity of 85.31%, a specificity of 97.02%, and an accuracy of 95.97% for slice-level classification metrics. On the FAHSU dataset, the corresponding results were a PSNR of 29.1506 dB, an MAE of 0.0095, an SSIM of 0.9193, an NMSE of 0.3663, a sensitivity of 84.51%, a specificity of 96.82%, and an accuracy of 95.71%. Significance. The proposed MMF-PAE-GAN can generate high-quality PET images directly from CT scans without the need for radioactive tracers, which potentially improves the accessibility of functional imaging and reduces costs in clinical scenarios where PET acquisition is limited or repeated scans are not feasible.
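
The feature-space constraint mentioned above can be sketched generically: pass the synthetic and reference PET images through a frozen pretrained encoder and penalize differences between their multi-scale feature maps. The encoder interface (returning a list of feature maps) and the L1 distance are assumptions standing in for the paper's PAE-based Perceptual Loss, not its actual implementation.

```python
import torch
import torch.nn.functional as F

def perceptual_loss(fake_pet, real_pet, encoder, weights=(1.0, 1.0, 1.0)):
    # encoder is assumed to be a frozen, pretrained network that returns a list
    # of multi-scale feature maps; gradients flow only through fake_pet, so the
    # generator is pushed to match the reference in feature space.
    with torch.no_grad():
        real_feats = encoder(real_pet)
    fake_feats = encoder(fake_pet)
    return sum(w * F.l1_loss(f, r)
               for w, f, r in zip(weights, fake_feats, real_feats))
```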

Yasui K, Kasugai Y, Morishita M, Saito Y, Shimizu H, Uezono H, Hayashi N

PubMed · Sep 24, 2025
To quantify radiation dose reduction in radiotherapy treatment-planning CT (RTCT) using a deep learning-based reconstruction (DLR; AiCE) algorithm compared with adaptive iterative dose reduction (IR; AIDR), and to evaluate its potential to inform RTCT-specific diagnostic reference levels (DRLs). In this single-institution retrospective study, RTCT scans of four anatomical sites (head, head and neck, lung, and pelvis) were acquired on a large-bore CT. Scans reconstructed with IR (n = 820) and DLR (n = 854) were compared. The 75th-percentile CTDIvol and DLP (CTDI_IR, DLP_IR vs. CTDI_DLR, DLP_DLR) were determined per site. Dose reduction rates were calculated as (CTDI_DLR - CTDI_IR)/CTDI_IR × 100%, and similarly for DLP. Statistical significance was assessed with the Mann-Whitney U test. DLR yielded CTDIvol reductions of 30.4-75.4% and DLP reductions of 23.1-73.5% across sites (p < 0.001), with the greatest reductions in head and neck RTCT (CTDIvol: 75.4%; DLP: 73.5%). Variability also narrowed. Compared with published national DRLs, DLR achieved 34.8 mGy and 18.8 mGy lower CTDIvol for head and neck versus UK DRLs and Japanese multi-institutional data, respectively. DLR substantially lowers RTCT dose indices, providing quantitative data to guide RTCT-specific DRLs and optimize clinical workflows.
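
The dose-reduction calculation defined above is simple to reproduce; here is a minimal sketch assuming per-scan CTDIvol values are available as arrays (the values below are made up for illustration, not the study's data).

```python
import numpy as np

def percentile75_and_reduction(ctdi_ir, ctdi_dlr):
    # 75th-percentile dose index per reconstruction and the relative change
    # (CTDI_DLR - CTDI_IR) / CTDI_IR * 100; a negative value indicates a dose
    # decrease, whose magnitude is what the abstract reports as a reduction.
    p75_ir = np.percentile(ctdi_ir, 75)
    p75_dlr = np.percentile(ctdi_dlr, 75)
    return p75_ir, p75_dlr, (p75_dlr - p75_ir) / p75_ir * 100.0

# Hypothetical CTDIvol samples (mGy) for one anatomical site
ir = np.array([55.0, 60.2, 58.1, 62.4, 57.3])
dlr = np.array([14.1, 15.8, 13.9, 16.5, 15.2])
print(percentile75_and_reduction(ir, dlr))
```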

Dey SK, Howlader A, Haider MS, Saha T, Setu DM, Islam T, Siddiqi UR, Rahman MM

PubMed · Sep 24, 2025
The study aims to improve the classification of fetal anatomical planes using Deep Learning (DL) methods to enhance the accuracy of fetal ultrasound interpretation. Five Convolutional Neural Network (CNN) architectures (VGG16, ResNet50, InceptionV3, DenseNet169, and MobileNetV2) are evaluated on a large-scale, clinically validated dataset of 12,400 ultrasound images from 1,792 patients. Preprocessing methods, including scaling, normalization, label encoding, and augmentation, are applied to the dataset, which is split into 80% for training and 20% for testing. Each model is fine-tuned and evaluated on its classification accuracy for comparison. DenseNet169 achieved the highest classification accuracy, 92%, among all tested models. The study shows that CNN-based models, particularly DenseNet169, significantly improve diagnostic accuracy in fetal ultrasound interpretation. This advancement reduces error rates and provides support for clinical decision-making in prenatal care.
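
A minimal transfer-learning sketch in the spirit of the fine-tuning described above, using a torchvision DenseNet169 backbone with a new classification head. The class count and the frozen-backbone choice are illustrative assumptions, not details taken from the paper.

```python
import torch.nn as nn
from torchvision import models

def build_densenet169(num_classes=6, freeze_backbone=True):
    # ImageNet-pretrained DenseNet169 with a fresh linear head for the
    # fetal-plane classes (num_classes is assumed here for illustration).
    model = models.densenet169(weights=models.DenseNet169_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in model.features.parameters():
            p.requires_grad = False   # fine-tune only the new head
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

model = build_densenet169()
print(model.classifier)
```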